Luke Harries
Medicine
University of Cambridge
MSc Computer Science
UCL
Using machine learning to make every image of the internet accessible to the visually impaired.
There are 32 million people with visual impairments. Many rely on a screen reader to browse the internet. Screen readers need alt-tags to provide a caption for each image. However, for the majority of photos on the internet these tags are missing.
Imagine not being able to tell who's in a photo or what the text says. All user-uploaded content is inaccessible.
Machine learning enables us to generate captions for every image on the internet, regardless of device.
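As a minimal sketch of the idea, the snippet below fills in missing alt attributes on a page's image tags using a generated caption. The `caption_for` function is a hypothetical stand-in for a captioning model (such as one built on an image-description API); the regex-based approach is illustrative, not production HTML parsing.

```python
import re

def add_missing_alt_tags(html, caption_for):
    """Insert a generated alt attribute into any <img> tag that lacks one.

    `caption_for` is a stand-in for a captioning model: it maps an
    image src to a generated description (hypothetical interface).
    """
    def fix(match):
        tag = match.group(0)
        if re.search(r'\balt\s*=', tag):
            return tag  # already accessible; leave untouched
        src = re.search(r'src="([^"]*)"', tag)
        caption = caption_for(src.group(1)) if src else ""
        # Preserve either '>' or '/>' when appending the new attribute.
        closing = '/>' if tag.endswith('/>') else '>'
        return tag[:-len(closing)].rstrip() + f' alt="{caption}"' + closing

    return re.sub(r'<img\b[^>]*>', fix, html)
```

For example, `add_missing_alt_tags('<img src="cat.jpg">', model)` would yield `<img src="cat.jpg" alt="…">` with the model's description, while images that already carry an alt-tag are left unchanged.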
If you are interested in trying our product, please register your interest here.
We are proud to announce that we will shortly be launching our pilot study. If you would like to be notified when applications open, please register your interest here.
VisualCognition is honoured to have won the Microsoft Prize at the University of Cambridge's Hackathon. We look forward to continuing to build on the Microsoft Cognitive Services platform to deliver VisualCognition.
With experience in Medicine, Engineering, and Machine Learning, we are committed to making the internet more accessible.