Video transcript is below:
Hello, this is Mitul Mehta reporting from Retina World Congress. I'm an associate professor of vitreoretinal surgery at the Gavin Herbert Eye Institute at UC Irvine in California. I'm going to talk about how we can use AI today to help our patients, which is something I'm already doing in my own practice. The first two things we'll talk about are two different apps: the first is called Be My AI, and the second is called Seeing AI. Be My AI is an offshoot of an app called Be My Eyes, which used human volunteers to describe what people were looking at over the cloud using the video camera on their cell phones. Now, this AI app will write out a text description of whatever somebody is looking at. However, patients who can't see that text need another tool to access it, either text-to-speech or magnification, zooming in the way you can on your phone.
The Microsoft Seeing AI app will describe the scene in front of you, both aloud and in writing, and it can also identify and count the currency in front of you. In the United States, this is a little tricky because dollar bills are all the same size no matter the denomination. That's not true in all countries, but in the United States it does cause problems for some people with low vision, who can't tell the value of a bill being handed to them. This app can actually count up the money for you, which is really nice.
The thing that the majority of my patients with low vision are using is these glasses right here: the Eyedaptic Eye6 glasses. With these glasses, there's a video camera right here in the middle and a display in front of each eye.
Using that camera, those displays, and the software, the glasses can identify what you're looking at. If it's text, the AI will automatically zoom it to whatever size that patient finds comfortable to read, no matter where the text is. That matters because when people read, they read a whole word at a time, not individual letters like the ones we check when we measure visual acuity. The glasses will also automatically focus whether the text is near or distant, and they help patients see faces and text at a comfortable size given their particular visual problem.
The last thing we have is a visual assistant called Ivy, which is built into the Eye6 glasses. You can ask Ivy what you're looking at or how to find something in front of you, and she'll describe the entire scene and where the object you're looking at sits in relation to the other objects in front of you. Using this combination of different AIs, your patients can be helped today. RP