On Tech & Vision Podcast

Improving Visual Impairment with Technology

On Tech and Vision Podcast with Dr. Cal Roberts

This episode’s big idea is about using augmented reality, machine learning, and soon fifth-generation (5G) connectivity to improve vision.  The Eyedaptic EYE2 is a pair of software-driven smart glasses for people with vision impairment.  Dr. Roberts speaks with Dr. Mitul Mehta and Jay Cormier about how Eyedaptic uses machine learning algorithms to guide augmentations that adapt to a person’s vision deficits, habits, and environments to help them see better.  They also discuss how 5G connectivity will continue to enhance the experience for Eyedaptic users.

Podcast Transcription

Newman

I was actually one of the original team responders with the federal government to the World Trade Center in 2001.  I responded from the West Coast to help you guys on the East Coast.

Roberts

Two years ago, at the age of 53, Sam Newman retired from the United States Public Health Service.  This is around the same time that his vision was worsening from retinoschisis, a degeneration of the retina.

Newman

I was with them for almost 30 years.  So, I’ve been to almost every natural disaster.  Every major hurricane.

Roberts

Was COVID-19 one of the public health crises Sam’s team would have responded to?

Newman

Yes, it is.  And the teams that you see are actually teams that I was on.  Tony Fauci actually instructed us on pandemics.  I go back to the H1N1 response, and I retired in January, right before this pandemic hit.

Roberts

Sam may be retired as a first responder, but he’s still making a meaningful contribution by making products for first responders to use in the field.

Newman

For the last ten years, I’ve been working on devices for establishing airways to help ventilate a patient during anesthesia or EMS and also hospital emergent situations.  It’s kind of a CCTV device that’s fixed onto a blade that goes down into the trachea of a patient and allows you to guide it down into the patient’s lungs.

I was working all the way up until a year ago.  I’m still consulting on the side and working on the devices to help bring them to market.

Roberts

I find it fascinating that Sam Newman’s device uses cameras to help first responders to see, just as many of our guests in this podcast series are figuring out how to use cameras to help people with visual impairment to see.

My guests today are Dr. Mitul Mehta, ophthalmologist and medical advisor, and Jay Cormier, the President and CEO of Eyedaptic.

Dr. Mehta, you’re on.  Tell us about yourself.

Mehta

I went to college at MIT, the Massachusetts Institute of Technology.  I was going to be an electrical engineer with a double major in finance so I could start my own company and change the world that way.  And I got a job at a startup called StoragePoint.com.  It was DropBox before DropBox, so it was way ahead of its time.  I found that that company, despite having amazing technology and a really good technical team, just couldn’t figure out how to sell anything.  It kind of soured me on the whole concept of engineering and starting companies and that whole field.  So I dropped all of my engineering classes, but I’d already finished most of my finance classes, so I just went ahead and finished my degree in finance at MIT, which is basically applied mathematics.

I decided to go to medical school, much to the happiness of my parents.  I went to Georgetown, where I earned a graduate degree in physiology and biophysics, which was really up my alley because it was focused primarily on the physiology of medicine.  From an engineering mindset, structure begets function, and I’m still an engineer at heart, so I kept an eye on what was going on.  I found this really interesting project out of USC by Mark Humayun, who is a retina specialist.  He was developing the Argus II implant, which is an artificial retinal prosthesis.

Roberts

It is the subject of our Episode 4!

Mehta

Fantastic.  So, I went to USC for medical school, and on the first day of medical school Dr. Humayun gave us a lecture on innovation and how to approach medicine from the mindset of solving problems.  And immediately after the lecture, I go straight up to him and I say, “Hello, Dr. Humayun, my name is Mitul Mehta.  I went to college at MIT and Georgetown for graduate school.  I came to this medical school to work for you.”  So he hired me on the spot on the first day of medical school.  And so, I worked on his project.  To this day he’s one of my closest advisors.

That kind of brought me to the whole concept of being able to use technology to fix problems with the body.  And then I did my fellowship at the New York Eye and Ear Infirmary in Manhattan.  One of the things I found in New York was people trying to solve the problem of vision loss not medically but with technology.

I came out to UC Irvine, where I joined the faculty.  I gave this talk, and after I finished my talk there was a little networking cocktail hour.  During that time, all sorts of entrepreneurs were bringing me ideas for their companies and wanted my medical advice.  One of those people was Jay Cormier, who was talking to me about what became Eyedaptic.  It was really interesting.  I liked his approach and the reason he got into the business.  He really wanted to help people.  I could tell that from talking to him.

Roberts

So, Jay, you’re an experienced entrepreneur and businessman.  Why eyesight?  Why does this fascinate you?

Cormier

Well, that’s a great question, Cal.  Certainly I spent a lot of time in technology.  Unlike Dr. Mehta, I did start and finish as an electrical engineer and then went into the technology field.  And had quite a bit of success, and tried a little bit of partial retirement, which, it turns out, I was pretty bad at.

But my grandmother and great-grandmother had macular degeneration.  And I don’t think I fully realized at the time how much of a problem that could be.  They were very sharp, living on their own, completely independent.  It was really the AMD that drove them into assisted living.

So, as I was trying out my partial retirement, helping out some friends with augmented reality in a completely different situation, we started wondering, can augmented reality help people with vision problems like AMD?  Quite frankly, my CTO and I, although we both have strong technology backgrounds, did not understand the vision part of it.  So certainly the best day that I think I had in this company was when we found Dr. Mehta, and also Dr. Kim, our other co-founder, who’s also a retina specialist.  I think that’s when things really started coming together.

Roberts

That’s great.  So, the actual idea for Eyedaptic – where did that come from?

Mehta

From my side of it, it came out of the concept of what Dr. Humayun was doing with his retinal prosthesis.  The prosthesis has a lot of technical issues to deal with because, for those people who understand engineering, you’re trying to run a microchip underwater, which is basically what the Argus II implant is: a microchip you place inside the eye in a liquid-filled chamber.  That really affects the circuitry and how electricity gets transmitted, and it doesn’t allow you to have as fine a level of detail as you need.  And it really only helps people who have severe vision loss, people who have only light perception or no light perception.  They can’t even see motion.  That’s not the majority of people.

The majority of people live in a world with much better vision than that.  I wanted to see a company that could help people who haven’t lost that much vision, people who are used to being very independent and suddenly lose that independence.

Roberts

People, for example, like Sam Newman.

Newman

What I’ve had a real big challenge with is shopping.  I wasn’t able to read the labels anymore of what I was shopping for.  Moreover, I couldn’t read the price tags.  That was really horrible.  And then, the biggest challenge for me, and the one I get the most anxiety from, is literally checking out at a cashier’s point of sale, especially if you have ten people waiting in line behind you.  I can only imagine what their faces look like behind the mask, because I know they’re not happy, because I can’t really see or navigate on the point-of-sale terminals when they ask you, is this amount okay, yes or no?  Do I accept this?  Any of that.

Roberts

At Lighthouse Guild, we provide care for a lot of people like Sam.  People who were recently able to do everything for themselves and now find their world suddenly limited.  I asked Dr. Mehta what Eyedaptic does for people with mild to moderate vision impairment.

Mehta

What Eyedaptic does for these people is it identifies what exactly their visual defect is, and it adapts to that specific problem.  If that problem is something called contrast sensitivity, meaning the inability to tell apart two slightly different colors, like an orange and a pink, or different shades of red or green, the device can supplement that, because with image processing you can adjust the pixels so that you can identify things that a computer can tell are different but the human eye can’t necessarily tell are different.  You set it up for that individual patient or user and try to help them in that regard.
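
To make that idea concrete, here is a minimal Python/OpenCV sketch of the general technique of exaggerating contrast and color differences in a camera frame.  The gain values, CLAHE settings, and file names are illustrative assumptions; this is not Eyedaptic’s actual processing.

import cv2
import numpy as np

def boost_contrast(bgr: np.ndarray, chroma_gain: float = 1.6) -> np.ndarray:
    """Stretch lightness contrast and exaggerate small color differences."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    # Local contrast enhancement on the lightness channel.
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    # Push the color channels away from neutral (128) so similar hues separate.
    a = np.clip((a.astype(np.float32) - 128) * chroma_gain + 128, 0, 255).astype(np.uint8)
    b = np.clip((b.astype(np.float32) - 128) * chroma_gain + 128, 0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)

# Example: enhance one frame from a (hypothetical) headset camera image file.
frame = cv2.imread("shelf.jpg")
enhanced = boost_contrast(frame)
cv2.imwrite("shelf_enhanced.jpg", enhanced)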

Roberts

Is it going to help me read?  Is it going to help me watch TV?  Is it going to help me identify my grandchildren?  Is it going to help me cook lunch?

Mehta

The device has a feature called Bright Text, for example, that finds the edges of anything.  It can be text or a picture, and it kind of makes it pop a little more.  And that’s something that people with diseases like diabetic retinopathy, or other vascular diseases of the eye, tend to lose early on.
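
As an illustration of the general idea of edge enhancement (not the actual Bright Text implementation, which is proprietary), a simple version in Python with OpenCV might look like this:

import cv2
import numpy as np

def pop_edges(bgr: np.ndarray, strength: float = 0.8) -> np.ndarray:
    """Detect edges and brighten them so text and outlines stand out."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                       # binary edge map
    edges = cv2.dilate(edges, np.ones((2, 2), np.uint8))   # thicken edges slightly
    overlay = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    # Blend the bright edges back onto the original frame.
    return cv2.addWeighted(bgr, 1.0, overlay, strength, 0)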

The other thing it can do is magnify the center of the image and then decrease the magnification toward the sides, which allows the user to focus on something in particular.  Let’s say you’re walking in the airport and you’re trying to find your flight, and you’re trying to read the screen with the flight times and gate information, but you don’t want to bump into someone else with luggage.  What the device does is it focuses on the center and identifies the text that you’re trying to read.  You’ve set it up for the size of text that is easiest for you to read, and it magnifies the text in the middle to that size, but the magnification tapers off toward the sides so you can still see the whole view of what you’re looking at.  You don’t bump into people, and you can still walk around and be ambulatory at the same time.
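
A rough Python/OpenCV sketch of that kind of center-weighted magnification, with an arbitrary taper, is shown below.  It is purely illustrative and not Eyedaptic’s algorithm.

import cv2
import numpy as np

def tapered_zoom(bgr: np.ndarray, center_mag: float = 2.0) -> np.ndarray:
    """Magnify the middle of the frame, tapering to no magnification at the edges."""
    h, w = bgr.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx**2 + dy**2) / np.sqrt(cx**2 + cy**2)   # 0 at center, 1 at corners
    mag = 1.0 + (center_mag - 1.0) * (1.0 - r)            # falls from center_mag to 1.0
    map_x = cx + dx / mag                                  # sample nearer the center = zoom in
    map_y = cy + dy / mag
    return cv2.remap(bgr, map_x, map_y, interpolation=cv2.INTER_LINEAR)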

Roberts

So, you’re on this development path.  You started in 2015.  And at some point there was that “A-Ha” moment.  Oh my gosh, this looks like it’s going to work.  When did that happen?

Cormier

The first one is when we were able to put our technology on one of our users – this is one of Dr. Mehta’s patients.  This fellow couldn’t read anymore and we were able to get him to read again.  Certainly, that was our first indication and first “A-Ha” moment that this technology can really do what we hoped it could do.

Mehta

For me, the big “A-Ha” moment when I realized that we were actually on to something was the clinical trial.  We weren’t required to do a clinical trial because this is not an FDA-approved product.  But I’m a scientist.  I run clinical trials in my practice on a daily basis.  So we put together a clinical trial that would have you read whole words at a time, to look at reading speed, not just the size of letter you can see.  And in addition to reading speed, we looked at being able to do the stuff that people actually do on a daily basis.

I created a grocery store in the office, complete with the overhead signs that people would read, approximately 3 meters, about 9 feet, away from your face.  Cans on the shelf.  If you ever go to the grocery store, all the cans of beans look exactly the same, even though they’re different types of beans.  And then I was trying to pay my electric bill, and I looked at the bill and I thought, this is so complicated.  This would be a good thing for me to test in the trial, too.

We had these three different things that people were looking at, and we timed how long it took them to do each one using what they normally use, just their regular reading glasses or their bifocals, as well as with our device.  What I was shocked by was that, when they were doing the study, even the people with relatively good vision couldn’t do some of these things at all.  They were not able to read the signs, or find the amount they owed on the electric bill, or tell the cans of beans apart, let alone read the nutritional information on them.

That was a real “A-Ha” moment for me, because the big joke is that the way you check vision in retina patients is you turn the lights on or off.  If you can tell the lights are on, good.  But this is not the real world.  They actually needed to function in this world.  And so, seeing that they were not able to do these pretty mundane, everyday tasks, with what we would consider pretty decent vision, with their normal glasses, on a day-to-day basis, but then, with the Eyedaptic glasses, they were able to do all of them.  That was really, really impactful to me personally.

Roberts

When we talked to Mark Humayun, and we talked to him about the Argus, he told us he adapted it from a cochlear implant, took existing technology and adapted it for ophthalmic use.  Is this technology that’s being used elsewhere and now being adapted for ophthalmic use?

Mehta

People have been using magnifying glasses since 500 BC, when the Chinese developed eyeglasses.  Instead of doing it optically, this is doing it digitally, like cameras do.  When you want to zoom in on something with a camera, you use a telephoto lens.  This is just taking it to the next level, where you’re able to do it live, while you’re moving around, with augmented reality.

This is taking commercially available augmented reality headsets that people are using for different applications, like industrial uses and video games, and applying our technology to them.  Those people who played Pokemon Go are used to using their cell phones to see the little monsters running around the world.  It takes that general concept but applies vision technology to it.

Roberts

What made you decide that this was the best design choice for people’s vision?

Cormier

I think that came about through a couple of things.  One, as we worked with more and more users in the clinic, it became very clear that a lot of people with AMD are an older population, and they don’t really want to have to deal with the technology behind what’s helping them see.  They just want to see better.  So, a lot of the technology development really stems more from the user interface side of things.

As we say, we’re trying to give them simulated natural vision.  A natural viewing experience.  So, for the auto zoom we said, what if we can have this device think for them, in some sense?  What’s going on behind the scenes is that there are algorithms running, analyzing what that camera is looking at.  It’s also analyzing what the user is doing in particular situations.  And with a combination of those two, it’s taking actions that are visually enhancing for the user.

Roberts

Tell me more about how you came to this conclusion based on user testing.

Cormier

What really helped was a constant stream of patients from Dr. Mehta and our other co-founder, Dr. Kim.  We were able to prototype our technology almost in real time, learn from these users, see what works, see what doesn’t work, and constantly improve it at a very rapid pace.  Because we would see patients almost every week, and that’s how we decided what technology would make a bigger impact for them.

Roberts

For example, what didn’t work?

Cormier

Trying to map the retina and give a straightforward, scientific answer to what part of the vision is working well was very difficult, because what we found is that many of these users had already developed habits around what works for them, and trying to get them to change that was very difficult.

The second one, which we talked about with this auto zoom, is that pressing a bunch of buttons and trying to control the technology to help them see better was not nearly as impactful as running algorithms that automatically did that for them and were almost thinking for them.  Which leads us to a lot of our machine learning algorithms.

Roberts

So, this technology has a brain.  It’s thinking about what it wants you to see.  How does that work?

Cormier

In essence, what these algorithms are doing is becoming adaptive, not only to the person’s vision deficits, but also to their habits and environments.  Certainly with the modern machine learning and processing power that are now in these glasses, you can do edge machine learning quite practically and quite efficiently, so you can make a learning machine, or, as you said, a thinking machine, for the user.

Roberts

Explain to us what edge machine learning is.

Cormier

Machine learning is often done with huge data sets on massive server systems.  Obviously we can’t put that kind of computing on the face of a consumer yet, although certainly with 5G we’re going to be able to access it in real time in the future.

For now, edge machine learning really means the computation is sitting at the edge of the computing environment, in a phone or in AR glasses on someone’s face.  The edge machine learning algorithms need to be much more streamlined, and the data sets much more streamlined, so you can do that computation in a remote environment, away from a central server system.
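
As a simple illustration of the difference, here is a minimal Python sketch of on-device (“edge”) inference with a small, quantized TensorFlow Lite model.  The model file, its input shape, and the task are hypothetical; the point is only that a streamlined model can run locally with no server round trip.

import numpy as np
import tensorflow as tf

# A streamlined, quantized model small enough to run on a phone or headset.
interpreter = tf.lite.Interpreter(model_path="scene_model_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def run_on_device(frame: np.ndarray) -> np.ndarray:
    """Run one camera frame through the local model, entirely on the device."""
    x = np.expand_dims(frame.astype(inp["dtype"]), axis=0)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]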

Roberts

That’s a little bit about machine learning.  Can you tell us about augmented reality and how that’s used in the EYE2?

Mehta

What I see in the clinic on a daily basis is people who are “legally blind.”  But every time I see them, they walk straight into the clinic, they know exactly where the chair is, they sit in the chair, and they know where I am and who they’re talking to.  They are able to function, and so, if you put a virtual reality headset on them, they’ll lose that ability, because their peripheral vision is fine and a virtual reality headset covers it up.  Because they have such good peripheral vision, augmented reality allows them to still use that peripheral vision and still use actual reality, the real world around them.  What it does is, on top of that, it adds more information and augments what the actual world is showing.  And we have the choice to be able to manipulate that.  And we manipulate that using the technology and the algorithms in Eyedaptic.

Roberts

Here’s the big picture.  Machine learning algorithms guide augmentations.  By augmentations here, we mean changes to the visual field that the user experiences.  As the algorithms do this, they learn.  The algorithms get sharper, more tuned to the user’s habits, and better able to deliver the right information at exactly the right moment.

Where do you go from here?  What’s next for Eyedaptic?

Mehta

The nice thing about this world that we’re coming into is that augmented reality is moving at breakneck speed, and we can take advantage of the entire world making augmented reality work for different fields.  So, the EYE3 is going to have all the standard improvements you see in every technology along the way: faster processors, better battery life, better cameras, better displays.  I think the real big next breakthrough is going to be the connectivity that we’re getting.

Roberts

Tell me more about connectivity and this thing we’re all hearing about – 5G.  What is 5G connectivity going to do for the EYE2 or even the EYE3 user?

Mehta

A lot of people are hearing about 5G because the big cell phone companies are bringing out these 5G networks.  This fifth generation of wireless data transfer allows you to use something called cloud computing, which is being able to do these machine learning processes not at the edge, on your cell phone, but actually in these big data server farms.  Say I’m trying to do something, but I can’t necessarily see very well because I have a visual scotoma, a loss of vision in a certain place.  To have that connectivity is really the next step.  The system can learn from your previous activity: this is what I’m going to do, and when I do this, I need this much magnification, I need this much improvement in contrast, I can identify what this is, I know that this is hot.

Cormier

What it does enable us to do is transfer huge amounts of data very quickly.  The ability to transfer that huge amount of data really lays the foundation for our future machine learning algorithms.  So now, instead of just doing processing at the edge, which has to be very, let’s call it, skinny and efficient, we can do much larger machine learning processing offline.  Or, with 5G, we can even take that online and actually give those users, in essence, real-time updates, so the glasses function better and better.

Roberts

The outcome for patients is going to be what?

Cormier

The outcome for patients is going to be improved autonomous usage.  The more a user can do without having to operate manual controls, the better.  Of course, they always have those manual overrides to customize it for them.  But if we can tap into broader data to help them optimize, adapt, and use it faster, easier, and better, that’s really what’s going to give them that more natural-feeling experience.

Roberts

Dr. Mehta, how about from your perspective?  What’s 5G going to do for users of this product?

Mehta

The way I think of 5G, or the fifth generation of wireless data transfer, is that it allows these glasses to really be truly mobile.  You can take them anywhere that you have a 5G network.  And it’s like you’re walking around with a giant server farm of really, really complicated computers that are very powerful, doing all the processing for you.  You’re able to do all the things you wanted to do, that you would normally do, with just a pair of glasses.

Roberts

When is the development that’s primed for 5G?

Cormier

In your phone, you’ve got lots of speech recognition today.  In comparison, when you think about speech versus video, speech is maybe up to a 20 kHz type of frequency.  It’s a pretty low-frequency wave.  Not a lot of information in it, compared to a camera, which has got visual information that is multiple megahertz of information.  So there’s that much more information, at that much higher bandwidth, going through a camera and then obviously to the displays.

And so, if you want to process what’s in those images, you just need a much fatter pipe, as they say, and much less latency in that pipe.  And that’s what 5G will enable.
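
To put rough numbers on that comparison, here is a back-of-the-envelope calculation in Python.  The sample rate, resolution, and frame rate are generic assumptions, not figures from the episode, but they show why video needs the fatter pipe.

# Raw (uncompressed) data rates for speech versus video.
audio_bits_per_sec = 44_100 * 16                 # 16-bit mono audio at 44.1 kHz, ~0.7 Mbit/s
video_bits_per_sec = 1920 * 1080 * 30 * 24       # 1080p at 30 frames/s, 24 bits/pixel, ~1.5 Gbit/s

print(f"audio: {audio_bits_per_sec / 1e6:.1f} Mbit/s")
print(f"video: {video_bits_per_sec / 1e9:.2f} Gbit/s")
print(f"video is roughly {video_bits_per_sec / audio_bits_per_sec:,.0f}x the raw bandwidth")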

Roberts

So, what’s the future for Eyedaptic?

Mehta

The future of Eyedaptic is kind of divided into the near future and the upcoming future.  In the immediate to near future we have our next version called the EYE3.  This is a device that is going to be lighter, much more comfortable for people to wear.  It’s going to have a better video camera.  It’s going to have better displays.  So the quality is going to be better for the user.  When you have all these things it just makes a better experience, just like any technology kind of gets better.  The next version has all the bells and whistles, just cranked up a notch.

The real thing that’s very exciting to me is when we start working on cloud computing with this device on your head.  What cloud computing does is let your headset communicate, via the cloud, through the internet.  The 5G networks that are coming will allow us to connect to the internet much faster when you’re out in the world, so you can still access all of the data computing in these large computer centers and do very complicated calculations that allow real machine learning to happen.

Roberts

What’s a hard problem?  What’s a problem that you’d love to be able to tackle but the technology just isn’t there yet?

Cormier

One of the things we are looking at, and I won’t say too much about it because we are in the process of writing a grant for it, is eye tracking.  When we look at those eye conditions, there are some where you need to do careful eye tracking to bring a better solution to the market, and integrated eye tracking is not readily available today.  If we could get good, accurate eye tracking in those AR glasses, that would enable even more technology advancements for the visually impaired.

Roberts

Who are the patients that would benefit from eye tracking?

Cormier

When I think about eye tracking, I think about what it enables from a technology standpoint to help someone’s vision.  Certainly, if someone has a blind spot, like they do in AMD, and you want to do a precise calibration to their visual deficit, and you want to somehow calibrate your algorithms to make up for that deficit, if that person looks a millimeter or even a fraction of a millimeter one way or the other, that calibration gets thrown off.  You need to be able to close that loop.  Think of it as a control loop: you need feedback in that loop, and it has to be precise.  Without eye tracking you have no way of knowing where they’re looking.  And precise eye tracking will enable much more precise algorithms to help their vision deficit.
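
To illustrate why that feedback has to be so precise, here is a tiny Python sketch of re-anchoring a scotoma correction to the current gaze position.  The pixels-per-degree value and the scotoma location are made up; the point is how little gaze error it takes to misplace the correction.

def scotoma_offset_px(gaze_deg, scotoma_deg, px_per_degree=40.0):
    """Where a retina-fixed blind spot lands on the display for the current gaze."""
    dx = (gaze_deg[0] + scotoma_deg[0]) * px_per_degree
    dy = (gaze_deg[1] + scotoma_deg[1]) * px_per_degree
    return round(dx), round(dy)

# With the scotoma 5 degrees to the right of fixation:
print(scotoma_offset_px((0.0, 0.0), (5.0, 0.0)))   # (200, 0)
# A gaze error of only half a degree moves the correction by about 20 pixels,
# so without eye tracking the remapped region no longer covers the blind spot.
print(scotoma_offset_px((0.5, 0.0), (5.0, 0.0)))   # (220, 0)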

Roberts

So for people who have only limited vision, what eye tracking will do is allow them to use all of it and get the maximum benefit from the vision that they have.

Cormier

Right.  It’s all about optimizing that remaining vision.  If you can do a better job of understanding what that remaining vision is and where it is, and can calibrate to that, then that opens up other new avenues.

Roberts

Jay, what’s exciting to you about the way the medical wearables industry is growing right now?  Where do you see vision technology and Eyedaptic situated within that growth?

Cormier

So, when I think about what’s going on in, let’s call it, medical wearables, much of what is enabling people like Apple or Google or whoever it is to take advantage of the wearables they have and turn them into a “medical device” is the data they are able to gather.  It can be on an anonymized basis.  But greater access to data is really the foundation for a better machine learning algorithm.  And then again, if you can close that loop in real time with 5G, then you can actually get that learning to people on a real-time basis.  I think that’s not only fascinating from a technology standpoint, I think it will also be hugely impactful from a vision standpoint.

Roberts

You’ve developed this great technology, you’ve figured out how to supplement people’s vision with augmented reality, and you’ve learned how to get even more information by using the power of 5G.  How could this help people who are fully sighted?

Mehta

Once again, a fantastic question, because “fully sighted” just means what people can see right now.  But I’m an ophthalmologist, and ophthalmologists are not happy with what people have right now.  Because fully sighted doesn’t mean they can see everything.  They can just see what is normal.  So, the goal of any sort of vision technology company in the end should also be trying to help people who have “normal” vision be able to see things that they cannot currently see.  And we’re hoping eventually the technology will be there for me to be able to wear these glasses.

I can go to a Lakers game, because obviously I’m going to go to Lakers games, and I can see everything from the worst seat in the house.  I can see the sweat on LeBron James’ forehead from the last row of the Staples Center and be able to look at every single thing.  And I can see more than what is happening in the visible light spectrum.  The majority of the visual information out there is actually in non-visible light.  There’s infrared light, and there’s also ultraviolet light that we cannot see, and colors we cannot see.

If we can tailor the vision such that you can see all of it, but only focus on the parts that you need to see and be able to mentally process that, you could actually look at trajectories, arcs, any sort of motion, and you can look at all these things that people haven’t really thought about seeing necessarily in the past because it just was not practical for people to see.

Roberts

So Dr. Mehta, are you happy that you went into ophthalmology?

Mehta

I love ophthalmology because ophthalmology is the most exciting field of medicine: ophthalmologists in general are very pro-technology and always trying to get better.  You tell a cataract surgeon that 20/20 distance vision is the ideal and they’ll laugh at you.  They’re like, we’re not just talking about distance vision.  The whole world is between myself and 20 feet away.  I want to be able to see all of it clearly.  And that’s why they have these multifocal lenses and PanOptix and toric lenses and all these different technologies working.  They’re always trying to get better, and that’s just the ethos of the entire field of ophthalmology.  That’s why ophthalmology is the best field in the world.

Newman

I’m trying to learn Spanish, so what I do is I put the TV in Spanish language, and then I put the captions in English.  That’s how I’m able to read and listen at the same time.  That’s what I do with Eyedaptic.

Roberts

Sam Newman, the innovator who created medical devices for intubation, also built his own CCTV system in his home.  Of course he did!  We would expect nothing less of this guy.

Newman

I like to say I’m a technology warrior.  So I like to find things and apply them in ways that people haven’t thought of.

Roberts

But Sam is also working closely with Jay Cormier and Dr. Mehta on user testing the EYE2.  And the EYE2 is changing the way he engages with his stationary CCTV setup.

Newman

Honestly, I use this less and less now.  I just use Eyedaptic now.  Because I can read, I don’t tinker with the Eyedaptic.  I just set up my whole house with Ring videos and I was able to use Eyedaptic for installing all of those devices.  So that’s been really good.

Roberts

Where would we be without the tinkerers?  The EYE2 by Eyedaptic is the product of such tinkering.  It’s the culmination of technologies as they exist now, and a bridge to a 5G-enabled future.  A future in which the algorithms that guide the headset will be learning and improving in real time as users wear and train the device.

That’s this episode’s big idea, and something that all of our vision tech developers have on their radar.  The fact that these devices will learn and improve with data from all users is the game changer.

Did this episode spark ideas for you?  Let us know at podcast@lighthouseguild.org.  And if you like this episode, please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts.

I’m Dr. Cal Roberts.  On Tech & Vision is produced by Lighthouse Guild.  For more information visit www.lighthouseguild.org.  On Tech & Vision is produced at Lighthouse Guild by my colleagues, Kathleen Wirts, Jaine Schmidt and Annemarie O’Hearn.  My thanks to Podfly for their production support.