On Tech & Vision Podcast

Innovation Process from Big Idea to Reality

This podcast is about big ideas on how technology is making life better for people with vision loss.

When we buy a product off the shelf, we rarely think about how much work went into getting it there. Between initial conception and going to market, life-changing technology requires a rigorous testing and development process. That is especially true when it comes to accessible technology for people who are blind or visually impaired.

For this episode, Dr. Cal spoke to Jay Cormier, the President and CEO of Eyedaptic, a company that specializes in vision-enhancement technology. Their flagship product, the Eye5, provides immense benefits to people with age-related macular degeneration, diabetic retinopathy, and other low-vision diseases. But this product didn’t arrive by magic. It took years of planning, testing, and internal development to bring this technology to market.

This episode also features J.R. Rizzo, a professor and researcher of medicine and engineering at NYU, and a medical doctor. J.R. and his research team are developing a wearable “backpack” navigation system that uses sophisticated camera, computer, and sensor technology. J.R. discussed the practical and technological challenges of creating such a complex device, along with the importance of beta testing and feedback.

Podcast Transcription

Roberts: In 1961, President John F. Kennedy appeared before Congress and made a historic announcement. 

Kennedy: I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space, and none will be so difficult or expensive to accomplish. 

Roberts: It was the big idea of all big ideas, and the United States government went all in. Less than ten years later, Apollo 11 blasted into space. On Sunday, July 20th, 1969, Neil Armstrong became the first person to set foot on the moon. 

Armstrong: That’s one small step for man, one giant leap for mankind.

Roberts: It was one of the greatest achievements in human history, but there was no guarantee that it would ever happen. It took countless hours of work by more than 400,000 engineers, scientists, and technicians, thousands of tests, including ten practice missions, and billions of dollars. It was a true team effort, and in the end, it led to a moment celebrated by the entire world.

I’m Doctor Cal Roberts and this is On Tech & Vision. Today’s big idea is how big ideas become reality. We all know there are a lot of great ideas out there, but it’s only through genius and determination that some great ideas become products and technologies that change people’s lives, and sometimes history. 

On this show we discuss many exciting concepts and cutting edge technologies, but what isn’t talked about as much is the painstaking process of going from a big idea to a finished product. 

Today, I talk to two guests who are successfully navigating this tricky process. The first is Jay Cormier, President and CEO of Eyedaptic, which manufactures smart glasses to help enhance vision for people with age-related macular degeneration, diabetic retinopathy, and other retinal diseases. Their current version is called the Eye5, and it has gone through many iterations along the way. I asked Jay to tell me more about it. 

You were a great guest for us in Season One of On Tech & Vision and we appreciate it. But Season One seems like a million years ago!

Cormier: I think it is.

Roberts: As we’re now on Season Four. And at that time you were talking about great technology that had the opportunity to really enhance the lives of people who are blind and visually impaired, using augmented reality, primarily for magnification, to help people see things that they couldn’t otherwise. 

What we want to talk about today is the process of how you go from technological ideas to a product that’s ready for people to use. And let me just tell our listeners that when we spoke with Jay in 2020, he was about to release version 2 of his device, called the Eye2, and now I believe we’re up to the Eye5.

Cormier: That’s correct. We released the Eye5 actually almost a year ago now. And this goes back to, you know, our core belief that we always need to be improving to help our customers. 

Roberts: You said that the Eye5 came out last year. Help us remember: when did the Eye1 come out, and then the 2, 3, 4, and 5?

Cormier: Yeah, so the Eye1 was really what I’d call a prototype device. We never brought the Eye1 to market. It had extensive user testing, but we felt there were some deficiencies based on user feedback. And so we always held that one back.

The Eye2, like you said, I think that was early 2020, was really our pilot product. We also knew from our user testing that it was not quite what we wanted yet. So, we did sort of a limited release. Of course, by definition, in 2020 it was the pandemic. Everything was a limited release, especially in eye care, which was so heavily impacted. But we certainly had enough feedback from our users to say, OK, we know what we need in the Eye3 and the Eye4. 

So it was 2021, I guess, when we followed up with the Eye3 and the Eye4. We actually released two products that year, bracketing the market with both a value product and a premier product. And that’s when we felt we really hit the mark with our users. And again, that’s based on their feedback, seeing how they’re using it, seeing how they’re incorporating it into their daily lives, and quite frankly how it was changing their lives. 

Roberts: So that’s interesting, because rather than moving in a straight line of development, it sounds like, OK, we have great technology, but actually we’re going to go in two directions: a value direction and a premium direction, rather than the straight line of just always doing premium, for example. So, how did you make that decision to go down two pathways? 

Cormier: Well, we made the decision based on user feedback that said, hey, you know, we love wireless and we want a self-contained unit, which is our premier product, but we also want something super lightweight and really comfortable to wear that looks just like a pair of eyeglasses, which is our value product. 

And when we looked at that, we said, you know, there is some fundamental physics behind going wireless that goes in the opposite direction of making it lightweight, comfortable, and easy to wear. And this is where I look at our users and go, you know, I can second-guess them, but I’m no Steve Jobs and I don’t know better than our users, so the best thing to do is give them a choice and see what happens. 

Roberts: So, you’ve made reference a couple of times so far to testing, and how important testing is with people who are blind and visually impaired acting as your beta testers. So, is it the same core group of people you use all the time, or are you trying to rotate them? Is there a benefit to people who’ve had experience with previous technology, or is it better to get people who are new to it? Explain that process.

Cormier: Yes. So that’s a great question, and of course, the answer is both, right? Anytime we’re doing a new product, we will always go back to our past users and benchmark our new product against our old products. So, that’s why it’s important to have some consistency in that beta user base. But having said that, we’re always recruiting new beta users, and let me make this an open invitation right now: if someone wants to be a beta user, give us a call, because we always want to rotate in what I would call a fresh look at what we’re doing, a fresh perspective. 

And certainly as we bring in new technology, we want to see not only how our present user base feels about it, but also how a potential new user base does. So there is a lot of, I would say, art to it, as opposed to science, in making sure that not only do you have a big enough sample size of users, but that you have enough diversity among those users. 

From a user perspective, we target the central vision loss retinal diseases, so that’s mainly macular degeneration. We have a lot of Stargardt’s users; Stargardt’s is essentially juvenile macular degeneration. We’re just wrapping up a study now on diabetic retinopathy, again another central vision loss. The other thing we target is not only those central vision loss diseases, but certain visual acuities. So our sweet spot is between 20/70 and 20/400. That’s not to say we don’t help people on the fringe of that, but for the most part our biggest success, the most efficacy we see, is in that 20/70 to 20/400 range. 

They go to all different types of venues. We have grandmothers watching their granddaughters play lacrosse. We have one of our fellows going to the casino with these. Of course they use them in their home for TV watching and reading, like many of the other devices, but the sheer breadth of what Eyedaptic covers is, I think, one of our key strengths. So, we look for people who want to cover a wide range of use cases and who have those visual acuities and those disease states. 

Roberts: When it comes to activities of daily living, there are certain activities that are universal. Everyone has to get dressed, get up, take care of themselves. But then people have unique needs, whether vocational, occupational, or social. How do you separate them? How do you prioritize some versus others? 

Cormier: We want the diversity of the testing, so we try to recruit beta users that sometimes do what I would call very strange things with our product. Because that gets at the view that says, hey, there’s a different way to use this that maybe you don’t see all the time. But we always do go back to, you know, just basic sampling theory and ask, how many users are we sampling? What are the majority of the users saying? Let’s use that as our North Star. And then we try to work in these, let’s call them more fringe features, on a more gradual basis, and make sure we have some testing over time as opposed to just a bunch of input at the beginning. 

Roberts: So I was listening to a podcast by Malcolm Gladwell, and he was interviewing a bunch of social scientists and medical scientists who do clinical trials. And he said, imagine you had a magic wand and there were no barriers, whether legal, ethical, or medical, to what you could test; you could test whatever you wanted to get the best information. What’s the perfect test for Eyedaptic?

Cormier: So I think the perfect test, and again it goes back to that adaptability, is you want the perfect test to be able to comprehend many different people’s inputs. The way I always say it is, the retina is like a thumbprint. Every retina is different, every vision impairment is different. How do you test the range of how all of those could interact with the device for that person? I’m not sure that one test can do that. So what I would love to do is have an infinite number of beta users test infinitely fast, so I can get lots of good data to make our decisions more quickly. 

Roberts: What we’re doing here at Lighthouse Guild, and we hope to do this as quickly as we can, is to try to be, as you are, prescriptive about technology, on the basis of: what’s the cause of the vision loss? What’s the degree of vision loss? So that someone could follow a menu that says, if you’re diabetic and your vision is 20/80, you should be using this. If you have macular degeneration and your vision is 20/400, you should probably be using this. If you have glaucoma and you have a field loss, you should probably be trying this. 
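To make that kind of menu concrete, here is a minimal sketch of a prescriptive lookup, assuming a simple mapping from diagnosis and Snellen acuity to a device category. The conditions, cutoffs, and recommendations below are illustrative placeholders, not actual Lighthouse Guild or Eyedaptic guidance.

```python
# Hypothetical sketch of a "prescriptive menu" mapping diagnosis and visual
# acuity to a device category. All rules here are invented for illustration.

def parse_acuity(snellen: str) -> float:
    """Convert a Snellen fraction like '20/80' to its decimal ratio."""
    numerator, denominator = snellen.split("/")
    return float(numerator) / float(denominator)

def recommend(condition: str, acuity: str) -> str:
    """Return an illustrative device recommendation for a diagnosis and acuity."""
    ratio = parse_acuity(acuity)
    if condition == "diabetic retinopathy" and ratio >= parse_acuity("20/100"):
        return "magnification-focused wearable"
    if condition == "macular degeneration" and ratio <= parse_acuity("20/200"):
        return "high-magnification device with contrast enhancement"
    if condition == "glaucoma":
        return "field-expansion device"
    return "refer for an individual low-vision evaluation"

print(recommend("diabetic retinopathy", "20/80"))   # magnification-focused wearable
print(recommend("macular degeneration", "20/400"))  # high-magnification device
```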

Cormier: Yes, we do try to be prescriptive as well. So when we go into a practice, and as I think you know, we work heavily with the eye care practices, we give them pretty clear guidelines on who to target. Those guidelines are based not only on our clinical studies, because we always do run clinical studies even though they’re not required, but also on our user deployments and a lot of, you know, field experience. Having said that, just a quick note: we are going to be publishing our findings on diabetic retinopathy next month. So we are always doing this. But to your point, we’d love to be really prescriptive, but there is a certain amount of flexibility that needs to happen. You don’t know how this scotoma has developed, you don’t know the placement of that scotoma. And many times the eye care practice doesn’t even really have that data readily available. 

So, that’s why I think it all comes back to a system that’s adaptable to that person, their habits and environment, because I don’t think there’s any perfect prescription. I think you can create guidelines that are incredibly helpful, but in the end, the flexibility needs to be built into the device. 

Roberts: And therein lies the challenge. Everyone has different needs. There is no one-size-fits-all. And that’s why, for these devices, adaptability is so important. Someone with a lot of expertise in that area is Dr. J.R. Rizzo, a professor and researcher of medicine and engineering at NYU and a medical doctor. He was born with a rare genetic condition called choroideremia, which restricts his peripheral vision. Throughout his life, J.R. has been interested in assistive technology for people who are blind or visually impaired. Among his many projects, he’s working on a device simply referred to as the Backpack, which uses sophisticated navigation technology to help the user find their way through everyday spaces.

Rizzo: About 20 years ago, when I was in medical school, I really started to think more about my difficulties with functional tasks, whether it was shopping or driving. I gave up my license at a very young age; I only drove for a few years. There were a lot of unique aspects and challenges to my life, and I’ve always been a bit of a biology junkie; I always excelled in the sciences. You know, I was so struck by the fact that we have so many biological species that have no sight, yet they survive, and survive incredibly well. You could take the star-nosed mole, for example, or you could talk about some of the bat species, which, depending on size, have very, very poor vision, yet they’re able to navigate and fly and fish and do all sorts of amazing activities, or what would be sort of functional tasks for them. 

And so I started to think a little bit more about what we would call assistive technologies and what would be more approachable and discreet. And I was wearing a backpack every day, and, you know, I thought about trying to build in and integrate other sensory inputs that we may not have innately, in order to augment our existing capabilities. And so, you can think of using ultrasound sensors or additional camera sensors, and using these distance-and-ranging technologies to expand your field of view. 

And then, as the machine vision world, or computer vision world, continued to evolve, could we develop different visual analytics to extract facial details and then feed that information back to the end user? Just as a bat may use a sonar system, using sound waves for distance and ranging, to benefit from an understanding of the local environment, could we do something similar but bake that into some sort of wearable? And a backpack seemed like a very approachable embodiment. 

So, we started to have these thoughts, started to think about different biological species, and then started to get busy with licensing exams. I was also navigating some vision loss personally. I drew some sketches and started to think about what sort of technologies we could use. And then I went into internship and residency, and things sort of slowed down a little bit. But when I got into residency and became a little bit more situated and comfortable with all of the clinical activities that were unfolding, I said, “Let me think more deeply about what we had talked about.” And I started to have some great conversations with the technology transfer office and started to understand a little bit more about medical innovation, and that’s sort of where the rubber met the road and we really started to dig in and develop actual prototypes. 

So, you know, people often talk about innovation as having different aspects: whether you have this desirability concept, which is really important; whether or not it can be viable, and the economics involved there; and then whether or not there’s a feasibility aspect to it, which is more on the technical side. And so, we really had to think about all of those aspects, and it’s been exciting to talk about that. But, you know, on the desirability factor we had to do a lot of homework and spend more time talking to others with lived experience and benefiting from their critical feedback. 

That is to say, earlier on I sort of thought a 10-pound backpack was fine, and I was so surprised when we put a 10-pound backpack on some colleagues and some advocates in this space and they offered some fairly critical feedback: it was way too heavy and they would never wear it. And, you know, here I am saying, well, what if this backpack gave you a third set of eyes, or was able to expand your field of view, or provided vital safety? And it didn’t really matter. They were like, I mean, it’s a non-starter. If it’s so uncomfortable, I’m not going to be able to wear it for more than 10 minutes. So my understanding of weight and size was radically transformed by spending more time with people with lived experience. 

Roberts: J.R. touches on an extremely important point: no matter how big your idea or great your technology is, it has to be right for the user. In my conversation with Jay Cormier, we talked more about that challenge. 

Cormier: I think the way we’ve always looked at it is the right way, which is you put the user, the end user, front and center, and they’re really your guide, if you will. And we’ve always done that. Even in the beginning, when we start development of a project, we do that in conjunction with our users. So up front, when we start a new technology development, or even a new feature, that’s very much based on user input. Then, of course, what we do during the development process is make sure we incorporate our users into the testing of that requirement as well, because even though you listened and heard what they wanted, you still have to monitor and make sure you implemented it correctly.

Roberts: And so now you’re constantly developing the next thing and the next thing.

Cormier: Yes. Always. Now, I think where we’re at now is where we’ve been trying to get for quite a few years, which is that the hardware side of things is pretty well baked. The users really like the hardware we use. They love the functionality, the light weight, the comfort. So then the question becomes this, in my mind: how do you make that product ever more impactful on the vision enhancement, but also more impactful from a feature standpoint? But, and here’s the big but, while making sure it’s not more complex and it’s still simple and easy to use. And that’s where I think Eyedaptic sets itself apart, because this is where our AI and adaptive algorithms start to play in. We are able to adapt to the person’s vision, habits, or environment without making it more complex for them.

Roberts: So what it sounds like to me is that in the early development you were spending a lot of time on the hardware, on the device, and as better components came out, you were able to make it smaller, lighter, et cetera. And now it sounds to me as if you’re basically pretty happy with the hardware, and now it’s time to use software, as you say, the AI and the other software features, to make it more robust and productive. 

Cormier: Absolutely. And that was a core belief five years ago when we started this. We felt that the hardware, like most technology hardware, at some point becomes pretty well baked and starts to become commoditized. And therefore, all the differentiation is in the feature set, which is the software. So we anticipated that. In fact, earlier this year we got our first AI patent granted. We filed that patent five years ago, before AI was cool, right? However, the hardware did take longer than we expected. And yes, we’re very happy with the hardware now. We’re always going to take advantage of the best hardware out there. But we anticipated that the software, the algorithms, the feature set, that user interface, would become the true differentiator. And I think that’s where we are today. We’re finally realizing that vision, if you will, and we’re well prepared for that. And that’s why we’re a leader in the market from a technology perspective.

Roberts: So, help me understand. On the innovation side, how much of this is developed in your labs versus how much do you borrow? Because it seems to me, particularly on the hardware side, that you’re waiting for others to come up with smaller, lighter, more powerful, faster chips. Then, once they’ve done that, you can make your hardware better. You don’t have the resources to be developing faster, smaller chips yourself.

Cormier: And by the way, you don’t want to be, right? Because the people that are developing chips, and my background, as you know, is in semiconductors, are spending billions of dollars. There’s no way, I think, to win that battle. So the best thing to do is take advantage of it. Now, to say that we wait for the right hardware is maybe a bit of an overstatement, because we do work very closely with our vendors. There’s never been a vendor that we haven’t had to work with hand in hand to make some improvements for this market. So I think that tight coupling is also critical. But fundamentally, you’re right. We don’t want to be spending that kind of money. I think anyone that’s doing their own hardware is kind of setting themselves up for failure.

Roberts: Does that apply also to software? Is software developed mostly in house?

Cormier: Yeah, absolutely. I would think of it this way, Dr. Cal: we have a modular software approach. The core of our software is what I call our VIP, or video imaging processing, core. This is where all our proprietary video processing, image enhancement, and AI algorithms sit. That, if you will, is invisible to people, but it’s also what we protect tightly, and it’s done completely in house. And then around that core, think of it as a user interface wrapper. So the UI we don’t develop in house, but we specify it in house, and we have a dedicated contractor and consulting relationship, who’s also a technology advisor for us and an expert on the UI side of things for mobile applications, and then we work hand in hand with them to implement that very tight coupling.
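As a rough illustration of the modular split Cormier describes, here is a minimal sketch in which a proprietary processing core sits behind a small interface and the UI layer is written against that interface alone. The class and method names are hypothetical; Eyedaptic’s actual architecture is not public.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the modular approach described above: a proprietary
# video-processing core hidden behind a small, stable interface, with the UI
# wrapper built separately against that interface. Names are illustrative only.

class ImagePipeline(ABC):
    """The only surface the UI layer sees; the core's internals stay private."""
    @abstractmethod
    def enhance(self, frame: bytes) -> bytes: ...

class VIPCore(ImagePipeline):
    """Stand-in for the in-house 'video imaging processing' core."""
    def enhance(self, frame: bytes) -> bytes:
        # Proprietary enhancement and AI algorithms would run here.
        return frame

class UserInterface:
    """UI wrapper: specified in house, implemented by a partner against the interface."""
    def __init__(self, pipeline: ImagePipeline):
        self.pipeline = pipeline

    def show(self, frame: bytes) -> None:
        enhanced = self.pipeline.enhance(frame)
        print(f"displaying {len(enhanced)} bytes of enhanced video")

UserInterface(VIPCore()).show(b"\x00" * 640 * 480)
```

Keeping the UI on the far side of a narrow interface like this is what lets a contractor build the wrapper while the enhancement algorithms stay protected in house.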

Roberts: So, getting back a little bit to the theme we were talking about earlier: as a new company, and when you think about other new companies coming out with great innovations like yours, do you find that they come out with new models faster early in the history of the company, and then the rate of new releases becomes less frequent over time? Or is it just the opposite, that in the beginning you’re putting them out slowly, and then as you get bigger and more advanced, you find that new models are coming out faster and faster?

Cormier: So I think it’s a nonlinear curve, meaning that in the beginning, while you’re learning faster, the goal is to iterate faster along with that learning. So certainly we were iterating faster at the front end of the company. But then what happens is you get to know that market, and maybe you settle in to a certain extent. But this is where, and again this is what Eyedaptic’s striving for, now that we have those years of knowledge, we can pick up the pace, with new features and new products coming out just as quickly as when we were at the beginning of the company.

So I think it’s, you know, maybe it’s a normal distribution or whatever, where in the middle you’re maybe slowing down a bit, but at the tails you’re going much faster. So obviously, until you have lots of user feedback, you never know for sure, right? Again, we try to de-risk that by having lots of beta user feedback before release. But at some point, you have to release the product, and so, you know, when is good enough good enough? 

One thing that actually worked almost the opposite of what you described is when we released our Eye5. You know, we were pretty confident in the product. We learned a lot from the Eye3 and the Eye4, which had bracketed the market. But much to our surprise, that’s when the user demand in essence collapsed inward, meaning that users of that premier product said, oh, I like the Eye5 better, it’s simpler to use, and the Eye4 users said, oh, I like the Eye5 better because it has better vision-enhancing qualities. And, interestingly enough, the Eye5, which was kind of the product in the middle, drives most of the volume, and that was not something we expected. We expected it to be more broad than that, but that’s where you always get a surprise. That was, I would say, a happy surprise as opposed to a failure. But, you know, we knew we needed that Eye5. We knew that mid-range product was important. We probably weren’t fully prepared to see how much it swamped out our other products.

Roberts: Now, some of your users, regrettably, have had further deterioration of their vision over the years. How does that impact your development? How does the same person experience the technology through their journey with vision loss?

Cormier: It does encourage us to look at the continuum of vision loss, or vision degradation, where some features are really much more useful for people earlier in their journey, but as they get later on, maybe you have to bring in a different feature set that is more useful to someone who is more visually afflicted. And that’s exactly what we’re working on now. That comes back to our core belief in adaptation. You don’t want to make people go out and buy a new product because their vision deteriorates. You want the present product to adapt and become more useful to them.

Roberts: So you can see a future in which there is a menu of product offerings adapted to what a person’s current vision is. And then from experience and from testing, you would know that people who have 20/80 vision benefit from this feature, those who have 20/400 benefit from that feature, things like that.

Cormier: Yeah, absolutely. That’s correct. And of course the challenge, therefore, is how do you serve that up, if you will, to those users? And the answer is, you know, if someone’s earlier on in their journey, maybe they don’t want all those features. But can you offer someone some sort of upgradable software suite, so that as they need those features, instead of trying to find them somewhere else or in a different product, they can pay a modest subscription fee, for instance, and get their product upgraded? And of course that’s what we laid the groundwork for with the Eye5. What really happens behind the scenes is that it is data-aware and remotely upgradable. So our infrastructure is in place to serve up new features as we go.
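Here is a minimal sketch of what a subscription-gated, remotely upgradable feature suite could look like, assuming the device resolves its feature set from a server-side entitlement record. The tier names, features, and record format are invented for illustration, not Eyedaptic’s actual design.

```python
# Hypothetical sketch of an upgradable feature suite: the device checks a
# remote entitlement record and unlocks features accordingly, so a user's
# product can grow with their needs without new hardware.

ENTITLEMENTS = {  # in practice this would be fetched from a server
    "user-123": {"tier": "basic"},
    "user-456": {"tier": "plus"},
}

FEATURES_BY_TIER = {
    "basic": {"magnification", "contrast"},
    "plus": {"magnification", "contrast", "scotoma_remapping", "auto_adapt"},
}

def enabled_features(user_id: str) -> set[str]:
    """Resolve which features the device should unlock for this user."""
    tier = ENTITLEMENTS.get(user_id, {}).get("tier", "basic")
    return FEATURES_BY_TIER[tier]

print(enabled_features("user-123"))  # basic feature set
print(enabled_features("user-456"))  # upgraded set, no new hardware needed
```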

Roberts: So now, you add a new feature. When do you know that that feature is good enough? Because there’s always the question of, I could wait another month or six months to make it better, or I could get something out now that’s going to help somebody. 

Cormier: Yes. And certainly the magnitude of that feature makes a difference, meaning if it’s a small, incremental change, then the amount of testing that goes into it may not be as much as if we’re putting out a new product or a brand new feature set. We, of course, are fortunate to be able to upgrade our product over the air. So anytime we have a new feature, it gets pushed out to our beta users, and we do not introduce that feature into the market until we have sufficient feedback.
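A release gate like the one Cormier describes could be sketched as follows: a feature ships over the air to beta users first, and is promoted to the wider market only once enough feedback has accumulated. The thresholds and fields here are assumptions for illustration, not Eyedaptic’s actual process.

```python
# Hypothetical sketch of a beta-gated release: promote a feature to market
# only after enough beta feedback has come in. Thresholds are illustrative.

from dataclasses import dataclass, field

@dataclass
class FeatureRollout:
    name: str
    beta_feedback: list[bool] = field(default_factory=list)  # True = positive
    min_responses: int = 25
    min_approval: float = 0.8

    def record_feedback(self, positive: bool) -> None:
        self.beta_feedback.append(positive)

    def ready_for_market(self) -> bool:
        if len(self.beta_feedback) < self.min_responses:
            return False  # not enough beta feedback yet
        approval = sum(self.beta_feedback) / len(self.beta_feedback)
        return approval >= self.min_approval

rollout = FeatureRollout("auto_contrast")
for vote in [True] * 24 + [False]:
    rollout.record_feedback(vote)
print(rollout.ready_for_market())  # True: 25 responses, 96% approval
```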

Roberts: And that’s the challenge: understanding when something is good enough. As Jay says, there’s really no concrete answer to that. And for someone like J.R. Rizzo, who’s developing an extremely sophisticated device, there are so many moving parts to integrate.

Rizzo: We have an embedded system, sort of a mini computer, in the backpack, right? We had to pick the right form factor. And then we have a set of feedback devices. In order to communicate back to the end user, we need to think about closing that loop. We need to think about both audio feedback, so using sound, and also touch feedback, using vibration or haptic touch perception, if you will. And so we started to think a little bit about how we could integrate that into a backpack.

And then as we got started, with the cameras being sort of our foundational sensing element, we’ve expanded, so we have more sensing elements. We include an inertial measurement unit, which is probably out of scope for our conversation. We also have microphones integrated with our cameras, and we also have a GNSS receiver, which is what a cell phone uses for localization. So we have multiple sensing modalities now. You can think of it as: we have input, we have processing, and we have output, right? Sort of similar to your cell phone: it has input, it offers output, and it has to have some core processing capability as well. So those are the key aspects from a more technical standpoint. 
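As a rough sketch of the input, processing, and output stages J.R. describes, here is a minimal sense-process-feedback loop, with simulated sensor readings standing in for the real cameras, IMU, microphones, and GNSS receiver. This is an assumption-laden illustration, not the team’s actual code.

```python
# Hypothetical sketch of the backpack's input -> processing -> output loop.
# Simulated readings stand in for real camera/IMU/microphone/GNSS sensors.

import random

def read_sensors() -> dict:
    """Input stage: cameras, IMU, microphones, GNSS (simulated here)."""
    return {
        "camera_obstacle_m": random.uniform(0.3, 5.0),  # nearest obstacle distance
        "gnss_fix": True,
    }

def process(readings: dict) -> str | None:
    """Processing stage: decide whether the user needs a warning."""
    if readings["camera_obstacle_m"] < 1.0:
        return "obstacle ahead"
    return None

def give_feedback(message: str) -> None:
    """Output stage: close the loop with audio and haptic cues."""
    print(f"audio: {message}")
    print("haptics: short vibration pulse")

for _ in range(3):  # one iteration per sensing cycle
    alert = process(read_sensors())
    if alert:
        give_feedback(alert)
```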

And then we needed to think about what sort of AI, artificial intelligence, we were using in order to extract those relevant spatial features that I mentioned before. And then, in terms of the viability, this has been one that we’ve really had to think a lot about. It’s difficult in the accessibility space, because usually the barrier to entry is very high. In the navigation space, which is where we’re spending most of our energy for our initial service, we’re going to use this backpack to help support navigation and wayfinding, and then we’re going to build in other services. That’s how we’re evolving as an academic group. So our main focus is on navigation support, although we do a lot of other things these days, and we’re pursuing that most actively in this commercialization push. 

But we’re bringing in those other services, as I mentioned, and it’s been super exciting, really a lot of fun. And most recently, on that economic aspect, again about that viability, we’ve been thinking about how we pair what we’re doing with other industries. So instead of needing infrastructural assets like beacons, we require videos of the environment in order to build spatial understanding, and that helps us support navigation and wayfinding. 

And so what we’ve been thinking about most recently is: how can we partner with the real estate industry, with the inspection industry, with the appraisal industry, and provide value through our backpacks? Because we can help automate how they’re acquiring video of their houses of interest, right, or their residences or their commercial spaces. As they go in and inspect, what are they looking at? Electrical panels, boilers, utilities, right? Wouldn’t it be so cool if an inspector had on one of our backpacks with multiple cameras and we could pick up all of that information? They go back home, we automatically create a series of shorter video clips for them, with some editing that’s facilitated, and then we can also generate a basic report for them about what happened in that environment, right? 

Now, what’s really cool about that is we can provide value to that inspector, but at the same time, we’re getting the video of the space we need in order to provide accessible navigation. And so that’s what’s really cool about how we thread the needle and close the loop on how we’re starting to solve some of these economic viability issues, where we think some industries will help support other industries, and the ecosystem evolves and grows. 

Roberts: The point J.R. brings up about his device being useful for real estate purposes, which in turn is useful for his own product, is one of the main themes that comes up repeatedly on this podcast. Sometimes technology intended for one purpose is also beneficial in a totally different, unexpected way. And that’s the beauty of the technologies our guests talked about today. The greatest products don’t happen in a vacuum; they happen in a feedback loop. It’s a continuous circle of ideation, feedback, and response. This episode is an excellent reminder that if you think you have the next big idea, don’t keep it to yourself. Tell people what you’re thinking. Bring them into the process. Working together, you could create something that will change the world. 

Did this episode spark ideas for you? Let us know at podcasts@lighthouseguild.org. And if you liked this episode, please subscribe, rate, and review us on Apple Podcasts or wherever you get your podcasts. 

I’m Dr. Cal Roberts. On Tech & Vision is produced by Lighthouse Guild. For more information, visit www.lighthouseguild.org. On Tech & Vision with Dr. Cal Roberts is produced at Lighthouse Guild by my colleagues Jaine Schmidt and Annemarie O’Hearn. My thanks to Podfly for their production support.

Join our Mission

Lighthouse Guild is dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals.