Guest Speakers

Lighthouse Guild Awards Lectures

Thursday, November 17, 2022

Presentations by

Haben Girma
2022 Dr. Alan R. Morse Lecture in Advocacy Recipient

Javier Pita Lozano
2022 Pisart Award in Technological Innovation Recipient

Sheila Nirenberg, PhD
2022 Bressler Prize in Vision Science Recipient

The Morse Lecture in Advocacy honors those who have demonstrated leadership, raised awareness of low vision, addressed barriers, and are working to make a world where no person is limited by their sensory capacity.

Noted human rights lawyer and disability justice advocate Haben Girma is the first recipient of The Dr. Alan R. Morse Lecture in Advocacy for People with Vision Impairment. The first Deafblind person to graduate from Harvard Law School, Haben Girma believes that disability is an opportunity for innovation, and she teaches organizations the importance of choosing inclusion. She speaks frequently about accessibility, diversity and leadership and is the author of the memoir Haben: The Deafblind Woman Who Conquered Harvard Law, which has been featured in The New York Times, Oprah Magazine and the TODAY Show.

The Pisart Award in Technological Innovation recognizes an individual, group of individuals or organization that have made significant contributions to the field of vision science. The award also recognizes those whose technological innovations improve the lives of people with vision loss.

Javier Pita Lozano, CEO of NaviLens, is the recipient of the 2022 Pisart Award for his work with NaviLens. NaviLens is a technology that marries labeling information with orientation guidance to increase the accessibility of public spaces and transportation infrastructure for people with vision impairment.

The Bressler Prize in Vision Science is awarded to a person whose research has resulted in translation of medical or scientific knowledge into significant advancements in the treatment or rehabilitation of people with vision loss.

Sheila Nirenberg, PhD, is the recipient of the 2022 Bressler Prize for her outstanding advances in vision science, including deciphering the retina’s neural code which is the key to restoring meaningful vision in people who are blind from retinal degeneration. Dr. Nirenberg is the Nanette Laitman Professor in Neurology and Neuroscience and a professor of Computational Neuroscience at the Institute for Computational Biomedicine, Weill Medical College of Cornell University.

Lecture Transcription

Dr. Roberts: … for those who work on issues related to vision impairment.  Developers. Entrepreneurs.  Loud Talkers. Academics and physicians. So welcome to everybody. 

Lighthouse Guild has a long-recognized excellence in vision research. And our awards and lectures have honored excellence in translating research into treatments and rehabilitation. The efforts of the award recipients directly contribute to Lighthouse Guild’s mission of providing services that inspire people who are visually impaired to attain their goals. 

As our mission has evolved, so have our recognitions. Now, not only do our awards and lectures honor excellence in translating research into treatment and rehabilitation, they include technological innovation and leadership and advocacy for people with vision loss.

So today we have three extraordinary pioneers in their fields who will present work that is changing the lives of people who are blind and visually impaired. The first presentation is by Haben Girma, recipient of the Doctor Alan Morse Lecture in Advocacy. This lecture is named for Doctor Alan R. Morse, Lighthouse Guild’s President Emeritus, for his years of dedicated leadership of Lighthouse Guild and his tireless advocacy for people who have vision loss.

The lecture was established this year, 2022, to acknowledge individuals who through leadership, raising awareness and addressing barriers are working to make a world where no person is limited by their sensory capacity. We are thrilled to have Haben Girma join us today. Miss Girma is a noted human rights lawyer and disability justice advocate. She’s also the first deafblind graduate from Harvard Law School and author. And a frequent speaker on accessibility, diversity, and leadership. It is my pleasure to introduce the 2022 Doctor Alan Morse Lecture in Advocacy recipient, Haben Girma.

Haben Girma:  Good afternoon. My name is Haben Girma. I’m honored to be here. It is such a rewarding experience to have my advocacy recognized, especially by Lighthouse Guild. When I was a high school student, I was trying to figure out how I would go to college. And what I would do with my life after college. I’m deafblind. And even today there are very few role models for deafblind people. 

Lighthouse Guild offered me a scholarship when I was in high school. And that helped me when I was trying to figure out how to get through college. And I have stories about college that I’m going to be sharing. But first I want to talk about communication. The deafblind community is very, very diverse. Lots of different ways of engaging with the world. Some people voice, some people sign. Some people use computer assisted communication. I’m doing a combination, so I’m voicing. But to know what is being said I’m reading from my braille computer. And I’ll hold it up.

I’m holding up a device that’s called Braille Note. There’s a Braille Note here as well as other braille displays that I was checking out earlier. So, as people speak, like when we were having the introductions, what’s being said is typed on the computer, on an external computer, and the letters pop up in braille. I feel it. So. During the introduction and during my speech, I’m getting feedback on what’s happening through my braille computer. The interpreters let me know when people are smiling, laughing, falling asleep. They’re watching you. 

This is just one of many different communication techniques we come up with. Deafblind people want to be connected with the world. And deafblindness doesn’t need to be the barrier. If we come up with solutions, whether it’s keyboards or other solutions, we can find ways to engage with our community. 

There are still so many people who think my life is limited because of deafblindness. When I was growing up, people were telling my parents things like, she’ll never go to school. She’ll never get a job. My parents are from Eritrea and Ethiopia. When they came to the US they didn’t know about braille or sign language, or even Helen Keller. There were a lot of unknowns.

We were in the San Francisco Bay area, which happened to be one of the hearts of the disability rights movement, and there were teachers to explain braille and sign language. Skiing. River rafting. I learned there’s an alternative technique for just about every strategy. There was a program to teach me surfing.  Another program to teach me how to use computers. Introduction to mathematics and sciences. And I attended mainstream public schools. 

So, I was taught early on that deafblindness didn’t have to be the barrier. I worked hard and graduated valedictorian from high school. I went to college at Lewis and Clark in Portland, OR. And lots of students around me were getting jobs in the summer. Summer jobs are a way to get a sense of what career you might want to pursue after college. No one wanted to hire a deafblind person. I really wanted to get a job.

So I networked, talked with friends, and I learned from a friend. He told me, I know where you can get a job. Alaska. I really did want a job. So I went up to Alaska. And he was right. There were lots of job openings in Juneau, AK. Tourists come up to see the eagles, the whales, the glaciers. Employers were impressed with my resume. With my volunteer experiences and my grades in school. So they called me in for an interview. But, as soon as they realized I’m disabled, they came up with all kinds of excuses not to hire me.

These were tactile jobs, jobs that do not require sight. Washing dishes in restaurants. Folding laundry at hotels. When I was a kid, I told my parents blind people can’t do dishes. They didn’t believe me. So, I had lots of experience in this field. But employers still assumed that you have to be sighted to do dishes and they would not hire me. If I couldn’t even get a job as a dishwasher, what can I imagine for the rest of my life? How would I ever get a job after college? There were a lot of unknowns. 

Through the disability rights movement, I realized the problem here is not deafblindness. I absolutely could wash dishes and do laundry. My disability was not the problem. It was ableism that was the problem. A B L E I S M. Ableism is the system of practices and beliefs that treat disabled people as inferior to non disabled people. It’s the assumptions hiring managers make. It’s the assumptions in schools that disabled kids can’t learn. And also it’s assumptions in our healthcare system that disabled people’s lives are not worth living.

There was a study that showed that 80% of people in the medical field assumed that disabled people’s lives are worse than our lives actually are. That’s ableism. And it’s still so deeply embedded that a lot of people don’t notice it. And even disabled people, we internalize ableism because it’s so widespread. Once I started to realize that ableism exists and that was the real problem, that felt so liberating. I started to imagine more possibilities for myself.

If someone didn’t want to be my friend, I realized ableism was a factor there. If someone didn’t want to hire me, I realized ableism caused them to think blind people can’t work in all kinds of fields where in fact we can work, and many of us have been working for many years. So that explains ableism. I also want to explain my use of the word disabled.

Some people use other terms, like special needs, differently abled. Euphemisms tiptoeing around the concept actually perpetuate discrimination. We can talk about other identities. We can say woman. But when we say disability, there’s still so much discomfort. The word isn’t the problem. It’s the shame and the discomfort that’s the problem, and we can push past that shame.

A lot of activists nowadays are reclaiming the word and using identity first language rather than person first language. Person first is more cumbersome and takes longer rather than just leading with identity. For example, it takes longer to say “person who is blind” compared to “blind person”. So some activists are reclaiming the word disabled and using identity first language, which is what I’m doing.

So, back in Alaska, I could not get that dishwashing job. And thank goodness. I kept trying and eventually through friends I found a hiring manager who asked me how I would do the job. I explained my alternative techniques. She listened, and then she hired me to work the front desk of her small gym in Juneau, Alaska. That summer, I learned a lot about gym equipment. 

One day a woman came to the front desk and said a treadmill isn’t working. I followed her to the treadmill. And I felt the machine from top to bottom. Near the bottom there was a switch. I flicked the switch and the machine worked right. She told me, Oh my goodness, I didn’t see that switch. I told her I didn’t see it either.

Sometimes tactile techniques meet visual techniques. If you can’t read with your eyes, you can learn to read with your fingers. If you can’t see a button, you could find a button with your fingers. And we need employers and teachers and all of us in our communities, to recognize that alternative techniques are equal in value to mainstream techniques. Let’s be aware of ableism. And learn to resist ableism.

Since then, I’ve expanded my imagination and possibilities of what I could do. I still wasn’t sure what I would do after college. But I knew that there were people out there who would resist ableism and would actually listen to me when I explained how I would do a job. Lewis & Clark had a fantastic disability resource program. They provided me all my books in braille, so I had access to all my textbooks. The exams were in braille. When I arrived, they had never had a braille reader before. But they embraced the opportunity to learn about braille, and the reading specialist took time to learn how to convert print to braille, and they bought a braille embosser. They got braille translation software.

So the day I started, she started producing braille so that I could have access to all my textbooks and course materials. There was just one problem. The college cafeteria menu was only in print. Sighted students could browse the menu and then go to their station of choice. As a blind vegetarian I needed access to food information. It was really hard to know which station to go to when I didn’t have access to the menus. I went to the manager and I explained, I can’t read the print menu, but if you provide the menu in braille, or post it online or e-mail it to me, I have computers that allow me to read accessible emails and websites. The manager said, we’re very busy. We have over 1000 students. We don’t have time to do special things for students with special needs.

Just to be clear, eating is not a special need. There’s this myth that non disabled people are independent and disabled people are dependent. It’s not true. We’re all interdependent. Many of you like drinking coffee. Very few of you grow your own coffee beans. You depend on other people to harvest the beans. And we need to be honest about the fact that every single one of us is interdependent. We all need each other. We all need community.

The cafeteria manager didn’t seem to understand this, and I was stumped with not having access to the menu. This went on for months. There were about six different stations. I’d go to one at random. Wait in line, get a plate, find a table, try the food. There were some unpleasant surprises. I told myself, just be grateful. Millions of people around the world struggle for food. Who was I to complain? My mother, when she was my age, was a refugee in Sudan. Who was I to complain? Maybe this was a lesson that disabled people should get used to inferior services.

I talked to friends. I did research online. And then I went back to the cafeteria manager and explained, the Americans with Disabilities Act prohibits discrimination against students with disabilities. If you don’t provide access to the menu, I’m going to take legal action. I had no idea how to do that. I was 19. I couldn’t afford a lawyer. Now I know there are nonprofit legal centers helping students with disabilities, but back then I didn’t know that. All I knew was I had to try. I had to do something.

The next day, the manager apologized and promised to make the menu accessible. They started emailing the menus on time in accessible formats. So, I could read them on my braille computer, and if the e-mail said station 4, cheese tortellini, I could use my white cane and navigate all the way to station 4. Life became delicious.

The next year, there was a new blind student at the college. He had immediate access to the menu. That taught me that when I advocate, even on seemingly small things like menus, it helps other students.

There are a lot of small barriers in our community that we quietly tolerated. Barriers impacting women, people of color, disabled people. When we take the time to address a small barrier, we build up the skills to master the larger obstacles. I wanted to build up my skills. And law school would be a way to build up those skills. 

I reached out to law schools. Harvard Law told me they’d never had a deafblind student before. I told them I’d never been to Harvard Law School before. We didn’t have the answers of how exactly I would take the classes and get access to the internships. But we engaged in an interactive process to find the solutions. Try one thing; if it didn’t work, try another thing. That was not always the case at Harvard.

Helen Keller was brilliant. She really wanted to go to Harvard. But they said no. Back then they only admitted men. Her disability didn’t hold her back. Her gender didn’t hold her back. It was the community that chose to exclude women. Over time, the community at Harvard changed and opened its doors to women, people of color, and disabled people. I was the first deafblind student at Harvard Law School. Because Harvard had finally changed enough and resisted ableism enough to make that possible. 

A lot of people ask me what is the hardest thing at Harvard Law School. The hardest thing was ableism. It’s still happening. I attended a training workshop during my first semester. And I was standing near the center of the room. My braille computer was on the table. The interpreter was on the other side of the table, typing descriptions of what was going on. And there were attorneys from the Boston area there to talk to students about job opportunities. 

I asked for one of the attorneys to come over. He came over. He would not talk to me. He only spoke to the interpreter. He told her, “Wow, what a beautiful dog. Does the dog go to class with her? That must be a smart dog.” And I spoke up and I said, I’m deafblind. I’m reading everything you’re saying in braille. And the interpreter’s typing it. I know this can be a little confusing. Would you like to try typing? Maybe it’ll make sense if you try typing.

And again, he would not talk to me. He only spoke to the interpreter. “I’ve enjoyed watching you two. Tell her she’s very inspiring.” And he walked away. He was not inspired to offer me a job.

A lot of non disabled people, when they’re feeling nervous and uncomfortable around disabled people, they call us inspiring. And a lot of disabled people are uncomfortable with the term because it’s so often associated with that awkwardness, discomfort, and pity. I like the word inspiring if it’s used for action. If someone says I’m inspired to make my cafeteria menus accessible, or I’m inspired to increase hiring of disabled people, that is positive inspiration. 

Harvard is still struggling with ableism. Our whole society actually is still struggling with ableism. And that includes disabled people, who struggle with internalized ableism. Ableism is really one of the biggest factors impacting our access to education, healthcare and quality of life. And I’m going to keep working to help address ableism, to educate people to notice it, so more people can join me in the movement of fighting ableism and creating more opportunities for blind and disabled people. Thank you again for having me here and recognizing me with this award. Thank you everyone.

<Picture taken of Dr. Roberts, Haben Girma and Dr. Morse>

Dr. Roberts: In 1981, the Pisart Award was established to recognize an individual, group of individuals or organization that has made significant contributions to the field of vision science. The award has evolved to recognize those whose technological innovations improve the lives of people with vision loss, and it is now called the Pisart Award in Technological Innovation.

Javier Pita Lozano is the recipient of the 2022 Pisart Award. He is the CEO of NaviLens, a solution whose objective is to increase the autonomy, social inclusion and quality of life of people who are visually impaired. The technology marries labeling information with orientation guidance to increase the accessibility of public spaces and transportation infrastructure. The NaviLens system of four-color QR-like codes is inexpensive, easily printed, simple to store information on, and, thanks to special computer vision algorithms for smartphone cameras, easy for a blind user to access with their phone. We have installed NaviLens codes all over the Lighthouse Guild and use them ourselves to great effect. I am pleased to introduce the 2022 Pisart Award in Technological Innovation recipient, Javier Pita Lozano.

Javier Pita Lozano: Thank you so much. I cannot find enough English words to describe the happiness that I have. OK, you are hearing me. I have a strong accent from Europe. I’m from Spain. But I cannot convey how happy I am to be here in the Lighthouse Guild and receiving this award. Hopefully you like the work that we have been doing for the last 10 years now, which I’m going to introduce at this moment.

(slide presentation)

In the world now there are almost 300 million people who are visually impaired. 39 million are blind and almost 250 million have low vision or are partially sighted. So, the challenge is that visually-impaired people are not completely independent in our [31:01]. Like, for example, the first time a user visits this space. One of the reasons is that the signage isn’t accessible for blind people. Fully sighted people use the signs in order to move across a space.

This happens in all kinds of facilities.  It happens in train stations, hotels, public buildings, malls, hospitals.  Everywhere.  So, our idea was, why don’t we put something on the signs and use the camera of the mobile phone to read it?  Our first approach was to use QR codes.  Did you know that the QR code was invented about 28 years ago?  Did you know that?  Me neither.  The problem with the QR code is that it’s not possible to read it from far away.  You need to focus and frame the QR code.

So, we decided to invest in a long process of research and development between our company and the University of Alicante to create a new kind of QR code that could be read by [32:10].  And after five years of research between 2012 and 2017 we created a new kind of QR code that can be read at 40 feet away without a need to focus or be in frame.

This is the description of the NaviLens code, and I have here something tactile that describes how the code is for [32:40], and I’m going to give it out. The code has an external white border, an internal border that is black, and some cells inside the code that use colors instead of black and white like the QR code. Doing that, we can store more information and we can make the cells bigger in order to read from a further distance.
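As a back-of-the-envelope illustration of the point about colors (this is my sketch, not the NaviLens specification; the grid size and color count below are invented for the example, and real codes also spend capacity on error correction), using four colors instead of black and white doubles the information carried by each cell, which is what allows cells to be made bigger, and therefore readable from further away, for the same payload:

```python
import math

def bits_per_cell(num_colors: int) -> float:
    # Each cell takes one of num_colors values, so it carries
    # log2(num_colors) bits of information.
    return math.log2(num_colors)

def grid_capacity_bits(rows: int, cols: int, num_colors: int) -> float:
    # Raw capacity of the cell grid, before any error-correction overhead.
    return rows * cols * bits_per_cell(num_colors)

# A hypothetical 5x5 black-and-white grid: 2 values per cell -> 1 bit each.
bw_bits = grid_capacity_bits(5, 5, 2)

# The same 5x5 grid with 4 colors -> 2 bits each.
color_bits = grid_capacity_bits(5, 5, 4)

print(bw_bits, color_bits)  # 25.0 50.0
```

Equivalently, a four-color code can halve its cell count, and enlarge each cell, while storing the same data as a black-and-white one.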

The unique value is that it’s possible to read a code at a very long distance, at an incredible angle, and very, very quickly. The first step was the validation process. It’s very important for our effort. And we worked with ONCE (Organización Nacional de Ciegos Españoles). And we created this space that was accessible for the blind. Imagine that. A trade show is super difficult because there are a lot of booths. And we made it accessible by printing a code on a piece of paper, having one at each of the booths, and the users were able to navigate in that space.

Indeed, the feedback was “this is amazing,” “this must be everywhere.” And we started working on several use cases. We are working on a lot of different challenges. For example, menu accessibility, which is something I think we need to achieve. But I would like to talk today about the transit use case. Because mobility is key.

If someone navigates in Barcelona (this is a picture of Barcelona) and wants to go from one position to locate a particular bus stop, the user is going to use a GPS app. But the problem is that the GPS is not accurate enough. And the user doesn’t know exactly where the bus stop is because GPS has a [34:56], especially here in New York with very high skyscrapers.

So, in Barcelona we started working on this, adding the NaviLens code on each one of the bus stops. It’s a sticker that you can put on the bus stop. And doing that, adding the sticker on the bus stop, we can solve the last new [35:21] problem: once I get there, I can locate that bus stop. Indeed, if I read the code from here, I open the NaviLens app with my camera; I don’t need to focus on exactly where it is. And automatically I’m going to know exactly where the code is. (NaviLens voice response) We don’t have enough time to get to Barcelona, because this is real-time information.

So, doing this, adding a simple sticker to the bus stop, we can make that bus stop accessible for visually impaired users, so they can locate it and receive all the information about the next departures from that bus stop. And it is one technology that is for everyone, so fully sighted people can use it too, pointing at the code and getting the real-time information, and, very important, in 34 languages.

So, when you go to Barcelona you don’t need to read it in Spanish. So you can read and access information in your own language. And I’m super proud to be here, because with the support of Lighthouse Guild and the MTA this technology is being implemented in this incredible city. This is a picture of the presentation of the NaviLens code at 23rd St in Manhattan. And look, adding the code to the bus stop makes that bus stop possible to locate by [37:20]. That is super difficult. OK, so think about this. It’s a [37:27] that is very thin, and it’s very difficult to locate exactly where the bus stop is. The thing is that technology that wants to create that more [37:38] needs to be easy to implement. Because, if not, we cannot solve the accessibility problems for the visually impaired. You see at the left the original version of the bus stop sign in Manhattan. By the way, congratulations to the people from the Department of Transportation for thinking about NaviLens 30 or 40 years ago and putting an empty space to add the NaviLens code there. OK. And the thing is that it’s possible to make those bus stops accessible as simply as having the sign’s information as data. So it’s a simple solution.

So, indeed, there is a reason the sounds are so clear. It was developed here for this particular use case, because it’s very challenging to detect and to locate a bus stop. But this is universal design. This technology aims to be useful for everybody. Because, how many of the people here speak German? Do you speak German? No, no one. OK, so you cannot understand this sign that is in German. So if you travel through Germany and you see this, you are not able to access the information. You are not blind, but you cannot access the information.

So, adding the NaviLens code, if you read the codes you will be able to access the information. These signs were put up something like three years ago. Let’s take a look. (NaviLens audio: Nine feet away. Help stop the spread of the coronavirus. One: wash your hands regularly for at least 20 seconds.) At these moments you want it to speak very well. So, the thing is, it’s possible to speak German, French, Spanish, Japanese, so the idea is universal design. The same technology that helps blind users read and access one specific piece of information helps everyone.

Imagine this building: how do you solve accessibility in an indoor building, a train station or a subway? Because it’s very difficult. Please think about this when you navigate through a single subway station here. It’s super complicated. So our vision is to use the NaviLens code and to make the signage accessible, adding the code to describe the sign and where the code is. [40:27] We tested the system with twenty blind users and the result was incredible. And one result was very interesting: 94% of the users stated that they had found things with NaviLens that they had not been able to locate before. They discovered the [40:43], for example, and when they navigated through the [40:50], the Lighthouse Guild in Barcelona, they discovered a sofa in one place; they had never found that sofa before, because [41:00]. And it’s super important: 80% of the users were totally in favor of extending NaviLens to more locations, to more subway stations, and 12% declared themselves much in favor.

So it was a success. In Barcelona, it is implemented system-wide: 161 subway stations and 2,600 bus stops. OK, [41:25] implementation. So, if you go to Barcelona, please remember this program and you will see it. And we have implemented this across the world. This has been found in the north of Spain. And NaviLens is implemented across the world: Australia, Japan, Singapore, Europe, and in a lot of places near here: in Newark, NJ, in Philadelphia, in San Antonio, where they’re going to implement it across all the bus stops, in Los Angeles and many other places.

For example, there we have a picture of Paris with the NaviLens code. Here is a picture of the London Underground with the NaviLens codes there. And I love this photo. It’s a photo from Kobe in Japan, with a NaviLens code spot on the floor using tactile [42:19]. And they said yes. Maybe someone who thinks about these things can push the legislation here in the United States toward [42:29] train stations that are super, super, super useful for this.

Now in New York, at this time, in Brooklyn at the street level, the MTA has implemented these [42:43] with the mission of helping blind users to read the signage. Helping these New Yorkers to have access to next train departures in real time, check if service changes happen, and then navigate indoors.

If I read this code, you can see the information, the information that is in this signage, but a blind user cannot. So if I read from this: (NaviLens audio: 12 feet away. Stair entrance to Jay Street-MetroTech station, access to mezzanine for R service, access to A C F service via platform. Elevator to A C, R mezzanine, at NW corner of Jay Street and Willoughby St.)

With that, for example, I can locate where the [43:32] are. (NaviLens audio: 10 feet away. Elevator down to R platform. Train arrivals, uptown: Line R, Forest Hills-71 Av, 2 minutes; Line R, Forest Hills-71 Av.) And it’s connected with the real-time services of the MTA in order to know when the next train is departing, because they are not announced and it’s very challenging. And then we have arrived at the platform. (NaviLens audio: Six feet away. Warning. Watch the platform gap. Manhattan and Queens bound R on the left side of the platform. Manhattan bound A C on the right side of the platform.)

The thing is, it’s very important to achieve a more accessible world for blind users. In the same way that we all see exit signs everywhere, why not see the NaviLens code, to make a more inclusive world for everybody? So thank you so much, and I really, really appreciate receiving this award on behalf of the NaviLens team. Thank you. Thank you so much.

Dr. Roberts: Since 2003, the Bressler Prize has annually recognized an individual whose research has translated medical or scientific knowledge into significant advancements in the treatment or rehabilitation of people with vision loss. The 2022 Bressler Prize recipient is Doctor Sheila Nirenberg, for outstanding advances in applying vision science to the treatment of blindness. Dr. Nirenberg is the Nanette Laitman Professor in Neurology and Neuroscience and a professor of computational neuroscience at the Institute for Computational Biomedicine, Weill Medical College of Cornell University.

She is a recipient of the MacArthur Genius Award. She is also the founder of two biotechnology companies, Bionic Sight LLC and Nirenberg Neuroscience LLC. Her work in deciphering the retina’s neural code, developing a prosthetic to transmit this code, and exploring genetic interventions are key steps to restoring meaningful vision in people who are blind from outer retinal degeneration. We are grateful Doctor Nirenberg could share her life-changing work with us today. It is my pleasure to introduce the 2022 Bressler Prize in Vision Science recipient, Dr. Sheila Nirenberg.

Dr. Sheila Nirenberg: Thank you so much for the award. It’s really a great honor, and a great honor to meet the other speakers, also. I hadn’t heard either of them before, so it was very exciting. So what I thought I’d do today is give a relatively short talk, maybe 20 to 25 minutes, just to tell you the basic story of what we’re doing. And then, you know, we can open up to questions or discussion if you’d like. So, I know it’s a mixed audience of scientists and non scientists, so if I say anything that’s unclear, you know, feel free to ask questions along the way.

OK, so a tiny bit of background to add to what Cal was saying. So, I’m a neuroscientist and my research focuses on what’s called neural coding. So, what we work on is the general question of how the brain takes information from the outside world and encodes it in patterns of electrical pulses and then how it uses those patterns to allow you to do things, to see, to hear, to reach for an object. 

So, for most of my career, I was just doing basic research. I had never done anything clinically oriented. And then about six or seven years ago I started to switch over, to take what we were learning about these patterns of pulses and apply it to medical problems. And what we’ve been focusing on specifically, as you all know, is the development of a new kind of treatment for blindness, blindness caused by retinal degenerative diseases. 

OK, so let me start with an outline of what I’m going to tell you about. I’ll start by reminding you very briefly how a normal retina works, in the context of this idea of neural coding. Then, I’ll run through what happens when a patient gets a retinal degenerative disease, and I’ll tell you about how our treatment works, including showing you some animal data. And finally I’ll give you the first clinical results, the results with our first several patients. So we’re still at the early stages, but I think we have a foot in the door and I really want to show you. 

OK, so let me start in on the retina. So here’s a retina, and I’m not sure if you can see that pointer. Okay, you have an image, a retina and a brain. So, when you look at something like this image here, this image of this baby’s face, it goes into your eye and it lands on your input cells, the photoreceptors. 

Then what happens is the retinal circuitry, the part in the middle, starts to process it. What it does is it performs operations on it, it extracts information from it, and it converts that information into a code. And the code is in the form of these patterns of electrical pulses that get sent up to the brain. But the key thing is that an image ultimately gets converted into this code. And when I say code, I do literally mean code. So this pattern of pulses actually means this baby’s face. So when the brain receives this pattern, it knows that what was out there was a baby’s face. If it got a different pattern, it would know that what was out there was, you know, say, a dog, or another pattern would be a house. Anyway, you get the idea. 

And of course in real time, you know, I mean in real life, it’s changing all the time. It’s a dynamic process; these patterns of pulses are changing all the time because the world you’re looking at is changing all the time, too. So, it’s a pretty complicated thing: patterns of pulses coming out of your eye every millisecond, telling your brain what it is you’re seeing.

So, what happens when a patient gets a retinal degenerative disease? As you know, the photoreceptors start to die. And over time all the cells and the circuits that are connected to them start to die too, but the output cells, the ganglion cells, remain intact. So the ones that are sending the signals to the brain remain intact. But because of all this degeneration, they’re not sending any signals anymore. They’re not getting any inputs, so there aren’t very many, or any, signals to send. So, a person’s brain no longer gets any visual information. That means he or she is blind. 

So, a solution to the problem then would be to build a device that could mimic the actions of this retinal circuitry and drive these ganglion cells, these output cells, so they can go back to their normal job of sending signals to the brain. So this is what we’ve been working on and what our system does. So let me just show you it schematically. It consists of two parts, what we call an encoder and a transducer. The encoder does just what I was saying: it mimics the actions of that retinal circuitry, that front-end circuitry. That is, it takes an image and it converts it into the retina’s code, and this is done through a mathematical transformation that I worked out, which is what we implemented on a chip. 

And then the transducer, as I was saying, sends the code on up to the brain. And the combination of these two things, the encoder and transducer, creates a system that can produce normal retinal output. So a completely blind retina, even one with no photoreceptors at all, can now send out normal signals, signals the brain can understand. 

You can’t see this underneath, but I wanted to tell you a little bit about this transducer. It’s a specific one called an optogenetic transducer. And what it is, is a light-sensitive molecule that, when it receives light, will send out a voltage pulse. OK, so we designed the encoder so it will send out the code in the form of light pulses to activate the transducer. 

So the encoder activates the transducer, and the transducer, because it’s in the ganglion cells, will allow them to send out neural pulses that closely match what the normal retina does. So the question is, how do we know we got this right? How do we know we really have the code and we can produce normal output? 

I’m just going to show you two basic experiments, taken from our preclinical data, our animal data, just to give you a feel for it. But I mean it’s been heavily vetted, it’s been published, and I won a MacArthur award, blah, blah, blah. Let me just show you, because a picture is worth 1,000 words, OK? So what I’m going to show you are three sets of firing patterns. The first one is from a normal retina. So what we did was we took movies of everyday things. We went to Central Park and we took movies of children playing and people walking and trees and park benches, and we presented these movies to a normal retina. So these are the firing patterns from a normal retina viewing these movies.

And you can see they’ve produced pretty complicated patterns.

Then we showed the same movies to a blind retina. In one case we treated it just with the transducer; that’s sort of the standard way people have tried to do things. In the other case we treated it with the whole treatment, the combination of the encoder and transducer. So let me show you the one with the transducer alone first. 

So, retinal firing patterns from a blind retina treated with just the transducer. And you can see that with the transducer it will fire. It just doesn’t fire in normal firing patterns; you get these sort of bursts, but mostly it’s kind of on the sparse side. What happens if we add the encoder? So, now you can see that the firing patterns very closely match a normal retina. And just to emphasize the point, this is from a completely blind retina that has no photoreceptors at all. And now, because we’re doing this artificially, you know, we’re able to make that retina fire normally. 

So, we have the code; it can produce normal responses. And I’m emphasizing this is not just a simulation, this is from the actual retina that has the optogenetic gene. We have the code. We can produce normal output. How important is this? You know, what’s the potential impact on the patient’s ability to see? So, I’m going to show just one bottom-line experiment. It’s called a reconstruction experiment. What we did was we took a moment in time from these recordings and we said, what was the retina seeing at that moment in time? Can we reconstruct what the retina saw just from these firing patterns?

And we did it for the transducer alone and for the combination. So, let me show you. I’ll show you the one with the transducer alone first. So here’s the image that’s produced if you just have the transducer. And you can see it’s pretty limited. And it makes sense that it is, because you saw in the previous slide that the firing patterns were kind of sparse. They’re missing a lot of those voltage spikes, so they just don’t have that much information in them. So when you, or the brain, try to recreate the image from that sparse firing, it’s just, you know, a crappy image. And this is what the original image is. What happens if you add the retina’s neural code? And you can see it’s really different. Not only can you tell that it’s a baby’s face, but you can tell that it’s this baby. Which is a really challenging task. 

OK, so this was exciting. What it means is that if we add the retina’s neural code, we can break through the limitations of standard methods like this, maybe all the way up to being able to see faces and objects. So, that’s the basic concept, and what I want to do is switch over and tell you about the clinical trial. And remember, we’re still at the early stages. It’s not like I’m all done and that’s why I got invited; you know, you get invited to give a talk whenever it happens, and I’ll show you the best I’ve got.

So the clinical trial, as it says at the top. The treatment starts with a gene therapy to deliver the transducer, the optogenetic gene, into the patient’s eye, to get it into the ganglion cells. And this is just a single injection that’s done in a doctor’s office, so nothing complicated. The second part is goggles, which provide the encoder. So it looks something like this. OK, your goggles here. And then I just made these little green dots just to convey the idea that there’s something in the eye to receive it. 

You know, as we started the trial, for the first one we’re just using one pair of goggles that we have mounted on a bench, and so the patients look into it and then they press on a console below to answer the questions. And of course, because of COVID, we clean it every time between patients. Let’s move on to the results. 

So the first key question is, does the transducer, this optogenetic gene, get in, and can we drive it? This is really important, because while having the neural code is nice, it doesn’t have any therapeutic value if we can’t actually get it into the retina. So that was the first question: is the optogenetic gene part working and can we drive it? And the answer is yes. 

So the first indication. I put the exclamation point because that’s really how we felt. When you’re doing engineering, doing the code, you have more control over what you’re doing. When you’re working with biology, you inject this thing in; maybe the whole thing could have failed, and there’s just no way to know. But I’m very happy to say that the gene got in. And the first indication that it did was that the patients, from their own observations, 6 to 8 weeks after they were injected, started calling and texting us to say that they could sense more light coming into the treated eye. Because these are very blind patients, it was very noticeable to them that this was happening. 

So one patient was telling us that he could see the Hanukkah candles. When he walked into his dining room on the eighth day of Hanukkah, he could see the Hanukkah candles. Another patient said that he could see his dog running in the snow. He couldn’t see the details of the dog, but he could follow the dog. You know, it’s a high-contrast situation, a dark dog and light snow. 

Another one was trying to do martial arts. He could see his instructor’s white robe against the blue mat. So this was promising, and then three months after the injection, patients were brought into the lab, to our vision testing lab, to assess this formally and quantitatively.

So, as it says at the top, this is assessing light sensitivity. These are the results from February, and these are with the lower doses in patients with very advanced blindness. The reason it’s with the lower doses is, as probably everyone knows, for many clinical trials you have to start with the lowest dose, and if nothing bad happens you go to the next dose and then the next dose. So these are our first patients, so by definition they are the lower-dose patients. 

So what we’re assessing here is how much more sensitive to light they are. They said they were more sensitive; let’s see if it was really happening. And again, the initial treatment is of course on the most blind patients. But anyway, OK, so you can see that on the left in blue is baseline, before treatment. And then something weird happened with the slides, but this is after treatment in red. So blue is before treatment, red is after. And you can see in every case the patient was able to see more, was much more light sensitive.

Just to orient you, this is light intensity on the Y axis, and it’s on a log scale. So it’s spanning several orders of magnitude, several factors of 10: from TV light at the top, outdoor daylight in the middle, and a very bright tungsten filament in a light bulb at the bottom. So each circle here indicates a block of 10 trials; this is not one trial, each one is 10. If it’s filled, it means the patient got it right 80% or more of the time; if it’s unfilled, less than 80%. 

So as I was saying, they have very advanced blindness, so most of the patients couldn’t see anything but the very brightest light, and even then not so reliably, because these are unfilled circles. But after treatment it’s so much more. Because it’s a log scale, this one, you know, three lines above, was 1,000 times more sensitive to light. And the encouraging thing is that it lasted. Some patients got better over time; some patients just held steady. These two patients still have to come in for their follow-ups. 

So light sensitivity. What else can they do? What about secondary endpoints? So the FDA wanted us to start on light sensitivity, but we can add other endpoints, also. So we set up a hierarchy of tests, starting with motion: can you tell if something’s moving? The direction of motion: can you tell which way it’s going? Live action: that’s the same thing as direction, but we have a technician actually do it. So instead of it being a computer-generated image, it’s a real action by a human being, because when you move your arm up and down, it looks different than a computer-generated image of an arm moving up and down. It’s got, you know, sort of an arc or sideways motion. It looks sort of like, I don’t know, an accordion or something. 

We want to see whether they could see things in their real lives. And then object detection is what you think. We did mostly fruits and vegetables and shapes. So, just to go through this quickly: same color code, and baseline before treatment is in blue. So this was motion, you can’t see it well but it’s motion, direction of motion, live action, and then object detection. And if the patient is just guessing and can’t see anything, it’s going to be 50%. So you can see the patients really, really can’t see this. 

And the ones where there’s nothing, it’s because we couldn’t even get through a block of 10. One of the patients, I was showing him a bar moving to the left or to the right, and, you know, he presses whatever he thinks he sees. He says, Sheila, I can’t even tell if there’s a bar there at all. So we didn’t torture him and keep doing it, because it’s hours and hours of testing, and the patients are so amazingly helpful and they’re really partners in the trial. I don’t think of them as patients; I think of them as partners. 

So nobody can really do this, similar to the light sensitivity, but you can see that after treatment it’s really different. Everyone was able to see motion, several of them could detect the direction of motion, and most of them could do the live action, too. No one so far was able to see objects, but remember, we’re still at the lower doses, so there’s a lot to look forward to. 

Now I wanted to raise another point. These results were with patients with very advanced blindness, but what about patients who are at a slightly earlier stage? We have two patients like that. They had vision worse than 20/400, meaning they can’t see anything on an eye chart; the top line of an eye chart is 20/200. But there’s some ability to count fingers, I mean some ability; if you stand really close to them, they could see this a little bit. I’ll show you that in a second. And so the question was, how will this treatment work on them? Might they even benefit from the transducer alone? 

So let me tell you what I was thinking. When a patient’s really blind, they can’t produce any inherently coded output on their own, so they need the whole thing. They need the encoder and they need the transducer. But if they have a little bit of residual circuitry, then maybe they could generate the neural code a little bit on their own; there’s some processing left, and we could use the transducer as a booster to let their own neural code get through. It was an idea, but I think it’s working, so let me share. 

This is just saying these are results with slightly earlier-stage disease. So here we’re checking finger counting, detecting motion, detecting the direction of motion, and recognizing objects. So here’s the patient. This is percent correct versus distance when we’re doing finger counting. And she really, you know, she’s at about 20% on finger counting. She was slightly more able to do finger counting from far away, because she had a tiny patch of vision in her eye; if you get far enough away, she could see the two fingers in it. If you’re too close, she couldn’t see it at all. After treatment, it all levels out; she can do it at any distance. Then down here. 

Here it’s motion, direction, and objects. So she could see motion, but she couldn’t tell the direction of the motion at all, and she couldn’t see objects at all. After treatment, she could see the direction of motion and she could see objects. Let me show you the next patient. 

So, these are very similar results here. The patient is much better on finger counting. She could see directional motion but no objects at baseline, but now she can see them. And what was interesting is that finger counting improved and the patient gained the ability to recognize objects. I mean, seriously, when we flash it on at baseline, they can tell that you flashed something on, but there’s no ability to recognize it. Now they can tell you what it was, and I’ll come back to this just a little later, but they would call it out. They would say broccoli, banana. I mean, it was pretty amazing, and, you know, I’m still paranoid about overstating the case, so I’m not going to put in everything, because I don’t want to be a blowhard in any way. 

One other thing that happened was they could also detect color. We didn’t expect this, really. So this is a small improvement; this patient could see much more. And then recently we had a patient in from Poland last week, and he was very significantly improved in color, from nothing to all three colors, particularly red. These results were consistent with the patients’ observations at home. So patient 109, this one called me on the 4th of July and she said, my couch is red. She said, I thought it was black this whole time. The sunlight was streaming on it, and the bright sunlight activated the optogenetic gene, and suddenly she saw that her couch was red. She could also distinguish among her colored pills, red versus blue. And she could see the LEDs on her stove so that she could cook. So these are significant quality-of-life issues. 

The guy that was here last week, he told a very similar story. He was in the shower, and when he came out of the shower, the bathroom light was landing on his towel and he’s like, my towels are red. So it was a very similar kind of story. Of course, when patients describe things that are very similar, the probability that this is real goes way up, so you’re not deluding yourself. 

I’m just about done. This is just to convey to you how it’s done. So, you know, these are the patients with the lowest dose, the next dose, the next dose. Now we’ve just started the highest dose, so the patient who can see his red towel was our first patient with the highest dose. OK. So let me just summarize then. Of the patients who received the lower doses, all the patients who started with complete or near-complete blindness can now see light, even with the lowest dose, with some being able to see light at daylight or television levels. All these patients can now also detect motion, including three who can also detect the direction of motion, both with the computer-generated images and with live action. 

For the patients who received the higher doses, two of them were at the third-highest dose and one was at the very highest. Two of the three patients tested have so far gained the ability to see objects and detect color, but as I just mentioned, there was a new patient, so it’s actually three out of four. 

And now I just want to thank all the people who participated in this and helped. The team, the clinical team and my team. And thank you so much. 

Dr. Roberts: So Dr. Nirenberg has graciously agreed to answer some questions. Does anyone have a question they’d like to ask? Well, I’ll start. We see patients here with vision loss from a lot of different causes. So, maybe just once again, what type of patient might possibly benefit from this type of treatment?

Dr. Nirenberg: OK. So this treatment is for retinal degenerative diseases. The FDA requires you to start with a particular disease so that it’s a homogeneous population and your statistics are easier to unwrap. So, we started with retinitis pigmentosa, RP. It’s not like standard gene therapy where you’re going in and fixing a gene; we’re just overriding it, we’re reactivating the retina. So there’s no reason that it wouldn’t work for, I mean, macular degeneration or for any retinal degenerative disease: choroideremia, Stargardt’s. As long as it’s safe, we will create the next clinical trial to go after these other populations too and see how broad a reach we can have. 

Paul: (inaudible)

Dr. Nirenberg: The only thing I know is, no one really knows with optogenetics, but we had a patient come back. He was our very first one injected, right at the very start of COVID, and two years later his light sensitivity hasn’t changed. But he was at the lowest dose; he wasn’t able to do a lot of things. That’s what we’re hoping. We’re also hoping that we can go to the other eye and do both eyes, because I promised that I would do everything in my power for the patients who have already been injected. They’re clamoring to get their other eye injected with the higher dose, because they donated themselves at the low dose; I feel like they should be first in line for the high dose for the other eye. So then we’ll get more information. 

Lisa: (inaudible)

Dr. Nirenberg: So, I don’t know the specifics of that disease to know whether this would work, but as long as the ganglion cells remain intact, it will be able to help. That’s why it’s not a good treatment for glaucoma, because that does affect the ganglion cells. So that’s sort of the basic thing to ask: as long as the output cells are there and there’s no other obstruction that would block the signals from getting in. 

Dr. Roberts: Let me ask Lisa’s question in a different way.  This is for people who have lost vision, right? This is not for people who never had vision.

Dr. Nirenberg:  Right. It doesn’t mean that it isn’t for them. It’s that with people who have lost vision, their brain has been, you know, developed to be able to receive the information. They’re used to getting the retina’s code, these coded signals. Would it work on someone who’s born blind? Probably, but it probably would be – they might have to learn and get the hang of seeing. That would be so utterly fascinating to do that, and I would love to be able to help. 

Dr. Roberts: Your other question was when is this going to be more available?

Dr. Nirenberg:  Well, we have enough virus to do 20 more patients and then we have to go into a phase three, which is a little bit of a longer process. So, it’s still a few years, we’re working as fast as we can and I don’t know the answer, it’ll still be a few years.

Dr. Roberts: So this will be investigational for a while, and hopefully at some point in time there will be enough data and enough experience that you can file with the FDA [inaudible].

Dr. Nirenberg: Right. And the fact is that the side effects are very minimal; basically there are no side effects. The patient doesn’t experience anything, but the doctor can sometimes see that there’s a little bit of inflammation, and they put them on Prednisone drops, just Prednisone drops. They give Prednisone prophylactically, and then afterwards they get drops. But the patients themselves don’t feel anything.

[inaudible question]

Dr. Nirenberg: When did we start the clinical trial? You mean the whole thing? Well, for a long time, I’m a neuroscientist but I was not working on the project per se. I was just trying to understand how neurons communicate and what codes they use. And so, one tiny thing, it won’t take long, I promise: when I cracked this code, I realized that if we can send this code up to the brain and maybe make people see, maybe we can also make robots see. 

So, I started a second company where I’m using the neural code as the input to computer vision. And we’ve used it for a million things, you know, face recognition and mobility and detecting emotions and lots of other things. And Intel, the chip company, actually bought it, because I wanted to be able to focus on the clinical trial, but I partner with them. And the one thing I did when I sold it is I got permission to still use computer vision technologies to help blind people, so that way they don’t have to wait until this is finally commercially available. 

What about people who are a little bit blind but could use these extra things, like Javier was saying? So there may be some machine vision things we can do in between, but it’s based on the same principle. 

Dr. Roberts: This is a great example of someone who is doing basic science to figure out how nerves work, how brains work, and then over time it evolves into something that has great applicability. It’s the progress of a whole career.

Dr. Nirenberg: Thank you so much everyone. 

Dr. Roberts: Well, thank you. What a thought-provoking afternoon. I want to thank Haben Girma, Javier Pita Lozano and Dr. Sheila Nirenberg for joining us today and sharing their innovative and life-changing work that’s helping people who are visually impaired to attain their goals. Events like these do not happen without the behind-the-scenes efforts of our Lighthouse Guild staff. I want to thank Fernando Garcia Pena for her work that has made this day a success. I also wish to thank Michael Boyd and his team in the IT department for their efforts that allow us to have these presentations here at the tech center. Finally, I want to thank you all for attending and making this a most impactful program. I welcome everyone to the café on the 4th floor for lunch. Thank you.

Join our Mission

Lighthouse Guild is dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals.