On Tech & Vision Podcast

Seeing with Sound: Using Audio to Activate the Brain’s Visual Cortex

On Tech & Vision Podcast with Dr. Cal Roberts

Every day, people who are blind or visually impaired use their hearing to compensate for vision loss. But when we lose our vision, can we access our visual cortex via other senses? We call this ability of the brain to change its activity “plasticity,” and brain plasticity is an area of active research. In this episode, we’ll explore how, through sensory substitution, audio feedback can, in some cases, stimulate a user’s visual cortex, allowing them, without sight, to achieve something close to visual perception.

Podcast Transcription


Ladies and gentlemen, if you’re just tuning in, we have breaking news that Mount Everest has just been summited.  We confirm reports today that on the 29th of May of this year, 1953, New Zealand mountaineer Edmund Hillary and Sherpa Tenzing Norgay summited the world’s highest peak, Mount Everest.  Mount Everest, in the subrange of the Himalayas, soars a whopping 29,035 feet, the tallest peak above sea level.  It has long been considered the brass ring of achievement for many a world-class mountaineer.  Again, to anyone just joining, Mount Everest has just been summited.  We confirm reports today…

Roberts: Hillary and Sherpa mountaineer Tenzing Norgay became the first climbers confirmed to summit Everest, sparking the imaginations of a century of adventurers who would follow.

Weihenmayer:  When there’s a lack of things that the sound bounces off of like on a summit, the sound vibrations just move out through space infinitely, and that’s a really beautiful, awe-inspiring sound.  There’s this feel of the snow under my feet and the ice.  Sometimes it’s smooth like a window.  And then there’s the sun in your face and the wind and the feeling of your own body moving in this beautiful way up the face.

Roberts:  Erik Weihenmayer is one of those adventurers.  He summited Everest in 2001, despite losing his vision due to retinoschisis as a teenager.

Weihenmayer:  I went blind just before my freshman year in high school, and I could see just enough if I could press my face right up against the television.  I could see maybe what was going on a little bit on the screen.  I’d feel the static electricity on my nose.  That’s how close I had to get.  But I couldn’t really see to take a step.  And that was really scary.  And so, I was led into school, as a freshman in high school, led from class to class.  Led to the bathroom.  Led to the cafeteria.  But I could deal with that fear of not being able to see.  The real challenge of disability is the fact that now that you’re disabled, you’re not able to play sports with your friends.  You’re not able to have a license.  You’re not able to run around in a normal way with your friends and have those connections with other human beings. 

I was really afraid that my life was going to  be on the sidelines.  A life led for nothing.  A life that was meaningless.  That was way scarier than blindness itself.  I probably would have never signed up to go rock climbing if I could have seen because I would have been filled up with organized sports and all the things that sighted kids do.  So, I think that, weirdly, going blind was what made me turn to rock climbing.

Roberts:  In addition to Everest, Erik has summited the tallest peaks on every continent.  On climbs, he relies on a team he trusts, a tested set of systems and the right technologies, and most importantly, his other senses, especially touch and hearing.

Weihenmayer:  Like learning how to climb frozen waterfalls.  A sighted person looks at the blue healthy ice versus the grey or the white ice that’s more rotting, getting decayed by the sun.  Or they’re looking for these little concave dishes – these little cracks in the ice where your tool can hammer right into.  Well, I couldn’t do that.  So I had to learn to ice climb in a different way, by feeling my ice tool across the face and lightly tapping the tool against the face and feeling for the vibration.  And feeling for the density of the ice.  And so, the sound that I was listening for was kind of a “thunk,” not a “dong.”  A “dong” means hollow ice coming down on you.  But a thunk – I always describe it as hitting frozen peanut butter with a sledgehammer.  You know it’s a good, solid stick.

Roberts:  Today, Erik is a world-renowned climber and adventurer and founder of the organization No Barriers, which helps all people push past limitations to live their fullest possible lives.  He has also taken up kayaking.  In 2014, Erik kayaked 277 miles of extreme rapids along the Colorado River, through the Grand Canyon.

Weihenmayer:  The difference between kayaking and climbing is that when you’re on a river, you’re going at the river’s pace, and the river will speed up through the rapids.  So, you’re going really fast.  There are no brakes.  In climbing, you can stop and take a breath; in kayaking you can’t.  The river is in charge.  You’re riding this energy source and it’s way more powerful than you, and you’re never going to fully control it.

Roberts:  Though kayaking presents unique challenges, Erik still relies heavily on touch and sound to navigate the waterways.

Weihenmayer:  Learning to figure all that out with my ears and orient myself by the feel of my boat under my body connecting with the river and my guides yelling directions at me and then using my ears to kind of be able to navigate, and that echolocation of the rock walls all around you – that was insanely hard, I guess I would say.

Roberts:  Welcome to On Tech & Vision.  I’m Dr. Cal Roberts.  Erik’s incredible achievements in climbing and kayaking remind us that if we lose our sight we can rely on our other senses to provide the valuable, in Erik’s case life-saving, information we need to achieve our goals.  But beyond simply using our remaining senses, are humans able to further develop them?  And to what degree can we reprogram the part of our brain formerly devoted to sight to augment our other senses?  We call this ability of our brain to change its activity “plasticity.”  And brain plasticity is an area of active research.

We’ve interviewed a number of scientists who have developed systems to replace sight by enhancing other senses.  We discussed the smart cane, facial recognition programs, and the sonification of large astronomy data sets.  And we featured the BrainPort technology, in which a camera converts an image to a set of vibrations that a sensor on the tongue communicates to the brain.

Today’s big idea is analogous to the BrainPort: technology that converts images to sounds, which users learn to retranslate, using the brain’s cortices, back into images.  We’re talking with Dr. Peter Meijer, whose technology is called “The vOICe.”  Dr. Meijer worked for over 20 years at Philips Research, five years at NXP Semiconductors, and seven years at the medical startup Hemics.

Currently he works at the engineering company Demcon.  Welcome to On Tech & Vision.

Meijer:  Thank you very much, and thank you for the nice introduction.

Roberts:  So, motivation.  Inspiration.  How did you get into this kind of work?

Meijer:  It was in 1983.  I was studying physics at Delft University of Technology and this was all nice, but digital technology was not really part of the curriculum.  And I felt that it was going to be very important, so I wanted to learn more.  Basically I sat down for a few days brainstorming.  What kinds of things could I process digitally?  And how could it be useful and not be a repeat of things that already existed?

Somehow in those few days this idea came up.  If I try to convert a live camera image into sound in such a way that a lot of the visual information, the pictorial information, gets preserved, that would be potentially useful to blind people if they can do the reverse mapping in the brain.  Go back from sound to image.  But it was also technically challenging, because in those days PCs were not nearly fast enough to do this all in real time.  So I had to design and build a special-purpose computer for that reason, and while that took a lot of effort, a lot of time, I think I spent about seven years on that before it finally worked.  Later on you do further verification by mapping the sounds generated back to images using spectrographic mapping, and that also took a long time.  It took about 24 hours to get one picture reconstructed from the sound, but it was quite impressive to me to see that the whole idea finally worked.  I got a recognizable image from that.
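To make that verification step concrete, here is a minimal sketch of spectrographic reconstruction in Python, assuming a mono soundscape and the scan convention Dr. Meijer describes later in the interview (columns map to time, height to pitch, brightness to loudness).  The function name, frame size, and hop length are illustrative assumptions, not the parameters he actually used.

```python
# A minimal sketch, not Dr. Meijer's code: the short-time Fourier magnitude
# of a soundscape is itself a picture.
import numpy as np

def soundscape_to_image(sound, n_fft=2048, hop=512):
    """Turn a mono soundscape (1-D float array) back into an image: each
    analysis frame becomes one image column, each frequency bin one row,
    and spectral magnitude becomes pixel brightness."""
    window = np.hanning(n_fft)
    columns = []
    for start in range(0, len(sound) - n_fft, hop):
        frame = sound[start:start + n_fft] * window
        columns.append(np.abs(np.fft.rfft(frame)))
    image = np.array(columns).T          # rows = frequency bins, cols = time frames
    image = image[::-1]                  # flip so high pitch sits at the top
    return image / (image.max() + 1e-9)  # normalize brightness to 0..1
```

On a modern PC this takes milliseconds; the roughly 24 hours Dr. Meijer mentions reflects the hardware available at the time.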

Roberts:  We talk about converting one sense into another sense.  In this case we’re taking the ability to see and converting it into hearing.  But you could have chosen one of the other senses.  You could have taken sight and converted it into touch, or smell, or taste.  Why hearing?

Meijer:  With smell and taste it would be difficult to really convey real-time information, I think.  I do not know of any way to do that.  But yes, touch and hearing are kind of competitors in that view.  There was already a lot of work ongoing by Paul Bach-y-Rita, who focused entirely on using touch, initially with a matrix of vibrating dots on your back, and later the tongue display that you mentioned in the introduction – the BrainPort device from Wicab.

Roberts:  The BrainPort is actually a technology that Erik Weihenmayer has used before.  We asked him how it worked.

Weihenmayer:  It’s a digital video camera that goes into a computer that I wear in my pocket, which translates that video image to a tactile image on a retainer that I wear in my mouth.  And on that plate are 400 to 500 little vibrating pixels.  It’s like these tiny little vibration dots – the camera is making note of the contrast, and then that gets projected onto my tongue.  And the first thing that the tech did to me, in this very controlled light environment – it was a black tablecloth, I think they told me, and a white tennis ball.  So I had the camera on my head and I pointed it at the table.  She rolled this white tennis ball across.  It lit up perfectly.  I saw this ball rolling across my tongue, and I’m like, Holy Cow!  That is a tennis ball rolling towards me.  And I just naturally reached out and grabbed this tennis ball off the table.  And I was like, Holy Shit!  Maybe that’s nothing for a sighted person, but for me that was snatching an object out of space.  It was something I hadn’t done in 30 years.

Roberts:  Is it sight?

Weihenmayer:  It’s not sight, but it’s enabling my brain to think visually again.

Roberts:  Peter’s work with the vOICe is similar.  Before we continue my interview with Peter, let’s get a sense of what the vOICe sounds like.  Close your eyes and open your ears.  See if you can guess what this object is.  

That’s a pair of scissors open on a table.  And here are the scissors closed.  Scissors open.  Scissors closed.  You can hear that little dip in the sound when the scissors are open there, right?  But to me, it seems like users really have to study and practice for years in order to learn what each sound is conveying.

Back to my original question, then.  Why did Peter choose audio as his feedback mechanism for the vOICe?

Meijer:  I can get a higher resolution by mapping to sound than by using these electrodes on the tongue or the fingertip or somewhere else.  It’s a potentially very powerful type of mapping that really could preserve a lot of pictorial information that is very hard to preserve by any of the other senses.  Only the trouble with hearing is, its bandwidth is still limited.  So, you have to play a trick there, and that is to distribute your information in time.  Instead of trying to convey a whole image in one complex sound, you say, no, this is not going to work.  I have to do something about these different dimensions that I have in an image.  I have a horizontal position, a vertical position, and I have a brightness.  So, for the vertical direction I used pitch, frequency.  And for the horizontal position, time, and a bit of stereo panning to make it more intuitive.

And then, loudness represents the brightness.  And if you do that, you can do some mathematical analysis and show that you can get a resolution on the order of 30×30 pixels up to 60×60 using one second of sound.  Ideally, you would want to have everything in real time.  So one second is a ballpark figure for what is acceptable for getting fresh updates.  And it does mean that there are still limitations.  You won’t be able to play tennis with this approach, but for typical daily living things, it’s okay.
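Putting those three choices together, here is a hedged sketch of a vOICe-style forward mapping in Python with NumPy.  The frequency range, the one-second scan, and the simple pan law are illustrative assumptions, not the actual parameters of the vOICe software.

```python
# A sketch of the mapping Dr. Meijer describes: pitch = height,
# time (plus stereo pan) = horizontal position, loudness = brightness.
import numpy as np

def image_to_soundscape(img, duration=1.0, sr=44100, fmin=500.0, fmax=5000.0):
    """Render a grayscale image (rows x cols, brightness 0..1) as a stereo
    soundscape whose columns are scanned left to right over `duration` seconds."""
    rows, cols = img.shape
    # One sine frequency per row, exponentially spaced; top row = highest pitch.
    freqs = fmin * (fmax / fmin) ** (np.arange(rows)[::-1] / (rows - 1))
    n = int(sr * duration / cols)                  # samples per image column
    out = np.zeros((cols * n, 2))
    for c in range(cols):
        t = np.arange(c * n, (c + 1) * n) / sr     # global time keeps sines phase-continuous
        tones = np.sin(2 * np.pi * freqs[:, None] * t)
        mono = (img[:, c, None] * tones).sum(axis=0)   # brightness-weighted mix of row tones
        pan = c / max(cols - 1, 1)                     # 0 = hard left, 1 = hard right
        out[c * n:(c + 1) * n, 0] = mono * (1.0 - pan)
        out[c * n:(c + 1) * n, 1] = mono * pan
    return out / (np.abs(out).max() + 1e-9)        # normalize to [-1, 1]

# A bright diagonal line becomes a tone that rises in pitch as it pans right.
soundscape = image_to_soundscape(np.eye(32)[::-1])
```

Exponential spacing of the row frequencies roughly matches how we hear pitch, and a 30×30 image fits comfortably into one second of sound, consistent with the resolution estimate above.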

Roberts:  Walk us through the user experience.  What does the equipment look like and feel like and how does the user use them?

Meijer:  Typically, blind people start out with a smartphone.  Let’s say an Android smartphone.  They run the vOICe for Android app, which they can download from the Google Play store.  And they play around with it, and once they are convinced that it’s for real, they may decide later on to go for smart glasses, because then they can use it hands-free.

Roberts:  Is it different training someone who had vision and lost it compared to someone who was congenitally blind?

Meijer:  That’s a very interesting question.  You can say people who are late blind have all this knowledge about how visual perspective works.  They have visual memories to rely on and to relate to the soundscapes of the vOICe.  That’s true.  On the other hand, people who are born blind have sort of recruited all of their visual cortex for other purposes, including audio processing.  So, they can typically perform auditory tasks better than sighted people and late-blind people.  In that way they are at an advantage in learning to process these complex soundscapes from the vOICe.  They have more wetware dedicated to that already.

Roberts:  By wetware, Peter means the brain.

Meijer:  I’ve never been able to determine that one or another group of blind people are at an advantage.  They seem to perform comparably.

Roberts:  Is there a similar technology to help people who are deaf to hear?

Meijer:  Yes.  That is the work of David Eagleman.  He has a company named Neosensory and they market a product which is a wristband that vibrates.  It has four vibrating motors.  It converts the incoming sound detected by a microphone into four corresponding vibrations, each in a different frequency band.  And deaf people use that, first of all, to detect that there is a sound.  They can even learn to distinguish the sound of a dog barking and other characteristic sounds in their environment.  That is somewhat analogous to what the vOICe does, but then in the domain of hearing.  An intriguing question there is, if you would further develop that technology so that instead of four motors you get 64 vibrating elements, you could map the sounds of the vOICe to such a device and have a tactile representation of the sounds that the vOICe is generating, such that even deafblind people would be able to perceive live images.
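As a rough illustration of that wristband idea, and emphatically not Neosensory’s actual design, the sketch below splits an incoming audio chunk into four frequency bands and derives one motor intensity from each band’s energy.  The band edges and sample rate are assumed values.

```python
# A hedged sketch of sound-to-vibration substitution with four motors.
import numpy as np

BAND_EDGES_HZ = [100, 400, 1000, 2500, 6000]       # four bands; assumed edges

def motor_intensities(chunk, sr=16000):
    """Map one mono audio chunk to four 0..1 vibration intensities,
    one per frequency band, using the chunk's FFT energy."""
    spectrum = np.abs(np.fft.rfft(chunk * np.hanning(len(chunk))))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / sr)
    levels = []
    for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        levels.append(np.sqrt((band ** 2).mean()) if band.size else 0.0)
    levels = np.array(levels)
    return levels / (levels.max() + 1e-9)          # loudest band drives its motor hardest

# Example: a 1 kHz test tone mostly excites the third motor.
t = np.arange(1024) / 16000
print(motor_intensities(np.sin(2 * np.pi * 1000.0 * t)))
```

Scaling the same loop from 4 bands to 64 is essentially the thought experiment Dr. Meijer sketches for deafblind use of the vOICe.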

Roberts:  That’s fascinating.  And I love it.  Talk to me about partnerships.  Several of the scientists that we’ve spoken to talked about partnering either with some of the big data companies – the Googles, the Microsofts, the Apples of this world, or with other laboratories.  What’s your experience partnering?

Meijer:  Most of the partnering up to, say, a decade ago was with neuroscientists.  Thirty years ago all the neuroscientists thought that once you’re an adult your brain is rigid.  That has changed since the 90s.  I got in touch with a few pioneers in that area, such as Alvaro Pascual-Leone of Harvard Medical School and Joe [Josef] Rauschecker of Georgetown University, who are pioneers in studying neuroplasticity in the human brain.  There was a study that was done in 2007 at Harvard Medical School in which they applied transcranial magnetic stimulation to temporarily disrupt the processing in the visual cortex of a user of the vOICe, and this interfered with her ability to decode the soundscapes of the vOICe.  This showed that apparently the visual cortex was doing visual things again, despite the fact that she was totally blind, because she now got the same kind of visual information via hearing, via the vOICe.  This is one example of several collaborations that went on with neuroscientists and psychologists.

Business-wise I got in touch with a team in Russia, a kind of subsidiary of a group of transhumanists there.  They took on the challenge of training blind people to use the vOICe, because training is the main uncertainty still today.  In 2018 there was an open contest for blind navigation and blind object identification.  There were two blind participants who had Argus II retinal implants.  There was one user of the vOICe wearing smart glasses from this Russian team.  And to our own surprise, this user of the vOICe won the contest by a large margin.  That was fun and fulfilling.

Roberts:  That’s fantastic.  Let’s talk a little more about neuroplasticity.  Because, as you have often emphasized in your writing, the vOICe is a great technology for studying neuroplasticity as a research tool, as well as being an opportunity for people who are blind to navigate, to see better, or to have the perception of sight.  When I think about how the visual system works, we know it starts with the eye, and what the eye does is change light into an electrical signal, and the electrical signal goes into the brain and gets interpreted as sight.  Is that a reason why plasticity is such an opportunity?  Because at the end of the day it’s all about converting electrical signals into perception?

Meijer:  Well, you’re right.  There are certain predefined pathways from your eyes to the visual cortex.  And there are pathways from your ears to the auditory cortex, and from there further on to higher-level centers.  But they are somewhat separate.  There are indirect links via the parietal cortex, which is the association cortex, which lets you combine auditory percepts with visual percepts.  Also, there exist to some extent anatomical pathways directly from the auditory cortex to the visual cortex.  But it’s all with limitations.

If you want to combine things, it’s not obvious that you can get sufficient bandwidth to go from the ears to the visual cortex.  So, you need sufficient plasticity and you need sufficient hard wiring already in place to make this all work.  And plasticity is not a panacea.  It doesn’t solve all problems.  You have limits to your plasticity, which is a good thing, because if you had no limits to plasticity, then tomorrow you would have forgotten everything that you knew today.  So nature plays with that.  There has to be some plasticity in order to be able to learn new tricks, new skills, but not too much, because otherwise everything evaporates the next day.

Therefore, it’s still an open question whether the available plasticity in humans is sufficient to master such a complex skill as interpreting vision through sound.

Roberts:  But I love these two concepts that you’re introducing.  The fact that there has to be the highway.  There has to be the route for these signals to get to the proper area in the brain.  And the second concept is that particular area of the brain now being able to change its function or change its ability to interpret.

Here at Lighthouse Guild, we still believe there is an important role for Braille for people who are blind.  And we still encourage Braille teaching.  And I’m always so impressed by the facility that certain people achieve with Braille.  The amount of information that they can input with their fingers and how quickly they do it.

Meijer:  You know that Braille is processed in the visual cortex, if you started practicing Braille long ago.  There was one study about that, which looked at a lady who had been fluent in reading Braille.  She had a stroke in her visual cortex, and after that happened, she could no longer read Braille.  So that proved that the visual cortex was heavily involved in processing Braille.  And the general picture among neuroscientists is that the visual cortex is not necessarily a visual cortex but a very advanced spatial processor.  So, anything that involves processing of fine spatial detail tends to get processed in the visual cortex, if available.

We also find that anecdotal reports from blind people differ in how visually they experience the vOICe.  There are some people who really insist that the sensation of working with the soundscapes of the vOICe is truly visual, with light perception.  That’s what they say.  It cannot be scientifically proven, but that’s what they say.  But for the most part, most blind users of the vOICe say, no, it’s still auditory, but I can use it to visually interpret things.  The experience, the sensation as such, remains auditory.

I once visited Arizona, the desert, with a blind user of the vOICe.  Looking up, she was struck by seeing the streaks of an airplane in the sky.  It was something we take for granted, but for her it was something huge, because normally as a blind person you cannot perceive that at all.  Same thing with clouds.

Just a few days ago, one of the blind users was using the vOICe while surfing.  He looks at the sea, waits for the right moment for a certain wave – he has to be at the right distance – and then makes a run for it with his surfboard.  Those things I wouldn’t have thought of, but people come up with all sorts of applications.  So much depends not only on whether it works, but on whether people are willing to spend the effort to master the vOICe.  It’s like learning a foreign language.  It takes years of insistent, systematic effort to make the best of it.  And is the skill level that you then reach sufficient to warrant all this effort?  That’s still an open question and I cannot predict that.  We can know that in hindsight, and maybe in a few decades the vOICe will play the role of Braille, let’s say, for general vision instead of reading.  I don’t know.  I can’t predict that.  I can only try and see how far we get.

Roberts:  That’s an approach that Erik Weihenmayer would approve of.

Weihenmayer:  On Mt. Everest, the Sherpas say the nature of the mind is like water.  If you do not disturb it, it becomes clear.  And I think of that mantra quite a bit when I’m overwhelmed.  Basically, the brain wants to flee.  The amygdala of the brain is fight or flight.  Your brain is biology.  It’s chemistry.  It’s hormones.  So of course, it’s going to tell you, get the hell out of there.  This is not a place you should be.  Go down and eat some cheeseburgers at sea level.  But this quote speaks to me of the idea: let all that fear and doubt and chaos and weight out of your mind and just understand what you’re doing and why you’re doing it, one step at a time.  Celebrate every single step along the way, no matter how high you get.  And don’t let all the weight of that cloudy water pull you away from what you feel, at that deeper level, you should be doing with your life and your time.  There’s a level of risk going onto a high mountain or kayaking the Grand Canyon.

For me, I want to be out in the flow of the river.  I want to be out in the current.  If you want a really, really safe life, you can hang out on the couch and you can watch Netflix.  But I think most people want to be out there in the thick of things.  They want to be in the food fight.  That’s the way I’ve always been.

Roberts:  Erik Weihenmayer uses every tool in his toolbox to achieve his goals.  It’s an oversimplification to say that he relies on his hearing to compensate for his lost vision, though he does that, too.  And he’s committed his life to helping others overcome their own obstacles.  He marshals any and all of the relevant tools, technologies, systems, partnerships, and teams to live a life with no barriers.

In some ways, that’s the story of Peter Meijer and the vOICe.  And the stories of countless neuroscientists and technologists working on sensory substitution and new approaches to access the human brain’s untapped potential.  Any journey worth taking, whether it be summiting mountains or unlocking the mysteries of the brain, will present challenges.  As adventurers, it’s our job to chart paths over, under, and around them.  Not just for ourselves, but for those who follow.  And when new technologies complement discoveries about what we are inherently capable of, we may feel like we’re on top of the world.

I’m Dr. Cal Roberts and this is On Tech & Vision.  Thanks for listening.

Did this episode spark ideas for you?  Let us know at podcasts@lighthouseguild.org.  And if you liked this episode please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts.

I’m Dr. Cal Roberts.  On Tech & Vision is produced by Lighthouse Guild.  For more information visit www.lighthouseguild.org.  On Tech & Vision with Dr. Cal Roberts is produced at Lighthouse Guild by my colleagues Jaine Schmidt and Annemarie O’Hearn.  My thanks to Podfly for their production support.
