On Tech & Vision Podcast

Beyond Self Driving Cars: Technologies for Autonomous Human Navigation

On Tech and Vision Podcast with Dr. Cal Roberts

Today's big idea is about exciting and emerging technologies that facilitate autonomous navigation for people who are blind or visually impaired. In this episode, you will meet Jason Eichenholz, Co-Founder and CTO of Luminar Technologies, and Nico Gentry, a manufacturing engineer at Luminar. Luminar's LiDAR technology is instrumental to the development of self-driving cars, but this same technology could be useful for people with vision loss, who need to know the same information in order to navigate: What is in front of me? What is behind me? To the left? To the right? You'll hear from Thomas Panek, President and CEO of Guiding Eyes for the Blind, an avid runner who dreamed of running without assistance. He took this unmet need to a Google Hackathon, where Ryan Burke, Creative Producer at Google Creative Lab, put together a team to develop a solution that turned into Project Guideline. Kevin Yoo, Co-Founder of WearWorks Technology, is using inclusive design to develop Wayband, a navigation wristband that communicates directions using haptics.

Podcast Transcription

Beyond Self Driving Cars: Technologies for Autonomous Human Navigation

Eichenholz:  You know, you and I are having this podcast today via fiber optic networks that are connecting the two of us at the speed of light.  We are essentially doing the same things in our LIDAR systems, and we’re leveraging telecommunications components and optical electronics components that are out there, as well as technology from bar code scanners and things you see at a supermarket.  So that fundamental technology existed, but what happens is, we put the puzzle pieces together in a different way.

Roberts:  Jason Eichenholz is Co-Founder and Chief Technology Officer of Luminar, a company that makes LIDAR technologies most often deployed in cars with self-driving capabilities.  But before he got into self-driving cars, Jason cut his teeth on sensing technologies that were employed in other ways.

Eichenholz:  Over the course of my career I’ve developed systems for remote sensing in parts of the rain forest.  We measured explosives on the battlefield in Afghanistan.  I have a system to help find water on the moon.  And I have three of my systems running around on Mars as part of the Curiosity, looking at the elements on Mars.

Roberts:  So, what makes LIDAR unique?

Eichenholz:  The big difference of LIDAR technology over SONAR or RADAR is the wavelength of light.  Because the wavelength of light is much shorter, you're able to get much higher spatial resolution.  You can see things almost as you would with an actual optical camera, whereas the biggest challenge with RADAR technology, and even much more so with SONAR because of the long, long wavelength of sound, is being able to perceive things like a stop sign or a pole, or to measure the difference in depth between a curb and the sidewalk itself.  So, what you're able to do is have camera-like spatial resolution with RADAR-like range.  You're getting the best of both worlds.

Gentry:  We have all these spinning mirrors and we have an extreme amount of precision that is required with this technology.

Roberts:  This is Nico Gentry.  He’s a manufacturing engineer with Luminar.  

Gentry:  You’re looking two football fields down and the difference between a soccer ball and a baseball has to be determined at that distance.  So, when you’re building these things it’s extremely, extremely precise.

Roberts:  And Nico is visually impaired.

Gentry:  I was born with nystagmus and I've had just a long, complicated life of trying to find my way around all of the inherent limitations that were imposed on me by having that condition.  The doctor said, "He's not going to be able to succeed in school without help.  He's not going to be able to do contact sports.  He's not going to be able to drive a car."  And I was able to finagle my way around all of those limitations except for the car part.  Which is unfortunate for somebody who loves cars as much as I do.  That was the toughest pill for me to swallow as a visually impaired person.  You're not going to have the same freedoms as everybody else.  How do you deal with that?

Roberts:  Nico wanted to work on the technologies that would help him have more autonomy.

Gentry:  And that’s how I ended up coming here to work with Luminar to build specifically the vision systems for cars.

Roberts:  Autonomous vehicles have taken our imaginations by storm.  And companies like Luminar are making them increasingly possible.  But when you think about it, what does an autonomous car really need to know?  It needs to know what's in front of it, what's behind it, what's to the right, what's to the left.  Is it a tree?  Is it a stop sign?  Is it a person?  Is there a pothole in front of me?  Did the light turn green?  These are the same things that someone who's blind needs to know as they walk down the street.  The good news is that technologies being developed for autonomous vehicles are also paving the way for people who are blind or visually impaired to autonomously run, exercise, walk, go to the store – to fully navigate their worlds.

I’m Dr. Cal Roberts, and in this episode of On Tech and Vision we’ll explore two navigation tools that allow pedestrians who are blind to maneuver through their worlds with better safety, accuracy, freedom and dignity.

Panek:  In 2019 I ran the New York City half-marathon.

Roberts:  As President and CEO of Guiding Eyes for the Blind, avid runner Thomas Panek initiated a Running Guides program, training guide dogs to run with their owners, something that hadn't been done before.

Panek:  I’ve been running most of my life.  I ran cross country in high school and continued to run until my vision loss became a challenge for me to navigate independently.

Roberts:  Thomas lost his vision due to retinitis pigmentosa and missed the feeling of freedom he got running independently.  And while the Running Guides program was a step in the right direction, he really wanted to go faster and farther.  Rather than be limited by a guide, human or canine, he wanted to run at the peak of his ability.  Inspiration came from an unlikely place – one of his son’s toy cars.

Panek:  He had put a piece of electrical tape around the living room floor, in circles, and this car could follow the line.  And he said, "Dad, do you think there could be some technology out there that would help you be able to follow a line?"  And I said, "I don't know.  That's a really good point."  So, when Google had an open forum I came and expressed my desire to be able to run freely without any human assistance.  To run independently.  Google invited me to a Hackathon.  A Hackathon is a really interesting thing.  Essentially you take a human problem and you spend the day conceiving how to solve that human problem with some very smart people.

Burke:  The scope of the overall Hackathon was this design philosophy that we call “Start With One.”

Roberts:  Ryan Burke is Creative Producer at Google Creative Lab and one of the innovators that designed the prototype that would help Thomas run independently.

Burke:  We invited individuals with acute and specific challenges.  And the idea was to try to use technology and to come up with ideas that could serve their very unique challenges.

Panek:  The first iteration was quite humorous.  I was basically a remote control human.

Burke:  After four hours of being like “okay, Thomas, you’re going to wear a 40 pound backpack.  Inside of it is going to be a supercomputer and you’re going to have this helmet on which has ten different cameras.”  I was like, I think we can do this with a mobile phone.

Roberts:  They went back to the model of the remote control car.

Burke:  If we can just put a line down, we can just measure the variance to the left and to the right with computer vision and then provide an audio signal to Thomas so that he can auto-correct.

Roberts:  In the remaining three hours of the Hackathon, the team built a prototype to prove the model.

Burke:  From there we went off on an internal spelunking mission inside of Google to find researchers who were working on video segmentation, on-device machine learning, and specifically mobile vision.

Roberts:  That prototype would become Project Guideline, but Thomas, still eager to run independently, had a much grander plan.

Panek:  I said, okay, now I want to do this in Central Park.  And the team from Google just looked at me and said, let’s give it a try.  And then, the world shut down.  It became even more important to Google, to me.  I can’t even be within six feet of a running guide.  Now is the time to really dial in and see if we can make this work.

Roberts:  Project Guideline uses a smartphone camera in addition to LIDAR sensors to take in data.  Data is key.  It's the first step in how the Project Guideline device works.  The team trained a neural network using computer vision.  That neural network quickly analyzes the data from the camera and converts it to useful auditory feedback.  Is the runner too far to the right of the line?  Too far to the left?
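To make that pipeline concrete, here is a minimal, hypothetical sketch of the loop being described: a camera frame goes in, a model estimates where the painted line sits in the frame, and that offset drives an audio cue. The function names (capture_frame, segment_line, play_tone) are illustrative placeholders, not Google's actual APIs.

```python
# Hypothetical sketch only; not Project Guideline's actual code.
import numpy as np

def lateral_offset(line_mask: np.ndarray) -> float:
    """Where does the detected line sit in the frame?
    Returns -1.0 (far left) .. +1.0 (far right), 0.0 if centered."""
    ys, xs = np.nonzero(line_mask)          # pixels the model labeled as "line"
    if xs.size == 0:
        return 0.0                          # no line found; a real system would warn or stop
    return (xs.mean() / line_mask.shape[1]) * 2.0 - 1.0

def guidance_loop(camera, model, audio):
    """One frame in, one audio cue out, repeated."""
    while True:
        frame = camera.capture_frame()      # smartphone camera image (placeholder API)
        mask = model.segment_line(frame)    # neural network: boolean mask of line pixels
        offset = lateral_offset(mask)       # signed left/right deviation
        audio.play_tone(pan=offset, urgency=abs(offset))  # nudge the runner back to center
```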

Burke:  For human eyes, a line is a line is a line.  We very quickly learned to identify an object in the world.

Roberts:  Computer vision works differently.

Burke:  In our use case, a line on black asphalt appears different than a line on washed-out cement.  It looks different from a line that has a shadow cast from a tree over the top, looks different from a line that has direct sunlight and is sort of blowing out the color.  So, while we only have one model, this model has to account for an infinitude of use cases and presentations of what a line looks like in the real world.

Machine learning, as an oversimplification, is like a "learn by example" paradigm.  The more varied examples you give it, the more confident it is in making correct conclusions about an example that you've never given it before.
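As a rough illustration of that "learn by example" idea, the sketch below manufactures varied training examples from a single photo of the line by randomly re-lighting it, simulating shade, washed-out cement, and direct sun. It is purely hypothetical and not the Project Guideline training pipeline.

```python
# Hypothetical data-augmentation sketch; not Google's training code.
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 float array in [0, 1]. Returns a randomly re-lit copy."""
    out = image.astype(float).copy()
    out *= rng.uniform(0.5, 1.5)        # overall brightness: deep shade vs. direct sunlight
    out += rng.uniform(-0.2, 0.2)       # lift or crush the pavement tone (washed-out cement)
    if rng.random() < 0.5:              # crude "tree shadow" over one half of the frame
        out[:, : out.shape[1] // 2] *= 0.6
    return np.clip(out, 0.0, 1.0)

# Feeding many such variations of the same line to the model is what lets it
# stay confident in lighting conditions it has never literally seen before.
```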

Roberts:  The Project Guideline team planned to train its neural network on many examples outside.  In the real world.  They were ready to hit the ground running, pardon the pun, but in March of 2020, the pandemic threw a curveball.

Panek:  So, how do you develop the technology from your bedroom?  Google produced a gaming version, almost like you’d think in a video game, to take footage from various places including Central Park and a local park next to my house, to think about, how could we give a machine learning algorithm this language, these videos, and start to develop a very robust system outside?

Roberts:  The team also trained the algorithm in virtual reality with measurable success.

Panek:  Between March and September/October, we were getting pretty good at layering videos on top of algorithms to see if the phone would continue, like a video game, to keep me on track.

Roberts:  Thomas trained on simulations during the early part of the pandemic, learning to respond to the auditory feedback by pressing right and left as the video simulation ran.  So, all winter Thomas was learning the system while the algorithm was learning how to identify yellow lines in all types of conditions.

Panek:  But then it was time to try it out.  And we had a yellow line painted in Ward Pound Ridge, which is a beautiful sanctuary about a mile from where I live.  And so, my wife hopped on a tandem bike and my oldest son came out as a safety guide and I put on the headphones and was guided to the beginning of the line, which wound a quarter mile through essentially the forest, and Google engineers stood by and said – Go!

Roberts:  We’ll come back to Thomas’ run in a minute.  But let’s keep focused on the tech.  Step three in getting Project Guideline off the ground is its feedback mechanism.  How can users know when they’re too far to the left, too far to the right?

Panek:  The auditory feedback in Guideline has several dimensions to it.  The first dimension is this sound that tells me I'm in the right location.  So, it's a very pleasant hum, I would say.  Like if someone's humming to you.  And if I stray too far to the right of the line I will get a little bit of a higher, elevated hum.  And if I'm in a danger area I'll get almost a raspberry, kind of pushing me back toward that pleasant sound.  And there is a failsafe if there is something in my path, like when we were running the virtual 5K through Central Park.  There was a car from the parks department stopped on the line, and so, the app said in a human-like voice "stop stop stop" and I knew right away I had to stop within a pace.
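Put in code, the feedback Thomas describes might look something like the sketch below: a pleasant hum while centered, a higher hum when drifting, a harsher "raspberry" in the danger zone, and a spoken stop as the failsafe. The thresholds and cue names are invented for illustration and are not the actual Guideline tuning.

```python
# Illustrative thresholds only; not the real Guideline feedback logic.
def choose_cue(offset: float, obstacle_ahead: bool) -> str:
    """offset: signed deviation from the line, scaled so 1.0 is the edge of the
    safe corridor. Returns which audio cue to play."""
    if obstacle_ahead:
        return "speech: stop stop stop"   # failsafe, as when the parks car blocked the line
    if abs(offset) < 0.3:
        return "hum: pleasant"            # on the line
    if abs(offset) < 0.7:
        return "hum: elevated"            # straying right or left
    return "raspberry"                    # danger area: push back toward the pleasant sound
```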

Roberts:  And Project Guideline uses haptics to communicate directionality.

Panek:  But not in the sense that you think about a haptic, like your phone vibrating.  It was a tapping using the microphones on the headset.  If I was going to have to turn left up ahead, it would start to tap me into the turns, so "tap tap tap tap tap tap."  Almost like tapping on a mic.  And it would tell me that I had to turn left in the left ear and then right in the right ear.  So, part of the learning curve to be able to run as fast as my legs and lungs could carry me was being able to turn to those sounds and fine-tune the sounds with the Google engineering team so that they were accurate and informative.

Roberts:  Thomas and the algorithm trained together all winter to become a symbiotic human-machine partnership.  So that when Thomas did hit the road, he could fly.

Panek:  That's the first time I've run alone in decades.  Just through the woods by myself.  I never thought we'd be here.  I didn't think we'd ever be here.  It's a feeling of freedom that I've never felt before and I hope that other people will have that feeling, too.  What we're really striving towards is complete independence, and this technology gave me that feeling, and it's emotional.  It's like you have your whole self back.

Roberts:  The Wayband by WearWorks is another navigation tool, and it was designed inclusively.

Yoo:  This is the way I see inclusive design.  It truly means everybody in the world can use it the same way.  Everybody can use it the same exact way.  It's really meant for everyone and to be used by everyone.

Roberts:  This is Kevin Yoo, Co-Founder of WearWorks and developer of Wayband.  It’s a haptic feedback band you wear on your wrist.  It connects to the navigation apps on your phone.  It could be a map or it could be something newer.  Wayband relies on partnerships for GPS data and other services.

Yoo:  We are partnering, for example for GPS data and accuracy with North.  This is a company that we just started working with, and they’re offering us 3 centimeter accuracy with GPS data.

Roberts:  They're also partnering with what3words, a company that has divided the entire globe into 3-meter-by-3-meter squares and assigned each of these a three-word address.

Yoo:  For example, “cat dog mouse” can be literally represented on the spot that I’m sitting now, and three other words like “wallet key pad” could be some other place that you’re sitting at right now.  It’s kind of like gamifying navigation.  Gamifying location.

Roberts:  The super-specific addressing that what3words affords Wayband is often more accurate than a conventional address for people navigating on foot.

Yoo:  You're able to literally say "meet me in this block."  Literally this 3-meter-by-3-meter square in the world, on a bridge, at a party, in a park.  Wherever it may be.  And they can meet you exactly there.  So, that's something that's very exciting for us.
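As a toy illustration of the idea, and emphatically not the real what3words algorithm, the sketch below snaps a latitude and longitude to a roughly 3-meter grid cell and hashes that cell into three words from a tiny made-up word list.

```python
# Toy sketch of three-word addressing; NOT the actual what3words scheme.
import hashlib

WORDS = ["cat", "dog", "mouse", "wallet", "key", "pad", "bridge", "party", "park", "line"]

def three_word_address(lat: float, lon: float) -> str:
    # ~3 m is roughly 0.000027 degrees of latitude; longitude spacing varies,
    # but a fixed grid is good enough for a toy example.
    cell = (round(lat / 0.000027), round(lon / 0.000027))
    digest = hashlib.sha256(repr(cell).encode()).digest()
    # A real system needs a far larger word list so every square's name is unique.
    return ".".join(WORDS[digest[i] % len(WORDS)] for i in range(3))

# Two people who compute the same three words are standing in the same ~3 m square.
print(three_word_address(40.7690, -73.9813))   # a spot in Central Park
```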

Roberts:  Three-centimeter accuracy for pedestrian navigation, indoor or outdoor, would be much more useful than my GPS maps.  That, and I wouldn’t have to be staring at my screen when I walk.  How did Kevin come to decide that haptic feedback was the best way for Wayband to communicate with users?

Yoo:  Haptic in general just means anything related to the sense of touch.  So, anything that your skin can interact with, and obtaining information through your skin, is considered under the umbrella of haptics.  So, we're a haptic platform company, and we're generating a standardized language through touch.  The device actually is usually worn on the side of the wrist, like so.  By doing that you get a better sense of the touch through technology.

Roberts:  Explain what people feel.

Yoo:  It’s the amplitude change with the vibration.  We tested through many different types of vibrations, but the main thing was scale.  The size of the device had to be small enough to be comfortable on the wrist as well as to have a powerful, crisp haptic sensation that people can understand immediately.

Roberts:  So that the person will program in advance where they want to go?

Yoo:  That's correct.  Our lead software architect, Jim, has 35-plus years of experience in the software world, and he's visually impaired.  His brother is fully blind.  So he joined us about two years ago and he's been developing the app from scratch in order to have assistive technology integrated into it by nature, by using voice commands and by using screen readers.

Roberts:  Explain to me, as simply as you can, how would someone know if they needed to turn right, versus turn left?

Yoo:  What we do with the orientation is, as I was describing, it doesn't vibrate when you're facing the correct direction.  This could be anywhere from 40 degrees up to 60 degrees, based on your speed.  And then, as soon as you deviate off that pathway, which is what we call a virtual corridor, it begins to vibrate very weakly, and then gets stronger.  So it's always gently pushing you toward the right corridor.  In a car, when you put on a left or right turn signal, there's a tick-tock, tick-tock sound effect as well as the light itself.  For us, we're doing that with haptics, so it kind of gives you this low vibration, stronger vibration, back and forth, exactly like a car turn signal would.  This is just to tell you that a right turn or a left turn is approaching, about ten feet away.
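A rough sketch of that virtual corridor, with invented numbers: silence inside a forward cone of roughly 40 to 60 degrees, then vibration that ramps up the further you turn away from it. Whether the cone widens or narrows with speed is an assumption here, not WearWorks' actual tuning.

```python
# Illustrative numbers only; not WearWorks' actual tuning.
def vibration_strength(heading_error_deg: float, speed_mps: float) -> float:
    """Return vibration intensity from 0.0 (silent) to 1.0 (strongest)."""
    # Assume the silent cone narrows from ~60 degrees at a standstill
    # to ~40 degrees at a brisk walk (an assumption, not a spec).
    half_cone = max(20.0, 30.0 - 5.0 * speed_mps)
    error = abs(heading_error_deg)
    if error <= half_cone:
        return 0.0                                    # facing down the corridor: stay silent
    # Ramp from the cone edge up to full strength at 90 degrees off course.
    return min(1.0, (error - half_cone) / (90.0 - half_cone))
```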

Roberts:  And what about accuracy?

Yoo:  It's a compass, a vibration compass, and literally as you rotate, the accuracy is very precise.  We can literally guide you down the line of a curvy road by creating this Pac-Man-like effect.  So, with what we call dew points, as soon as you collect a dew point, it will guide you to the next one.
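The Pac-Man-like effect Kevin mentions can be sketched as a chain of points along the route: once you get within a few meters of the current one, it is "collected" and the band re-aims you at the next. The 3-meter radius and helper names below are illustrative assumptions.

```python
# Hypothetical waypoint-chaining sketch; distances are rough and illustrative.
import math

def distance_m(p, q):
    """Approximate distance in meters between two (lat, lon) points."""
    (lat1, lon1), (lat2, lon2) = p, q
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371000.0 * math.hypot(x, y)

def next_target(position, route, collected_radius_m=3.0):
    """Drop points the walker has already reached; return the next one to aim at."""
    while route and distance_m(position, route[0]) < collected_radius_m:
        route.pop(0)                      # "collected," Pac-Man style
    return route[0] if route else None    # None means the route is finished
```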

Roberts:  How do you develop a language from scratch?

Yoo:  What we understand through skin is more of a gestured sense.  If somebody taps you on the shoulder, if somebody slaps you in the face, or if you touch a hot stove, there's an immediate reaction our body goes through.  So, we wanted to generate that and convert it into more of a language base, what we call haptic gestures.

Roberts:  So, how did Kevin decide that understanding haptics would be the central challenge in developing Wayband?

Yoo:  When you go to a new place, your screen-based smartphone use goes up 80%.  To eliminate that portion, and to be completely hands-free and get to places without having to take your phone out once, that's where we started to understand the value of haptic language.  To develop this further, we created a haptic-focused engineering team that would literally just run code and feel sensations, to output information that could be understood very easily.

Roberts:  WearWorks beta testers feel.  They feel the pulses and vibrations that Kevin's team designs and then report back their emotional and conceptual reactions.  And it's through this feedback that WearWorks defines terms and builds strings of gestures to communicate more complex ideas and, hopefully, a haptic language that is intuitive and universal.  It's sensational!

Advances in all kinds of telecommunications, but especially digital cameras and LIDAR sensors, have allowed us to gather much more visual data.  Better internet connectivity and cloud computing make that data readily available for analysis.  So, not only can we train machines to see, we can train them to see better and better.  All of these advances in tech have led to wayfinding tools like Project Guideline and Wayband.  Such wearables will become, if they're not already, indispensable for all of us, sighted and blind alike.  Many of us interpret haptics every day with our smartwatches.  We are learning a new language of pulses and rattles and hums.

Thomas Panek learned to interpret the auditory feedback system that Ryan and his team invented before he sprinted through the woods unassisted.  Kevin Yoo is not just making a haptic wristband for wayfinding, he’s perfecting a new kind of language that will happen on our skin.  Our brains are capable of amazing things, as is our technology.  And every day we work in better harmony with our machines.  And this is just the beginning.  That is to say, as our machines get more intelligent, so too must we.

I’m Dr. Cal Roberts and this is On Tech & Vision.  Thanks for listening.

Did this episode spark ideas for you?  Let us know at podcasts@lighthouseguild.org.  And if you liked this episode please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts.

I’m Dr. Cal Roberts.  On Tech & Vision is produced by Lighthouse Guild.  For more information visit www.lighthouseguild.org.  On Tech & Vision with Dr. Cal Roberts is produced at Lighthouse Guild by my colleagues Jaine Schmidt and Annemarie O’Hearn.  My thanks to Podfly for their production support.
