On Tech & Vision Podcast

Smart Cities and Autonomous Driving: How Technology is Providing Greater Freedom of Movement for People with Vision Loss

Navigating the world can be difficult for anyone, whether or not they have vision loss. Tasks like driving safely through a city, navigating a busy airport, or finding the right bus stop all provide unique challenges. Thankfully, advances in technology are giving people more freedom of movement than ever before, allowing them to get where they want, when they want, safely. Smart Cities are putting data collection to work in a healthy way by providing information to make busy intersections more secure, sidewalks more accessible, and navigation more accurate. They’re providing assistance for all aspects of travel, from front door to the so-called “last hundred feet,” while using automated technology to make life easier every step of the way. And although fully autonomous vehicles are still on the horizon, the technology being used to develop them is being applied to improve other aspects of life in incredible ways. These applications are making the world more accessible, safer, and better for everyone, including people who are blind or visually impaired. One example of this is Dan Parker, the “World’s Fastest Blind Man” who has developed sophisticated guidance systems for his racing vehicles, as well as a semi-autonomous bicycle that could give people with vision loss a new way to navigate the world safely and independently.

Podcast Transcription

Parker:  Hello. I’m Dan Parker from Columbus, Georgia.  I’m the World’s Fastest Blind Man at 211.043 mph.

Roberts:  Dan is a professional race car driver and machinist. At only eight years old he started racing motorcycles and by the time he became an adult, Dan was racing some of the fastest cars on earth. 

Parker: When I was 27 years old I was given the opportunity to drive what’s called a pro modified car. So, those are the fastest cars on planet Earth that still have a working left side door. Zero to 200 mph in four seconds.

Then on March 31st, 2012, we were testing a brand new motor in Steele, AL, and on the first full pass the car made a very violent, hard turn at the eighth-mile mark, from the left lane all the way into the right lane, and I impacted the right wall almost head on. They got me to the hospital, and after several hours trying to get me stable, they came out and told the team that I was in very bad shape. You know, they would assess it day by day.

So, I was in an induced coma for two weeks. They brought me out of the coma and it was weird. My fiancée, Jennifer, and my sister would notice that if I was lying in the hospital bed and they walked up beside me, I didn’t realize they were there, but as soon as they spoke it would startle me, you know, and they knew something was wrong. My pupils weren’t constricting. And they called the doctor in the next day and the ophthalmologist came in and he put my head in the machine and he says, what do you see? I said, I don’t see anything. I just feel the heat off the lights.

And he looked at my eyes and he said, your brain has swollen and compressed your optic nerve, and now you’re 100% blind for life. In that moment, you know, my gut just sank.

Roberts: Dan’s vision loss was hard for him to handle, but he refused to let it stop him from racing. 

Parker: One night I went to bed thinking about my late brother who passed away in ’09. He always loved the Bonneville Salt Flats. So, the Bonneville Salt Flat is a dried salt lake bed 120 miles West of Salt Lake City, UT. So, I guess thousands of years ago it was a lake, and for whatever reason, it evaporated and all the salt settled to the lake bottom. It is perfectly smooth and is approximately 10 miles wide and 20 miles long. 

People started setting land speed records there in 1914, and he told me just the coolest story about four guys from France who built a 50cc motorcycle and decided they could take it apart and put it in their luggage. They flew to the States, rented a car, drove to Bonneville, put the bike back together, and they each got a record. Just the coolest story.

And I was thinking about my mom, who passed away just six months before my wreck. And so, I went to bed one night thinking about them, and I woke up with the most vivid dream that I would build a motorcycle and be the first blind man to race at Bonneville. And I never went back to sleep that night. That was the turning point of my blindness, because at that point I had a purpose, I had something to work for, something to push myself for. And I started reaching out to other racing friends and getting directions and making connections.

Roberts: Many racers who are blind or visually impaired have another person in a follow vehicle giving them directions through a radio. But Dan had another idea. 

Parker: Well, early on in the design of the motorcycle, I reached out to a good friend of mine, Boeing engineer Patrick Johnson. I just told him, I need a guidance system or something to give me audible feedback to tell me how I can stay on course, you know, what corrections I need to make steering. And Patrick’s exact words were, “Oh, that’s easy. Start building the motorcycle. I got your back.”

And so, the guidance system plots the center of the course through GPS. If I go one foot right or left, I get a tone in that ear. The further off center I go, the more the tone increases. And that’s what I hear. So, the tone is constant, but maybe just in my right or left ear. I know I’m going parallel, I’m going straight. I can start trying to make it back in on center, but I’m not in danger, I’m not zigzagging. So, it’s a lot of practice. It takes a lot of nerve. And a lot of trust. You know, not only am I trusting a computer with everything I rely on to go straight, I’m trusting Patrick with my life, that the engineering he has put into this amazing guidance system works perfectly.
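A rough way to picture the kind of feedback Parker describes is mapping cross-track error, how far the rider has drifted from the plotted centerline, to a tone in the left or right ear. The sketch below is only illustrative and is not Patrick Johnson’s actual system; the thresholds, coordinate frame, and tone interface are assumptions.

```python
import math

def cross_track_error_m(pos, line_start, line_end):
    """Signed distance (meters) from point `pos` to the course centerline.
    Positive = right of the line, negative = left. Inputs are (x, y) in a
    local flat-earth frame already converted from GPS lat/lon."""
    (x, y), (x1, y1), (x2, y2) = pos, line_start, line_end
    dx, dy = x2 - x1, y2 - y1
    # The 2D cross product gives the signed perpendicular distance.
    return ((x - x1) * dy - (y - y1) * dx) / math.hypot(dx, dy)

def tone_feedback(error_m, full_scale_m=3.0):
    """Map cross-track error to (ear, intensity 0..1): the farther off
    center, the stronger the tone in that ear."""
    ear = "right" if error_m > 0 else "left"
    intensity = min(abs(error_m) / full_scale_m, 1.0)
    return ear, intensity

# Example: about one foot (~0.3 m) right of center -> a faint tone in the right ear.
print(tone_feedback(0.3))   # ('right', 0.1)
```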

Roughly 10 months after we designed the motorcycle, with the help of the National Federation of the Blind, in 2013 I became the first blind man to race Bonneville, with an average speed of 55.331 mph. I returned in 2014 and set my official FIM class record, with no exemptions for blindness, at 62.05 mph. And because of the way I was able to race and get a record with no exemptions for blindness, still today I’m the only blind land speed racer in the world who races with no human assistance.

Roberts: I’m Dr. Cal Roberts and this is On Tech & Vision. Today’s big idea is freedom of movement, a holistic approach that considers safety, efficiency, equity, and accessibility. Everyone should feel safe and secure wherever they go, whether that’s through a city, down the street, or even just walking around the house. And for people who are blind or visually impaired, the use of technology to achieve this feeling of security is especially important.

To that end, there are some incredibly talented researchers working on how to harness technology to enhance our ability to navigate the world with confidence. One of those people is Greg McGuire, the managing director of Mcity, an organization based out of the University of Michigan. Mcity is a partnership of public and private entities that conducts research on the future of mobility, transportation, and accessibility. That research coalesces into the concept of a smart city. I asked Greg to tell me more about it.

What does that term mean? Smart city? 

McGuire: Oh, I don’t know. I’ve begun thinking about agile cities or other terms, because I think “smart” has been diluted so much. We all think about, all right, we’re collecting data, but what does that mean? What are you doing with it? How are you processing it? How are you keeping the privacy expectations of your residents? How are you taking that into consideration? But the way I think of smart cities, really, is the use of technology and data to improve the efficiency and the safety of a community for all of its residents.

Roberts: So, transportation is a critical factor for people who are blind and visually impaired. It is fundamental to their ability to navigate their city, their street. Tell us about what you’re thinking about in terms of opportunities in new technology for people who are blind and visually impaired. 

McGuire: I think one of the key pillars of Mcity is accessibility. For us, the four areas that we think about are safety, efficiency, equity, and accessibility. And I think one of the great things about accessibility is that the more broadly we can make transportation systems available to as many of us as possible, the more everyone benefits: if we’re targeting groups with low vision, or we’re targeting people who use wheelchairs, you know, we’re also targeting moms and dads who are pushing strollers. There’s a real value in focusing on making our systems as broadly useful as possible.

I think of the assistive devices of 20 years ago. We used to have kind of one-off devices that were $20,000 to $30,000, right, that performed specific tasks, helping magnify, you know, for reading, things like that. And today those systems are available in smartphones. The same thing is happening in transportation. Smartphones can give you directions quite accurately. You can have apps that can pick you up and take you places. Those are the kinds of improvements in systems that we like to look at here.

There’s a whole variety, from intersection awareness of humans in cities and the sensing technologies there, to automated features in driving vehicles, to something I call micro directions: the ability for, I think, the next kind of Google or Apple Maps to route you down sidewalks that they know can accommodate your assistive devices, if you’re using a wheelchair for example. And so, these are all the kinds of things that we like to look at here.
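One way to picture the “micro directions” McGuire describes is routing over a sidewalk graph whose edges carry accessibility attributes, so the planner only uses segments that can accommodate a given assistive device. The following is a minimal sketch under that assumption; the graph, attribute names, and widths are invented for illustration.

```python
import heapq

# Hypothetical sidewalk graph: edge = (neighbor, length_m, curb_ramp, min_width_m)
SIDEWALKS = {
    "home":     [("corner_a", 80, True, 1.5), ("corner_b", 60, False, 0.9)],
    "corner_a": [("bus_stop", 120, True, 1.8)],
    "corner_b": [("bus_stop", 70, True, 1.2)],
    "bus_stop": [],
}

def accessible_route(graph, start, goal, min_width_m=1.2):
    """Dijkstra over only those sidewalk segments that have a curb ramp
    and are at least `min_width_m` wide."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        dist, node, path = heapq.heappop(frontier)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, length, curb_ramp, width in graph.get(node, []):
            if curb_ramp and width >= min_width_m:
                heapq.heappush(frontier, (dist + length, nbr, path + [nbr]))
    return None

print(accessible_route(SIDEWALKS, "home", "bus_stop"))
# (200, ['home', 'corner_a', 'bus_stop']) -- the shorter route via corner_b
# is skipped because that segment has no curb ramp and is too narrow.
```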

Roberts: So, one of the technologies that we believe has a real role for people who are blind and visually impaired is a technology called OrCam. OrCam comes out of Israel where they were working on autonomous driving cars. So, what does an autonomous or self-driving car need to know? What’s in front of me? What’s behind me? What’s to my right? What’s to my left? Is it a dog? Is it a tree, is it a stop sign? Is there a pothole in front of me? Did the light turn green, right? 

And the founder of this company realized that’s the same thing a person who is blind needs to know on the street, and therefore created OrCam to take much of the technology and software they developed building autonomous driving cars and repurpose it into a device to help people who are blind or visually impaired.

As you think of technology, and I love this one you were just talking about, detection at the crosswalks, the beauty of technology is when technology that was designed for everyone can be repurposed for people who are blind and visually impaired, and it’s even more beautiful when technology that was originally designed for people who are blind and visually impaired becomes mainstream. Tell us about that and what you think about in that frame.

McGuire: Yeah, so my favorite applications of technology are when they can be adopted in ways in which you don’t need to opt into their use. I don’t need to own an iPhone, for example, to be safe as I walk around my city. These are systems that are built, and implemented, in ways that make them available to everyone. So, to go to your automated driving example, by the way, I think that’s a fantastic application and not a trivial one, right?

Teaching machines how to recognize what they’re looking at in the world around them is right at the frontier of computer science and engineering. We’ve gotten pretty good at it with AI in the last decade or so, but humans, we still have the edge. We’ve got billions of years of evolution, and so it’s actually a really hard thing for a machine to make assumptions about our world. We can train systems to recognize a deer very well. And then does it transfer over to a horse? Maybe not. Or it fails in other mysterious ways.

So, you know, we’ve got a lot left to do there. But we can take that technology, which is already quite good, and apply it, say, to intersections. There are three hundred-some thousand intersections in the US that have traffic lights, and about half of the roadway fatalities in the US every year occur at those intersections, by the way, whether you’re in a car or walking or biking. So, let’s take the same technology, the ability to take sensors, put them up at intersections, understand potentially dangerous situations that are occurring, and take action to avoid, you know, a crash.

Perhaps that’s alerting vehicles, alerting the driver of that vehicle, whether it’s a human or a machine, that it’s on an intersecting trajectory with another human. But it also might be really just, I hate to say mundane, but more basic than that. We can use that information to look at situations of near collision. For every crash that we see at an intersection in the US, there are a hundred near misses. So, if cities are trying to make their intersections safer for us, we can give them that near-miss information. They can use that right away. Maybe they want to re-stripe a crosswalk or put in an assistive device that has audible crossing alerts.

Does that help improve the safety of that intersection? Well, we don’t have to wait until crashes happen; we can look at driving behavior, human behavior, at those intersections right away. So, we can get fancy with the applications, but I think there are also some really interesting immediate use cases for that kind of intelligence right now.
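The near-miss counting McGuire describes is often done with what traffic engineers call surrogate safety measures, such as post-encroachment time: how much time separates two road users passing through the same spot. The toy sketch below assumes the intersection sensors already produce timestamped positions per track; the threshold, conflict radius, and sample data are illustrative only.

```python
# Hypothetical tracks: list of (t_seconds, x_m, y_m) samples per road user,
# as an intersection sensor might report them.
def post_encroachment_time(track_a, track_b, conflict_radius_m=1.5):
    """Smallest time gap between the two tracks occupying the same small
    conflict area. A small value (e.g. under 2 s) gets logged as a near miss."""
    best = float("inf")
    for ta, xa, ya in track_a:
        for tb, xb, yb in track_b:
            if (xa - xb) ** 2 + (ya - yb) ** 2 <= conflict_radius_m ** 2:
                best = min(best, abs(ta - tb))
    return best

car = [(0.0, 0, 20), (1.0, 0, 10), (2.0, 0, 0), (3.0, 0, -10)]
pedestrian = [(0.0, -5, 0), (2.5, 0, 0), (5.0, 5, 0)]

pet = post_encroachment_time(car, pedestrian)
print(pet, "near miss" if pet < 2.0 else "ok")   # 0.5 near miss
```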

Roberts: We see a lot of technology now that uses haptics: a wearable that will give you a little buzz to let you know what you have to do, or when to turn, that type of thing, as you’re navigating your smart city. How do you see technology communicating with its users?

McGuire: I think one of the interesting challenges of technology communicating with its users right now is actually around consistency. There’s a whole branch of human factors research around those kinds of interfaces, and we actually end up funding quite a bit of research in it here at Mcity.

One of the situations that we’ve looked at is how you actually communicate with the human when you’re doing something that requires their vigilance but is relatively mundane, and driving is a great example of that. So, say you’re going to take over some of that driving task, but then you suddenly need the human back to make a decision. As soon as that system has been operating under its own control for a little while, we check out as humans, you know, we’re doing something else.

Really, the communication is more about the system being able to monitor us and know that we’re still engaged. Eye tracking, gestures, head position, those kinds of things. So there’s communication to the actual human, but there’s also monitoring the human, seeing what we’re doing, and making sure that the system understands our level of engagement with it.

I think that’s just as important, especially as we start to talk about automated features in driving, for example, where you’re moving a vehicle with high kinetic energy around a city.

Roberts: So, Uber and Lyft and technologies like that have been a huge boon for the visually impaired community, because they can get a car and it can come to them. They don’t have to go out in the street and hail a taxi that they can’t see. So, it’s been great. So now people say to me, OK, I love Uber, but how about just a self-driving car for me? From what I hear you say, I don’t think we’re that close to putting people who are blind in a self-driving car.

McGuire: I think you are correct in the short term. Having self-driving robo-taxis running around all over America, I think, remains a future. A potential future, I’ll say. But I do think that these technologies we’ve developed to try and have vehicles handle the driving task will have lots of other applications, even including in automation and driving. If you look at the auto industry now, they’re shipping systems that can handle most of the driving task at low speeds in cities, and they’re getting into situations where they’re willing to take on the liability, in fact, of that automation. That means the driver doesn’t need to be fully engaged all the time, so they are getting there.

What I’m really interested in for automation features is actually the way in which the vehicle can provide safety to the road users around it. So, in pursuit of developing these things you mentioned earlier (is it a tree, is it a box in the road, is it a child?), we can do things now like automatically put on the brakes or alert the human driver. Those things are going to get more sophisticated. Much, much more sophisticated. And the value there is for all of the rest of the people using our streets and our roads; it will add to the safety for all of us. So, I’m optimistic about the automated technologies.

Roberts: Truly automated driving would be a game changer for people who are blind or visually impaired. But as Greg said, we haven’t yet achieved fully self-driving vehicles. I wanted to learn more about where we are in the process. I talked to John Dolan, the principal systems scientist at Carnegie Mellon’s Autonomous Driving Vehicle Research Center.

What I found fascinating as I was reading about your research is that you talk about three themes: multimodal 3D scene understanding, naturalistic behavior modeling, and scalable end-to-end learning. Explain those three to me, and tell me how this all applies to what you’re trying to accomplish.

Dolan: Sure. So, let’s take them one at a time. Again, if you give me the first one again, I’ll start with that one I guess. 

Roberts: OK, multimodal 3D scene understanding.

Dolan: Right. So, what that refers to is you’ve got multiple sensors. So those are the multiple modes. You’ve got vision, and LIDAR or laser sensors, and radar, although the ones that are probably most commonly merged together would be the first two, the vision and the LIDAR. So, there’s a whole area of research devoted to fusing those sensors together. With the laser sensor, you get very good shape and range data, but you don’t get any of the color or texture in the scene that we get with our eyes.

On the other hand, the camera, which is much cheaper than the laser sensor, does give you a lot of those things I just mentioned, but it doesn’t give you range information unless you have a stereo camera pair, which is difficult to calibrate and more expensive. So, putting these two together, you could imagine maybe using cheaper LIDAR sensors and adding the vision information in order to sort of color the LIDAR scene, and that’s one of the challenges that researchers are currently tackling.
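The fusion Dolan mentions, using the camera to “color” the LIDAR scene, typically starts by projecting each 3D LIDAR point into the image using the camera’s calibration and then sampling the pixel underneath it. Below is a bare-bones NumPy sketch of that idea; the calibration matrices and image are stand-ins, not values from any real sensor.

```python
import numpy as np

def color_lidar_points(points_xyz, image_rgb, K, T_cam_from_lidar):
    """Attach an RGB color to each LIDAR point by projecting it into the
    camera image. `K` is the 3x3 intrinsic matrix; `T_cam_from_lidar` is the
    4x4 extrinsic transform from the LIDAR frame to the camera frame."""
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])       # homogeneous coords
    cam = (T_cam_from_lidar @ pts_h.T)[:3]                  # 3 x N, camera frame
    in_front = cam[2] > 0.1                                 # drop points behind the camera
    uv = (K @ cam)[:, in_front]
    uv = (uv[:2] / uv[2]).round().astype(int)               # pixel coordinates (u, v)
    h, w, _ = image_rgb.shape
    valid = (0 <= uv[0]) & (uv[0] < w) & (0 <= uv[1]) & (uv[1] < h)
    colors = np.zeros((n, 3), dtype=image_rgb.dtype)
    idx = np.flatnonzero(in_front)[valid]
    colors[idx] = image_rgb[uv[1, valid], uv[0, valid]]
    return colors

# Tiny synthetic example: identity extrinsics, a 640x480 gray image, and one
# LIDAR point 10 m straight ahead of the camera.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T = np.eye(4)
img = np.full((480, 640, 3), 128, dtype=np.uint8)
print(color_lidar_points(np.array([[0.0, 0.0, 10.0]]), img, K, T)[0])  # [128 128 128]
```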

So, you have all this information, and you want to be able to interpret it in a way similar to the way humans do. We look out on the scene, we look at our living room, we see that there are chairs and tables, we understand their functions. We know that we don’t want to run into them, but we do want to walk in the walkable space. Similarly, a car wants to drive in the drivable space. So that’s the basic idea behind that. How about the second one?

Roberts: So, the second one was naturalistic behavior modeling. 

Dolan: Right. So, I guess the key way in which this applies to autonomous driving is that we’re going to be in a period, for quite a long time, during which there are going to be both human-driven and autonomous cars on the road; we’re not going to have 100% autonomy overnight. So, we need to understand how those other human-driven vehicles are going to respond to us, and how we should act so that we’re not surprising people with our actions. We don’t want to be too robotic; I guess we use that term as humans to describe something that follows rules without any flexibility and doesn’t take into account the kinds of things that we take into account as humans. So, that’s part of it.

I’ll just give you one or two simple examples. One thing that we’ve been working on for some time in my lab is behaviors at ramp merge points on the highway. If we’re going to get to the merge point at the same time as another car, we have to do something. We have to speed up, slow down, maybe make some kind of gesture. It turns out that’s a fairly tricky thing for robots to do. It seems very easy to us as humans, but if we use a standard radar-based adaptive cruise control, that’s not going to work safely in those situations. So, we’ve been working on that for a while.
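A crude way to see why plain radar-based adaptive cruise control falls short at a ramp merge is that the decision depends on who reaches the merge point first, not just on the gap to the car directly ahead. The toy decision sketch below is not Dolan’s method; the states, margins, and labels are invented for illustration.

```python
def merge_decision(ego_dist_m, ego_speed, other_dist_m, other_speed,
                   safety_gap_s=1.5):
    """Compare estimated arrival times at the merge point and decide whether
    to proceed, yield, or adjust speed. A real planner would also check
    acceleration limits and re-run this decision every control cycle."""
    t_ego = ego_dist_m / max(ego_speed, 0.1)
    t_other = other_dist_m / max(other_speed, 0.1)
    if t_ego + safety_gap_s < t_other:
        return "proceed"        # we clear the merge point well before they arrive
    if t_other + safety_gap_s < t_ego:
        return "yield"          # they clear it first; fall in behind
    return "adjust_speed"       # arrival times too close: open up a gap

print(merge_decision(ego_dist_m=80, ego_speed=20, other_dist_m=90, other_speed=25))
# 'adjust_speed' -- both vehicles would arrive within about half a second of each other
```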

And then intersections are another area where it’s pretty difficult to accurately and safely convey intent, and we have certain intersections that are not perfectly governed by the rules. So, for example, if we make a left turn without a green left arrow, there’s some subjectivity. Should we go or not? Is that oncoming car too close? And so those are the kinds of things we’re trying to deal with in our human behavior models.

And just as a technical note, there’s really no single model right now that is regarded as being sufficient to describe the way that humans drive. We do have a pretty good model that the Germans did based on Autobahn data back around 2000, which describes driving on the highway in a single lane, but changing lanes, intersection behavior, all of these other things, they’re pretty difficult to model. We don’t really have a single good model that everybody uses. 
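The single-lane highway model Dolan alludes to is very likely the Intelligent Driver Model (Treiber, Hennecke, and Helbing, 2000), which was calibrated against German Autobahn data. Its standard form is short enough to write out; the parameter values below are typical illustrative choices, not values from the original paper or Dolan’s lab.

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=33.0,    # desired speed (m/s), roughly 120 km/h
                     T=1.5,      # desired time headway (s)
                     a=1.0,      # maximum acceleration (m/s^2)
                     b=1.5,      # comfortable deceleration (m/s^2)
                     s0=2.0,     # minimum standstill gap (m)
                     delta=4):
    """Intelligent Driver Model: acceleration of a following car given its
    speed v, the lead car's speed v_lead, and the bumper-to-bumper gap."""
    dv = v - v_lead                                           # closing speed
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# Following at 30 m/s, 40 m behind a car also doing 30 m/s: the gap is a bit
# short for this headway, so the model brakes gently (about -1.1 m/s^2).
print(round(idm_acceleration(v=30.0, v_lead=30.0, gap=40.0), 2))
```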

Parker: Hello, I’m Dan Parker, the fastest blind man in the world, and you’re listening to the On Tech & Vision podcast. I encourage everyone to listen to the On Tech & Vision podcast because it shares the everyday changes in the world of technology that will help our lives with blindness.

Roberts: Regardless of how advanced self-driving tech gets, a person will still need to interact with the machine. This is one of John’s areas of expertise. He calls it the Human Robot Interface. 

So, when we talk about the human robot interface, which is something that I know you are quite the expert on, describe that interface. Tell me what that means.

Dolan: Well, in general for robotics, not just autonomous cars, it has to do with, on the one hand, humans appropriately conveying information to robots to let them know what they want to do, and it also includes the idea of robots and humans interacting safely within a common workspace. So for example, one of my colleagues is working on having humans and industrial robots share the same workspace and coexist without the robots hurting the humans. Of course, there’s also some aspect of the robot giving information back to the human in an intelligible and timely way.

Specifically for autonomous driving, it involves this question, particularly in a situation like the one we’re currently in, where the vehicle can’t be 100% autonomous all the time: how does the vehicle let the human know when it’s time for the human to intervene? And then how does the human smoothly hand autonomy back to the vehicle? And that’s a challenging problem, which I would say has not been perfectly solved.

Roberts: So of course, at Lighthouse Guild we’re so interested in how technology can help people who are blind and visually impaired. And so the question that comes up is, when you’re talking about how a human helps the robot, how much of that relies on the human’s vision?

Dolan: Well, certainly in the autonomous driving case, vision would be very important for a human being to size up a situation that the robot is saying is dangerous. I think a person with impaired vision, or with loss of vision, could still at least respond to the robot’s, let’s say, emergency call by instructing the robot to come to a safe stop, which is one of the things we always try to include in our robot planner, and then perhaps ask for additional help from outside the vehicle once the vehicle has stopped.

I would think that visually impaired people probably would not want to start the vehicle again in that case, because the robot is saying, hey, I’ve got a difficult situation that I can’t handle.

Roberts: Even though the fully autonomous technology John describes is still developing, there are practical applications when it comes to transportation for people who are blind or visually impaired. In fact, when Dan Parker set his land speed record in a custom-built Corvette, he used a sophisticated guidance system based on some of the technology John described. 

Parker: So, we had to do some major upgrades to the guidance system from the motorcycle. The motorcycle guidance system was solely GPS based. Well, at 200 miles an hour you’re going a football field per second. So, we upgraded our computer so it now recalculates a little over 100 times per second, and we had to redesign everything from scratch.

It has a gyro in the car that can measure one-tenth of one degree of yaw in any direction. So, the computer is taking samples from the GPS sensors in the car and the gyro, calculating 100 times per second, and giving me constant feedback. And at 300 feet before the finish line, the computer calls out “parachute,” and I have a paddle shifter on my pinky on my left side; I pull that paddle and that deploys the parachute. The guidance system in the Corvette is very, very sophisticated.
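A common way to combine the two sensors Parker mentions is a complementary filter: integrate the fast gyro yaw rate between updates, then gently pull the estimate back toward the slower, drift-free GPS heading each cycle. The stripped-down sketch below runs at the 100 Hz rate he describes, but the blend factor and sample data are assumptions, not details of his actual system.

```python
def fuse_heading(gyro_rates_dps, gps_headings_deg, dt=0.01, alpha=0.98):
    """Complementary filter: heading ~= alpha * (gyro-integrated heading)
    + (1 - alpha) * GPS heading, updated at 100 Hz (dt = 0.01 s)."""
    heading = gps_headings_deg[0]
    estimates = []
    for rate, gps in zip(gyro_rates_dps, gps_headings_deg):
        predicted = heading + rate * dt                    # fast, but drifts over time
        heading = alpha * predicted + (1 - alpha) * gps    # slow, drift-free correction
        estimates.append(heading)
    return estimates

# Car holding a 90-degree heading while the gyro reads a small constant bias.
est = fuse_heading(gyro_rates_dps=[0.5] * 500, gps_headings_deg=[90.0] * 500)
print(round(est[-1], 2))   # stays close to 90 despite the gyro bias
```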

Roberts: As Dan has proved, there’s incredible technology being designed to make transportation easier and more accessible for people who are blind or visually impaired. So, give us some examples of great technology that you think has an application in the smart city.

McGuire: I’ve been interested lately in this ultra-wideband technology. Now, one of our industry partners, Verizon, uses UWB in their 5G branding, but I’m talking about the ultra-wideband that’s used to find your keys. Or, you know, you put these little AirTags on your luggage and it leads you right to them. UWB is a very interesting spread-spectrum RF technology. It’s baked into smartphones now, and it has the ability to resolve distances down to under a foot. So, I’m really interested in the application of that technology for the so-called last 100 feet of mobility.

If you are boarding a bus and you’re blind or low vision, where’s the door? Because the bus doesn’t always pull up to the same place every time. Is it safe to walk down this sidewalk? Which entrance do you use to get into and out of the restaurant? Those are things that UWB, plus the various existing mapping technologies, can handle quite nicely, and I’m really interested in that as a way to navigate. If you’ve ever used GPS in a city, as soon as you get around tall buildings it might have you on the wrong side of the street. It depends on the satellites you can see, and when there are fifty-story buildings around you, that’s hard. UWB is a more localized technology. It’s beacon-based and direct, kind of like Bluetooth but a little longer range. So, I’m excited by that.
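With ranges to a few fixed UWB anchors, say mounted around a bus shelter or a doorway, a phone can solve for its own position by trilateration, which is what makes the sub-foot accuracy McGuire mentions useful for that “last hundred feet.” The least-squares sketch below is only an illustration; the anchor layout and measurements are made up.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2D position from distances to known UWB anchors.
    Linearizes by subtracting the first anchor's range equation."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three anchors around a bus stop; the rider is actually standing at (2, 1).
anchors = [(0, 0), (6, 0), (0, 6)]
true_pos = np.array([2.0, 1.0])
ranges = [np.linalg.norm(true_pos - a) for a in anchors]
print(trilaterate(anchors, ranges).round(2))   # [2. 1.]
```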

When those technologies commoditize and phone manufacturers are putting those systems into their phones, you have an immediate, broad audience of consumers who can take advantage of them, and so now it’s up to us to bring the infrastructure up to match.

Roberts: So, one of the outcomes of the pandemic is that most of us rely on delivery much more than we did before. And so, I remember going back years ago when FedEx was looking at how they could deliver packages, whether by drones or with parachutes or things like that. But now we’re all concerned about how our food is going to be delivered and how fast it’s going to get here from restaurants and such. How do you look at the delivery issue?

McGuire: I’m excited about the delivery issue, to be honest with you, because I think it’s got the most opportunity for improving our cities and our lives in the near term. We’ve got a company spun out of the University of Michigan here called Refraction AI that’s doing little robots that ride down the side of the street at low speeds, and they’ll deliver food to you from restaurants. They’ll deliver medicine, they’ll deliver milk from the grocery store. But they are battery powered and they’re small. You don’t need a giant aluminum panel van to deliver everything.

Roberts: I asked John Dolan the same question about technological concepts that can be applied to real-world situations. So, how do we make this crossover? How do we take technology that was designed for one purpose, say an autonomous driving car, and make it applicable to people who are blind and visually impaired?

Dolan: Yeah, that’s a great question. One of my colleagues worked on a project that I also had some exposure to through the Master’s program we direct: a smart suitcase for visually impaired people that they could take with them, and that contained some of the technologies we’re talking about. So, they go to the airport, for example; that was one of the example use cases in this project. And the suitcase would have in it, in some ways similar to a seeing eye dog, a camera and some other sensors that allow people to navigate through spaces, sometimes crowded spaces with other pedestrians, and it had both a speech recognition capability and a speech generation capability in order to be able to interact with the user and let him or her know what was going on.

So, I think that’s a good example of it. Of course, there are some issues. I mean, on the one hand, because you’re not traveling at high speeds, you probably don’t need capabilities that are as high, but you can’t pack as much into a suitcase-sized object as you can into a vehicle, nor do you have as much ability to provide power and things like that. So, those are the kinds of tradeoffs you have to work with as an engineer.

Roberts: So, the technology that we use to identify objects for the autonomous driving car therefore gets used for navigation, you said, in an airport. How about just navigating in your home, to know where the chairs are, where the table is, and how to find the right hallway to get to another room?

Dolan: Sure, absolutely. There’s no reason why it couldn’t be done. There are some tricks, I think, or at least some considerations that would arise in terms of where to mount it or how to place it on a human being. You could imagine putting such a sensor on a Roomba, a robot vacuum cleaner, which tends to maintain a sort of level orientation. If you just put it on a human being, the human being has so many different degrees of freedom that it might be sort of rocking around in the environment and not have a really good frame of reference. You have to think about how that would work out. But in theory, yeah, it would be applicable for exactly what you said.

Roberts: So, look to the future for me. What can I expect five years from now and 10 years from now? 

Dolan: Well, I think for autonomous driving, we have to be honest and say that we’re in a bit of a winter right now, which I think is partly caused by the general technological downturn, or tech sector downturn. So there are some headwinds that the companies are facing because of that, and then also because the safety problem has not been fully solved and those companies are not bringing revenues in.

Now, my belief about that is that we will weather it. I believe the technology is on its way in one form or another. 

Roberts: He’s right. An alternative form of autonomous transportation is being designed right now by Dan Parker and his business partner Patrick Johnson. And it’s not something that will come in the far-off future. 

Parker: Our next goal is a semi-autonomous bicycle. A tadpole bike is a very low-sitting three-wheel bike, two wheels in the front, one in the back. It’s very sturdy. So, we want to build a semi-autonomous tadpole bicycle that has obstacle avoidance and navigation, with the person pedaling to, you know, get from point A to point B. That’ll provide exercise, which everybody needs, but especially us blind people.

It would increase socialization. And then you could, you know, go shopping. You could get to a job, go out to eat, whatever you want to do. We have everything designed, how to do it. It’ll be three levels. So, the first level of the guidance system will have two bicycles. Let’s say my fiancée, Jennifer, wants to ride the lead bike; it will have a computer on it gathering GPS data, and it will drop an electronic breadcrumb trail. And my bike, which follows it, handles the steering.

The second level is, I live on a typical city block. Say one day I want to pedal around the block for exercise. I can select the city block, pedal out of the driveway, and go around the block as many times as I want. It’ll tell me every time I’m approaching the house, so do I want to pull into the driveway, or do I want to continue riding and make more laps?

And then level three will be, let’s say there’s a Chick-fil-A approximately two miles from my house through the neighborhood. So, someone will get on my bike, ride it to the Chick-fil-A, and the computer will gather all that data, and then they’ll ride it back. It’ll be programmed. So, if I want to go to lunch, it’s let’s go to Chick-fil-A, start pedaling, and then I can go down and eat some lunch.
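The “electronic breadcrumb trail” idea in level one has a simple core: record a dense list of GPS waypoints from the lead bike, then have the following bike steer toward the next recorded waypoint that is still some lookahead distance away, similar in spirit to a pure pursuit controller. The toy sketch below is not Parker and Johnson’s design; the waypoints, lookahead, and coordinate frame are invented for illustration.

```python
import math

def steer_to_breadcrumbs(position, heading_deg, breadcrumbs, lookahead_m=5.0):
    """Pick the first recorded waypoint at least `lookahead_m` away and
    return the heading correction (degrees, + = turn left) to aim at it."""
    px, py = position
    target = breadcrumbs[-1]
    for wx, wy in breadcrumbs:
        if math.hypot(wx - px, wy - py) >= lookahead_m:
            target = (wx, wy)
            break
    desired = math.degrees(math.atan2(target[1] - py, target[0] - px))
    error = (desired - heading_deg + 180) % 360 - 180   # wrap to [-180, 180)
    return error

# Trail recorded by the lead bike along a straight line; the following bike
# starts slightly to the right of that line.
trail = [(x, 0.0) for x in range(0, 40, 2)]
print(round(steer_to_breadcrumbs((0.0, -1.5), heading_deg=0.0, breadcrumbs=trail), 1))
# about 14 degrees of left correction to rejoin the recorded line
```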

It’ll have LIDAR sensors and cameras just like an autonomous car, and it will have obstacle avoidance and navigation. Then, using my hearing, I can still help make judgment calls. On things like if I hear a construction site coming up, one of the things we’re going to have is where I can stop and go YouTube Live. So, let’s say Patrick, or one of my friends, or my fiancée, can remote into my bicycle, and I can say, hey, it sounds like there’s something up ahead that I’m just not quite sure of. And they can say, yeah, it’s no big deal, and they can stay on the phone with me while I’m getting through that area. Or they can say, there’s a massive car wreck ahead, there are fire trucks and an ambulance, listen, just turn around, come back home. They could help me make those judgment calls if I need to.

Because we know autonomous technology is improving every day, and it is coming. You know, 100% it’s coming. And so, you know, transportation is freedom.

Roberts: That’s exactly right. At the end of the day, assistive technology is all about freedom. It gives you the ability to have freedom of movement and to feel safe wherever you go, whether it’s traveling to your favorite restaurant and ordering dinner or breaking land speed records across salt flats. The technology we discussed today will make that possible for everyone, no matter their visual capacity. 

Did this episode spark ideas for you?  Let us know at podcasts@lighthouseguild.org.  And if you liked this episode please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts. 

I’m Dr. Cal Roberts.  On Tech & Vision is produced by Lighthouse Guild.  For more information visit www.lighthouseguild.org.  On Tech & Vision with Dr. Cal Roberts is produced at Lighthouse Guild by my colleagues Jaine Schmidt and Annemarie O’Hearn.  My thanks to Podfly for their production support.

Join our Mission

Lighthouse Guild is dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals.