jerrytz
JF-Expert Member
- Oct 10, 2012
Recently, companies like Google, Uber, and Tesla have presented us with an alternative answer: Artificially intelligent computers should drive our cars. They'd do a far safer job of it, and we'd be free to spend our commutes doing other things.
There are, however, at least two big problems with computers driving our cars.
One is that, while they may drive flawlessly on well-marked and well-mapped roads in fair weather, they lag behind humans in their ability to interpret and respond to novel situations. They might balk at a stalled car in the road ahead, or a police officer directing traffic at a malfunctioning stoplight. Or they might be taught to handle those encounters deftly, only to be flummoxed by something as mundane as a change in lane striping on a familiar thoroughfare. Or, who knows, they might get hacked en masse, causing deadlier pileups than we humans would ever blunder into on our own.
Which leads us to the second problem with computers driving our cars: We just don't fully trust them yet, and we aren't likely to anytime soon. Several states have passed laws that allow for the testing of self-driving cars on public roadways. In most cases, they require that a licensed human remain behind the wheel, ready to take over at a moment's notice should anything go awry.
Engineers call this concept "human in the loop."
It might sound like a reasonable compromise, at least until self-driving cars have fully earned our trust. But there's a potentially fatal flaw in the human-as-safety-net approach: What if human drivers aren't a good safety net? We're bad enough at avoiding crashes when we're fully engaged behind the wheel, and far worse when we're distracted by phone calls and text messages. Just imagine a driver called upon to take split-second emergency action after spending the whole trip up to that point kicking back while the car did the work. It's a problem the airline industry is already facing as concerns mount that automated cockpits may be eroding pilots' flying skills.
Google is all too aware of this problem. That's why it recently shifted its approach to self-driving cars. It started out by developing self-driving Toyota Priuses and Lexus SUVs: highway-legal production cars that could switch between autonomous and human-driving modes. Over the past two years, it has moved away from that program to focus on building a new type of autonomous vehicle that has no room for a human driver at all. Its new self-driving cars come with no steering wheel, no accelerator, no brakes; in short, no way for a human to mess things up. (Well, except for the ones with whom it has to share the roads.) Google is cutting the human out of the loop.
Car companies are understandably a little wary of an approach that could put an end to driving as we know it and undermine the very institution of vehicle ownership. Their response, for the most part, has been to develop incremental driver-assistance features like adaptive cruise control while resisting the push toward fully autonomous vehicles.