The claim that true self-driving cars, machines acting entirely on their own with no need for human oversight, will eventually hit the road has long been a topic of hot debate.
In late 2018, Waymo CEO John Krafcik spoke up on the topic. Despite Waymo’s bullish outlook on autonomous vehicles, Krafcik said that true self-driving cars are ‘very, very hard’ and unlikely to arrive any time soon. One problem he highlighted was the inability of autonomous cars to drive safely without human intervention in certain weather conditions.
While the ability of autonomous vehicles to drive in challenging situations, such as heavy weather or difficult terrain, is improving with new technology like MIT’s Localizing Ground-Penetrating Radar system, Krafcik sees driverless cars improving in the future but likely never becoming ubiquitous.
“You don’t know what you don’t know until you’re actually there and trying to do things,” Krafcik said, highlighting the inherent unpredictability of driving. “Autonomy always will have some constraints.”
Dr Peter Stratton, a researcher who previously worked at the Queensland Brain Institute, agrees with Krafcik, highlighting the limitations of current self-driving cars: “Would you trust a monkey to go and pick your kids up from school? The answer is no way.”
Self-driving cars clearly have algorithms which can technically ‘drive’, but as Dr Stratton highlights, being able to navigate basic scenarios isn’t enough.
“Now our current AI is nowhere near as smart as a monkey. It’s not even as smart as a mouse. It might be as smart as a cockroach,” he said in a 2019 interview.
Since the artificial intelligence systems that operate autonomous vehicles rely on training data, their ability to problem-solve is limited. Even with large quantities of training data, an autonomous car will almost certainly face contingencies that it hasn’t seen in its training. Such edge cases are common, because real-world driving is highly unpredictable.
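To illustrate the point, here is a minimal sketch (mine, not from the article or the interview; the data, model and numbers are all arbitrary): a toy classifier trained on a narrow distribution still returns a confident answer for an input unlike anything it was trained on, with no built-in way to say ‘I don’t know’.

# A toy model trained on a narrow data distribution ("situations the
# car has seen") still answers confidently on an input far outside it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters of familiar situations.
X_train = np.vstack([rng.normal(-2.0, 0.5, (100, 2)),
                     rng.normal(+2.0, 0.5, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X_train, y_train)

# An out-of-distribution input: a contingency never seen in training.
x_novel = np.array([[40.0, -35.0]])
probs = model.predict_proba(x_novel)[0]

# The model still commits to a near-certain answer, even though the
# input is nowhere near anything it has seen.
print(f"predicted class: {probs.argmax()}, confidence: {probs.max():.4f}")

The same failure mode scales up: a driving system that has never seen a given road scene will still output something, and nothing in that output signals that the input was novel.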
“There’s no way you can gather enough training data to handle every potential problem that a car is going to encounter in the real world,” Dr Stratton explains. “It’s just impossible.”
According to Dr Stratton, one of the few possible solutions is artificial general intelligence: machines that can match the intelligence of humans. But even if such an intelligence is achieved in software, a human-like margin of error would likely still exist.
“You need to be as smart as a human to do that. You need something like human thought, human generalisation and human extrapolation, and this idea of applying what you currently know to novel situations.”
Transcript:
So regarding self-driving cars, you know, two years ago, the general consensus out there amongst people working in the industry and in scientific research was that we were very bullish on having these cars out and working in the real world within a couple of years. I think the only person left who’s still bullish on that is Elon Musk. You know, he’s saying, I think by the end of next year, he’ll have a million robotaxis on the road. But most people are calling him out on that right now.

So the CEO of Waymo, which is the Google self-driving car initiative, which was, you know, one of the biggest, probably still the biggest in the world, and they were, you know, incredibly bullish on this technology five years ago. The CEO has actually come out and said, it looks like we’ll never actually have fully self-driving cars, ever. Right? What he said was there’s always going to be constraints.

Now, what that really means in terms of, you know, what we’ve been discussing with AGI is that to do everything that a human driver does, in driving a car, simply driving a car around the streets, you need to be as smart as a human to do that. I guess the question is, would you trust a monkey to drive you somewhere? Would you trust a monkey to go and pick your kids up from school and bring them home? And the answer is no way. Now our current AI is nowhere near as smart as a monkey. Right? It’s not even as smart as a mouse. It might be as smart as a cockroach. You know, the levels are hard to compare. But really, we’re not even scratching the surface of intelligence at the moment with our narrow AI. So the thought that you can actually take this narrow AI and put it in a car and it could successfully drive a car, you know, in all circumstances, I think is just ridiculous. And it’s really irresponsible, not to put too fine a point on it; it’s just never going to work.

The current AI that we have relies on seeing every contingency in its training data in order to deal with it successfully when it’s out being tested or actually doing its job in the real world. We know that: if you present a current AI with any inputs it hasn’t seen during its training, it will completely fail to process those inputs properly. And there’s no way you can gather enough training data to handle every potential problem that a car is going to encounter in the real world. It’s just impossible. So you need something like human thought, human generalization and human extrapolation, and this idea of applying what you currently know to novel situations. You need that in the car to make the car safe. We currently don’t have that. So that’s it for self-driving cars for quite some time.