Don’t Be Fooled By Elon Musk’s AI Hype

It could take decades of research and computer evolution before we have anything even resembling human-level autonomous driving.

Man is a slow, sloppy, and brilliant thinker; computers are fast, accurate, and stupid. — John Pfeiffer

Elon Musk boasted in a recent podcast that Tesla’s cars will be fully autonomous by the end of 2020. By that, he meant a person would be able to literally sleep at the wheel as their car drives them to their destination. Some might recall that Musk made a similar prediction last year, and the year before, and the year before that. In fact, he has been promising full self-driving “next year” since as far back as 2014 and has been wrong every time. This latest prediction will also prove wrong, because Artificial Intelligence, or AI, the technology at the core of Tesla’s Autopilot, is still in its infancy.

Most of what’s being called “AI” today is a family of fairly primitive algorithms called Artificial Neural Networks, or ANNs for short. While they might seem like magic, ANNs are in fact nothing more than simple mathematical functions: they multiply inputs by learned weights, sum the results, and squash them through simple nonlinearities to produce outputs. It’s important to point out that this “learning” is nothing like the sort of learning that occurs in biological neural networks like the brain. Rather, it’s a vastly inferior gradient-descent-based learning mechanism known as backpropagation.
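To make that concrete, here is a minimal sketch in Python of such a function. The layer sizes, random weights, and sigmoid nonlinearity are arbitrary choices for illustration, not any particular production network:

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w1, b1, w2, b2):
    # Multiply inputs by weights, add a bias, apply a nonlinearity,
    # then repeat for the output layer. That is the whole "magic".
    hidden = sigmoid(x @ w1 + b1)
    return sigmoid(hidden @ w2 + b2)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))    # one example with 3 input features
w1 = rng.normal(size=(3, 4))   # input-to-hidden weights (learned)
b1 = np.zeros(4)
w2 = rng.normal(size=(4, 1))   # hidden-to-output weights (learned)
b2 = np.zeros(1)

print(forward(x, w1, b1, w2, b2))  # a single number in (0, 1)
```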

Backpropagation is the workhorse of all modern ANNs — it’s what refines their weights so that their predicted outputs approach the desired outputs. This learning technique is laughably inefficient, often requiring an absurd amount of training data and computational resources to learn trivial tasks that children can learn with just a few exposures. It’s also fragile and stupid, frequently breaking when confronted with inputs that differ — even just slightly — from the inputs used in training, a huge safety issue in the case of self-driving cars.
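Here is a bare-bones illustration of that training loop: a toy two-layer network learning XOR via backpropagation (the architecture and learning rate are arbitrary). Even this four-example problem takes thousands of weight updates, which is the inefficiency in miniature:

```python
import numpy as np

# Toy dataset: XOR, a tiny task no single linear layer can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 1.0  # learning rate (arbitrary for this toy)

for step in range(5000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ w1 + b1)
    pred = sigmoid(h @ w2 + b2)

    # Backward pass: propagate the squared-error gradient back
    # through each layer (the chain rule, nothing more).
    d_pred = (pred - y) * pred * (1 - pred)
    d_h = (d_pred @ w2.T) * h * (1 - h)

    # Gradient descent: nudge every weight slightly downhill.
    w2 -= lr * h.T @ d_pred
    b2 -= lr * d_pred.sum(axis=0)
    w1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Should approach [[0], [1], [1], [0]]; exact values vary by seed.
print(pred.round(2))
```

A child learns this kind of pattern from a handful of exposures; the network needs thousands of passes over the data to get there.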

Tesla’s Autopilot, supposedly trained on billions of miles of human driving data, is a perfect example of what I mean. It relies primarily on camera data, analyzed by ANNs, to keep the car centered within its lane. This technology performs decently on ideal highway-type roads with clear lane markings. However, on roads where the lane markings are unclear or altered in some minor way, Autopilot easily gets confused.
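Tesla does not publish its control code, but the general shape of any lane-keeping system is the same: cameras feed ANNs that estimate the car’s position relative to the lane, and a controller turns that estimate into a steering command. The hypothetical toy controller below (made-up gains, nothing from Autopilot) shows why a wrong lane estimate is so dangerous; the controller steers confidently toward whatever the ANN reports:

```python
def steering_command(lane_offset_m, heading_error_rad,
                     k_offset=0.5, k_heading=1.5):
    """Toy proportional controller: steer back toward the lane center.

    lane_offset_m     -- the ANN's estimate of lateral offset (metres)
    heading_error_rad -- the ANN's estimate of angle to the lane
    k_offset, k_heading -- made-up gains, not calibrated for any car
    """
    return -(k_offset * lane_offset_m + k_heading * heading_error_rad)

# A correct estimate produces a gentle correction...
print(steering_command(0.3, 0.05))    # small steer back to center
# ...but a hallucinated lane shift produces a confident wrong turn.
print(steering_command(-1.5, -0.2))   # hard steer toward the "lane"
```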

This was demonstrated in a recent study published by Keen Labs, where researchers devised a simple trick that caused an Autopilot-controlled Tesla to steer into oncoming traffic. They did this not, as you may think, by hacking into the car’s onboard computing system, but by simply creating a fake lane. As illustrated below, the researchers placed three small markings on the road. Autopilot detected them as a line that indicated the lane was shifting to the left. As a result, Autopilot blindly steered in that direction. It’s an obvious mistake a normal human driver would never make.

[Figure: three small markings placed on the road surface, which Autopilot interprets as a lane shifting to the left. Source: Keen Labs]
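The failure mode is easy to reproduce in miniature. The sketch below uses made-up marking positions and a deliberately naive tracker (not Keen Labs’ reverse-engineered code, nor Tesla’s) to show how a line fitted through a few stray dots reads as a leftward lane shift:

```python
import numpy as np

# Hypothetical positions (x metres ahead, y metres to the left) of
# three small markings like the ones Keen Labs placed on the road.
marks = np.array([[5.0, 0.1], [10.0, 0.6], [15.0, 1.1]])

# A naive tracker: fit a straight line through whatever the vision
# system reports as lane markings.
slope, intercept = np.polyfit(marks[:, 0], marks[:, 1], 1)
print(f"inferred lane drift: {slope:+.2f} m left per metre travelled")
# A positive slope reads as "the lane is shifting left", so a
# controller that trusts this fit steers left, into oncoming traffic.
```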

Unfortunately, such stupidity isn’t unique to Autopilot. Other “state-of-the-art” autonomous driving systems are just as bad, if not worse. They’ve been shown to make all sorts of serious blunders, like mistaking a graffiti-covered stop sign for a speed limit sign or mistaking a simple pattern of yellow and black stripes for a school bus. Having built, trained, and tested thousands of ANNs myself, I can assure you that they remain vulnerable to these mistakes even when explicitly trained to avoid them, a weakness documented by numerous well-cited studies of adversarial examples.
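These blunders are instances of what researchers call adversarial examples. The sketch below uses a toy linear classifier with random weights (not any real driving model) to show the core effect: in high dimensions, a per-pixel change of a few hundredths is enough to flip a model’s decision:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3072                  # dimensionality of a 32x32 RGB image
w = rng.normal(size=d)    # weights of a hypothetical trained classifier
x = rng.normal(size=d)    # an input it classifies confidently

score = x @ w             # the sign of the score is the predicted class

# Nudge each "pixel" a tiny step against the weights: just enough
# to flip the decision while leaving the input visually unchanged.
eps = 1.2 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print(f"per-pixel change: {eps:.4f}")   # a few hundredths at most
print(f"score before: {score:+.1f}, after: {x_adv @ w:+.1f}")  # flips
```

Real attacks like the fast gradient sign method apply the same idea to deep networks, using the network’s gradient in place of the weight vector.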

Simply put, the self-driving cars we have today are suicide machines that can barely navigate simple highways, let alone complex intersections and congested parking lots. These machines have neither a robust understanding of their surroundings nor the flexibility to rapidly adapt to novel situations. It’s unlikely that backpropagation-based ANNs can simply learn these abilities on their own. We’ll probably need better algorithms, perhaps ones that more closely mimic the brain’s neural architecture. However, considering how little we understand about the brain, such advancements could take many decades to become feasible.

This is a problem for Tesla because its Autopilot has been marketed as the future of autonomous driving. With this future now turning out to be far more distant than promised, demand for the company’s vehicles, which has plummeted in recent months, could tumble even further. Let’s also not forget that “safety” has been a key selling point for this technology. Given the recent spike in Autopilot-caused crashes and mounting lawsuits, this selling point has proven to be — as consumer advocacy groups rightfully put it — “deceptive and misleading.”

Read the original article at https://medium.com/@borismarjanovic/dont-be-fooled-by-elon-musk-s-ai-hype-f97cd916a3ea