- By John R. Quain
Why Tesla's Current Cars Can't Go Fully Autonomous
There's a dirty secret that is almost universally understood in the automotive world but that no one wants to say out loud: Tesla's current vehicles cannot go fully autonomous. Not today, not in August, not in 2019.
It's a topic that comes up often, particularly since Tesla has done an impressive job encouraging the current wave of electrification. While Google's Waymo has received the lion's share of autonomous vehicle coverage, some have hoped that Tesla might end up pushing the adoption of autonomous systems as well. Indeed, Tesla CEO Elon Musk has given comfort to that idea, and he did it again recently in a tweet that mentioned Tesla "will begin to enable full self-driving features" with the August Autopilot update. Such comments might just reflect the fact that Tesla is trying to catch up to the likes of GM, but full self-driving is something Tesla's vehicles cannot and should not do--and drivers should know that.
A raft of cars on the road today use what are often referred to as semi-autonomous or advanced driver-assistance systems (ADAS). Audi, BMW, GM, Honda, Hyundai, Mercedes-Benz, Volvo, and Tesla all have similar systems. There's nothing unique about Tesla's approach. Indeed, GM currently has the most advanced semi-autonomous driving system (Super Cruise) and the only one that actually allows drivers to keep their hands and feet off the controls for hours at a stretch.
Unfortunately, all the vehicles from all the automakers mentioned--Tesla included--are handicapped when it comes to moving toward autonomous driving: They don't have sufficient sensing hardware to drive themselves.
What vehicles use today is a camera/radar/ultrasonic (CRU) combination to see what's ahead (perception), figure out what it is (classification), and then assimilate that information to make driving decisions (brake, swerve left, etc.). Such hardware is good for ADAS features that combine adaptive cruise control with lane keeping. Radar keeps you from hitting big objects like trucks, and the camera can see (most of) the lines on the road. You can even use cameras in some emergency braking solutions (e.g., Subaru's EyeSight). But while the camera/radar/ultrasonic combination may be necessary for autonomous driving, it is woefully insufficient.
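To make the division of labor concrete, here is a minimal sketch of that perception-classification-decision loop. All names and thresholds are hypothetical, not any automaker's actual code; the point is simply how radar range data and camera labels get fused into an action.

```python
# Hypothetical sketch of a CRU (camera/radar/ultrasonic) decision loop.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float        # from radar: how far away (coarse but reliable)
    label: str               # from camera: what it is ("truck", "lane_line", ...)
    closing_speed_ms: float  # positive means we are approaching the object

def decide(detections, following_distance_m=40.0):
    """Return driving actions from a list of fused detections."""
    actions = []
    for d in detections:
        if d.label == "truck" and d.distance_m < following_distance_m \
                and d.closing_speed_ms > 0:
            actions.append("brake")            # adaptive-cruise-control behavior
        elif d.label == "lane_line" and d.distance_m < 1.0:
            actions.append("steer_to_center")  # lane-keeping behavior
    return actions or ["maintain"]

print(decide([Detection(30.0, "truck", 5.0)]))  # -> ['brake']
```

Note what the sketch cannot do: if the camera mislabels an object, or the radar return is too coarse to classify, the decision step has nothing sound to work with. That is the insufficiency the next paragraphs describe.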
Self-driving systems need to see farther ahead with better accuracy in a variety of environmental conditions (like bright sunlight or rain), and that means exceeding the capabilities of CRU systems. Video cameras as perception systems are too easily foiled, and radar systems are too imprecise for making driving decisions. Ultrasonic systems only work over short distances. More data about what's farther down the road is needed to make autonomous driving systems more reliable--and safe.
Consequently, what is needed are additional sensors. Today, that means adding light detection and ranging, a.k.a., lidar. But none of the current vehicles with ADAS systems--including Tesla's--have lidar sensors.
The advantage of lidar is that it can generate precise three-dimensional images of everything from cars to trees to cyclists in a variety of environments and under a variety of lighting conditions. Unlike cameras, for example, lidar cannot be fooled by shadows or blinded by bright sunlight. Indeed, lidar systems can see things humans cannot, just as humans can see things video systems cannot. (Depending on the wavelength used, lidar can have trouble with water.)
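The precision comes from simple physics: each laser pulse's round-trip time gives an exact range, and the beam's known angles turn that range into a 3-D point. A small sketch, with purely illustrative numbers:

```python
# Sketch of how a lidar pulse becomes a 3-D point (illustrative values only).
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_time_of_flight(round_trip_s):
    # The pulse travels out and back, hence the divide by two.
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def to_cartesian(range_m, azimuth_rad, elevation_rad):
    # Spherical -> Cartesian: one point per pulse; millions of such
    # points per second form the 3-D "point cloud."
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A return arriving 400 nanoseconds after emission means a target ~60 m away.
print(round(range_from_time_of_flight(400e-9), 1))  # -> 60.0
```

Because the range is measured directly rather than inferred from pixels, the result doesn't depend on shadows, contrast, or ambient light--which is exactly why cameras and lidar fail in different conditions.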
(NB: Yes, vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication will greatly improve safety, traffic flow, etc., but that technology will probably not be available when the first autonomous cars hit the road.)
So, all current experimental autonomous vehicles being tested on the road use lidar sensors in addition to video cameras and radar. (Mobileye is experimenting with a camera-based system in Jerusalem but will be adding lidar to the system soon.) Lidar sensors use near-infrared light and advanced optics. And while they can be built in various forms, ranging from bulky designs to smaller MEMS-based models, most today look like the spinning-top Velodyne models.
Can you do without lidar?
There is one workaround to the problem of insufficient perception technology: Use lidar-based maps downloaded to the car for specific roads. That's the approach that GM's Super Cruise takes. It is essentially an autonomous driving system for specific highways--with a major proviso: Drivers have to maintain full awareness and be ready to take control instantly. Unfortunately, using so-called high-resolution map data doesn't solve the live perception and classification problem autonomous cars face. So a built-in lidar map may be great for keeping the vehicle centered in the lane but completely useless for avoiding, say, merging traffic, a mattress lying in the road, or an accident scene ahead when you're traveling at 75 mph. For those situations you need more advanced sensors like lidar used in combination with CRU systems.
Currently, the radar and video combination is not safe for autonomous driving, as Tesla's current version of Autopilot demonstrates. And engineers and researchers are fully aware of this fact, which is why they all plan to include lidar (and possibly other more exotic sensors) in their AVs.
Could it be done without adding hardware, instead using so-called artificial intelligence programming? In a word, no. There are several reasons for that, but two are particularly noteworthy. The first is that no one is actually using any AI in any autonomous driving programming. There are lots of nifty algorithms being used and a lot of impressive machine learning, but no actual artificial intelligence is being deployed.
The second problem is that current research in AI focuses on neural networks and so-called deep learning. Neural networks looked like a promising area of research in the '80s, supposedly modeled on the way the brain works. But it turned out that, despite the name, neural nets failed to emulate the way humans learn or think. That doesn't mean neural networks aren't incredibly useful--they are, in all sorts of applications. But not all.
For autonomous vehicles, AI programs cannot merely be as good as human drivers; they have to be better than human drivers. So in order to drive a car properly, will AI programs have to conceptualize and make decisions the way humans do? It's not logically entailed, but it may be necessary.
Another problematic aspect is that most AI approaches are based on Bayesian probability theory. Such an approach is great when you want to use inferences to get close to an answer for what seemed like an intractable problem, like international Airbnb bookings. But it's a terrible approach when you need absolute certainty so you don't get killed by an obstruction the system has never seen before. (This is why you hear so much about "edge cases" from automotive researchers.)
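A toy calculation, with made-up numbers, shows why edge cases are so dangerous for a Bayesian classifier: the posterior probability can only be split among the classes the system already knows, so an object it has never seen still gets forced into a familiar category.

```python
# Toy Bayes' rule example (assumed numbers) illustrating the edge-case problem.
def posterior(priors, likelihoods):
    """P(class | observation) is proportional to P(observation | class) * P(class)."""
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

priors = {"car": 0.6, "pedestrian": 0.3, "bicycle": 0.1}

# An unfamiliar obstacle (say, a mattress in the lane) looks only weakly like
# anything the model knows -- yet it must still come out as one of the three.
likelihoods = {"car": 0.02, "pedestrian": 0.01, "bicycle": 0.01}

print(posterior(priors, likelihoods))  # "car" ends up with 75% of the posterior
```

The probabilities dutifully sum to one, but the confidence is an illusion: there is no "I have no idea what that is" answer, which is precisely what a driving system needs before it bets a life on a classification.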
The truth is that to make the autonomous and semi-autonomous systems function, humans do the classification work by sitting at screens and labeling objects they see in videos--pedestrian!, bicycle!, stop sign! Computers simply cannot do this on their own. The machine learning bases its assessments--animal, mineral, or vegetable--on those initial classifications. In other words, the machine learning is only as good as the human labeling.
The classification process is labor intensive and by its very nature ultimately imprecise--does the labeler know the difference between an ice cream truck and a FedEx truck? (Such knowledge is necessary for driving decisions where the car has to anticipate, say, children running out from the side of the road.) It's also endless. Mobileye, which has the most experience in this area, has been working on it for about a dozen years and still cannot with complete certainty pick out a deer prancing along the road in all cases. That's how difficult the challenge of autonomous driving is.
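The consequence of "only as good as the human labeling" can be sketched in a few lines. This uses synthetic data and an invented error rate purely for illustration: a model that fits its training labels perfectly still inherits every mistake the labelers made.

```python
# Synthetic sketch: label errors put a ceiling on machine-learning accuracy.
import random

random.seed(0)

def labeled_dataset(n, label_error_rate):
    """Simulate human labelers who sometimes confuse two truck types."""
    data = []
    for _ in range(n):
        truth = random.choice(["ice_cream_truck", "fedex_truck"])
        label = truth
        if random.random() < label_error_rate:  # labeler gets it wrong
            label = ("fedex_truck" if truth == "ice_cream_truck"
                     else "ice_cream_truck")
        data.append((truth, label))
    return data

# Even a model that reproduces its training labels *exactly* can do no
# better against reality than the labels themselves.
data = labeled_dataset(10_000, label_error_rate=0.05)
accuracy_ceiling = sum(truth == label for truth, label in data) / len(data)
print(round(accuracy_ceiling, 2))  # roughly 0.95
```

With a 5 percent labeling error rate, roughly 1 in 20 real-world objects will be systematically misjudged, no matter how sophisticated the learning algorithm downstream is.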
Complete certainty is important. Most automakers have been extremely cautious in how they present their semi-autonomous features to drivers. And only one so far--Tesla--has had fatal accidents when drivers were using those systems. (Uber is not an automaker, and that's another story.) This is not to say it can't or won't happen with a comparable Volvo or GM system, but the fact that there have already been Autopilot fatalities should be sounding more alarm bells.
The bottom line is that in order to safely implement an autonomous driving mode for cars, you're going to need lidar (and perhaps even more sensors not discussed here). Video with radar and ultrasonics isn't going to cut it. And that's all Tesla is using in its current vehicles. No software update is going to solve that problem. Perhaps next year, the Model S, Model X and Model 3 will have a lidar option, but until they do, don't count on Tesla going fully autonomous any time soon.