Last year a Tesla crashed under a truck, killing the driver (who was not actually driving at the time, so he was more of a passenger). Last week another Tesla bumped into a parked truck, this time with no casualties.
How can it be that a car sophisticated enough to predict the behaviour of other cars, measure their speed and compute their trajectories cannot simply brake or swerve when there is a truck (which is big!) in its path?
Fully automated self-driving cars are not yet commercially available. What we can buy are cars that can perform some autonomous driving: self-parking, changing lanes, keeping a constant speed within a marked lane and so on. Manufacturers like Tesla, who sell semi-autonomous cars, are very keen to tell drivers that they must keep paying attention to the road, even when the car is in command. They have also developed several technologies to recapture the attention of the person in the driver’s seat: vibrating the steering wheel, playing a loud voice message if the person forgets to keep their hands on the wheel, even applying short braking pulses that create a bumping sensation. As it happens, the Tesla that eventually crashed under the truck, killing its passenger, did all of that.
Why then, one should rightfully ask, didn’t the car simply detect the truck and stop? The problem is that in that situation the car did not recognise that there was a truck in its path.
Semi-autonomous cars are equipped with cameras and radar able to detect movement. Software tries to make sense of what is detected, but that software faces difficult decisions. Many detected “obstacles”, from loose hubcaps to overhead highway signs, are not obstacles at all, even though they are detected as being in the path ahead.
Imagine you are driving on a highway that passes over another road. Your car goes up, and the radar may pick up an overhead sign standing on the descending side of the overpass; because you are on the ascending side, the sign appears to lie exactly in the path of the car. Of course, once you reach the top of the overpass the road descends and your car passes safely under the sign, so there is no reason to brake. The problem is that this kind of reasoning, which leads to the conclusion that everything is fine, is extremely complex for a computer. Knowing this, and knowing that braking hard on a highway is not a good idea (an unnecessary emergency brake can be as dangerous as the obstacle it tries to avoid), the software has been designed to ignore static obstacles, under the assumption that there should not be a static obstacle in the path of the car. This assumption is sometimes false, which is why manufacturers of semi-autonomous vehicles insist that you must stay aware of what is going on and take evasive action if required.
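To make the idea concrete, here is a minimal sketch of that filtering logic. This is not how any manufacturer's software actually works; all names and thresholds are invented for illustration. The key observation is that, from the radar's point of view, a stationary object closes on us at exactly our own speed, so discarding such returns also discards a truck stopped across the lane.

```python
# Hypothetical sketch of filtering out static radar returns.
# EGO_SPEED and STATIONARY_TOLERANCE are invented for illustration.

EGO_SPEED = 30.0            # our car's speed in m/s
STATIONARY_TOLERANCE = 1.0  # margin (m/s) for Doppler measurement noise

def is_stationary(closing_speed: float, ego_speed: float = EGO_SPEED) -> bool:
    """A return that closes on us at exactly our own speed is not moving."""
    return abs(closing_speed - ego_speed) < STATIONARY_TOLERANCE

def braking_candidates(radar_returns):
    """Keep only moving objects. Stationary returns (overhead signs,
    hubcaps... but also a parked truck!) are discarded before any
    braking decision is made."""
    return [r for r in radar_returns
            if not is_stationary(r["closing_speed"])]

returns = [
    {"id": "overhead_sign", "closing_speed": 30.0},  # static: filtered out
    {"id": "slowing_car",   "closing_speed": 12.0},  # moving: kept
    {"id": "parked_truck",  "closing_speed": 30.0},  # static: filtered out!
]
print([r["id"] for r in braking_candidates(returns)])  # ['slowing_car']
```

The overhead sign and the parked truck look identical to this filter, which is exactly the dilemma described above: the only way to tell them apart is to understand the scene, not just measure it.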
Does this mean that we will never have a fully autonomous self-driving car? No, it just means that today the technology is not ready and its cost is too high. To get a better sense of what lies ahead, a car would need lidar in addition to radar, but lidars are still far too expensive. It would also need much more sophisticated artificial intelligence on board, able to build an accurate picture of the environment and understand it. In other words, your car needs to become aware.
Now, there is no question that cars eventually will become aware (and a few prototypes already are, though their cost puts them well beyond affordability).
For the time being we have to live with semi-autonomous cars and the fact that they are “stupid”. Well, that should boost our self-esteem: we are (at least for a little while longer) smarter than cars.
On the other hand, we often get distracted, we may drink a bit too much, we get tired… All in all, already today, semi-autonomous cars are way safer than human drivers!