I just read a nice article, not brand new, it was published in January 2019, while I was looking for advances in image recognition. The article, written by Melanie Mitchell, an external professor at the Santa Fe Institute with which I had the pleasure of interacting a long time ago (with the Institute, not with Melanie), addresses the problem of making decisions in the presence of an obstacle on the road. What is interesting is to notice how something that is so straightforward to us is very difficult for a self-driving car (for its silicon/software brain) to tackle.
Just imagine yourself at the wheel seeing a stray dog in the middle of the road. No brainer: you would slow down to give the dog time to move out of the way. Were you to see a flock of pigeons, you would keep going, knowing that they will fly away just in time to avoid your car. A piece of paper, a tumbleweed… just drive on, literally drive over them, your car's wheels wouldn't even notice them. A small pile of snow? Well, if you have an SUV you would probably have fun driving over it, but if that pile was in the shape of a snowman, probably built by some children, you would drive around it…
All of this comes naturally, to the point that you are not even making a decision: you just “know” what you have to do.
Not so for a self-driving car, because it is missing what we possess: common sense.
There have been several attempts over the years to teach common sense to machines. Back in 1999 Marvin Minsky started the Open Mind Common Sense project at MIT, “Open” because it leveraged contributions from people all over the world to provide common sense knowledge, like the fact that you can pull a rope but you cannot push it.
Since its start it has accumulated over a million (!) English-language facts provided by over 15,000 contributors. The project did not succeed in teaching common sense to machines, but it surely pointed out how complex common sense is.
If you consider the amount of effort put into finding ways to endow machines with something as “easy” as common sense, and the fact that we have not succeeded so far, you'll understand Frank Lloyd Wright's quote: “There is nothing so uncommon as common sense.”
Self-driving cars have to make decisions based on what their sensors detect, such as an obstacle on the road. Should the car brake or not? Unless the car understands what kind of obstacle is there, the decision will err on the safe side, meaning a self-driving car will end up braking even when no braking is needed. This is not just a bit annoying for passengers (and a tiny waste of energy); it may also pose a risk to the cars following it, if those have a human driver who, seeing the obstacle, would not expect the car ahead to brake.
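The asymmetry described above can be sketched in a few lines of code. This is a deliberately toy illustration, not how any real autonomy stack works: the obstacle classes, reactions, and the `react` function are all invented for the example. The point is simply that without classification the only safe policy is to brake for everything, while classification unlocks the more nuanced reactions a human applies without thinking.

```python
# Toy sketch (all names and categories are hypothetical): a car that
# cannot tell what an obstacle is must assume the worst and brake;
# one that can classify it may react the way a human driver would.

# How the car should react to each obstacle class, if it recognizes it.
KNOWN_REACTIONS = {
    "dog": "slow_down",         # give it time to move out of the way
    "pigeons": "keep_going",    # they will fly off just in time
    "paper": "keep_going",      # harmless, drive right over it
    "tumbleweed": "keep_going",
    "snowman": "steer_around",  # solid enough to matter
}

def react(obstacle: str, can_classify: bool) -> str:
    """Return the car's reaction to a detected obstacle.

    Without classification (or for an unrecognized obstacle), the
    only safe default is to brake: the over-cautious behavior that
    annoys passengers and surprises the drivers behind.
    """
    if not can_classify or obstacle not in KNOWN_REACTIONS:
        return "brake"  # err on the safe side
    return KNOWN_REACTIONS[obstacle]
```

For example, `react("paper", can_classify=False)` returns `"brake"`, while `react("paper", can_classify=True)` returns `"keep_going"`: the common-sense knowledge lives entirely in that lookup table, which is exactly the part that is so hard to build for the open-ended real world.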
It is these sorts of issues that researchers are struggling with in trying to push cars up to Level 5, fully autonomous driving. In a way, creating a fully autonomous aircraft is far easier, since it operates in a controlled environment where machines already have the upper hand (given that in most situations the reaction time is too short for a human pilot to take charge). Self-driving cars have to operate in a human environment, with unpredictable humans all around them, acting on a common sense that is not part of the car's brain.