Self-driving cars will reach the mass market in the next decade, although real take-up can be expected in the following decades. The self-driving car will most likely be electric, although self-driving and electric cars are independent evolutions that, given their time frames, will simply happen to converge.
The software, the artificial intelligence that makes a car able to navigate a variety of roads and situations by itself, is basically here today. The big issue is providing this software with the right data to make the car aware of its environment and situation.
This remains a challenge. Google, and many car manufacturers, are betting on LIDAR technology. The problem with LIDAR (LIght Detection And Ranging; some say Laser Detection and Ranging…) is the cost. When we say that LIDAR uses a laser we are making an understatement: it actually uses 16 or 32 lasers, and Velodyne has announced a 128-laser unit (without mentioning the price). A LIDAR is a mixture of mechanics, electronics and software. The whole package was priced at $75,000 in 2007. Now it is roughly ten times cheaper, around $8,000, an impressive decrease in cost but still quite expensive if you are targeting the mass market (take into account that in addition to the LIDAR you still have to build the car…). Last year Velodyne set the goal of a $500 target price for mass-market production of LIDAR by shifting to a silicon-and-software solution (getting rid of the mechanical parts), but they did not say when such a target might be reached. Many observers feel that it won't be before the middle of the next decade. Eventually LIDAR might come down to $100, but that is not around the corner.
For a nice overview of the evolution of LIDAR, read this article.
An affordable LIDAR is seen as the holy grail for self-driving cars, hence it is no surprise that many start-ups are working on it and that a significant amount of money (measured in billions of dollars) is flowing into them (see the chart). As a matter of fact, it is the best technology we have today for feeding accurate data to the software in charge of making sense of the environment.
At the same time, I see progress being made in computational photography. This area is fuelled by a growing mass market of amateur photographers relying on their smartphones to take photos. Clearly, smartphone manufacturers are pushing the envelope, and we are already seeing some amazing results in terms of photo resolution and quality.
I wouldn't be surprised to see computational photography moving into the self-driving car area as a cheaper alternative to LIDAR, eventually taking the lion's share. There are many companies working on making computational photography better and better, and they already have a significant mass market that provides stimuli and feedback. This leads to faster and faster innovation.
Tesla uses 12 ultrasonic sensors around the car, a forward-looking camera and a forward radar to feed a machine learning system that gets updated through the learning experiences gained by thousands of other Tesla cars. Basically, they are already using some form of computational photography flanked by other streams of detection data.
Because of the mass-market (in the billions of units) nature of digital cameras, we can get them for peanuts, and by using software we can overcome some of their physical constraints (this is computational photography). It is complex software that may be quite expensive to develop, but once you divide its development cost by the hundreds of millions of installed instances, it too comes down to peanuts.
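To give a flavour of the kind of software trick involved, here is a minimal sketch of frame stacking, one of the simplest computational photography techniques: averaging several noisy exposures of the same scene reduces sensor noise roughly with the square root of the number of frames. The pixel value and noise level below are invented for illustration, not taken from any real camera:

```python
import random

def capture_frame(true_pixel=128.0, noise=20.0):
    # Simulate one noisy exposure of a single pixel:
    # the true brightness plus random sensor noise.
    # (Both numbers are hypothetical.)
    return true_pixel + random.gauss(0, noise)

def stack_frames(n_frames):
    # Average n short exposures of the same scene; the noise
    # shrinks roughly as 1/sqrt(n_frames), so software recovers
    # quality that the cheap sensor alone cannot deliver.
    frames = [capture_frame() for _ in range(n_frames)]
    return sum(frames) / n_frames

random.seed(1)
single_error = abs(capture_frame() - 128.0)
stacked_error = abs(stack_frames(400) - 128.0)
print(f"single-frame error: {single_error:.1f}")
print(f"400-frame stacked error: {stacked_error:.1f}")
```

The same idea, pushed much further (alignment, multi-exposure merging, denoising networks), is what lets smartphone cameras outperform their tiny optics.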
Personally, I bet on a mixed detection system (as Tesla is pursuing): in the end, through software, it will be more accurate and more affordable than any single technology.
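In software terms, a mixed detection system comes down to sensor fusion. As a hedged illustration (the sensors, distances and variances below are invented for the example, not Tesla's actual pipeline), here is a minimal inverse-variance fusion of two distance estimates; note that the fused variance is lower than either input's, which is precisely the sense in which a mix can beat any single technology:

```python
def fuse(estimates):
    # Inverse-variance weighting: each sensor's estimate is
    # weighted by how much we trust it (lower variance means
    # higher weight). `estimates` is a list of (value, variance).
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical distance (in metres) to the car ahead:
camera = (41.0, 4.0)   # camera: cheap, but noisier at ranging
radar = (39.5, 1.0)    # radar: more accurate at ranging
distance, variance = fuse([camera, radar])
print(f"fused distance: {distance:.2f} m, variance: {variance:.2f}")
# The fused variance (0.80) is below the radar's alone (1.0).
```

This is the textbook building block behind Kalman-filter-style fusion; the combined estimate is more accurate than either sensor on its own, which is the argument for the mixed approach.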