One of the problems we face when talking with one another is knowing what the other person “knows” (or perceives). It pops up once in a while, when a misunderstanding becomes apparent; most of the time this misunderstanding does not exist (or, more likely, we simply don’t perceive it).
It is usually a very subtle issue with no practical impact. An example is what you or I perceive as “green”. Is my green the same as yours? Most likely it is not. Sure, in general “green” is “green” to everybody, but once you start considering the various hues, our visual perceptions differ: sometimes I claim “it is green” whilst my wife claims “it is greyish”…
The situation becomes much more challenging when we interact with a machine. How do we know what a machine perceives of its environment? What is an autonomous car really seeing, and what does that mean (to the car) in terms of the information it uses to make decisions?
This is where the Tesla announcement of a new user interface (UI) fits in (see image).
Interestingly, this interface has been called the “mind of a car” by Elon Musk. The goal of the interface is to render the perception (can we call it “perception”?), or awareness/understanding, that the car derives from processing the data received from its various sensors into a graphical representation that “we” can understand. Basically, the streams of digits and their timing, which would be meaningless to us, are converted into an image that our brain can process.
This is interesting because it tackles a specific problem with autonomous (and semi-autonomous) vehicles: keeping the driver in the loop so that she can take control if needed. It is also interesting because it addresses a much more general issue: connecting our intelligence to artificial intelligence. This issue will require a lot of attention as AI becomes more and more pervasive and makes decisions based on data we are unable to process (because of volume and time constraints) and on reasoning that differs from our own way of reasoning.