Human Machine Interaction
Interaction with a machine has basically been a mechanical activity. I push a lever and a cog is engaged. There is no awareness on the side of the cog; it just finds itself executing what has been forced on it.
When interacting with a computer the situation is not really different. I type in a command and that starts the execution of something. In both cases there may be controls acting as safeguards: in a car I cannot shift into reverse while the car is moving forward; a computer may not accept a command in a certain situation. Over time the “mechanical” sophistication has improved, with machines becoming much smarter and executing complex sequences of actions (coordinating different subsystems), and of course computers can execute actions with an amazing level of sophistication, actually exceeding our human capability both in speed and complexity. Yet all of this sophistication does not mean that the machine or the computer is intelligent.
Humans interacting among themselves may sometimes work like machines, i.e. mechanically respond to a command, but in most situations they show an intelligent behaviour: they do not just execute a command, they understand it, evaluate its implications, and many times this leads to a negotiation (why don’t we do it another way?). Sociologists and psychologists recognize a social and emotional intelligence that is a fundamental component of our human interactions.
In this last decade we have heard more and more of intelligent human-machine communication, of machine brains interacting with the human brain. Maurizio Reggiani, CTO of Lamborghini, describes the new Driving Assistant System of the Huracán EVO by saying:
“We’re now able to synchronize the brain of the car with the brain of the driver”
Notice that he is not claiming the system is “intelligent”, but the sentence suggests an intelligence in the car that makes it possible to access the driver’s intention.
I take this as another type of hype surrounding AI, different from the one I mentioned previously, where people often claimed to use AI when actually only some (sophisticated but plain vanilla) algorithms were involved. Here again we are looking, in reality, at some very sophisticated algorithms, but there is a different hue: the claimed capability of talking directly with the human brain by leveraging AI.
The hype here is about expectation: what AI can, today, do. This kind of hype is also generating concerns about the super-human capabilities of AI, including reading our minds.
Brain-computer interfaces have made significant progress in these last ten years, but we are nowhere near reading thoughts.
Artificial Intelligence has started to be used as a tool to help understand the huge and diverse data created by sensors intercepting the brain’s electrical activity. When this activity is well localised (in a specific area of the brain, like the motor cortex), signal processing can pick up its meaning, such as activating muscles to move a hand. In this area machine learning can help signal processing, since it can “learn” how a specific person’s brain creates certain electrical patterns.
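The idea of “learning” a specific person’s patterns can be sketched with a toy classifier. This is a minimal illustration on synthetic data, not a real EEG pipeline: the feature vectors, labels, and the nearest-centroid approach are all assumptions made for the example, standing in for the band-power features and decoders actually used in brain-computer interface work.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for per-person calibration data: each "trial" is a
# small feature vector (think: band-power estimates from a few
# motor-cortex channels; a real pipeline would derive these from EEG).
def make_trials(center, n=40, noise=0.3):
    return [[c + random.gauss(0, noise) for c in center] for _ in range(n)]

# Two imagined movements produce different, person-specific patterns
# (the centre vectors here are made up for illustration).
trials = {
    "left hand":  make_trials([1.0, 0.2, 0.5]),
    "right hand": make_trials([0.3, 1.1, 0.4]),
}

# "Learning" this person's patterns: store the mean feature vector
# (centroid) of each movement observed during calibration.
centroids = {
    label: [sum(col) / len(col) for col in zip(*rows)]
    for label, rows in trials.items()
}

def classify(trial):
    # Decode a new trial as the movement whose learned pattern is closest.
    return min(centroids, key=lambda label: math.dist(trial, centroids[label]))

print(classify([0.95, 0.25, 0.45]))  # decoded as "left hand"
```

The point of the sketch is the calibration step: the same decoder, retrained on another person’s trials, would learn different centroids, which is exactly why these systems are tuned to an individual brain.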
The FDC Brain Initiative is addressing this area, and it is also starting to consider what kind of problems, including ethical issues, may arise once these technologies progress into thought detection. Let me repeat it once more: we are not there yet, and we have no idea if and when it will be possible to decode a thought as simple as “I like the colour green”.