I have touched upon seeing, hearing and touch as applied to human interfaces. There are three more senses, smell, taste and proprioception (the latter provides a sense of position and acceleration), but I will skip these for now since they are not used, nor are they likely to be used, in isolation as an interface. Rather, they may become part of a multisensorial, multimodal interface, and I will therefore consider them in that context.
So, having said that, let's now consider interfaces that do not engage our senses at all. There is a lot of work, and hope, invested in interfacing our brain directly to the world, beginning with a connection to a computer. The Symbiotic Autonomous Systems Initiative, now merged into the Digital Reality Initiative, has addressed a 30-year horizon, and in this time frame we can expect the ongoing research to deliver. I am still skeptical about the possibility of fulfilling the dream of a seamless Brain to Computer Interface (BCI), but if we scale down our ambition and accept non-seamless interfaces as well, then I think we will be seeing them in the coming decades. Note that we already hear claims of a BCI that allows a paraplegic person to communicate with an exoskeleton and walk again, or of a person who can control a robotic arm with her thoughts to drink from a glass, but these reports twist reality.
Notice: these results are amazing, but they are not about an interface providing signals to a computer that can then decode them and "read the person's mind". We need to understand this clearly so that we can gauge the progress in BCI and the road ahead. What is happening is that electrical signals generated by the brain are captured by electrodes (more on this below) and are sent to a computer. The computer generates a (usually) visual rendering of these signals, and the person is trained to think in such a way that eventually the computer will understand his intention. Most of the learning is on the human side. The adoption of machine learning and better signal processing is now decreasing the time the human needs to learn how to interact with the computer. Also notice that there is a very strong tie between that person and the computer: the same computer that understood that person's intention will be at a loss trying to understand another person. The reason is clear: most of the understanding is on the human side!
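To make this concrete, here is a minimal, self-contained sketch of the decoding loop just described: a "decoder" is trained on one person's brain signals and then fails on a second person whose patterns differ. Everything is invented for illustration (synthetic sine waves stand in for brain signals, the frequencies and the "left"/"right" intention mapping are assumptions); real BCI systems use actual EEG hardware and far richer models.

```python
# Toy illustration of the per-person BCI decoding tie described above.
# All signals, frequencies and intention labels are hypothetical.
import math
import random

random.seed(0)

def synth_trial(freq_hz, n=128, fs=128.0, noise=0.5):
    """One second of a noisy sine wave standing in for a brain signal."""
    return [math.sin(2 * math.pi * freq_hz * t / fs) + random.gauss(0, noise)
            for t in range(n)]

def band_power(signal, freq_hz, fs=128.0):
    """Signal power at one frequency (a single DFT bin): a crude stand-in
    for the spectral features real decoders extract."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * t / fs) for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * t / fs) for t, s in enumerate(signal))
    return (re * re + im * im) / n

def features(signal):
    # Two hypothetical frequency bands the decoder watches.
    return (band_power(signal, 10), band_power(signal, 22))

def train(trials):
    """Nearest-centroid 'decoder': average feature vector per intention."""
    centroids = {}
    for label, sigs in trials.items():
        feats = [features(s) for s in sigs]
        centroids[label] = tuple(sum(f[i] for f in feats) / len(feats)
                                 for i in range(2))
    return centroids

def decode(centroids, signal):
    f = features(signal)
    return min(centroids,
               key=lambda lb: sum((f[i] - centroids[lb][i]) ** 2 for i in range(2)))

# Person A (hypothetically) produces ~10 Hz activity when imagining "left"
# and ~22 Hz when imagining "right"; the decoder is fitted to A.
person_a = {"left": [synth_trial(10) for _ in range(20)],
            "right": [synth_trial(22) for _ in range(20)]}
decoder_a = train(person_a)

hits_a = sum(decode(decoder_a, synth_trial(10)) == "left" for _ in range(50))
hits_a += sum(decode(decoder_a, synth_trial(22)) == "right" for _ in range(50))
print(f"accuracy on person A: {hits_a}%")  # high: the decoder fits this person

# Person B's brain produces different patterns for the same intentions
# (here, different frequencies), so A's decoder degrades badly: this is
# the strong person/computer tie described in the text.
hits_b = sum(decode(decoder_a, synth_trial(16)) == "left" for _ in range(50))
hits_b += sum(decode(decoder_a, synth_trial(28)) == "right" for _ in range(50))
print(f"accuracy on person B: {hits_b}%")
```

The point of the sketch is not the classifier (nearest-centroid is deliberately simplistic) but the calibration problem: the decoder encodes one person's idiosyncratic signal patterns, which is why a new person must go through training, and why machine learning mostly helps by shortening that calibration.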
The evolution in BCI is happening in several directions:
- less invasive physical interfaces and better electrodes to pick up the signals
- extending the brain area where signals are captured (see image)
- decreasing the training time for the person and improving computer sensitivity (intelligence)
- reverse communication, from the computer to the brain (still more science fiction than science).
In the following posts I am going to address each of these evolutions.