
Megatrends for this decade XXV

An interesting roadmap for the coming two decades on the evolution of brain computer interfaces. Notice the categorisation of interfaces into wearable, semi-implantable and implantable, and the expected evolution according to the goals: replace, restore, enhance, improve and research. Image credit: BNCI Horizon 2020

17. Brain Computer Interfaces

The connection of our brain to a computer was, in a way, imprinted in the name that many gave to the earliest computers: “Electronic Brains”. If they are both “brains” it makes sense to look for a connection between them: easier said than done. An “electronic brain” is a fixed thing with some hooks you can use as Input/Output gateways. That is what we use to connect a keyboard, a printer, as well as other “electronic brains”. Software then takes care of making sense of the signals received through the gateway and makes sure that the signals sent out can be understood by the receiving party.

An example of computer to brain connection via nerves. The Argus II implant on a “blind” retina stimulates the optic nerve, allowing the brain to perceive the light that hits the sensors on the implant. Image credit: University of Michigan / Second Sight Medical Products

Our brain also connects to a variety of peripherals through “nerves”, and one approach would be to use these nerves as entry/exit points. However, although this is done in some cases (like the artificial retina – see photo on the side – or the artificial ear in the computer to brain direction, and some limb/hand prosthetics in the brain to computer direction), BCI aims at a direct connection from the brain to a computer and vice versa.

This is quite tricky since there is no connector “inside the brain”: one needs to monitor brain activity, in terms of neuronal activation, to extract a signal (in the brain to computer direction) and to influence the activation of neurones (in the computer to brain direction). There is no single neurone that can be associated with a signal, nor a single neurone that can be activated to influence the brain in a given way. More than that: the neurones whose activity defines a “signal” (or that have to be triggered to convey a signal) may be located in different parts of the brain and, just to add complexity to complexity, the set of neurones involved in an activity may (and does) change over time.

Hence, a connection like the one established between a computer and its peripherals is simply not possible. To get a signal from the brain we have to “look” at its global activity and “guess” what is going on. It is a bit like a car that, to detect the driver’s intention to turn, rather than being connected to that intention through the steering wheel, had to rely on observing how the driver moves his eyes, on tell-tale signs in his facial expression and on interpreting what he says (assuming he is saying something relevant at that particular moment). Wouldn’t that be a desperate endeavour?

Micro LEDs were used to optically detect seizures. The flexible optical device is shown on the left with a green-emitting LED. The photo on the right shows the device performing seizure detection on a rat’s brain. Image credit: FindLight

Yet, this is exactly what researchers have been doing to extract a signal from the brain to be used by a computer. The really amazing thing is that they are succeeding!

The brain’s electrical activity can be captured through sensors placed on the skull (non-invasive sensing) or placed on the cortex or inside the brain (in both cases surgery is needed). Recent technological advances are enabling the placement of multiple sensing points and the transmission of the detected activity over a wireless connection. Obviously, the closer the sensor is to the area generating the electrical activity, the less noise in the signal and the more precise the measurement. However, no one looks forward to an invasive procedure, so this is done only when strong medical reasons are present, as in the case shown in the figure of a patient suffering from epileptic attacks. Detecting with higher precision and sensitivity the onset of a condition that would lead to an epileptic attack makes it possible to take countermeasures that avert the attack.
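To give a feel for what such detection involves, here is a minimal sketch in Python, on synthetic data and with made-up parameters (certainly not a clinical algorithm): the power of the recorded signal is computed over sliding windows, and a window is flagged when its power rises well above the baseline.

```python
# Minimal sketch of flagging anomalous electrical activity in one EEG channel.
# All data, sampling rates and thresholds are illustrative, not clinical.
import numpy as np

def detect_anomalous_activity(eeg, fs=256, win_s=2.0, threshold_factor=3.0):
    """Return the indices of windows whose power exceeds the baseline by a factor.

    eeg: 1-D array of samples from one electrode (hypothetical data)
    fs: sampling rate in Hz
    """
    win = int(win_s * fs)
    n_windows = len(eeg) // win
    powers = np.array([np.mean(eeg[i*win:(i+1)*win] ** 2) for i in range(n_windows)])
    baseline = np.median(powers)                      # robust estimate of "normal" power
    alerts = np.where(powers > threshold_factor * baseline)[0]
    return alerts, powers

# Synthetic example: 60 s of noise with a burst of large, slow oscillations at ~40 s
fs = 256
t = np.arange(0, 60, 1 / fs)
eeg = np.random.randn(len(t))
eeg[40*fs:44*fs] += 5 * np.sin(2 * np.pi * 3 * t[40*fs:44*fs])
alerts, _ = detect_anomalous_activity(eeg, fs)
print("Windows flagged as anomalous:", alerts)
```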

Pursuing the goal of tackling disabilities (not just epilepsy, but also Parkinson’s, dementia…), researchers are perfecting existing technologies and exploring/creating new ones. The convergence of material science (to create better sensors – graphene seems a good candidate: tinier, bio-compatible and with multi-sensing capabilities), of artificial intelligence, in particular machine learning for better signal processing, and even of robotics (for accurate placement of sensors in the brain) is expected to lead to significant progress in this decade, supporting the claim of this Megatrend for much better BCI. Progress in signal processing and machine learning will compensate for the lower signal precision provided by non-invasive BCI, and this will expand the trials (today, as noted, limited to people having very strong medical reasons that require brain surgery).

The way to go is still very long and full of obstacles: when we hear today of computers (robots) that can be controlled by the “mind” of an operator (like a paralysed person who, using a BCI, can control a robotic arm or a wheelchair – watch the clip), we are implicitly led to believe that the computer can read a person’s mind. This is not the case. What really happens is that the person has trained his brain to generate a specific electrical activity that can be interpreted by a computer and thus result in a specific action controlled by the computer. True, there are sensors that pick up this electrical activity, but it is the person who, through training, learns to generate that electrical activity. On the computer side, signal processing and machine learning – no mean feat – identify that specific electrical activity among the many others running in parallel and use it as the input to start an action. In a way it is more the human brain learning how to control a computer than the computer learning what the human brain is thinking!
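As an illustration of the computer side of this loop, the toy sketch below (synthetic data, deliberately simplistic features, not any specific BCI product) trains a standard classifier to tell apart two mental “states” the user has learned to produce, using band-power features of short EEG windows; the decoded label would then be mapped to an action such as steering a wheelchair.

```python
# Toy sketch of BCI pattern recognition: calibrate on repeated trials of two
# learned mental states, then decode a new window. Data is synthetic.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 256  # sampling rate, Hz

def bandpower_features(window, fs, bands=((8, 12), (13, 30))):
    """Average spectral power in the alpha and beta bands of one EEG window."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    return [psd[(freqs >= lo) & (freqs <= hi)].mean() for lo, hi in bands]

def synthetic_trial(state):
    """Fake 2-second EEG trial: the 'move' state adds extra beta-band activity."""
    t = np.arange(0, 2, 1 / fs)
    x = np.random.randn(len(t))
    if state == "move":
        x += 2 * np.sin(2 * np.pi * 20 * t)   # stronger 20 Hz (beta) component
    return x

# Calibration session: the user produces each state many times
X, y = [], []
for state in ("rest", "move"):
    for _ in range(50):
        X.append(bandpower_features(synthetic_trial(state), fs))
        y.append(state)

clf = LinearDiscriminantAnalysis().fit(X, y)

# Online use: a new window arrives and is mapped to an action
new_window = synthetic_trial("move")
print("Decoded intention:", clf.predict([bandpower_features(new_window, fs)])[0])
```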

According to this Megatrend, in this decade there is the possibility of having a computer start to understand some basic “intentions” of a brain through machine learning and training (of the computer). But this is nowhere near having a computer that can read our mind!

There is also another big hurdle to overcome. Each brain is unique in terms of electrical activity (to the point that this can be used to identify a person, as a digital signature of that person) and, even more, this activity changes over time. Hence, a computer that can understand one person’s intention to move a wheelchair will not be able to understand a different person having the same intention. BCIs are, and will remain for the foreseeable future, person-specific. You can move the software from one computer to another so that the new computer can interface with that person, but you cannot have that computer understand a different person (unless you restart the training from scratch with the new person).
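The person-specificity can be illustrated with a small synthetic experiment: calibrate one model per user, then try to decode one user’s data with another user’s model. The data and the per-person “offset” below are entirely made up; the only point is that the transfer fails without retraining.

```python
# Why today's BCIs are person-specific: a classifier calibrated on one person's
# (synthetic) signal statistics does not carry over to another person.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def calibration_data(person_offset, n=100):
    """Synthetic 2-feature trials for one person: 'intention' 0 vs 'intention' 1."""
    X0 = rng.normal(loc=person_offset, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=person_offset + 3.0, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Calibrate one model per person (the offsets stand in for individual differences)
models = {}
for person, offset in {"alice": 0.0, "bob": 10.0}.items():
    X, y = calibration_data(offset)
    models[person] = LogisticRegression().fit(X, y)

# Decode Bob's data with his own model and with Alice's model
X_bob, y_bob = calibration_data(10.0)
print("Bob decoded with Bob's model:  ", models["bob"].score(X_bob, y_bob))
print("Bob decoded with Alice's model:", models["alice"].score(X_bob, y_bob))
```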

It will remain easier for a computer to read our “mind” in terms of mood and feelings by looking at our face and analysing our voice and our speech than by analysing the electrical activity of the brain.

Schematics of a Deep Brain Stimulation system using implanted electrodes controlled by a computer implanted under the skin. Image credit: National Institute of Neurological Disorders

So far I have considered only the brain to computer direction. The reason is that, if this direction is difficult, the other is close to impossible given today’s knowledge and technology.

Computer to brain communication today results in a coarse induction of some physiological response, such as the already mentioned blocking of an epileptic attack. Here an electrical current is sent to areas of the brain where anomalous electrical activity has been detected, and this artificial current overwhelms the ones generated by the brain, leading to a reset (it is not that different from the use of a defibrillator to block anomalous heart electrical currents: the one generated by the defibrillator is stronger and leads to a reset of the heart’s own currents). More recently, Deep Brain Stimulation (by inserting electrodes in the brain or by focussing beams of wireless energy on a certain spot in the brain) has been used to relieve certain symptoms of neurological disorders.
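Conceptually, such a closed-loop “detect and reset” system can be sketched as below. Everything here (signals, thresholds, the stimulate() stub) is hypothetical and only meant to show the logic of monitoring activity and triggering a stimulation pulse when an anomaly is detected.

```python
# Conceptual closed-loop sketch: monitor windows of neural activity and
# trigger a (stubbed) stimulation pulse when anomalous power is detected.
import numpy as np

fs = 256                 # sampling rate, Hz
THRESHOLD = 3.0          # anomaly threshold relative to baseline power

def window_power(window):
    return float(np.mean(window ** 2))

def stimulate():
    # Stand-in for driving the implanted electrodes; a real device would
    # deliver a controlled current pulse to the targeted brain area.
    print("Stimulation pulse delivered")

def closed_loop(stream_of_windows, baseline_power):
    for i, window in enumerate(stream_of_windows):
        if window_power(window) > THRESHOLD * baseline_power:
            print(f"Anomalous activity in window {i}:")
            stimulate()

# Synthetic demo: normal windows plus one with much larger amplitude
windows = [np.random.randn(2 * fs) for _ in range(5)]
windows[3] = 4 * np.random.randn(2 * fs)      # the "anomalous" window
closed_loop(windows, baseline_power=1.0)
```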

Technology evolution is now making it possible to send signals to single neurones or to groups of neurones using optogenetics, but so far this has only been used on animals, and the goal is to study how neurones operate with respect to a given brain function, i.e. it is not used to transfer information to the brain.

In this decade, called by some the decade of the brain, there is a strong expectation of breaking the code of the brain, i.e. of understanding its way of working, and that might lead in the future to better cures for neurological disorders. Personally, I think that the dream of downloading information onto a brain (who hasn’t dreamt of learning by plugging in a flash memory rather than spending long hours studying…) will remain a dream for the foreseeable future.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.