
Self, selves and emerging self -III

The Chinese Room argument shows that, by just looking at interactions with a brain, we cannot determine whether that brain really thinks/feels. Image credit: Stanford University

Digital twins – The computational brain hypothesis

We are used to computers built with silicon chips. However, we have also had computers made of vacuum tubes, others made with relays, even mechanical computers using levers and gears. Those are in the past; today we have molecular computers and, on the horizon, quantum computers and spintronic computers. Each of these categories of computers is made using quite different components, and yet they are all computers. Hence, a computer does not require a specific technology to be a computer. Being a computer only requires being able to process data.

Why this preamble? Because there is a hypothesis (yet to be proved, but so far not proved false either) that a brain does not need neurones to be a “brain”. Other “technologies” might be used to create a brain. This is the “Computational Theory of Mind“, CTM, which, in short, affirms that the brain is a sort of computer and that the mind is the result of computation.

If we accept this hypothesis, then we can claim that once we are able to understand the kind of computation carried out in a brain, we could replicate that computation in an “artificial brain” no longer using neurones for its computation. This artificial brain would be our Digital Twin. If CTM is right, our Digital Twin would be able to think, feel and perceive itself, i.e. feel its “self”; moreover, each of its parts would perceive its own self, and all together they would concur in creating the self perceived by the whole Digital Twin!

BCIs, Brain Computer Interfaces, might, in theory and at some point in the future, be able to extract data from a brain and replicate them. A software simulation (taking for granted that sufficient processing power will become available in the next decades) could replicate the processing taking place in the brain and create the same kind of output. Now, what is the output of the brain?

Basically we have three kinds of output (a gross generalisation):

  1. signals to the periphery to execute some action, like taking a step forward, picking something up, looking in a different direction, voicing a word…
  2. emergence of perception, so that we become aware of the result of the “processing”. This does not happen for every bit of processing; actually, it happens in a minority of situations, since most processing goes on below our awareness layer. This emergence can take the shape of a thought or of a feeling. Thoughts are usually articulated by an inner voice, we talk to ourselves (it is interesting for those who are bilingual to notice in which language a thought is articulated), but they may also result in (virtual) images. In normal brains they do not result in smell sensations; those are only felt when we actually smell something, so if in a dream you really smell smoke, better wake up, since there is smoke!
  3. changes in the inner structure of the brain, reinforcing some synapses, depressing a few neurones and so on; this is what we usually call “memory”, which actually comprises both explicit memories (of a face, of a place, of a math theorem…) and implicit learning (like riding a bike, playing a piano…).

Now, all of these actions can in principle be achieved by a simulator in cyberspace, if the Computational Theory of Mind is correct. We already have robots whose actions are directed by a program, and we can have a self-modifying program that ensures subsequent processing takes into account what happened before (i.e. memory in a brain sense, not in a computer sense).
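To make the idea concrete, here is a minimal sketch (not taken from the article; the class name, stimulus and learning rate are purely hypothetical) of a program whose response to a stimulus depends on its own history, because each processing step modifies the parameters used by the next one, loosely analogous to reinforcing a synapse:

```python
# Toy sketch: "memory in a brain sense" as state that reshapes future processing.
# Purely illustrative; the stimulus name and learning rate are hypothetical.

class ToyBrainSim:
    def __init__(self, learning_rate=0.2):
        self.weights = {}          # crude stand-in for synaptic strengths
        self.learning_rate = learning_rate

    def process(self, stimulus):
        # The response depends on what happened before, not only on the input.
        strength = self.weights.get(stimulus, 0.0)
        response = "act" if strength > 0.5 else "ignore"
        # Processing modifies the structure used for later processing,
        # i.e. the program "self-modifies" its own parameters (point 3 above).
        self.weights[stimulus] = strength + self.learning_rate * (1.0 - strength)
        return response

sim = ToyBrainSim()
print([sim.process("bell") for _ in range(6)])
# The same stimulus eventually elicits a different response:
# ['ignore', 'ignore', 'ignore', 'ignore', 'act', 'act']
```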

The area of thinking/feeling is trickier. Whilst for the other two there is consensus that it is doable (actually, it is being done), when we come to feeling and thinking the positions diverge. Clearly, accepting CTM would imply accepting the emergence of thoughts and feelings, but that is usually the point where some people say “yes, but”.

Could our Digital Twin think (and feel)? Can we test this hypothesis?

The test for having an intelligence on a par with ours is the Turing test. If I am sitting in a room and communicating with a third party that could be either a person or a computer, and after extensive interactions I am not able to tell which is which, then we say that the computer has passed the Turing test and has an intelligence equivalent to ours.
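As a rough sketch of the setup (the reply functions and the question are hypothetical placeholders), the judge only ever sees text coming back from two anonymous channels and has to guess which one is the machine:

```python
import random

# Toy sketch of the imitation game: the judge exchanges text with two anonymous
# channels, one backed by a person and one by a program, and must guess which is
# which. The reply functions and the question are hypothetical placeholders.

def human_reply(question: str) -> str:
    return "Hard to say... it depends on the day."

def machine_reply(question: str) -> str:
    return "Hard to say... it depends on the day."  # a program imitating a person

players = [("human", human_reply), ("machine", machine_reply)]
random.shuffle(players)                        # hide who sits behind each label
channels = {"A": players[0], "B": players[1]}

question = "What do you feel when you look at a sunset?"
for label, (_, respond) in channels.items():   # the judge sees only the text
    print(label, "->", respond(question))

guess = random.choice(["A", "B"])              # from the text alone, a coin flip
actual = "A" if channels["A"][0] == "machine" else "B"
print("judge's guess:", guess, "| actually the machine:", actual)
# If, over many exchanges, guesses are no better than chance, the machine is
# said to have passed the test.
```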

However, this evaluation at the edges is not completely satisfactory when we want to test for thinking, feeling and awareness, as the Chinese Room argument demonstrates.

Imagine being seated in a room, and someone outside the room sends you a question in Chinese. You don’t know Chinese, but you have the possibility to use a computer that can look at a question in Chinese and provide a good answer in Chinese (that computer has passed the Turing test). You could even be that computer yourself, provided you have the possibility of matching question strings with the appropriate answers (in this thought experiment it does not matter that you would have to spend a lot of time searching for a match to the question and picking up the answer attached to it). The point is that by manipulating symbols you can provide an answer that makes sense to the person interrogating you. After some interactions the person questioning you will come to the conclusion that you understand Chinese, whilst, in reality, you haven’t the foggiest idea of what the question was, nor the answer.
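A minimal sketch of the room’s “rulebook” (the question/answer pairs are hypothetical placeholders): the occupant produces fluent answers by pure string matching, with no understanding of either question or answer:

```python
# Toy sketch of the Chinese Room "rulebook": answers are produced by pure string
# matching, with no understanding of either question or answer. The pairs below
# are hypothetical placeholders.

RULEBOOK = {
    "你叫什么名字？": "我叫小明。",              # "What is your name?" -> "My name is Xiao Ming."
    "你喜欢喝茶吗？": "是的，我很喜欢喝茶。",      # "Do you like tea?" -> "Yes, I like tea very much."
}

def person_in_the_room(question: str) -> str:
    # The occupant does not read Chinese; they only look up matching symbols.
    return RULEBOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(person_in_the_room("你喜欢喝茶吗？"))
# From the outside the answers look fluent, yet nothing inside "understands" them.
```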

This thought experiment demonstrates that working at the edges of the brain (or of the digital twin) does not settle the issue of thinking, nor of feeling (a computer can be programmed, affective computing, to show feelings and to show that it has empathy).

Notice that today we cannot replicate a brain in bits in all its complexity, but there is nothing, at least so it seems today, that would make this a physical impossibility (it may be impractical and beyond today’s technological capability, which it certainly is, and undesirable, because it might require killing the physical brain… but in principle it might be feasible).

There are a few scientists, however, who claim that brain processing has a quantum nature, the quantum brain hypothesis. In this case it may be physically impossible to actually replicate a brain, since any attempt to read the brain in order to create an equivalent digital twin would alter the brain itself, leading to a biased result (quantum uncertainty). If this were the case, a full Digital Twin of a brain would be impossible, and the subset of the brain representable by a Digital Twin might miss what is needed to think, feel and be aware.
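If brain processing really had an essential quantum component, this impossibility would be reminiscent of the no-cloning theorem of quantum mechanics (stated here only as an analogy, not as the author’s argument): no single operation can copy an arbitrary unknown quantum state.

```latex
% No-cloning theorem (standard quantum mechanics result, quoted as an analogy):
% there is no unitary operation U that copies every unknown state |psi>.
\nexists\, U \ \text{unitary such that}\quad
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
  = \lvert\psi\rangle \otimes \lvert\psi\rangle
\quad \text{for all } \lvert\psi\rangle .
```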

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.