
Is AI “sentient”?

Graphic representation of a decision graph used by a chatbot. Image credit: Eshan William, GamingDeputy

Since a Google engineer made newspaper headlines by suggesting that an AI-powered chatbot had become “sentient”, I have received a number of emails and calls asking for my opinion on the matter. Hence I decided to share it broadly through this post, also considering that in our Digital Reality Initiative we are addressing several aspects of AI and, in particular, that of an emergent AI resulting from massively distributed and loosely connected AI entities.

First of all, let me state what I always say when I am asked this sort of question: “I am no expert, you would be better off asking someone else!”. So why should I go on and give an answer? Because I feel I can apply some “educated” common sense that can lead to further thinking. Hence, do not expect an “answer”, only some thoughts that may help you in searching for one.

The claim that made the news was about a chatbot, i.e. software designed to hold a conversation in natural language with a human, reaching such a level of understanding that it could be considered “sentient”. Now, before you go on reading, take a minute to watch the clip. There are two chatbots (rendered as human figures) talking to one another. Those chatbots were created using GPT-3 (courtesy of OpenAI, and available to anybody wishing to create AI applications). Today we have even more powerful tools leading to even better AI, like Gopher, GLaM, … and more are coming.

How do you feel after watching that? Sure, it felt a bit artificial at some points, but it also felt credible at others… at least to me.

Now, back to the question.

How do I know you are sentient? Well, let’s say that I take it for granted: you are a human being, therefore you are sentient. But suppose you are facing a person in a coma. You would say he is not sentient at that time. When he opens his eyes, you look for signs that he is back; you ask questions and evaluate the responses. If there are none, you may doubt that he is sentient at that moment.
The point I am making is that the way we gauge being “sentient” is by observing the reactions to our questions and interactions. In the end, this is not that different from the Turing test, where you are asked to tell a human from a computer on the basis of the interactions you engage in with it/him.

We have had plenty of debates on what the Turing test really means, particularly as computers were getting closer and closer to passing it (and now the consensus is that they have passed it). Many observers noticed that:

“yes, you can no longer tell a computer from a human based solely on interactions, but that does not make it a human; nor can you assume any sort of human feelings, even if it is programmed to display consistent emotions”

I guess in this discussion there are three main characteristics we associate with a living entity, in particular with a human:

  • awareness
  • self-awareness (consciousness)
  • sentience

There is no doubt that being aware is a fundamental aspect of being alive. We can find awareness in every sort of life, from bacteria to multi-cellular beings. Of course, there are many degrees of awareness. If you touch a hot stove, your awareness first appears at a totally unconscious level: you immediately remove your hand from the stove, thanks to a decision taken at the peripheral level, before your brain becomes aware of the heat. After a little while (a few hundred milliseconds) your brain gets the information of a hot stove, you become conscious of it and you consciously act upon it (but your hand has already moved away from the stove!). You see, there are levels of awareness even in this very simple case…

Likewise, we can see levels of self-awareness. Have one drink too many and your self-awareness is bound to go down a notch… Are animals self-aware? Many studies respond in the affirmative, showing different levels of consciousness in different animals. In some way it correlates with brain size (and most likely with specific brain structures that come along with a bigger brain). Many species are able to recognise themselves in a mirror, a common test used to investigate self-awareness.

Sentience refers to the depth of awareness an individual (an entity?) possesses about itself or others. So, in a way, it is connected to awareness and self-awareness. And, again, it can come in several hues.

If I look at machines and AI, there is no question that over the years we have been able to infuse them with the capability to harvest data from their environment: we have embedded sensors and provided software that analyses the data collected and can steer the machine’s or software’s reactions. Over time, these sensing and analytics capabilities have grown, to the point that today a machine may get more data through its “senses” than we can get through ours, and we have kept growing the capability to analyse those data and take decisions. True, for a long time these analyses and the resulting decisions were the product of a mechanical process: “if that, then do this”. Nowadays, however, it is becoming more and more common to have the software learn by itself what is going on and how to react, not unlike the learning process of a toddler.
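To make that contrast concrete, here is a toy sketch of my own (an illustration, not code from any real AI system): a hand-written “if that, then do this” rule with a threshold fixed by the programmer, next to a trivially “learned” rule whose threshold is inferred from labelled experience. The hot-stove scenario, the 50 °C threshold and the example data are all invented for the sake of the illustration.

```python
# A hand-written rule: the designer decides the reaction in advance.
def rule_based_reaction(temperature_c: float) -> str:
    # "if that, then do this": the 50 degree threshold is fixed by the programmer
    return "withdraw" if temperature_c > 50 else "stay"


# A "learned" rule: the threshold is inferred from labelled examples,
# a bit like a toddler learning from repeated experience.
def learn_threshold(examples: list) -> float:
    # Simplest possible learning: place the threshold halfway between the
    # hottest temperature labelled "stay" and the coolest labelled "withdraw".
    stay = [t for t, label in examples if label == "stay"]
    withdraw = [t for t, label in examples if label == "withdraw"]
    return (max(stay) + min(withdraw)) / 2


def learned_reaction(temperature_c: float, threshold: float) -> str:
    return "withdraw" if temperature_c > threshold else "stay"


experience = [(20, "stay"), (35, "stay"), (60, "withdraw"), (80, "withdraw")]
threshold = learn_threshold(experience)   # 47.5 with this data
print(rule_based_reaction(70))            # withdraw
print(learned_reaction(70, threshold))    # withdraw
```

Both functions react the same way here, but the second one never had its threshold written down by a human: change the experience and the behaviour changes with it. Real machine learning is vastly more sophisticated, of course, yet the shift it represents is exactly this one.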

Does this mean that those machines/software are starting to have feelings? Surely not, one would say; but then again, what are feelings, and most importantly, how can we tell they are there? Only through observation (a change in skin colour is surely a component, but with affective computing and smart materials it won’t be difficult to create rosy cheeks on a robot’s face, as needed!).

So, my point is that between being sentient and not being sentient there is no sharp dividing line; rather, there are many shades of grey and a very broad fuzzy area. What we are seeing is that AI is in this grey area and seems to be progressing more and more in the direction of what we have been used to calling sentient. Whether there is a threshold somewhere, and whether AI has already passed it, is not for me to say.

I guess a more interesting question is whether (and when) we will come to perceive a machine/AI as a sentient being… and, of course, what ethical and societal implications will arise. Here again, I do not think there will be a single day when this happens. It will happen over many years, and we will realise it only when looking back.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.