Machine’s Self, machine’s awareness
Could a machine have a self, perceive its own identity, and be aware of it? (By "machine" I also include its AI software.)
The question has been debated for a long time and there is still no answer accepted by all. I am not trying to answer it here (I am surely not qualified); rather, I want to frame the question from the SAS perspective.
In the first post of this series I pointed out that humans have a sense of self in terms of their (our) body and its position in space. We know we have a hand, we know how to use it and where it is. An autonomous robot has to have the same knowledge. It needs to know it has arms and hands (not limited to two!), how to use them and where they are.
As we are aware of our surroundings, an autonomous system needs to be aware of its surroundings and understand them. A self-driving car needs to distinguish a tree (unlikely to move around) from a dog (which might jump off the sidewalk). It also needs to be aware of its own capabilities to accelerate, change direction and brake. In this respect there is not much difference between our being aware and an autonomous system being aware.
It gets more difficult to answer whether a machine could be aware of being aware. Could a machine brood on the sense of its own existence and come up with "I think, therefore I am"?
According to the computational brain hypothesis, a machine might (in the future) have broad awareness, reflect about itself and have feelings. This lies in the future, however, and we do not know what would actually turn sophisticated, intelligent software into sentient software; in principle, sentience might not require a level of complexity as high as that needed to mimic a human brain.
Machines can surely operate based on goals (like AlphaGo), and through independent learning their software can generate behaviours that are "unexpected" by their programmers. A machine can also be programmed to look like it has feelings, and it can even be mistaken for a sentient being by a human (to a limited extent so far, but machines are getting better and better at it).
In the SAS context machine awareness is a prerequisite for machines to become fully autonomous, and technology evolution is clearly going to improve and extend machine awareness. However, being "conscious" is not necessarily a goal. Integrated Information Theory (IIT) of consciousness has developed a metric, phi, to characterise various levels of consciousness, applicable to humans, other animals and machines. The theory posits that all information processing should be integrated (in the sense of every part conditioning every other part, via forward and backward loops) and as such it requires a certain threshold of complexity (yet to be determined) below which no consciousness arises.
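Computing IIT's actual phi is notoriously hard (it involves minimizing over all partitions of a system), but the core intuition of "integration" can be sketched with a much simpler proxy: how much information the joint behaviour of two parts carries beyond what the parts carry independently, i.e. their mutual information. The toy below is my own illustration of that intuition, not IIT's phi; the example distributions are made up for the purpose.

```python
# Toy illustration of "integration" as mutual information between two parts.
# NOTE: this is NOT IIT's phi, only a proxy for the intuition that an
# integrated system carries information beyond its parts taken separately.
import math
from itertools import product

def mutual_information(joint):
    """I(X;Y) in bits, given a joint distribution as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p  # marginal over the first part
        py[y] = py.get(y, 0.0) + p  # marginal over the second part
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Two binary units that always mirror each other: every part conditions
# the other, so the joint state carries 1 bit beyond the marginals.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

# Two independent binary units: no integration at all.
independent = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

print(mutual_information(coupled))      # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```

In IIT the analogous quantity is evaluated across every possible way of cutting the system in two, and phi is the information lost under the *least* damaging cut; a system below the complexity threshold has some cut that loses nothing, giving phi = 0.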
Another way to look at this is: would a machine aware of being aware behave in a different way, i.e. would it make any difference to us?
Probably, from an ethical and empathic point of view, it would make a difference. However, we have to consider that we respect life even when we feel there is no awareness in the broad sense (consciousness). Many insects, as an example, do not seem to be aware in a broad sense (they are obviously aware of potential dangers and of their environment), and yet people tend to respect butterflies and ladybugs (less so mosquitoes, though). We already have some robots that seem to have feelings (affective computing), and we have seen people reacting to their presence and interactions as if they were sentient beings.