There is something in our human nature that presses us to ask "why". It stems from the assumption, the belief, that there is always a reason why. Once we acquire the idea of cause and effect, usually around the age of three, we never abandon it. Young kids keep asking why; as adults we may not ask why as often as we did as kids, but we know by heart that there should be an answer (and if there isn't one available we are willing to fabricate one, like invoking the supernatural…).
We have seen in these last decades amazing progress in autonomous systems, guided by ever smarter artificial intelligence. We have witnessed the Go world champion being defeated by AlphaGo playing some unexpected moves. Yet there was no way to ask AlphaGo: "Why did you play that move?". Similarly, if we were seated in a self-driving car and all of a sudden the car veered to the left, we would not be able to ask it: "Why did you veer left, rather than braking?"
The fact is that AI has not been programmed to answer "why". More than that: the processes followed by AI take into account thousands, sometimes millions, of possibilities, and an explanation of those processes would be far too complex for us to understand. Notice that our brain also takes into account millions, maybe billions, of signals and states generated by its neurons, but at a conscious level we perceive only a very limited number, and what we perceive is actually what shapes our answer to the "why". We basically disregard the low-level machinery and focus only on the high-level semantics (which sometimes is misleading…).
Looking into the "why" is intriguing. It goes back to Leibniz's "calculemus", to the idea that there is a well-defined process that, starting from a limited set of assumptions, leads to a unique conclusion. That process, plus the original assumptions and data, is the "why". This was also a starting point in the development of artificial intelligence: finding a process (the one that generates intelligence) and applying it to the solution of complex situations. So far this has failed. Artificial intelligence actually stumbled onto a roadblock and did not advance any further along this approach (we got expert systems in the 1990s, some very good ones, but very specific in their capabilities).
In recent years the advent of a different approach, based on self-learning, has opened up a new world, and we have seen tremendous progress in artificial intelligence. It is not, yet, an Artificial General Intelligence (AGI), but it is surely going beyond narrow-field artificial intelligence.
Sure, we have plenty of AI applications that are very "narrow", like the AI used in a digital camera to find smiling faces, the one in a smart tripod to track an object, or the one in a self-driving car to detect potential obstacles… But the very way we are developing AI today through self-learning (using convolutional neural networks, deep learning…) is taking us down unexplored paths that cannot answer our "why".
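The point can be made concrete with a minimal sketch (the task, the sensor readings, and all names here are invented for illustration, not taken from any real system): even in the tiniest self-learned model, what the machine "learns" is just a set of numeric weights, and nothing in those numbers encodes a "why".

```python
import math, random

random.seed(0)

# Invented toy task: decide whether to "veer left" (1) or "brake" (0)
# from two made-up sensor readings (obstacle proximity, lateral clearance).
data = [((0.9, 0.8), 1), ((0.2, 0.1), 0), ((0.8, 0.7), 1), ((0.1, 0.3), 0)]

w = [random.random(), random.random()]  # weights start as random numbers
b = 0.0

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # sigmoid: output between 0 and 1

# Self-learning: gradient descent nudges the weights toward fewer errors.
# Nobody writes a rule like "veer left when clearance is high".
for _ in range(5000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= 0.1 * err * x[0]
        w[1] -= 0.1 * err * x[1]
        b -= 0.1 * err

# The model now decides correctly on the toy data, but its entire
# "reasoning" is just these three numbers; there is no "why" to query.
print("weights:", w, "bias:", b)
print("veer left for (0.85, 0.75)?", predict((0.85, 0.75)) > 0.5)
```

Scale this from three parameters to millions, stacked in many layers, and the gap between "the decision was correct" and "here is the reason" becomes the explainability problem the text describes.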
You define a certain frame, the initial conditions, for the AI to develop, but then it is on its own, and the path it takes, the kind of reasoning it develops, may be beyond our grasp.
This is both exciting and scary. It is actually not too different from what happens in the education of human beings. You teach a person, but you have no guarantee that the processing of your teaching will result in a person who processes facts as you do. As a matter of fact, we have initiated a creation process where the created entities may surprise their creators.
Symbiotic Autonomous Systems may take these issues a step further (or maybe they are just another facet of the same issues, being "super systems"). In a symbiotic autonomous system you have two, or more, interacting intelligences giving rise to a new emerging intelligence. How can we get an answer to our "Why?" from this emerging intelligence? Notice that attached to this there are huge ethical and social aspects, as well as issues of accountability.
More thoughts on this in the future…