
Why, Why, Why? Yet, AI does not answer

An illustrated example of a convolutional neural network used in a self-driving car to identify an object. The actual process of identification is lost in the many steps, and it would be difficult for the car to explain “why” it made a certain decision; it would be even more difficult for us to understand the explanation. Image credit: Karpathy, Stanford University

There is something in our human nature that presses us to ask “why”. And that stems from the assumption, the belief, that there is always a reason why. Once we acquire the idea of cause and effect, usually around the age of three, we never abandon it. Young kids keep asking why; as adults we may not ask why as often as we did as kids, but we know by heart that there should be an answer (and if one isn’t available we are willing to fabricate one, like invoking the supernatural…).

We have seen in these last decades amazing progress in autonomous systems, guided by ever smarter artificial intelligence. We have witnessed the Go world champion being defeated by AlphaGo playing some unexpected moves. Yet there was no way to ask AlphaGo: “Why did you play that move?”. Similarly, if we were seated in a self-driving car and all of a sudden the car veered to the left, we would not be able to ask it: “Why did you veer left, rather than braking?”

The fact is that AI has not been programmed to answer “why”. More than that: the processes followed by AI take into account thousands, sometimes millions, of possibilities, and an explanation of those processes would be far too complex for us to understand. Notice that our brain also takes into account millions, maybe billions, of signals/states generated by its neurons, but at a conscious level we only perceive a very limited number, and what we perceive is actually what makes us answer the “why”. We basically disregard the low-level machinery and focus only on the high-level semantics (which sometimes is misleading…).
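To make this concrete, here is a minimal sketch of what a trained network’s “decision” actually is. The weights below are random placeholders standing in for values a real system would learn from data, and the scenario (a car choosing between veering and braking) is invented for illustration:

```python
import numpy as np

# A tiny two-layer network "deciding" between veering left and braking.
# The weights are random placeholders standing in for the millions of
# values a real system would learn from data; none of them carries a
# reason a human could inspect.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)

def decide(sensor_input):
    hidden = np.maximum(0.0, sensor_input @ W1 + b1)  # ReLU activations
    logits = hidden @ W2 + b2
    return ["veer left", "brake"][int(np.argmax(logits))]

print(decide(np.array([0.9, 0.1, 0.4, 0.7])))
# The only available "explanation" is the arithmetic above: activations
# and weights, with no premise or rule attached to any of them.
```

Any “why” the system could offer is just this arithmetic restated, and the gap between that and a human-level “because” is the whole problem.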

Looking into the “why” is intriguing. It goes back to Leibniz’s “calculemus”, to the idea that there is a well defined process that, starting from a limited set of assumptions, leads to a unique conclusion. That process, plus the original assumptions/data, is the “why”. This was also a starting point in the development of artificial intelligence: finding a process (the one that created intelligence) and applying it to the solution of complex situations. So far it has failed. Actually, artificial intelligence stumbled onto a roadblock and did not advance any further with this approach (we got expert systems in the 90s, some very good ones, but very specific in their capabilities).
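Those rule-based systems, for all their narrowness, could answer “why” in exactly the Leibnizian sense: the chain of rules and assumptions that led to a conclusion. Here is a minimal sketch of the idea (the rules and facts are invented for illustration, not drawn from any real system):

```python
# Minimal forward-chaining rule engine. The "why" of any conclusion
# is the explicit trace of rules and facts that produced it.
# All rule names and facts below are invented for illustration.

rules = [
    ("obstacle ahead", ["object detected", "object in lane"]),
    ("brake",          ["obstacle ahead", "speed high"]),
]

facts = {"object detected", "object in lane", "speed high"}
trace = {}  # conclusion -> premises that justified it

changed = True
while changed:
    changed = False
    for conclusion, premises in rules:
        if conclusion not in facts and all(p in facts for p in premises):
            facts.add(conclusion)
            trace[conclusion] = premises
            changed = True

def explain(conclusion, depth=0):
    """Answer 'why?' by unwinding the trace back to the initial facts."""
    print("  " * depth + conclusion)
    for premise in trace.get(conclusion, []):
        explain(premise, depth + 1)

explain("brake")
# brake
#   obstacle ahead
#     object detected
#     object in lane
#   speed high
```

This kind of traceability is precisely what the self-learning approach described below gives up.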

In these last years, the advent of a different approach based on self-learning has opened up a new world, and we have seen tremendous progress in artificial intelligence. It is not -yet- an Artificial General Intelligence, AGI, but it is surely going beyond narrow-field artificial intelligence.

Sure, we have plenty of AI applications that are very “narrow”, like the AI used in a digital camera to find smiling faces, the one in a smart tripod to track an object, or the one in a self-driving car to become aware of potential obstacles… But the very way we are developing AI today through self-learning (using convolutional neural networks, deep learning…) is taking us down unexplored paths that cannot answer our “why”.

You define a certain frame, the initial conditions, for the AI to develop, but then it is on its own, and the path it takes, the kind of reasoning it develops, may be beyond our grasp.

This is both exciting and scary. It is actually not too different from what happens in the education of human beings. You teach a person, but you have no guarantee that the processing of your teaching will result in a person who processes facts as you do. As a matter of fact, we have initiated a creation process where the created entities may surprise their creators.

Symbiotic Autonomous Systems may take these issues a step further (or maybe they are just another facet of the same issues, being a “super system”). In a symbiotic autonomous system you have two -or more- interacting intelligences giving rise to a new emerging intelligence. How can we get an answer to our “Why?” from this emerging intelligence? Notice that attached to this there are huge ethical and social aspects, as well as accountability ones.

More thoughts on this in the future…

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a master’s course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.