
The Future of Artificial Intelligence

The roadmap for EASA’s certification of AI in aircraft systems. Image credit: EASA

Let’s face it: Artificial Intelligence has become a buzzword that everybody knows, although hardly anybody could really explain what it is…

The root of the term goes back to that famous 1955 paper calling researchers to arms for a summer research project on artificial intelligence, to be held at Dartmouth College in Hanover, New Hampshire. There, AI was defined as any machine behaviour that would resemble the one adopted by a human being in tackling a problem and leading to a solution. In a way it goes back to the Turing test, where a human could not tell a machine from a human through any number of interactions. Intelligence defined as the observed behaviour.

It seemed a reasonable, “practical” approach: in the end, if we want an Intelligent Machine, a machine that to all extents appears intelligent in its behaviour can be considered intelligent. However, as more and more machines started to show prowess in a variety of difficult (as measured by us) endeavours, like detecting a cancer by looking at a radiography or creating a new melody, many started to object that such machines were just machines good at doing something, not intelligent. Does a software program that can tell you there is a very high probability of cancer in that radiography appreciate what cancer is, what such a diagnosis would mean to that person? Will the software creating a melody enjoy its creation and keep humming it in its spare time? Obviously not (at least this is what most of us would think). Hence, many claim, this is not intelligence!

This preamble is to clarify that as AI technology progresses we find it more and more difficult to define and agree on what artificial intelligence is, or has to be. This becomes important as we look at the future of AI: should it include awareness, consciousness?

I found a very thought-provoking discussion of these topics in an article on What the Near Future of AI could be. In this paper the author avoids the trap of discussing what AI is in order to outline its evolution. Rather, he focuses on the nature of the data used by AI and the nature of the problems addressed by AI.

A first evolution, already in the making, concerns the nature of data: a shift from “big data” to “good data”. In the last two decades significant progress has been made in machine learning: you feed an AI millions of dog images and the AI gets better and better at recognising dogs. Notice that it gets better because it learns what a dog is like, not because, having many photos of dogs, it can compare a new one to a similar one it has already seen. After a while the AI is able to recognise a new breed of dog, never encountered before… In the last few years it was noticed that by using curated (good) data in the training process an AI can become better, quicker. So the focus has shifted from any kind of raw data, as long as there is plenty of it, to a reduced set of good data.

The next step, also being experimented with successfully now, is to use synthetic data for training. These data are created by the AI system itself on the basis of rules. Think about playing chess. Until ten years ago, training was based on thousands and thousands of games played by grand masters, and indeed that resulted in an AI that could play chess at the level of masters and even beat the world champion. More recently a different training approach was taken: have the AI learn the rules of chess and then play against itself to generate data (synthetic data). This approach has not just proven to be more effective (AlphaZero became the best chess player in the world in just 9 hours, playing 44 million games against itself), it also led to a surprising discovery: the AI was making moves that surprised master players; they seemed to be born out of another form of intelligence!
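
To make the self-play idea concrete, here is a minimal, runnable sketch in Python. It uses tic-tac-toe as a stand-in for chess, and a random move picker as a stand-in for a learned policy (AlphaZero itself adds deep neural networks and tree search); the point is only the loop: rules in, synthetic training data out. All names in the sketch are my own.

```python
import random

# The rules of the game are the only human input.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    """Play one game against itself, recording every position and move."""
    board, history, player = [" "] * 9, [], "X"
    while winner(board) is None and " " in board:
        move = random.choice([i for i, c in enumerate(board) if c == " "])
        history.append(("".join(board), player, move))
        board[move] = player
        player = "O" if player == "X" else "X"
    result = winner(board) or "draw"
    # Label each recorded position with the final outcome: synthetic data.
    return [(pos, mover, move, result) for pos, mover, move in history]

# Generate a small synthetic dataset. A real system would now train a
# better move-selection policy on this data and loop back to self-play.
dataset = [sample for _ in range(1000) for sample in self_play_game()]
print(len(dataset), dataset[0])
```

Replace the random choice with a policy trained on the growing dataset, and you have the essence of the self-play loop.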

A second evolution, which took place in the last few years, was to use a variation of the above synthetic data generation by splitting the machine in two: Generative Adversarial Networks consist of a Generator (creating synthetic data) and a Discriminator checking those synthetic data against a data set of real data. The Discriminator tries to find a discrepancy between the synthetic data and its sample of real data and, if it finds one, rejects the synthetic data. The rejection is a signal to the Generator that the synthetic data are not good enough; the Generator will therefore change its approach and try with a new set, learning in the process.
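
As a sketch of what this adversarial loop looks like in code, here is a minimal GAN on toy one-dimensional data, assuming PyTorch is available; the tiny networks and hyperparameters are illustrative, not any particular published model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a synthetic 1-D sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples from N(3, 0.5)
    fake = G(torch.randn(64, 8))            # synthetic data from the Generator

    # The Discriminator learns to accept real data (1) and reject synthetic (0).
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # The rejection is the training signal: the Generator adjusts until the
    # Discriminator can no longer tell its output from the real thing.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

print(G(torch.randn(256, 8)).mean().item())  # should drift toward 3.0
```

In a real GAN both networks are deep and the data are images or other high-dimensional samples, but the loop is exactly this rejection-and-retry signal.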

All this is good, but in some cases, actually most cases if we look at real life, there are no constitutive rules that can be used to generate synthetic data (there is no rule for really designing a face: saying you have got to have two eyes, one nose and one mouth is not enough to make a face!). Hence the creation of synthetic data in many domains remains a challenge, one whose progressive shrinking can be used to gauge the progress of AI in the coming years. Having an AI that can learn by itself through observation of its surroundings (like human babies do) is the holy grail for the future.

On the side of evolution gauged by the nature of the problem, we have seen AI moving from a focus on very specific areas (Expert Systems) to a cluster of agents that can be assembled, and can coordinate autonomously, to address other areas. The future points to a shift from complicated problems (that can be reduced to more elemental ones, each tackled by an agent available in the cluster) to complex problems that need to be tackled as a whole. However, words like “difficult” and “complex” are relative to a specific point of view. Something that is difficult for us (like high-precision etching) can be mastered by a machine; conversely, something that is complex, like making sense of what we see, is a no-brainer for us but very challenging for a machine. In the article, the author actually claims that AI is well suited to managing complex problems (in the sense of taking into account many independent variables) and that what it really needs is the ability to translate that complexity management into specific, sensible actions.
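
To illustrate the “complicated” side of this distinction, here is a toy sketch of a cluster of agents, each a specialist in one elemental sub-task, assembled by a coordinator; every name in it is hypothetical:

```python
from typing import Callable, Dict

# Each "agent" is a specialist in one elemental sub-task.
agents: Dict[str, Callable[[str], str]] = {
    "detect_language": lambda text: "en" if text.isascii() else "unknown",
    "count_words":     lambda text: str(len(text.split())),
    "summarise":       lambda text: text.split(".")[0].strip() + ".",
}

def coordinator(plan, payload):
    """Decompose a complicated task into sub-tasks, one per agent."""
    return {step: agents[step](payload) for step in plan}

report = coordinator(["detect_language", "count_words", "summarise"],
                     "Complicated problems decompose into parts. Complex ones do not.")
print(report)
```

The decomposition works precisely because the problem is complicated rather than complex: a complex problem, in the author’s sense, would not split into independent sub-tasks this way.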

Although most of the time we think of AI as a brain able to “think” and face any problem, in reality we can see different forms of AI tackling very specific, although challenging, problems. One example is that of shoe lacing (watch the clip!). It is a no-brainer for us (although it took some time to acquire the skill when we were kids) but it actually requires the ability to take a variety of factors into consideration. Nike decided to approach this from the bottom up: rather than building a machine able to lace a shoe, they designed a self-lacing shoe, with plenty of sensors, electronics and software to dynamically adjust the lace tension to the type of activity and the swelling of the foot.
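
As a purely hypothetical sketch of that bottom-up approach, consider a control loop that nudges lace tension based on sensor readings; the activity labels, thresholds and gain below are invented for illustration, since Nike’s actual firmware is not public:

```python
def target_tension(activity, swelling_mm):
    """Pick a target lace tension (arbitrary units) for the current context."""
    base = {"rest": 2.0, "walk": 4.0, "run": 6.0}.get(activity, 3.0)
    return max(0.5, base - 0.1 * swelling_mm)  # ease off as the foot swells

def control_step(tension, activity, swelling_mm, gain=0.3):
    """One loop iteration: nudge the motor toward the target tension."""
    return tension + gain * (target_tension(activity, swelling_mm) - tension)

tension = 2.0
for activity, swelling in [("walk", 1.0), ("run", 2.5), ("run", 4.0), ("rest", 3.0)]:
    tension = control_step(tension, activity, swelling)
    print(f"{activity:>4}, swelling {swelling} mm -> tension {tension:.2f}")
```

The “intelligence” here lives in the whole shoe, sensors and actuators included, not in a general-purpose brain bolted on top.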

I felt this observation was important for two reasons:

  • it makes clear that the artificial in artificial intelligence refers to a form of intelligence that is different from ours; it is not replicating our intelligence (and this invalidates the criterion of assessing artificial intelligence by comparing it to ours)
  • it clarifies that, as in our case and in other life forms, intelligence is part of an envelope containing everything: you cannot separate intelligence from the object, the soul from matter. Intelligence has to be seen as an integral part of an entity.

One consequence is that the future of artificial intelligence may be less tied to artificial intelligence itself and more to the overall context, just as the intelligence of a swarm is less related to the intelligence of the single bees and more to the essence of the swarm.

To put it in different words: the future of intelligence will not lie in injecting intelligence into the world of objects, but rather in the design of intelligent environments. Intelligence is born with the object, not infused into the object. An interesting perspective worth considering.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master’s course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.

One comment

  1. Derrick de Kerckhove
     June 24, 2020 at 10:26 am

    I have recently been very intrigued by a challenge Lev Manovich has launched on Facebook:
    “This post is called ‘Against Patternism’. How to think about objects, phenomena, artifacts, and people without comparing them? Without grouping things on some dimensions, and ignoring other dimensions? In the first decades of the 21st Century, the ‘patternist’ paradigm expanded into many areas of society. Data science, statistics, data mining, data visualization, machine learning, AI – all rely on extracting, analyzing, and (often) acting on patterns, i.e. the common and the repeating. We need to come up with some alternative paradigm for knowledge, representation, and action. Why? Simply because we don’t want our thinking to be ruled by only one paradigm, no matter how effective it can be. We need alternatives.
    This is, in my view, a key intellectual task for our era. We have readily embraced machines and tools that group, cluster, and categorize the world represented as data. We need a new generation of machines and tools to ungroup, uncluster, and uncategorize existing and future very big, big, medium and even very small data.”

    Add to this Yann LeCun’s recent work on self-supervised learning systems, one level above GANs, and maybe we are getting nearer to AGI than we thought (or at least, than I thought!)…