
Megatrends for this decade – XLI

A graphic showing the broad spectrum covered by artificial intelligence. I have marked in blue the areas where AI is most likely to take the upper hand in the workplace in this decade, and in green the ones where cooperation between humans and AI would result in human augmentation. Loss of jobs due to AI adoption can be expected in the blue areas, increased productivity through human-AI cooperation in the green ones. Image credit: LaptrinhX, marks by me

Distributed knowledge shared by humans and machines

Artificial intelligence is becoming more and more pervasive, able to pick up a number of activities that until now have been carried out by us (humans). This is the “automation” part of the story, and automation is no longer restricted to manual activities: because of artificial intelligence, it is expanding to soft, mental activities. In other words, we are moving from muscle automation to intelligence automation.

As shown in the graphic, artificial intelligence covers many areas. In some, the prevalence of AI over human intelligence is evident; in others we see an advantage in cooperation between the two.


  • Machine control. The speed required in controlling the operation of robots or softbots makes it impossible to use human intelligence. A human may define the operational framework and impose boundaries that, once reached, halt the machine, but in normal operation artificial intelligence is at work.
  • Searching and evaluating is beyond human capability once the data set exceeds a certain volume, which is more and more the case. In these situations humans can provide the search criteria and the boundary conditions, but the actual search can only be performed through AI. A “human Google” is simply impossible.
  • Data analytics (descriptive, predictive and prescriptive) is also beyond human capability because of the huge volumes involved. Again, humans can define the targets (the goal of the analytics) but have to rely on AI for the actual data crunching.
  • Machine learning, by definition, is in the realm of AI. However, as noted in the graphic, machine learning can be leveraged to contribute to human learning and, vice versa, humans can steer machine learning by identifying contexts and data sets. Notice that the recent use of GANs is decreasing the human role in steering machine learning (humans, however, retain the role of defining “what should be learned”).
  • Automated reasoning and knowledge representation (in machine-readable form) are clearly in the domain of AI.
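The division of labour described in the searching bullet can be sketched in a few lines: the human declares the match criteria and the boundary conditions, while the machine performs the exhaustive scan. All names and records below are invented for illustration; a real system would run this over millions of entries with an actual search engine.

```python
# Hypothetical catalogue rows; in practice this would be millions of records.
records = [
    {"id": 1, "topic": "robotics", "year": 2019, "score": 0.91},
    {"id": 2, "topic": "nlp",      "year": 2015, "score": 0.40},
    {"id": 3, "topic": "robotics", "year": 2021, "score": 0.77},
]

# Human side: declare what counts as a match and where the search stops.
def criteria(r):
    return r["topic"] == "robotics" and r["score"] > 0.5

def boundary(r):
    return r["year"] >= 2018   # only recent material

# Machine side: exhaustive scan, trivial here, infeasible by hand at scale.
matches = [r["id"] for r in records if criteria(r) and boundary(r)]
print(matches)  # [1, 3]
```

The point of the split is that `criteria` and `boundary` stay human-readable and human-authored, while the loop over the data is the part only a machine can do at volume.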


  • Natural Language Processing (NLP), vision (image and context recognition) and speech (voice recognition) all contribute to context perception (meaning, emotion detection, feelings, …). These are areas where humans show greater capability “in the small”, meaning that in single instances humans are far better at converting the flow of “data” into a perception of “what is going on”. However, “in the large” humans are limited in their processing ability. As an example, a single person can only understand a few languages, while AI can process hundreds of them; a human can follow a very limited number of parallel conversations, while signal processing can separate a flow of voices into streams and an NLP application can be instantiated as many times as needed to process all the streams in parallel. The visual acuity of humans is limited; that of machines can be extended beyond human limits. For this decade, at least, cooperation of humans and machines, with machines performing the bulk work and humans fine-tuning the results, will lead to the best outcome. Furthermore, the capability of machines to communicate through voice and visual cues improves the possibility of collaboration with humans. Chatbots are an example of such an application.
  • Problem solving is an area that often requires stepping out of the box and leveraging creative thinking, something that human experts are usually better at than machines. At the same time, the evaluation of a possible solution and of all its implications/requirements may require in-depth analyses and huge data crunching (cost evaluation, supply chain re-engineering, impacts of weak effects, …), at which machines are better and faster. Hence a tight collaboration, with humans exploring the big picture and machines working out the details and performing simulations, is likely to be the way to go for this decade (and the following ones). However, humans will need to learn to use “machine intelligence”.
  • Learning (supervised, unsupervised, reinforced) has been a characteristic of humans, but AI has made huge progress in this area, becoming faster and faster (it takes an individual years to become a chess/Go master, and only very few can or will; a machine can become as proficient as a master in 24 hours). However, AI can be used by humans to accelerate their learning processes and most definitely can be used to flank and complement their knowledge. Learning is more and more associated with learning how to access, and make sense of, knowledge. This decade and the following ones will be characterised by the augmentation of human knowledge through machines.
  • Distributed Intelligence (DAI, Distributed AI), parallel/distributed parameter servers, multi-agent systems and swarm intelligence are technologies in rapid evolution to deliver better AI. Distributed intelligence is also a characteristic of human societies: communications infrastructures, the digitalisation of knowledge and knowledge organisations (like universities, the IEEE, research centres and open research frameworks) have enormously increased the leverage of distributed intelligence. The difference from the past (technology has always played a role in the growth and leverage of distributed intelligence; think of the invention of writing, books, the printing press, mail services, telecommunications, the internet) is that now, and in the future, it becomes possible to have a distribution of intelligence involving both machines and humans as intelligent nodes (until a few years ago machines were “repositories”, not active nodes of intelligence).
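The “one NLP instance per stream” idea mentioned above can be sketched with Python’s standard concurrency tools. The `analyse` function here is a toy stand-in (a word counter); a real pipeline would plug in speech-to-text and language models, and the input strings pretend that signal processing has already separated the voices into streams.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for an NLP pipeline; a real system would call
# speech-to-text and language models here.
def analyse(stream_text):
    return len(stream_text.split())  # word count of one separated stream

# Pretend signal processing has already split the audio into text streams.
streams = [
    "hello how are you",
    "bonjour tout le monde",
    "hola buenos dias",
]

# One analyser instance per stream, all running in parallel.
with ThreadPoolExecutor(max_workers=len(streams)) as pool:
    word_counts = list(pool.map(analyse, streams))

print(word_counts)  # [4, 4, 3]
```

The human-scale limit (following two or three conversations at once) disappears here: the pool simply grows with the number of streams.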

In assessing the impact on work and the workforce of a distributed knowledge (and intelligence) shared by humans and machines, we should also consider the differences between the two (the idea that artificial intelligence is a replica of the human one has lost appeal; we are now looking at two different, though equally valuable, forms of intelligence):

  • Creativity, serendipity – on the human side these characteristics are not just important, they seem to be an integral component of human intelligence: the capability of imagination, of thinking out of the box. Machine intelligence lacks these characteristics, although we see results of AI that resemble the products of creativity, like music composition, paintings, even poetry.
  • Creativity as self-fulfilment, self-motivation – on the human side we see that creativity leads to more creativity through a process of self-appreciation (the pleasure of having done something). This is completely missing in machines (an algorithm is not “happy”, nor does it feel good, after having achieved a result).
  • Cost – intelligence is ingrained in humans from birth and just keeps “growing” at no direct cost. On the other hand, the costs associated with the growth of intelligence (time, investment in learning, exposure to specific experiences) are really high and grow steeply (human intelligence tends to grow asymptotically: after a while, each further tiny increase requires more and more effort). On the machine side, artificial intelligence has a huge upfront cost, but then it runs basically for free. Creating an artificially intelligent algorithm is quite complex, and the cost can vary significantly depending on the quality of the data available and several other factors. In this decade the cost is likely to decrease, and more and more companies will be able to develop their “local” intelligence. Bringing together, aggregating, several intelligences will remain for a while a research endeavour, something being addressed in the FDC Digital Reality Initiative in 2021.
  • Permanence of knowledge and intelligence – human intelligence is tied to a specific person; moving it from that person to another takes a lot of time and the results are not guaranteed. On the contrary, moving intelligence from one machine to another is quite straightforward. Transferring human knowledge is likewise time-consuming if attempted directly, from one person to another (it depends on the knowledge gap between the two and on how receptive the receiving person is); it is much more effective when done through a medium (the first person writes the knowledge down in a book, and other people can read that book to acquire it), but still quite time-consuming (both in writing and in reading/learning). Also notice that not all knowledge can be transferred through a medium: you can read a whole encyclopedia on how to ride a bike, but you will discover that you cannot learn to ride one unless you try it over and over.

    Representation of the Ebbinghaus “forgetting curve”. We tend to forget pretty quickly what we learn, halving retention after an hour and retaining just one fourth after one week. Image credit: senseandsensation.com

    Another aspect of human knowledge is that over time we forget. In the case of machines, the transfer of knowledge is easy, and machines don’t forget. The distribution of intelligence and knowledge among humans and machines can thus be used as a continuous refresher of human memory, increasing intellectual performance and, in a way, augmenting human memory.
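The figures in the caption (half retained after an hour, a quarter after a week) can be played with in a toy model. A common simplification treats retention as exponential decay, R(t) = exp(-t/S), with a “stability” parameter S; note this is an illustrative assumption, since the real Ebbinghaus curve flattens at long timescales and a single exponential calibrated on the one-hour point decays far too fast to reproduce the one-week figure.

```python
import math

def retention(t_hours, stability):
    """Toy exponential forgetting model: R(t) = exp(-t / S)."""
    return math.exp(-t_hours / stability)

# Calibrate S so that retention halves after one hour (per the caption).
S = 1.0 / math.log(2)

print(round(retention(1.0, S), 3))  # 0.5   after one hour
print(round(retention(3.0, S), 3))  # 0.125 after three hours
```

Repetition resets the curve: a machine prompting a timely refresh effectively restarts t at zero, which is the “continuous refresher of human memory” mentioned above.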

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital until September 2018. Previously, until December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.