
Augmented Machines and Augmented Humans are converging III

A self-driving car has a variety of sensors providing data on its surroundings. These data are processed to create awareness, and decisions (intelligence) are taken based on a variety of factors. Image credit: Analog Devices


The huge and growing amount of data available is powering data analytics, and artificial intelligence is taking advantage of it. Notice how Digital Twins have “embedded” some of the Vs characterising big data:

  • Volume: the volume of data aggregated in a digital twin varies considerably depending on the mirrored physical entity, but quite a few physical entities are bound to generate significant amounts of data;
  • Velocity: the shadowing of a physical entity again varies considerably but here again we can have a significant “change rate”;
  • Variety: a digital twin may aggregate different streams of data (the actual modelling of the entity, which is static; the operation data, which are dynamic; and the context data, both static and dynamic), and in addition it can harvest data from other interacting or connected digital twins;
  • Veracity: internal and external functionalities can authenticate data and ensure their veracity;
  • Value: digital twins are a way to create value in the digital transformation.
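To make the aggregation concrete, here is a minimal sketch of a digital twin structure holding the data streams listed above. All names (`DigitalTwin`, `shadow`, `volume`) are illustrative assumptions, not a standard API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one object aggregating the static model,
# the dynamic operation data, and the context data of an entity.
@dataclass
class DigitalTwin:
    model: dict                                      # static modelling (Variety)
    operations: list = field(default_factory=list)   # dynamic operation data
    context: dict = field(default_factory=dict)      # static/dynamic context
    peers: list = field(default_factory=list)        # connected twins to harvest from

    def shadow(self, reading: dict) -> None:
        """Record one operational reading (the Velocity of change)."""
        self.operations.append(reading)

    def volume(self) -> int:
        """A rough proxy for the Volume of data aggregated so far."""
        return len(self.operations)

car = DigitalTwin(model={"type": "vehicle", "mass_kg": 1500})
car.shadow({"speed_kmh": 42})
car.shadow({"speed_kmh": 45})
print(car.volume())  # 2
```

The point of the sketch is simply that Volume, Velocity and Variety show up as distinct fields and operations on the same mirrored entity.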

The first three characteristics support analytics and the emergence of intelligence.

There are four ways to “create” intelligence once we move into the world generated by the digital transformation and leverage digital twins:

  • Embedded intelligence: it is possible to embed processing capabilities, resulting in intelligence, both in the physical entity and in the digital entity (digital twin). These processes can be an integral part of the entity itself, or they can be intimately tied to it (an intelligent function that is external in terms of hosting but is, to all effects, an integral part of the entity’s functionality). Notice that in the future this might be the result of a superposition of a digital twin on its physical twin. In the graphic of the self-driving car, it is the intelligence residing in the car, providing the required awareness and understanding of the context and of its evolution.
  • Shared intelligence: it is possible to cluster single distributed intelligences into a whole that is intelligent (or more intelligent) than its single components. In the example of a self-driving car, communication among cars can result in an increased intelligence, allowing the cars to navigate more safely, since every car now has a broader knowledge deriving from the sharing of individual knowledge.
  • Collective intelligence: an ensemble of entities creates a group intelligence that is the result of sharing and operating at a collective level, e.g. by employing common rules, being part of a common framework. This may be the case of self-driving cars once a common framework is established and all individual decisions are taken based on that framework. Notice the difference between a shared intelligence, where knowledge is shared but intelligence is local, hence decisions can differ from one entity to another, and a collective intelligence, where a framework ensures that all decisions are aligned. Bees are an example of collective intelligence, since each bee takes decisions according to a predefined, common framework. A team of humans operates in a shared intelligence model, each member taking independent decisions although each is “influenced” by the shared knowledge.
  • Emerging intelligence: the intelligence is not present in the individual entities (although each one may have its own intelligence) but rather in the resulting global behaviour. This is the case in our brain, where intelligence emerges out of the independent, yet correlated and mutually influencing, activities of neuronal networks and single neurons. It is also the kind of intelligence shown by a swarm of bees (bees have a collective intelligence when they operate in a hive; when they swarm in the thousands, the swarm creates an emerging intelligence, seeming to know where it has to go…). In the example of self-driving cars, their mutually influencing behaviour gives rise to a dynamically regulated flow of traffic in a city that optimises travel time and use of resources, seeming to be orchestrated by an intelligent controller, whilst it is in fact the result of an emerging intelligence deriving from the application of very basic rules.
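The last point, that orderly traffic can emerge from very basic local rules with no central controller, can be illustrated with a toy simulation. The road length, car positions and rule below are illustrative assumptions; each car applies one purely local rule (never move further than the free space ahead), yet the flow stays collision-free:

```python
# Toy emerging-intelligence demo: cars on a ring road, one local rule each,
# no central controller. Parameters are illustrative, not a traffic model.
ROAD, VMAX, STEPS = 100, 5, 50
positions = [0, 3, 6, 30, 60]  # cars listed in order along the ring

for _ in range(STEPS):
    speeds = []
    for i, p in enumerate(positions):
        gap = (positions[(i + 1) % len(positions)] - p) % ROAD
        speeds.append(min(VMAX, gap - 1))  # local rule: stay behind the car ahead
    positions = [(p + v) % ROAD for p, v in zip(positions, speeds)]

# Emergent global property: no two cars ever collide.
gaps = [(positions[(i + 1) % len(positions)] - p) % ROAD
        for i, p in enumerate(positions)]
print(all(g >= 1 for g in gaps))  # True
```

Nothing in the code optimises the flow globally; the collision-free, regulated flow is a property of the ensemble, which is exactly the distinction drawn above between individual and emerging intelligence.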

An emerging intelligence can be the one generated by clusters of IoT devices, once they reach certain thresholds. In most cases, IoT devices can be seen as sensors whose data provide awareness. These data may be raw, simple data, or may be the result of embedded processing (which in some complex IoT devices can generate some sort of “embedded” intelligence). These data can also be processed externally, creating intelligence. Take the smartphone as an example. The most advanced smartphones may have as many as 19 different kinds of sensors (see image), and additionally they can act as aggregation hubs harvesting data generated by wearables (like smart watches, smart bands, smart dresses…). Smartphones have the processing and storage capacity to analyse these data and to create intelligence out of them. They can also connect to processing in a cloud (or any server) where data collected by hundreds of thousands of smartphones converge. This might be the case of location data, which can be used at a city level to understand traffic patterns and spot anomalies and traffic jams.
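The smartphone-as-hub pattern described above can be sketched in a few lines. The class and method names (`PhoneHub`, `harvest`, `local_insight`) are hypothetical; the point is the split between harvesting wearable data and processing it on the device:

```python
from statistics import mean

# Hypothetical sketch of a smartphone acting as an aggregation hub:
# it harvests readings from wearables and turns raw data into a local insight,
# which could then be forwarded to cloud-side processing.
class PhoneHub:
    def __init__(self):
        self.readings = {}  # source name -> list of raw values

    def harvest(self, source: str, value: float) -> None:
        """Collect one raw reading from a wearable."""
        self.readings.setdefault(source, []).append(value)

    def local_insight(self, source: str) -> float:
        """On-device processing: summarise raw data into something meaningful."""
        return mean(self.readings[source])

hub = PhoneHub()
for bpm in (62, 64, 66):                      # e.g. from a smart watch
    hub.harvest("watch/heart_rate", bpm)
print(hub.local_insight("watch/heart_rate"))  # 64
```

In the city-level scenario mentioned above, the cloud side would apply the same aggregation step over insights arriving from hundreds of thousands of such hubs.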

Thanks to its characteristics as a communication hub and its processing/storage capabilities, the smartphone is becoming a component that interfaces with, and provides intelligence to, a broad variety of everyday appliances, from lawnmowers to the smart home.

The processing of data aimed at the creation of intelligence (meaning, understanding, decision making) can be eased by specifically designed chips: neuromorphic chips, so called because their architecture mimics, to a point, that of neural circuits in a brain.

It is not just a matter of hardware; it is very much a matter of software, and in general neuromorphic chips come equipped with specific software components that can be used as building blocks. There are now several examples of neuromorphic chips, the first commercial one probably being IBM’s TrueNorth, developed under the DARPA SyNAPSE programme. More recently, NVIDIA and Intel have designed neuromorphic chips (the whole area of graphics processing units makes for a very good starting point in creating neuromorphic chips).
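The brain-like behaviour that neuromorphic hardware implements in silicon is typically a spiking neuron model. A minimal leaky integrate-and-fire neuron, written in plain Python with illustrative parameters, shows the basic idea (integrate inputs, leak over time, fire and reset on crossing a threshold):

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the kind of spiking model
# neuromorphic chips realise in hardware. Leak and threshold values are
# illustrative assumptions.
def lif(inputs, leak=0.9, threshold=1.0):
    """Return the spike train produced by a stream of input currents."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # integrate input with leak
        if potential >= threshold:
            spikes.append(1)                    # fire...
            potential = 0.0                     # ...and reset
        else:
            spikes.append(0)
    return spikes

print(lif([0.5, 0.5, 0.5, 0.0, 0.5]))  # [0, 0, 1, 0, 0]
```

Unlike the dense matrix arithmetic of a GPU, such neurons are event-driven: the chip only does work when spikes occur, which is where the claimed efficiency of neuromorphic designs comes from.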

The market for these chips is expected to grow at a 20.7% CAGR over the 2016-2026 period, with Asia taking the lion’s share at over 600 Mn$, followed by the US with over 450 Mn$ (and the latest forecasts indicate an even greater growth, pushed by China, Japan and South Korea).
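As a quick sanity check of what that growth rate implies, assuming straight compounding over the ten years:

```python
# A 20.7% CAGR compounded over the ten years 2016-2026
cagr, years = 0.207, 10
growth = (1 + cagr) ** years
print(round(growth, 1))  # total market multiple over the period, roughly 6.6x
```

In other words, a market growing at that rate more than sextuples over the decade, which puts the regional figures quoted above in perspective.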

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.