
DT evolution in Manufacturing – VII

The slide used by IBM to present the Cognitive Digital Twin concept at the Hannover Messe in 2018. The cognitive part is seen as an add-on to the digital twin model and is used to take smart, flexible decisions by understanding the environment, as perceived through IoT data, and interacting with it. Image credit: IBM

As mentioned, the concept of the Cognitive Digital Twin, first defined by IBM in the context of smart robots as a way to represent the knowledge of a robot, or of an ensemble of robots on the shop floor, has since been applied to the representation of the knowledge of a person, becoming a “subset” of that person’s Personal Digital Twin. As a matter of fact, the CDT may express only the subset of a person’s characteristics that is relevant in a given context, such as:

  • knowledge management at a company level;
  • knowledge development in an education environment, such as a college, university or training programme;
  • knowledge asset management at a personal level (what do I know, what should I know?);
  • knowledge as a tradable asset in a business environment.

One should recognise that the management of “personal knowledge” is trickier than the management of a machine’s knowledge, be it a robot or an AI-based application, from the point of view of mirroring what that person knows in terms of the exploitation of their knowledge; in other words, mirroring that person’s “executable knowledge”.

A person may:

  • know something but be unable to apply that knowledge to the problem at hand;
  • know something but be unable to face a given situation (e.g. stress) and apply that knowledge;
  • have known something and then forgotten all about it;
  • know something and yet be unwilling to apply, or share, that knowledge.
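The gap between knowledge a CDT records and knowledge that is actually usable can be sketched in code. The following is a minimal illustration, not an existing CDT API: every class, field and function name here is an assumption made for the example. Each condition in the list above becomes a flag on a recorded knowledge item, and the “executable” subset is only what passes every flag.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeItem:
    """One piece of knowledge as recorded in a (hypothetical) personal CDT."""
    topic: str
    known: bool = True                 # recorded in the CDT
    applicable: bool = True            # can be applied to the problem at hand
    robust_under_stress: bool = True   # survives a stressful situation
    retained: bool = True              # not forgotten since it was recorded
    shared_willingly: bool = True      # the person agrees to apply/share it

def executable_knowledge(items):
    """Return only the topics that are actually usable, not merely recorded."""
    return [k.topic for k in items
            if k.known and k.applicable and k.robust_under_stress
            and k.retained and k.shared_willingly]

# Hypothetical profile: the CDT records three topics...
profile = [
    KnowledgeItem("welding", robust_under_stress=False),
    KnowledgeItem("CAD modelling"),
    KnowledgeItem("Fortran", retained=False),
]
# ...but only one survives all the conditions above.
print(executable_knowledge(profile))  # → ['CAD modelling']
```

The point of the sketch is that the flags are exactly what a CDT cannot observe reliably, which is why the recorded model and the executable knowledge diverge.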

It should also be noted that the machine domain, too, presents tricky issues in knowledge representation and management, such as:

  • A first set of knowledge is embedded in the machine (application), both in terms of a static representation (models, data, procedures) and in terms of algorithms, i.e. how to make sense of existing data and interactions. This first set is fully controlled by the designer and can be tested extensively. However, as more and more data become available, this first set of knowledge may prove difficult to test exhaustively (think of the millions of images used to train an image recognition application, like the one present in autonomous cars);
  • This first set of knowledge expands throughout the lifetime of the machine/application operation, and it may become impossible both to keep track of the newly accrued data and to test its interpretation by, and implications for, the pre-programmed algorithms;
  • The new wave of artificial intelligence is neither “pre-designed” nor “pre-programmed”; rather, it emerges from algorithms competing with one another (as in GANs, generative adversarial networks). Here the designer teaches the AI how to learn by defining objectives and values, letting the AI work out the algorithms that best approach the goal and maximise those values. The AI builds up both knowledge and reasoning (the latter is what transforms knowledge into executable knowledge) on its own, and it becomes difficult to create a representation of that knowledge. The reality is that the only accurate representation is the AI itself, just as in the case of human knowledge the only accurate representation is the brain/mind itself, and this only becomes visible as it is executed.
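The adversarial idea mentioned in the last point can be shown in a toy form. The sketch below, a deliberately minimal one-dimensional “GAN” with hand-derived gradients (all constants and model choices are illustrative assumptions, not a production architecture), has a linear generator learn to mimic samples from a Gaussian centred on 4, driven only by a logistic discriminator’s feedback. Neither side is pre-programmed with the target; whatever the generator ends up “knowing” about it lives only in two numbers, `w1` and `b1`, and is only visible when the generator is executed.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Generator g(z) = w1*z + b1, discriminator d(x) = sigmoid(w2*x + b2)
w1, b1 = 1.0, 0.0   # generator starts producing samples centred on 0
w2, b2 = 0.1, 0.0
lr, batch = 0.05, 64

for _ in range(3000):
    real = rng.normal(4.0, 1.0, batch)   # the "true" data the twin never sees directly
    z = rng.normal(0.0, 1.0, batch)
    fake = w1 * z + b1

    # Discriminator ascends log d(real) + log(1 - d(fake))
    dr, df = sigmoid(w2 * real + b2), sigmoid(w2 * fake + b2)
    w2 += lr * np.mean((1 - dr) * real - df * fake)
    b2 += lr * np.mean((1 - dr) - df)

    # Generator ascends log d(fake): it learns whatever fools the critic
    df = sigmoid(w2 * fake + b2)
    grad_f = (1 - df) * w2               # d/df of log d(f)
    w1 += lr * np.mean(grad_f * z)
    b1 += lr * np.mean(grad_f)

# The learned "knowledge" of the target distribution is only observable
# by running the generator, not by inspecting (w1, b1) in isolation.
samples = w1 * rng.normal(0.0, 1.0, 1000) + b1
```

After training, the mean of `samples` should have moved from 0 towards the target mean of 4, even though nothing in the code states that target explicitly on the generator’s side, which is the point made in the text about knowledge that only becomes visible in execution.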

From this discussion it is clear that any CDT, whether associated with a machine or with a person, is at the very best a limited, and often imprecise, model of the real executable knowledge of its physical entity. As in many other areas of our “understanding” of the world, we have to make do with what we have.

As long as the CDT proves useful, and we can control its potential shortcomings, that is fine. This is what is happening today: we have a tool that is not perfect but that can help in the management of knowledge as an asset.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.