
DT evolution in Manufacturing – II

The manufacturing process is quite complex, involving many resources and many players on the shop floor and throughout the supply chain. This generates a huge flow of data, many gigabytes for each single product, that can be collected and used both during and after manufacturing. This is what happens in assembly lines like Tesla's, where the digital twins of the equipment work in sync, also resulting in an extension of the product's (here a car's) digital twin "instance". Image credit: Atria Innovations

A digital model is fine in the design phase. Actually, we are hearing a new word: virtual twin. This is a model of a physical entity that, as a matter of fact, does not exist and may never exist in the physical space. We create the "idea" of an entity and we keep that entity in the cyberspace, ready to interact with other entities both in the cyberspace and in the physical space. This gives industry great flexibility: what used to be a step in the design phase that would have resulted in a (physical) product becomes a soft product that can be sold on the market.

The physical dimension remains, nevertheless, crucial, and a virtual twin derives its value from its capability to interact, directly or indirectly, with the physical world.

The "mirroring" of the physical world implies the capability to remain up to date on the current status of the physical world. In the digital twin this is done through the shadowing of the physical entity. The updates may be generated by the physical entity itself (through embedded IoT) or they may come from the environment (for example, video cameras on the assembly line of the shop floor reporting what is going on as video streams; these streams are analysed by image-recognition AI that produces data "describing" the current status). At the factory level we are seeing more and more a blending of data coming from the robots on the production line with data coming from various types of cameras. Additionally, data may be derived from the interactions taking place among workers and between workers and machines. The whole factory is becoming an aggregate of digital twins interacting with their physical counterparts and with one another.
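To make the shadowing idea concrete, here is a minimal Python sketch of a twin's shadow state being kept current by merging updates from the entity's own embedded IoT sensors with data derived from the environment (e.g. the output of an image-recognition pipeline on camera streams). The class and field names are illustrative assumptions, not a standard API.

    import time

    class DigitalTwinShadow:
        def __init__(self, entity_id):
            self.entity_id = entity_id
            self.state = {}    # latest known values, keyed by signal name
            self.thread = []   # time-ordered history of every update

        def _record(self, source, values):
            # Keep both the current state and the full history (the thread).
            self.state.update(values)
            self.thread.append({"ts": time.time(), "source": source,
                                "values": dict(values)})

        def update_from_iot(self, values):
            # Update generated by the physical entity itself (embedded IoT).
            self._record("iot", values)

        def update_from_environment(self, values):
            # Update derived from the environment, e.g. an AI pipeline that
            # turns camera streams into data describing the current status.
            self._record("vision", values)

    # Example: a robot on the line reports its own joint torque, while a
    # camera pipeline reports that the part it is handling is correctly seated.
    robot_dt = DigitalTwinShadow("robot-arm-07")
    robot_dt.update_from_iot({"joint_torque_nm": 41.2, "cycle": 1532})
    robot_dt.update_from_environment({"part_seated": True})
    print(robot_dt.state)

The same merging pattern applies at factory level, where many such shadows interact with one another.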

Additionally, the assembly process may result in the assembly of the digital twin of the product, which will include, as part of its thread, the data related to its manufacturing. The "construction" of the digital twin flanks the construction of its physical entity. This requires a new way of looking at the manufacturing process.
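A rough sketch of what "constructing the twin alongside the product" can look like: each manufacturing step appends its data to the product twin's thread, so the finished twin carries its own manufacturing history. Names such as ProductTwin and add_step are invented for illustration.

    from datetime import datetime, timezone

    class ProductTwin:
        def __init__(self, serial_number, model):
            self.serial_number = serial_number
            self.model = model
            self.thread = []   # manufacturing history, built step by step

        def add_step(self, station, operation, measurements):
            # Each station on the line records what it did to this product.
            self.thread.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "station": station,
                "operation": operation,
                "measurements": measurements,
            })

    car_twin = ProductTwin(serial_number="SN-000123", model="model-x")
    car_twin.add_step("station-12", "battery pack install",
                      {"torque_nm": 35.0, "operator": "robot-07"})
    car_twin.add_step("station-18", "paint", {"booth_temp_c": 22.5})
    print(len(car_twin.thread), "manufacturing records in the twin's thread")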

Digital twins can also support embedded simulation of the status of their physical entity, or they can be used by an external application to simulate the likely status of the physical entity. This then needs to be "confirmed" by data retrieved from the physical space. As an example, an engine on a flying plane will report data (pressure, fuel flow, thrust, …) at predetermined intervals, and the digital twin will match those data against the ones derived through simulation applied to the digital model. In case of discrepancy the DT, or an external function, will need to work out the reason why and take appropriate actions (these can also include a refinement of the digital model). Notice that if it is the DT that carries out the analyses, this DT is, as a matter of fact, a significant extension of the DT concept.
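As an illustration of that matching loop, the sketch below compares each reported value with the value the model predicts and flags a discrepancy for further analysis. The "model" here is a trivial linear stand-in, and the signal names and the 5% tolerance are assumptions; a real engine twin would use a proper physics or data-driven model.

    TOLERANCE = 0.05  # flag deviations larger than 5%

    def simulate_expected(signal, flight_condition):
        # Placeholder for the digital model of the engine.
        expected = {"fuel_flow_kgps": 1.10 * flight_condition["throttle"],
                    "thrust_kn": 120.0 * flight_condition["throttle"]}
        return expected[signal]

    def check_report(report, flight_condition):
        discrepancies = {}
        for signal, measured in report.items():
            expected = simulate_expected(signal, flight_condition)
            deviation = abs(measured - expected) / expected
            if deviation > TOLERANCE:
                discrepancies[signal] = {"measured": measured,
                                         "expected": round(expected, 3),
                                         "deviation": round(deviation, 3)}
        return discrepancies  # empty dict means model and reality agree

    condition = {"throttle": 0.8}
    report = {"fuel_flow_kgps": 0.87, "thrust_kn": 101.0}
    issues = check_report(report, condition)
    if issues:
        # Here the DT (or an external function) would work out the cause
        # and, possibly, refine the digital model.
        print("discrepancy detected:", issues)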

All the shadowed data add up to the digital twin thread and become a source of "intelligence" both for that specific digital twin and for other instances of that digital twin. This is a very important possibility that opens the door to the provisioning of services flanking the product. This is what Tesla does. Through data analytics Tesla can monitor the behaviour of some 2 million cars it has produced since 2009 and can assess both issues on a specific car and issues derived from the production of a given batch of cars. Furthermore, the derived data are used to continuously refine the manufacturing process. Information derived from shadowing and from data analytics on threads is used to provide customers with operation and maintenance support.
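The distinction between a problem with one specific car and a problem introduced by a production batch is essentially an aggregation choice over the same threads. A minimal sketch, with invented data and an arbitrary threshold (not Tesla's actual analytics):

    from collections import Counter

    issue_reports = [
        {"car_id": "C-001", "batch": "2024-W18", "issue": "door sensor fault"},
        {"car_id": "C-014", "batch": "2024-W18", "issue": "door sensor fault"},
        {"car_id": "C-022", "batch": "2024-W18", "issue": "door sensor fault"},
        {"car_id": "C-103", "batch": "2024-W22", "issue": "infotainment reboot"},
    ]

    # Per-car view: points at a problem with a specific vehicle.
    per_car = Counter(r["car_id"] for r in issue_reports)
    print("issues per car:", dict(per_car))

    # Per-batch view: the same issue recurring across a batch points back
    # at the production run rather than at any single car.
    per_batch = Counter((r["batch"], r["issue"]) for r in issue_reports)
    for (batch, issue), count in per_batch.items():
        if count >= 3:
            print(f"batch {batch}: '{issue}' on {count} cars -> review production run")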

The thread includes both data derived from the physical twin and data that can be acquired from the context of the physical twin. A growing part of the thread is formed by the analyses of the effects of the interactions between the DT and the PhyT. In other words, the digital twin is now expanding to include knowledge and understanding. This is relatively new and it marks a departure from the original concept of the Digital Twin.

What this means is that the Digital Twin, which used to be a lesser entity with respect to its physical twin (because it represented a subset of the physical twin), is now expanding to become "larger" (in some respects) than its physical entity. In turn, this means that industry (and the market) will start to use the DT more and more to derive features that would not be available in the PhyT.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.