You can’t be smart without understanding the context you operate in, and the very first step is becoming aware of what is going on: you need to sense your environment.
The vision of a world that can be understood by scattering sensors all around was articulated by HP in the first decade of this century with the CeNSE project, the Central Nervous System for the Earth. Shell was its first customer, interested in using sensors to detect oil reservoirs by measuring the vibration patterns induced by micro-explosions. HP foresaw a world where every object had embedded sensors and these sensors formed a network, a nervous system, collecting data that could be processed centrally. Every bridge, every road would be part of it. The bolts and nuts connecting the parts of a bridge would embed sensors communicating their local stress and pressure to one another; these data would be used to monitor the bridge and the movement of the connected banks. Sensors embedded in the tarmac would capture the vibration created by vehicles, and signal processing would be able to reveal traffic patterns and even differentiate distinct types of vehicles: a micro-sensing that could provide the data for a macro assessment of the environment.
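The tarmac example can be sketched in a few lines of code. This is a toy illustration, not a real traffic-monitoring algorithm: the 15 Hz decision threshold and the assumption that heavy vehicles excite lower vibration frequencies are made up for the example, and a naive DFT stands in for proper signal processing.

```python
import math

def dominant_frequency(signal, sample_rate):
    """Naive DFT: return the frequency (Hz) with the largest magnitude."""
    n = len(signal)
    best_freq, best_mag = 0.0, 0.0
    for k in range(1, n // 2):  # skip the DC component
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_freq, best_mag = k * sample_rate / n, mag
    return best_freq

def classify_vehicle(signal, sample_rate):
    """Assumed rule: heavy vehicles produce lower-frequency rumble."""
    return "truck" if dominant_frequency(signal, sample_rate) < 15.0 else "car"

# Synthetic vibration traces: an 8 Hz rumble vs a 40 Hz buzz,
# sampled for one second at 200 Hz.
rate = 200.0
truck = [math.sin(2 * math.pi * 8 * i / rate) for i in range(200)]
car = [math.sin(2 * math.pi * 40 * i / rate) for i in range(200)]
```

The point is the pipeline, not the numbers: many cheap sensors produce raw traces, and simple frequency analysis already turns them into a macro-level statement about traffic.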
A different approach to sensing is the one we use every day, perfected through millions of years of evolution: sight. Image detectors have become extremely effective (high resolution at low cost), and computer vision has progressed enormously in recent years. It leverages image processing (detecting edges, sorting out shadows, …) and machine learning to understand what is there, and it is now applied in machine vision (determining what is there, like a rusted pole needing maintenance, or reading a vehicle’s plate number) and in robot vision (e.g. helping a robot move through an environment).
Lately, and more so in the future, smart materials have been acquiring sensing capabilities, so that any object will embed sensing natively. Smart materials can sense and react to a variety of stimuli, with piezoelectricity taking the lion’s share: sensing pressure, including touch, and releasing an electrical charge proportional to the pressure applied, hence measuring that pressure.
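The proportionality between pressure and electrical output is what makes a piezoelectric element usable as a sensor. A minimal sketch, assuming a linear response with a made-up calibration constant (a real sensor’s datasheet would supply the actual sensitivity):

```python
# Assumed calibration constant: volts of piezo output per kPa of pressure.
# This value is illustrative, not from any real sensor.
SENSITIVITY_V_PER_KPA = 0.05

def pressure_from_voltage(volts: float) -> float:
    """Invert the (assumed linear) piezo response to recover pressure in kPa."""
    return volts / SENSITIVITY_V_PER_KPA

def is_touch(volts: float, threshold_kpa: float = 5.0) -> bool:
    """Treat any pressure above a small (arbitrary) threshold as a touch."""
    return pressure_from_voltage(volts) > threshold_kpa
```

Reading 0.5 V from such a sensor would correspond to 10 kPa, enough to register as a touch under the assumed threshold.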
All this enhanced and distributed sensing creates data that can be processed both locally and at various hierarchical stages. And processing is what delivers value!
By mirroring the physical world into a digital copy we can analyse the digital copy as it is (mirror), compare it with what it was (thread), and keep it in synch as the physical entity evolves (shadow).
Through data analytics on the digital copy, and on other digital entities loosely connected with it in the physical space, we can assess what is going on and find out “why”. Then we can infer what might happen next and work out ways to steer the evolution in a more desirable direction (or at least limit its negative aspects). Knowing:
- what is going on,
- what will happen, and
- how to change the predicted evolution
is what creates value.
Making sense of data is crucial; it also opens the door to data representing reality, creating a digital model of it. Reality exists only in the present; data, however, can represent both the present and the past, developing a thread (which can also be used to forecast the future!). This is what digital twins are all about. Of course, once you mirror reality into a digital representation you are creating a snapshot with a very short life. To keep the mirror faithful to reality you need to keep it in synch with it: you need to create a digital shadow that keeps the digital model up to date. This is what makes a digital twin useful, since you can trust that it keeps representing the physical reality.
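The mirror, thread, and shadow roles can be sketched as a tiny data structure. The names and the shape of the API here are my own illustration, not a standard digital-twin interface: the twin holds the current state (mirror), archives past states (thread), and a shadow update re-synchs the mirror whenever the physical entity reports new sensor data.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    state: dict = field(default_factory=dict)    # mirror: snapshot of "now"
    history: list = field(default_factory=list)  # thread: past snapshots

    def shadow_update(self, sensor_reading: dict) -> None:
        """Called whenever the physical twin reports new sensor data."""
        self.history.append(dict(self.state))    # archive the current mirror
        self.state.update(sensor_reading)        # re-synch the mirror

# A bridge twin receiving a fresh IoT reading (values are invented):
bridge = DigitalTwin(state={"stress_mpa": 12.0})
bridge.shadow_update({"stress_mpa": 14.5})
```

After the update the mirror reflects the latest reading while the thread preserves the earlier state, which is exactly what makes trend analysis and forecasting possible.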
Digital twins have evolved over the last ten years from being a simple representation of a real entity at a certain point in time (like at the design stage) to becoming a shadow of the physical entity, kept in synch through IoT sensors.
In the coming years we may expect a (partial) fusion between a digital twin and its physical twin. The resulting reality, the one that we will perceive, will exist partly in the physical world and partly in cyberspace, but the dividing line will get fuzzy and most likely will not be perceived at all. To us, cyberspace and physical reality together will be our perceived reality.