
Megatrends for this decade – XL

A general framework for Intelligent Process Automation, layered on Robotic Process Automation. Artificial intelligence supports specific functionalities and gives rise to an emerging, cognitive intelligence. The graphic outlines a comprehensive architecture identifying the various components that embed artificial intelligence. Image credit: Reply

Artificial Intelligence driving automation

Many industries have adopted automation both in production, using physical robots, and in document flows, using software robots (RPA: Robotic Process Automation). This has had a significant impact on the workforce, both in terms of downsizing (activities moved from blue-collar workers to robots and from white-collar clerks to computers and databases) and in terms of required skills (interacting with robots and computers requires specific skills and training).

In the last decade artificial intelligence has become a service, either located with specific AI service providers / data-service centres or embedded in the tools used by industry (smart robots, smart applications, …). What is expected in this decade is for AI to take a seat in the control room of companies, being fed by, and learning from, multiple data streams, most of them generated internally by employees, machines and processes. This is known as IPA: Intelligent Process Automation.
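
To make the idea concrete, here is a minimal sketch (in Python, with purely illustrative names and rules, not a reference to any specific product): an AI classifier sits upstream, reads incoming items and decides which RPA-style handler should deal with each of them.

# Minimal sketch of Intelligent Process Automation: an AI component decides,
# RPA-style handlers execute. Names, rules and data are illustrative assumptions.

def classify(document):
    """Stand-in for a trained model: label an incoming item."""
    text = document["text"].lower()
    if "invoice" in text:
        return "invoice"
    if "complaint" in text:
        return "complaint"
    return "other"

# RPA-style handlers: each one automates a mechanical back-office step.
HANDLERS = {
    "invoice": lambda doc: print(f"-> posting {doc['id']} to the ERP system"),
    "complaint": lambda doc: print(f"-> opening a ticket for {doc['id']}"),
    "other": lambda doc: print(f"-> routing {doc['id']} to a human clerk"),
}

def process(stream):
    """The 'control room' loop: every incoming item is classified and dispatched."""
    for doc in stream:
        HANDLERS[classify(doc)](doc)

process([
    {"id": "D-001", "text": "Invoice for the June shipment"},
    {"id": "D-002", "text": "Complaint about a delayed delivery"},
])

In a real deployment the classification step would be a trained model fed by those internal data streams, and the handlers would be the existing RPA flows; the point is that the decision of which flow to run moves from a clerk to the AI layer.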

There are plenty of data being generated and available; however, only a small percentage of them are actually used and leveraged. Image credit: Talent Alpha

The reality of today’s business world, and even more so tomorrow’s, is a flood of data. Everything is either creating data or being “substituted” by data (mirrored and operated through its mirror image). The Digital Transformation is at the core of this trend. However, analyses show that only a fraction of the digital potential offered by this world of data is actually being exploited. According to a report by Talent Alpha on “7 drivers shaping the future of work”, 88% of Europe’s digital potential goes unused, and the US is not much better, exploiting just 18% of its digital potential.

A fundamental issue is that there are simply too many data: their sheer volume is beyond human capabilities. Hence the need to turn to artificial intelligence, not to replace “human intelligence” but to do something that humans cannot do and return an intelligence that can interact with the human one. This point is addressed in the next post.

The point now is that by using artificial intelligence to continuously explore the data landscape it is possible to extract and contextualise meaning in ways that were not possible before (also because the amount of data available today was simply not available previously).
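
As a toy illustration of what “exploring the data landscape” can mean in practice, the sketch below (standard Python, with made-up data and thresholds) scans a stream of readings that no human could inspect in full and flags the ones that do not fit the recent pattern.

# Toy illustration: flag readings that deviate strongly from the recent pattern.
# Data, window and threshold are illustrative assumptions.
from statistics import mean, stdev

def find_anomalies(values, window=50, threshold=3.0):
    """Return (index, value) pairs sitting far outside the recent distribution."""
    anomalies = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(values[i] - mu) > threshold * sigma:
            anomalies.append((i, values[i]))
    return anomalies

readings = [20.0 + 0.1 * (i % 7) for i in range(200)]
readings[120] = 35.0    # a spike buried in data a human would never read in full
print(find_anomalies(readings))    # -> [(120, 35.0)]

Real systems use far richer models on far larger streams, but the principle is the same: the machine reads everything, and only what matters is surfaced to people.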

Snapshot of the amount of data produced by the sensors of a self-driving car. The total amount is impressive (with the lion’s share coming from the video cameras): an hour of driving produces between 1.4 and 19 TB of data. Image credit: Tuxera

Consider the example of a self-driving car. The car has to harvest internal and external data and make sense of them in order to take decisions.

For this there are plenty of sensors, as shown in the table on the side:
– Radar, for obstacle detection
– LIDAR and Cameras, for creating a map of the surrounding environment
– Ultrasonic, for near field obstacle detection
– GNSS (Global Navigation Satellite System) and IMU (Inertial Measurement Unit) to pinpoint the car position

The above sensors only provide context awareness. In addition, the self-driving car’s control unit needs a digital model of the car describing its shape, volume and performance, and needs data from the various active parts (engine, wheels, brakes, suspension, …) in order to know the set of possible actions.

As indicated in the table, the amount of data is huge. Most of these data are time sensitive and lose value after a little while (once the car has left an area, all the related data are no longer useful). However, combining the data acquired with the result of the actions taken by the car provides further data that add to the experience, hence allowing the car, i.e. the auto-drive system, to learn. This learning can be shared with other cars, increasing the speed of learning and preventing wrong decisions in cars that will face a similar situation for the first time.
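
The order of magnitude is easy to check with back-of-the-envelope arithmetic. The per-sensor rates in the Python sketch below are illustrative assumptions, not the figures from the Tuxera table, but they land inside the cited range.

# Back-of-the-envelope check of the data volume of a self-driving car.
# Per-sensor rates are illustrative assumptions (order of magnitude only).
rates_mb_per_s = {
    "cameras (all)": 500.0,    # video takes the lion's share
    "LIDAR": 70.0,
    "radar": 1.0,
    "ultrasonic": 0.01,
    "GNSS + IMU": 0.001,
}

total_mb_per_s = sum(rates_mb_per_s.values())
tb_per_hour = total_mb_per_s * 3600 / 1_000_000
print(f"{total_mb_per_s:.0f} MB/s -> about {tb_per_hour:.1f} TB per hour of driving")

With these assumed rates the total comes out at roughly 2 TB per hour, at the lower end of the 1.4–19 TB range cited above; higher camera resolutions and frame rates push it towards the upper end.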

What goes for a self-driving car goes for airplanes, trains, … and of course it goes for robots in a manufacturing plant.

In the case of self-driving cars most decisions are local, taken by the car, in the car (with very few decisions shared with other cars, also because a self-driving car cannot assume, it would be wrong to do so, that other cars can share data and decisions, nor that they can be notified of decisions taken). Most importantly, there are factors completely outside the reach of any system-wide analysis, such as pedestrians, bikers, dogs, … A car obviously cannot communicate with any of these; it needs to make some assumptions and play it safe. In the case of airplanes the overall system is much more controllable (with the exception of taxiing, and there are studies addressing that) and there are systems for autonomous aircraft-to-aircraft communication, such as TCAS, the Traffic Collision Avoidance System.

Schematics showing a layered architecture for automation in a factory/industry environment based on IIoT. At the lowest layer is the connectivity infrastructure, LTE or (private) 5G; then data processing and integration through commercial off-the-shelf (COTS) computers, operating systems, virtual machines and containers; with the upper layer connecting through a variety of protocols to the Cloud/Edge, where AI takes over the data analysis and learning. Image credit: ARC Advisory Group

In an industrial environment, on the shop floor, in assembly lines, in warehouses, there is a growing flow of data, courtesy of IIoT (Industrial IoT), that can easily exceed the volume produced by a self-driving car.
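
A minimal sketch of that layered flow, heavily simplified (all names and figures are illustrative assumptions; a real plant would use MQTT or OPC UA over the 5G/LTE link, containers at the edge and proper analytics in the Cloud):

# Simplified sketch of the IIoT data flow: machines emit telemetry, an edge node
# aggregates it, a cloud-side function runs the analysis. Illustrative assumptions only.
import json, random, statistics

def machine_telemetry(machine_id, samples=10):
    """Shop-floor layer: a machine producing raw sensor readings."""
    return [{"machine": machine_id, "vibration": random.gauss(1.0, 0.1)}
            for _ in range(samples)]

def edge_aggregate(readings):
    """Edge layer: compress raw readings into a compact summary before the uplink."""
    values = [r["vibration"] for r in readings]
    return {"machine": readings[0]["machine"],
            "mean": round(statistics.mean(values), 3),
            "max": round(max(values), 3)}

def cloud_analyse(summary, limit=1.3):
    """Cloud layer: where the learning/analytics would run; here just a simple rule."""
    summary["status"] = "check bearings" if summary["max"] > limit else "nominal"
    return summary

for machine in ("press-01", "press-02"):
    print(json.dumps(cloud_analyse(edge_aggregate(machine_telemetry(machine)))))

The design point the sketch tries to convey is the one in the figure above: raw data are reduced close to where they are produced, and only compact, meaningful summaries travel up to where the intelligence (and the learning) resides.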

These data are being used for three levels of analysis (a toy sketch follows the list):

– descriptive analysis, providing information on what is happening and what has happened (inventory, failures, output demand, …)
– predictive analysis, providing information on possible malfunctions, the output requested from single machines over the coming weeks, the resources, including workforce, needed in the coming weeks, …
– prescriptive analysis, providing information on when to activate procurement and where to procure, the activation of pre-emptive maintenance, a different allocation of resources, the fine-tuning of processes, …
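
A toy illustration of the three levels on a single inventory series (figures, thresholds and rules are made up; a real plant would feed this from IIoT streams and use proper forecasting models):

# Toy example of descriptive, predictive and prescriptive analysis
# on one inventory series. Figures and rules are illustrative assumptions.
weekly_usage = [120, 135, 128, 140, 150, 155]    # parts consumed per week
stock_on_hand = 400

# Descriptive: what happened.
print("average weekly usage:", sum(weekly_usage) / len(weekly_usage))

# Predictive: what is likely to happen (naive trend over the last three weeks).
forecast = sum(weekly_usage[-3:]) / 3
print("forecast for next week:", round(forecast))

# Prescriptive: what to do about it (reorder when stock covers fewer than 2 weeks).
weeks_of_cover = stock_on_hand / forecast
if weeks_of_cover < 2:
    print("action: trigger procurement now")
else:
    print(f"action: no order needed, {weeks_of_cover:.1f} weeks of cover")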

In all three areas there is a growing use of artificial intelligence to support the automation of the various processes involved.
Notice that the trend on the shop floor (and in manufacturing in general, in line with the Industry 4.0 paradigm) is to analyse the whole picture and act, directly or indirectly, on it, sometimes involving suppliers and dealers (up to the end user).
The intelligence needed is not the one localised in a single machine (robot), in a plant or in a supplier; rather, it is the emerging intelligence deriving from the cooperation of all the “intelligent components”.
This is the big challenge ahead for industry (it is what Industry 4.0 is all about), and in this decade we can expect increased automation at the global level, throughout the whole value chain. The starting point, obviously, is the emerging intelligence on the shop floor, in warehouses, and in the supply and delivery chains. These separate intelligences (each one with a specific “owner”) will cooperate, resulting in a global emerging intelligence.

This clearly has an impact on the workforce, since automation is shifting control activity from humans to machines (to the Cloud and Edge). In the past decade we saw automation acting at the micro scale, with a robot replacing a worker or a team of workers. Now we are facing process automation that simply renders several activities unnecessary. The use of Digital Twins is further accelerating the shift to cyberspace and speeding up process automation.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.