
The economics of the Digital Transformation – IV

My sketchy representation of the evolution of knowledge: first a matter of oral transmission, residing only in brains; then relying on writing to preserve it from one generation to the next and for learning. With the invention of the printing press, written knowledge took the upper hand, with humans updating and expanding it and using it as a learning base. The advent of computers has brought the shift to data and the mediation of software to access the knowledge recorded in those data. Now we are seeing an explosion of knowledge well beyond a single brain's capability to grasp it, hence the shift towards distributed knowledge and the increasing reliance on tools to access and execute knowledge, with AI playing a growing role.

Data as such are just a bunch of bits, and their value is zero: there are plenty of bits, they can easily be duplicated at zero marginal cost, and they can be transmitted from any point to any other point. The capability to interpret data and derive a meaning from them, on the other hand, is valuable, depending on … the value of that meaning.

You may remember the joke about the guy whose television went blank. He asked a technician to fix the problem and watched as the technician repaired the TV set. It turned out to be a matter of turning a screw (a potentiometer) just a few degrees, and everything went back to normal. When he was asked to pay $100 for the repair he was flabbergasted: how can you charge $100 just to turn a screw a tiny bit?! The technician replied: oh, but that is free; the $100 is for knowing which screw to turn and how much to turn it! I guess you get the point. The value is not in the screw, rather in the knowledge about the screw.

Knowledge has always been considered a valuable asset, from the very early times of humanity. In those first times knowledge was not tied to a data repository, since it was embedded in experience and in the oral tradition handed over from brain to brain. Only recently, say in the last 5,000 years – a blink of an eye in terms of human existence on Earth – after the invention of writing, did knowledge start to be tied to stored data (clay tablets in the beginning), and these data started to be guarded to keep ownership of the knowledge. Guilds, later on, became the keepers of knowledge stored in data and in artisan expertise passed on from a member to an apprentice. Much later, thanks to the printing press, knowledge started to be mostly stored as data, and the point became to access that stored knowledge. Data became more and more valuable, since access to data equated to access to knowledge.
Very, very recently this written knowledge, data, has expanded so much and has become so easily accessible that its value has started to be diluted. Additionally, data have grown exponentially, enveloping the written knowledge and providing raw material that can be used to extract more knowledge. The problem is that the sheer quantity of data makes it basically impossible for humans to extract that knowledge. A machine is needed. Think about the 68,493 GB of data produced every day by the LHC (Large Hadron Collider at CERN in Geneva), or the SKA (Square Kilometre Array) telescope producing 8,000 GB of data every second (!). An aircraft like the new Airbus A350 generates data from 400,000 points, and each of its two engines generates 1,000 GB of data per flight. It is impossible for a pilot to make sense of those data (acquire knowledge) without the mediation of a machine (software).
These numbers are so huge that it is difficult just to grasp them. 
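To make these figures a little easier to compare, here is a quick back-of-the-envelope sketch in Python. It uses only the numbers quoted above; the unit conversions and the comparisons are just illustrative arithmetic, not additional data.

```python
# Back-of-the-envelope comparison of the data volumes quoted in the text.
# The input figures come from the article; the rest is plain unit arithmetic.

GB = 1                 # work in gigabytes
PB = 1_000_000 * GB    # 1 petabyte = 1,000,000 GB (decimal units)

lhc_per_day = 68_493 * GB        # LHC output per day
ska_per_second = 8_000 * GB      # SKA output per second
a350_flight = 2 * 1_000 * GB     # two engines, 1,000 GB each, per flight

# How quickly does the SKA produce what the LHC produces in a whole day?
print(f"SKA matches a day of LHC data in {lhc_per_day / ska_per_second:.1f} seconds")

# How many A350 flights add up to one day of LHC data?
print(f"One day of LHC data ≈ {lhc_per_day / a350_flight:.0f} A350 flights (engine data only)")

# SKA output over a full day, expressed in petabytes
print(f"SKA output per day ≈ {ska_per_second * 86_400 / PB:,.0f} PB")
```

Running it shows that the SKA would need less than ten seconds to match a full day of LHC output, and that its own daily production is in the order of 700 PB – exactly the kind of volume no human can digest unaided.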

A side issue is that the data space is not just huge, it is also littered with spurious, incorrect and fake data, to the point that separating the wheat from the chaff has become a major challenge; consequently, ensuring the correctness of data is valuable also from an economic standpoint.
Today, more and more, the value resides in “executable” knowledge, that is, the capability to execute, here and now, in the best possible way.

The problem is that, on the one hand, the human capability to digest the expanding space of recorded knowledge, even in a specific area, is being challenged and, on the other, machines (AI) are becoming contenders not just in accruing and executing knowledge but also in creating it.

In a way, there is a shift from the value of data to the value of knowledge. At the same time, the capability of AI to create (executable) knowledge out of data reinforces the value of the data used in the creation process. In the last 15 years the increasing capability of AI has been based on access to larger and larger data sets, made possible by increased computation capabilities and the availability of data. Hence, companies that could generate or harvest data found themselves in the ideal position to leverage them (interestingly, harvesting data and processing them require the same underlying infrastructure: large data centres). The exponential growth meant a parallel growth in the investment required for the supporting infrastructures, leading to a concentration of data, knowledge and value in a few companies (read Google, Amazon, Apple, Facebook…).

 
