AI: a self-sustaining evolution – III, AI in the Small

A graphic representation of the stack delivering AI in the small. Notice the feedback from AI use into the continuous refinement of the model (green dashed lines on the left). This can also happen through the use of AI in commercial applications (red dotted line on the right), and it is this feedback that raises questions about privacy.

Let me spend a few more words on this “AI in the small” from the point of view of the evolution of AI: it is both a consequence of that evolution and a fuel for it.

As shown in the graphic (bear with me, I am not a professional designer and I didn’t ask AI to create the diagram), a few big players have invested, and are investing, money and resources in the training of transformers (accruing billions and billions of data points in the process). These players offer APIs -application programming interfaces- (and web interfaces) that third parties, companies and individuals, can use to generate value (Commercial Applications, on the top right-hand side of the stack).
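To make this concrete, here is a minimal sketch of what calling such a commercial API looks like, using OpenAI’s Python SDK; the model name and prompts are placeholders, and the API key is assumed to be set in the environment.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One round trip through the commercial API: a system instruction plus
# a user question, answered by whatever chat-capable model you pick.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the idea of 'AI in the small'."},
    ],
)
print(response.choices[0].message.content)
```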

Those same APIs can also be used by a company to interface with a Private Specialised Model that contains, and is continuously expanded and refined through, the data that company generates and harvests in its operation. As indicated in the graphic, these data derive from:

  • IoT generated data: the pervasive presence of IoT in the supply chain, on the shop floor, in the distribution chain and in the products themselves (including virtual IoT embedded in services and software packages) keeps generating data that are significant both in volume (they pile up over time) and in their patterns (the way they arrive can generate a variety of correlation outputs). Most of these data, today, are used locally -like to control a robot- and then discarded. The value that can be derived from their correlation, and the knowledge that can be derived from inserting those correlations into a model, is not leveraged (see the first sketch after this list).
  • Data Processing at the Edge: clusters of IoT, as well as devices and equipment connected in a local area network, can engage in a variety of interactions. These can be recorded and processed, along with other data generated by IoT, producing further data that express the “meaning” of what is going on. All the data resulting from this processing at the edge can be used to enrich the model and power local intelligence (see the second sketch after this list).
  • Internal Knowledge: any company has a rich set of internal knowledge: how it operates, the resources it manages (including workers) and the way they are orchestrated, the relations it has with the various players in the supply chain and with its customers, its vision and understanding of the market … plus the processes being used. In the end, what characterises a company is the set of processes through which it operates. There are several ways of formalising this internal knowledge and keeping it up to date; one is the use of Cognitive Digital Twins (see the third sketch after this list).
  • All the data and interactions resulting from the use of the company-created intelligence -how it is being used, what is found appropriate and what leads to further refinement, whether by company resources or by end users, in case these are given access to the company intelligence (an interesting proposition that can result in a service offering!)- can also feed into, and refine, the Private Specialised Model, as indicated by the green dashed line. This flow of data is drawn differently from the previous ones because it is an integral part of the artificial intelligence feature, leading to continuous learning and improvement; the last sketch after this list shows one way to capture it. Notice that I have also indicated a similar process in the Commercial Applications (red dotted arrow). This is, by the way, one of the issues raised by those opposing -or just criticising- the new wave of generative AI: the potential loss of privacy. Capturing the interactions and data implied in its use clearly reveals aspects of the user (this is no different from using a search engine, which learns what you are searching for and can therefore deduce information about you!). OpenAI states in the information accompanying ChatGPT that prompt strings submitted via the API are not recorded, whilst the ones submitted through a form (which is most likely what you are doing in using ChatGPT, unless you are a programmer and have developed an app to access it -and paid for the access-) can be used to improve the algorithm (a subtle way to tell you that they can do whatever they like with your prompts and with the answers generated).
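First sketch, on the IoT bullet above: a minimal Python illustration of accumulating readings instead of discarding them, then computing cross-sensor correlations as candidate knowledge to push into the private model. The sensor names and data are invented for illustration.

```python
# Sketch: accumulate IoT readings over time and mine them for correlations
# instead of discarding them after local use.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000  # readings piled up over time

readings = pd.DataFrame({
    "spindle_temp":  rng.normal(60, 5, n),     # degrees C (made-up sensor)
    "vibration_rms": rng.normal(0.3, 0.05, n),  # made-up sensor
    "power_draw_kw": rng.normal(12, 1.5, n),    # made-up sensor
})
# Inject a dependency so the example exhibits a detectable pattern.
readings["vibration_rms"] += 0.01 * (readings["spindle_temp"] - 60)

# Pairwise correlations: candidate "knowledge" for the Private Specialised Model.
print(readings.corr().round(2))
```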
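Second sketch, on processing at the edge: a toy reduction of a window of raw readings into one compact “meaning” event, so only the distilled result travels up to the model. The threshold and event names are assumptions.

```python
# Sketch: edge-side processing that turns a window of raw readings into a
# single event record; the raw data stays local.
from statistics import mean

def summarise_window(window: list[float], threshold: float = 0.45) -> dict:
    """Reduce a window of vibration readings to one 'meaning' event."""
    avg = mean(window)
    return {
        "avg_vibration": round(avg, 3),
        "event": "anomaly" if avg > threshold else "normal",
        "samples": len(window),
    }

raw_window = [0.31, 0.33, 0.52, 0.61, 0.58]  # one local-network window
print(summarise_window(raw_window))
```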
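Third sketch, on formalising internal knowledge: one very simple way to encode a process as a machine-readable record, loosely in the spirit of a Cognitive Digital Twin. The schema is invented; real deployments use far richer ontologies.

```python
# Sketch: internal knowledge (a company process) captured as a structured,
# queryable record rather than tacit know-how.
from dataclasses import dataclass, field

@dataclass
class ProcessTwin:
    name: str
    owner_role: str
    inputs: list[str]
    outputs: list[str]
    suppliers: list[str] = field(default_factory=list)

order_fulfilment = ProcessTwin(
    name="order fulfilment",
    owner_role="logistics manager",
    inputs=["customer order", "warehouse stock levels"],
    outputs=["shipment", "invoice"],
    suppliers=["regional carrier A"],
)
print(order_fulfilment)
```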
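Last sketch, on the feedback flow of the green dashed line: capturing interactions for later refinement of the Private Specialised Model, with an explicit consent flag as the hook for the privacy issue just discussed. Field names and the log format are assumptions.

```python
# Sketch: log each use of the company intelligence so it can feed the
# continuous refinement loop -- but only when the user has agreed.
import json
import time

FEEDBACK_LOG = "model_feedback.jsonl"  # hypothetical file name

def record_interaction(prompt: str, answer: str, user_consented: bool) -> None:
    """Append one interaction for later model refinement, if consented."""
    if not user_consented:
        return  # respect the user's choice: nothing is stored
    entry = {"ts": time.time(), "prompt": prompt, "answer": answer}
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_interaction("Why did line B slow down?", "Retooling in June ...", True)
```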

This knowledge, along with all the data created and harvested, can become an integral part of the enterprise model, forming the Private Specialised Model indicated in the graphic.

There is enormous value in this possibility of flanking the LLM provided by the big guns, through the Commercial API, with a Private Specialised Model. The first feedback coming from industry (we are just starting to see this happening) indicates productivity gains of the order of 5 to 15%.
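A common pattern for this flanking is retrieval: company knowledge is looked up first and injected into the prompt sent through the Commercial API. The sketch below assumes a hypothetical retrieve_company_context helper standing in for the Private Specialised Model; its return value is a dummy string, and the model name is a placeholder.

```python
# Sketch: flanking a commercial LLM with private company data (retrieval style).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def retrieve_company_context(query: str) -> str:
    """Hypothetical lookup in the Private Specialised Model / document store.

    In practice this would query a vector database built from IoT data
    summaries, edge events, process descriptions and internal documents.
    """
    return "Plant 3 line B throughput dropped 4% after the June retooling."

def ask_with_private_context(question: str) -> str:
    context = retrieve_company_context(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Answer using this company context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_private_context("Why did line B throughput change?"))
```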

That’s a lot. It can transform a company into a leader in a given market sector, particularly so for those companies operating in consolidated markets where the winning factor is competition on price.

To get a feeling for how this can be done, read this article.
