Regulating AI, like herding cats?

The European risk-based approach to AI regulation. Image credit: EU

I participated in an event organised by Anitec-Assinform (the association of the Italian information technology industry) on the topic of AI regulation. The EU is close to finalising its AI Act, and industry is looking at the implications of this regulatory framework.

The meeting was well attended by regulatory experts; I was there, along with a few others, to learn about the regulatory side and to point out technical aspects. The meeting was indeed interesting, to me and, I gather, to many others.

One thing that struck me was the consensus among those working on the regulatory side that there are concerns about the (unintended) consequences of a European regulation in a global context where the other two main players (China and the US) are taking different approaches.

What follows is my takeaway from the discussion, sprinkled with observations from my technical viewpoint (I should also say that in sideline discussions with some participants I noticed very similar perspectives).

  • It is good to have joint discussions between tech and law/regulation people. Regulators are often engaged in regulating the past. Where the future arrives so quickly and changes the rules of the game, as is the case for AI, it is essential for regulators to be aware of technology trends and their implications. In these situations it makes more sense to work on steering the regulatory framework than to attempt to freeze it.
  • The EU is focusing on preserving European values as the starting point for AI regulation; of crucial importance among these is the anthropocentric approach. One has to note that the value of a technology is always linked to the way (and place) it is applied. This is also the case for AI, and the EU approach, based on levels of risk as shown in the graphic, takes this into account. However, one should also note the evolution over the last five years, where a growing consensus has formed around steering AI from being a mirror of human intelligence (and potentially an alternative/substitute) to becoming a tool that flanks humans and aims at augmenting human intelligence (from Artificial Intelligence to Intelligence Augmentation).
  • Since AI has been based on the use of large data sets for training, and it is recognised that intrinsic bias in these data sets leads to bias in AI, there is strong interest in placing rules on the data sets. However, in recent years/months we have seen a growing trend towards AI that teaches itself without the need for curated data (using different sorts of data: self-created synthetic data, and/or data freely harvested from the environment). This can make the attempt to regulate data sources moot.
  • A point that would seem non-negotiable for European culture is the ban on any form of social credit (widely accepted in several parts of Asia, China and Singapore being cases in point). However, one should notice that the Western world has also broadly adopted social credit: when you ask for a mortgage, the bank assesses your social credit (risk factor); when you apply to a company, HR will most likely look at what you do on social networks and what other people “think” of you; when we go to a restaurant, it has become the norm to check that restaurant’s “social credit”; when you rent on Airbnb, you rank the owner’s place and the owner can look at your social credit (created by comments from previous owners you rented from)… Likewise when you buy on Amazon, when you go on eBay… the list is endless. So let’s not fool ourselves: in the Western world, too, social credit is a way of life.
  • One has to wonder whether regulation in this area can be enforced: in cyberspace the whole world is one click away. What may be forbidden here is allowed there, and both industry and consumers roam cyberspace to get what they feel is best for them. On the other hand, one might suppose that some form of regulation can be enforced, particularly on industry. In that case the question is: given the local effect and the global market, isn’t this effectiveness in limiting what can be done going to impact innovation and competition negatively? Are regulations going to lead to unfair competition?
  • One question is whether it would be better to regulate ex post (if you do something harmful, you have to pay the consequences), the US approach, rather than ex ante (you cannot do anything unless it is allowed), the EU approach.
  • As a final point, regulating AI is really tricky, given its continuous evolution. You can test whether an AI satisfies the imposed requirements, but as you are testing it the AI system learns and changes, so that by the end of the test it is no longer the system you tested in the first place…
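That last point can be made concrete with a minimal, entirely hypothetical Python sketch (the class, numbers, and update rule are invented for illustration, not taken from any real AI system): a toy model that keeps updating its decision threshold on every input it sees, so the very act of testing it changes what is being tested.

```python
# Toy illustration of the testing paradox: an online learner whose
# internal state shifts on every query, including test queries.
class OnlineThresholdModel:
    def __init__(self):
        self.threshold = 0.5  # decision boundary, updated continuously

    def predict(self, x):
        decision = x > self.threshold
        # The model keeps learning: each query nudges the threshold
        # toward the input it has just seen.
        self.threshold = 0.9 * self.threshold + 0.1 * x
        return decision

model = OnlineThresholdModel()

before = model.predict(0.6)   # threshold is 0.5, so 0.6 passes -> True
for x in [0.9] * 10:          # running a "compliance test" batch...
    model.predict(x)          # ...drifts the threshold upward
after = model.predict(0.6)    # threshold now exceeds 0.6 -> False

print(before, after)  # same input, different verdict before and after testing
```

The point is not the specific update rule but the structural problem: any certification obtained on such a system describes a model state that no longer exists by the time the certificate is issued.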

Well, food for thought.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and he was then head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.