
The Dark Side of AI

A rifle that uses AI to aim at a target. It includes a digital camera and an image-recognition system coupled with AI for automated target recognition. Image credit: US Army Defence Blog

The ongoing Russia-Ukraine war is bringing to the fore, among many other things, the role of technology, and in particular that of artificial intelligence.

This blog is not about politics, nor about taking sides, and I’ll stick to that. However, let me say, loud and clear, that waging a war, any war, is not the way to solve issues. Whatever the reasons behind a conflict, a war cannot be justified.

Let’s go back to technology.

The use of artificial intelligence to augment the effectiveness of weapons is not something for the future; it is a sad reality. Adoption of AI is pervasive in weapons and in warfare. The image shows an AI-augmented rifle that can identify a target (a person, let’s be clear) through image recognition and automatically fine-tune the aim to hit it.

We have read in newspapers about autonomous drones guided by artificial intelligence, and about war games simulating attack and defence in which, based on the simulation results, armies of autonomous robots take over in the field.

Unfortunately, all this “intelligence” does not spare civilians; in fact, we are seeing more and more innocent bystanders being hit. Nor is it preventing economic destruction (with all its consequences: awful quality of life, famine, disease, …).

We are also seeing artificial intelligence being used to create fake news (the impersonation of the Ukrainian President asking the army to surrender is a case in point), propaganda, and misinformation.

Hackers are using artificial intelligence for cyber-attacks that disrupt crucial infrastructure.

A recent article reported on the use of artificial intelligence to identify substances that could be used as chemical weapons. In just 6 hours, an AI program was able to identify 40,000 (FORTY THOUSAND!) lethal molecules that could be used in chemical warfare.

The concern of many people goes beyond the increased effectiveness in warfare that might be provided by AI. It is also about the creation of a monster that will take on a life of its own and will no longer be controllable by its creators. As a matter of fact, the implications and dangers of applying AI to warfare are weirder than what we might perceive at first glance: take a look at the TED clip discussing the use of AI in the military field.

Yes, ‘n’ how many ears must one man have
Before he can hear people cry?
Yes, ‘n’ how many deaths will it take till he knows
That too many people have died?
The answer, my friend, is blowin’ in the wind,
The answer is blowin’ in the wind.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.