Can AI use far less computational power?

Artificial Intelligence (AI) started as a “software” story, but it has become a “hardware” story, with its advances depending on processing capabilities. Image credit: Mind AI logo

A very interesting article published on Forbes in June 2018 describes how progress in Artificial Intelligence over the last decade has been tied largely to the increased availability of processing power, both in the core and at the edges.

In the interview, Jihan Wu, CEO of Bitmain (a Chinese company specialising in cryptocurrency mining equipment), makes the point that AI needs a dramatic change: the future will be based not on increasingly huge computational capabilities supporting AI, but on much more limited processing delivering AI through the exploitation of different architectures and algorithms.

In a way, this is one of the goals of flagship projects like the Human Brain Project: to understand how intelligence can emerge from networks of basically simple entities (neurones), and to replicate this in artificial networks to generate AI.

Wu’s bet differs from that of mainstream companies like NVIDIA, which are betting on the need for ever more computational power and working hard to keep delivering it.

Yet, as pointed out in the interview, there are many reasons to look for an alternative. Today’s AI platforms use (waste?) a lot of electrical power — training a single algorithm for a specific task can cost hundreds of thousands of dollars — engaging huge data centres in the process; AI chips are expensive, and you need quite a few of them.

In addition, it seems quite an unreasonable approach, as pointed out in the interview, to need thousands of images of dogs before eventually being able to recognise a dog. That is far more than it takes a little child to recognise one (although it takes a child longer to build up the conceptual image of a dog…). In turn, the need for huge amounts of data is creating oligopolies in the AI sector, since only companies with access to huge amounts of data can compete (and, on the sideline, it is also pushing more and more companies to acquire data just in case… with potential privacy issues).

An interesting point raised is that today’s AI is derived through means that are not accessible to humans, hence it is difficult to understand a machine’s reasoning. By contrast, we can often follow the reasoning of our fellow humans, since we share the same way of processing thoughts, and this makes accepting the result much easier.

Wu is not alone. Companies like Mind AI are leading a new wave of AI based not so much on processing power as on new data structures. We will see in the next decade whether they succeed.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital until September 2018. Previously, until December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master’s course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines, and 14 books.