
1.2 trillion transistors on a chip to fuel AI

A monster chip, as big as a tablet, provides over 1 trillion transistors to power AI algorithms. Image credit: Cerebras

Chips are small; that is probably one of the reasons they were called chips, and not boulders, in the first place. An average chip sits on the tip of a fingernail, and even the very large chips used as sensors in digital cameras fit in the palm of your hand with plenty of space to spare.

There is one exception that has just popped up: the Wafer Scale Engine (WSE), a chip designed and offered by Cerebras to support artificial intelligence.

It is a monster chip, some 56 times larger than the largest chip available today: its 46,225 mm² of silicon contain 400,000 AI-optimised cores and a total of 1.2 trillion transistors. By comparison, the largest chip we have today, a graphics processing unit by NVIDIA, has a surface of 815 mm² and 21.1 billion transistors.

Why would we need such a monster chip? Basically because the progress in artificial intelligence over the last decades has been tied to, and made possible by, huge data sets and the increased processing power to analyse them. Looking at the evolution of AI, it is estimated that in the period 2012-2018 the amount of processing used for AI doubled every three and a half months (amounting to a roughly 16.777-million-fold increase in processing power!), and this takes into account only those AI experiments that have been published.
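As a sanity check on that growth figure, here is a minimal sketch of the compounding arithmetic. The 3.5-month doubling period comes from the text; the 84-month (7-year) window is an assumption chosen because it reproduces the cited ~16.777-million-fold number (24 doublings):

```python
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Total growth when a quantity doubles every `doubling_period_months`."""
    return 2 ** (months / doubling_period_months)

# 84 months at one doubling every 3.5 months is 24 doublings,
# i.e. 2**24, matching the ~16.777-million-fold figure cited above.
print(f"{growth_factor(84, 3.5):,.0f}")  # prints 16,777,216
```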

The energy cost of the processing needed to train a natural language model has been found to be on the order of $350,000. Training a software bot to play the game Dota 2 took several weeks and required hundreds of GPUs to support the deep learning algorithm.

Now, if you keep these figures in mind, a monster chip (whose price has not been announced so far but can be estimated in the millions of dollars) supporting this kind of number crunching starts to make sense.

In the Cerebras chip, data can move among the 400,000 cores some 1,000 times faster than if it had to move from one GPU to another, and at a much lower energy cost.

The manufacturing of the chip is based on an industry-standard 300 mm wafer (from which the roughly 22 cm square is cut), but it required a change in the layering and etching process: in normal chip manufacturing, the 300 mm wafer is etched into smaller squares, each one ending up as a separate chip. In the Cerebras case, the various squares had to be interconnected to create a single chip.

Cerebras expects to deliver full systems built around the chip, rather than the chip itself. These systems will support AI algorithms applicable to a variety of fields, including drug design.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.