
Simulating the human brain: an exascale effort

An exascale computer is under development, expected to start crunching numbers in 2018. Image credit: WT VOX

As of spring 2018 the fastest computer in the world is the Sunway TaihuLight in Wuxi, China. It has 10,649,600 processing cores, clustered in groups of 260, delivering an overall performance of 125.44 PetaFLOPS (millions of billions of floating-point operations per second) and requiring some 20 MW of power.

In the US the National Strategic Computing Initiative aims to develop the first exascale computer (roughly 8 times faster than the Sunway TaihuLight), and the race is on against China, South Korea and Europe. We might see the winner this year (next month the TOP500 list of supercomputers will be revised; it is updated twice a year).
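For readers who like numbers, here is a quick back-of-envelope sketch in Python, using only the figures quoted above, that turns the headline specs into per-core performance and energy efficiency and checks the "roughly 8 times faster" claim for an exascale machine:

```python
# Back-of-envelope check on the figures quoted above (values taken from the text).
PETA = 1e15
EXA = 1e18

taihulight_flops = 125.44 * PETA    # Sunway TaihuLight overall performance
cores = 10_649_600                  # processing cores
power_watts = 20e6                  # ~20 MW, as quoted above

flops_per_core = taihulight_flops / cores          # ~11.8 GFLOPS per core
flops_per_watt = taihulight_flops / power_watts    # ~6.3 GFLOPS per watt
exascale_speedup = EXA / taihulight_flops          # ~8x, matching the claim above

print(f"Per-core performance: {flops_per_core / 1e9:.1f} GFLOPS")
print(f"Energy efficiency:    {flops_per_watt / 1e9:.1f} GFLOPS/W")
print(f"Exascale speedup:     {exascale_speedup:.1f}x")
```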

These supercomputers are used today to study the Earth's climate and earthquakes, simulate weapons effects, design new drugs and simulate the folding of proteins. What would an exascale computer be useful for?

The Human Brain Project is looking forward to the availability of an exascale computer to simulate the human brain. The brain is believed to have a processing power in that range (actually the calculation is very difficult, because you have to make assumptions and it is difficult to validate them as a whole; the sketch below gives a feel for how much the answer swings with those assumptions). You can look at the number of neurones, 100 billion, and the number of synapses, 135 trillion, take into account the firing frequencies and the latency, and yet you are still describing a mechanical machine, whilst the brain is an “experience” machine. Remember what grandmaster Kasparov replied to those asking him how many chess moves he would consider: only those that make sense!
Of course the trick is to know which ones make sense, and this is something the brain learns through experience. The brain of a professional basketball player works a lot better than mine when trying to send the ball through the hoop; that is because practice has made it skilled.
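To see why such estimates are so assumption-dependent, here is an illustrative sketch. The neurone and synapse counts are the figures quoted above; the firing rates and the operations per synaptic event are hypothetical assumptions, and the result swings by orders of magnitude depending on which you pick:

```python
# Illustrative estimate of the brain's "processing power".
# Neurone and synapse counts are the figures quoted in the text;
# firing rates and ops-per-synaptic-event are assumptions, not measured facts.
NEURONES = 100e9      # 100 billion neurones
SYNAPSES = 135e12     # 135 trillion synapses

def synaptic_ops_per_second(avg_firing_rate_hz, ops_per_synaptic_event):
    """Crude estimate: every synapse does some work each time its neurone fires."""
    return SYNAPSES * avg_firing_rate_hz * ops_per_synaptic_event

# The same brain, under different (equally debatable) assumptions:
for rate_hz, ops in [(1, 1), (10, 10), (100, 100)]:
    estimate = synaptic_ops_per_second(rate_hz, ops)
    print(f"{rate_hz:>3} Hz, {ops:>3} ops/event -> {estimate:.1e} ops/s")

# Output ranges from ~1e14 to ~1e18 ops/s, i.e. from well below exascale up to
# roughly exascale, which is exactly why the calculation is "very difficult".
```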

Artificial intelligence is now starting to provide computers with some of the flexibility of our brain: practice makes them better. But this means that you can no longer discuss a supercomputer's performance in terms of its hardware structure alone. Our brain is both hardware and software, and there is no way to take the two apart.

Microsoft has recently pointed out the need for flexible hardware infrastructures and is betting on Field-Programmable Gate Arrays (FPGAs) to meet the needs of artificial intelligence.

A recent article in Frontiers in Neuroinformatics reports a new algorithm, designed to run on exascale computers, to simulate the complete human brain by taking into account its 100 billion neurones and their interconnections. It is not just a matter of processing power. The simulation is distributed across many interconnected processing nodes, each needing enough memory to run its part, and that memory requirement depends on the number of neurones being simulated on that specific node. As the number of simulated neurones grows, the required memory skyrockets. Even with the envisaged increase in power delivered by an exascale computer, the memory required would be 100 times larger than what is available using current simulation algorithms. Not so with the newly reported algorithm.
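The scaling argument can be illustrated with a toy model. This is not the data structure from the paper, just a sketch of why per-node memory behaves so differently under the two approaches; all the constants and node counts are made-up assumptions for illustration:

```python
# Toy model of per-node memory in a distributed brain simulation (weak scaling:
# each node always hosts the same number of neurones, and nodes are added to grow
# the network). All constants are illustrative assumptions, not figures from the paper.

LOCAL_NEURONES = 100_000          # neurones simulated on one node (assumed)
SYNAPSES_PER_NEURONE = 10_000     # average incoming synapses per neurone (assumed)
BYTES_PER_SYNAPSE = 24            # assumed bookkeeping per locally stored synapse
BYTES_PER_NEURONE_ENTRY = 4       # assumed lookup entry kept per neurone

LOCAL_BYTES = LOCAL_NEURONES * SYNAPSES_PER_NEURONE * BYTES_PER_SYNAPSE

def old_scheme(total_neurones):
    """Old scheme: besides its local data, every node keeps an entry for every
    neurone in the whole network, so memory grows with total network size."""
    return LOCAL_BYTES + total_neurones * BYTES_PER_NEURONE_ENTRY

def new_scheme(total_neurones):
    """New scheme: a node only stores data about the neurones it actually hosts
    and connects to, so its memory stays roughly constant as the network grows."""
    return LOCAL_BYTES + LOCAL_NEURONES * BYTES_PER_NEURONE_ENTRY

for total in (1e9, 10e9, 100e9):   # growing towards the ~100 billion neurones of a brain
    print(f"{total:.0e} neurones: old ~{old_scheme(total)/1e9:6.1f} GB/node, "
          f"new ~{new_scheme(total)/1e9:6.1f} GB/node")
```

In this toy model the old scheme blows past the memory of any realistic node as the network approaches brain scale, while the new scheme stays flat, which is the gist of why the reported algorithm matters for exascale machines.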

With all the caveats I mentioned before (that a brain is an experience machine, not a mechanical one), we are moving towards the possibility of simulating a human brain. An interesting question is whether, out of that simulation, we will see the emergence of feelings. That supercomputer will be able to see and recognise a face as we do, but will it fall in love with it?

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.