When computers started to become part of our imagination, some 60-70 years ago, people called them “electronic brains”, and the next step was to associate the transistor with the neurone. Physiologists explained that a neurone works like a switch: upon receiving signals it gets excited and then it “fires”, stimulating other neurones. The excitation is a function of the number of signals received and of the time elapsed since the last “firing”, so a bit more complex than a transistor (the latter “fires” when a certain input threshold is reached, and the “firing” intensity is proportional to the input signal).
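This threshold-and-refractory behaviour can be sketched with a toy “leaky integrate-and-fire” model, a standard textbook simplification of the physiologists’ description. All parameter values below are illustrative assumptions, not measured properties of real neurones.

```python
# Toy leaky integrate-and-fire neurone: the membrane potential integrates
# incoming signals, leaks over time, and the neurone "fires" when a
# threshold is reached, followed by a refractory period during which it
# cannot fire again. All constants are illustrative assumptions.

def simulate(inputs, threshold=1.0, leak=0.9, refractory=3):
    """Return the list of time steps at which the neurone fires."""
    potential = 0.0
    cooldown = 0          # steps remaining in the refractory period
    spikes = []
    for t, signal in enumerate(inputs):
        if cooldown > 0:                        # cannot fire yet
            cooldown -= 1
            continue
        potential = potential * leak + signal   # leak, then integrate
        if potential >= threshold:              # threshold reached: fire
            spikes.append(t)
            potential = 0.0                     # reset after firing
            cooldown = refractory
    return spikes

if __name__ == "__main__":
    # A steady weak input only triggers a spike after enough accumulation,
    # then the refractory period forces a pause before the next one.
    print(simulate([0.3] * 20))   # → [3, 10, 17]
```

Even this crude sketch already shows why a neurone is more than a transistor: its output depends on the history of its inputs and on when it last fired, not just on the instantaneous input level.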
Actually, the “memory” that characterises the neurone’s reaction would be best mirrored by the memristor, but that device was discovered later.
What became clear as we learnt more about the actual workings of the neurone is that the situation was far more complex and could not be mirrored by a single transistor. It would take several transistors and quite a tricky circuit to mirror a neurone’s function (that is, to produce a behaviour corresponding to that of the neurone). Software, specifically neural networks, changed the landscape.
Neural networks, and more recently deep neural networks (which comprise several layers of processing, each one feeding the next AND providing feedback to the previous layer, influencing its further processing), were created in an attempt to mimic the workings of the brain, and we now have software that can learn and recognise patterns (sounds, images…), all characteristics of a brain.
The question is: how precise is the mirroring or, in other words, does a deep neural network respond to input signals in the same way that a neurone would? Of course the answer depends on the sophistication of the neural network, and researchers have been at work to find out how complex a deep neural network needs to be to respond like a neurone. Using scientific parlance: how computationally complex is a single neurone?
That is the title of an article published in Quanta Magazine reporting the results of a study by researchers at the Hebrew University of Jerusalem. They trained a deep neural network to mimic the responses of a single neurone and discovered that an artificial deep neural network needs between 5 and 8 layers to mimic a single neurone’s computational behaviour/capability. That is far more complex than previously thought, when a 3-to-4-layer structure was deemed sufficient.
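To give a sense of what “between 5 and 8 layers” means structurally, here is a minimal sketch in plain NumPy of a feedforward network with 7 hidden layers mapping many synaptic inputs to a single output. This is not the researchers’ actual model (their architecture and training setup were more elaborate); layer sizes, the ReLU activation, and the random weights are all illustrative assumptions.

```python
# Minimal sketch of a deep feedforward network with 7 hidden layers --
# within the 5-to-8-layer range the study found necessary to mimic a
# single neurone. Sizes, activation, and weights are illustrative
# assumptions; this is not the study's actual architecture.
import numpy as np

rng = np.random.default_rng(seed=0)

def make_network(sizes):
    """Random weight matrix and zero bias for each consecutive layer pair."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(net, x):
    """Propagate input x through every layer, with ReLU on hidden layers."""
    for i, (w, b) in enumerate(net):
        x = x @ w + b
        if i < len(net) - 1:          # no activation on the output layer
            x = np.maximum(x, 0.0)    # ReLU
    return x

if __name__ == "__main__":
    # 128 synaptic inputs -> 7 hidden layers of 64 units -> 1 output
    sizes = [128] + [64] * 7 + [1]
    net = make_network(sizes)
    out = forward(net, rng.standard_normal(128))
    print(out.shape)   # → (1,)
```

The point of the sketch is simply scale: replacing one biological neurone takes a whole stack of layers, each itself made of many artificial “neurones”.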
This provides new insight into the complexity of our neurones, and even more into the complexity of neural circuits in the brain, not to mention the complexity of the brain as a whole, considering that it is more than billions of neurones: it also includes astrocytes (roughly estimated to be as numerous as neurones; for the latest “count” look here), plus billions of molecules floating in the brain that influence the computation of neurones and the transmission of signals among them.
The real story of the way brains work has still to be written. The more we discover, the clearer it becomes that Nature’s 2 billion years of random attempts at reacting smartly to the environment have led to far more sophisticated systems than the ones we have managed to create over these last 70 years, and that’s understandable!