For several decades, discussion has raged over the possibility of creating intelligence by “duplicating” the brain. Proponents assert that the brain’s connectivity structure gives rise to intelligence; hence, if we were able to “copy” this connectivity, we could create an “intelligent” machine. Out of this hypothesis stems the “Connectome” project, which aims to map the whole connectivity structure of the human brain. It is worth looking at their website, at the very minimum to see the many images representing parts of the connectivity structure of a brain. I periodically browse their site, and every time I am amazed by the structure we have in our brain. Both the technology used to identify the connectivity structure (watch the clip) and the resulting images are amazing.
The discussion rages because many others claim that purely replicating that structure will not automatically create “intelligence”. Besides, each brain is unique: connections differ from one person to another, yet all these “brains” support intelligence. A few others, more pragmatically, point out that even if the Connectome hypothesis is true, the structure’s complexity is beyond our capability to duplicate, hence it is a useless endeavour.
Now, for the first time, there is evidence that structure is indeed a fundamental component of a “machine’s” capability to generate intelligence (the brain being a machine, when seen from a mechanistic perspective).
Researchers at McGill University have shown that a neuromorphic neural network (one modelled on the real brain) can perform cognitive tasks. The researchers copied a (small) part of human brain connectivity to shape the structure of an artificial neural network (ANN). This ANN was then trained to perform cognitive tasks. (Neural networks need to be trained: basically, you provide them with an input and “tell” them how good the result of their processing was; based on this feedback, the neural network “learns” to perform better and better. This training process is long and costly.)
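To make the “train by feedback” idea concrete, here is a toy sketch of that loop, not the actual method used in the study: a tiny one-layer network is shown inputs, its answers are compared to the desired ones, and its weights are nudged from the error signal until it performs well. All the sizes and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))           # 100 example inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                          # the answers we want the network to learn

w = np.zeros(4)                         # the network starts knowing nothing
lr = 0.1                                # learning rate: how big each nudge is
for _ in range(200):                    # many passes over the examples
    pred = X @ w                        # the network's current answer
    error = pred - y                    # the feedback: how wrong was it?
    w -= lr * (X.T @ error) / len(X)    # adjust weights based on the feedback

print(np.round(w, 2))                   # weights have converged toward true_w
```

The point of the “long and costly” remark is visible even here: the loop has to see the data many times before the weights settle, and real networks have millions of weights rather than four.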
What the researchers showed is that an ANN structured like our brain’s connectivity performs better, i.e. it can learn faster. In a way, they managed to merge the connectome and machine-learning approaches, with very promising results. Connectomics is static, it is about connection structure; machine learning is dynamic; and both are needed (besides, it has been shown that our brain changes its connections as it learns…).
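One way to picture the merge of the two approaches is a fixed wiring mask combined with learnable connection strengths: the mask (the static, connectome-like part) decides which connections exist at all, while learning (the dynamic part) only tunes the strengths of those connections. The sketch below uses a random mask purely for illustration; the McGill work derived the structure from real human connectome data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                             # toy network of 6 units
mask = (rng.random((n, n)) < 0.3).astype(float)   # 1 = this connection exists
weights = rng.normal(scale=0.1, size=(n, n)) * mask  # strengths, only where wired

def step(activity, weights):
    """One update: signals flow only along existing connections."""
    return np.tanh(weights @ activity)

# During training, weight updates are masked too, so learning can tune
# connection strengths but never rewire the structure itself:
grad = rng.normal(size=(n, n))                    # stand-in for a real gradient
weights -= 0.01 * grad * mask                     # structure stays fixed

new_activity = step(rng.normal(size=n), weights)
print(new_activity.shape)
```

Note the design choice this encodes: the “where” of the connections is frozen (connectomics), and only the “how strongly” changes with experience (machine learning), which is the static/dynamic split described above.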