Just after yesterday’s post I was pointed to an NYT article discussing the possible limits of our brain in terms of storage and processing (that would be computer-geek parlance; in lay terms, its limits in remembering and understanding) and ways to overcome those limits.
The article is titled “How to think outside of your brain”, and it connects directly to some of the points I raised in yesterday’s post: is artificial intelligence a “tool” that complements and extends our brain function?
The first point, of course, would be to determine whether our brain has limitations, and I guess that, as such, it is a moot question! Of course, like anything, a brain has limits in capacity: how much we can store and remember, how fast we can reason and come to a conclusion… We have some scientific data on the processing speed of the elemental parts of the brain: the neurone switching time (to move from one state to another, to become excited, to transmit a chemical/electrical signal…), the number of connections, the time it takes to transmit a signal from one neurone to another… Although these “data” do not really answer the question, they make clear that limits exist.

The point is that brains are all alike in terms of basic components (neurones and nerves/dendrites-axons) but quite different in circuitry: the brain of a professional basketball player takes a fraction of the time (and energy) that my brain would need to decide how to instruct the muscles to launch the ball towards the hoop, thanks to a different circuitry that has evolved through practice and skill. These differences in “acquired circuitry” make it quite difficult to determine the computational limit of a “brain”. On the other hand, given the underlying limitations of its components, we can be sure that a limit exists.
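The neurone and connection counts mentioned above invite a rough back-of-envelope estimate of that limit. Here is a minimal sketch, assuming commonly cited order-of-magnitude figures (roughly 86 billion neurones, on the order of 10,000 synapses each, sustained firing rates of tens to a hundred Hz) — these numbers are my assumptions for illustration, not data from the article:

```python
# Rough order-of-magnitude estimate of the brain's raw "processing" capacity.
# All figures below are commonly cited approximations, not precise measurements.
neurons = 86e9              # ~86 billion neurones
synapses_per_neuron = 1e4   # on the order of 10^4 synaptic connections each
max_firing_rate_hz = 100    # sustained firing rates are typically tens of Hz

total_synapses = neurons * synapses_per_neuron
# Treat each synaptic event as one elementary "operation".
ops_per_second = total_synapses * max_firing_rate_hz

print(f"synapses: ~{total_synapses:.1e}")         # ~8.6e+14
print(f"synaptic events/s: ~{ops_per_second:.1e}")  # ~8.6e+16
```

However crude, the exercise makes the point: whatever the exact figures, the capacity is finite.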
A more interesting question is whether there are “qualitative” limits to a brain, i.e. can brains understand every aspect of reality, or not? This is a much more difficult question, and philosophers have been discussing it for over two thousand years. In the last few centuries the development of scientific reasoning, and the amazing results it has brought, have shifted the balance towards “yes”. However, cosmological questions (what was there before the “big bang” - and even the accepted fact that we may never be able to know) and quantum mechanics defying our “reasoning” have shifted the answer back towards “no”. Notice that I am using these two examples to highlight a difference between not being able to “know” (the cosmological question) and not being able to “understand” (quantum mechanics). The first is due to the impossibility of getting the data, the second to the impossibility of processing them in a meaningful way, i.e. our brain is not able to process those data.
Could we rely on a different “processor” (AI) to find answers that our brain cannot reach because of its quantitative and qualitative limits? And if so, could we trust those answers?
Take quantum mechanics as an example. We know that it works because the equations we use deliver results that match experiments (and predict the results of future experiments). Yet, at least in some respects, we do not understand the “why”. Notice that we would not be able to “run” those quantum mechanics equations without computers that crunch terabytes of data… In this example I am using AI and data crunching both as a way to process a mass of data beyond our capability and as a way to derive results, even though we may not “understand” them. In this case we use a prosthetic to overcome both quantitative and qualitative limits.
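To make the “terabytes of data” point concrete: describing the quantum state of n entangled two-level particles (qubits) requires 2^n complex amplitudes, so the data grows exponentially with system size. A small sketch of that arithmetic (the memory figures assume double-precision complex numbers, my choice for illustration):

```python
# How quickly a quantum state outgrows any memory, human or silicon:
# an n-qubit state needs 2**n complex amplitudes.
BYTES_PER_AMPLITUDE = 16  # one complex number at double precision

for n in (10, 30, 50):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * BYTES_PER_AMPLITUDE
    print(f"{n} qubits: 2^{n} = {amplitudes:.2e} amplitudes, "
          f"~{bytes_needed / 1e12:.2e} TB")
```

At 50 particles the state already needs on the order of ten thousand terabytes: a calculation no unaided brain, and indeed no ordinary computer, can hold in its head.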