The impressive evolution of artificial intelligence, measured by its areas of application and its increasing “smartness”, is mostly the result of increased processing power and increased access to data. This represents, as previously mentioned, a departure from the idea of finding the key to intelligence in the brain and replicating it. Nevertheless, this evolution has been fuelled by mimicking some of the architecture of the brain (through neural networks) and by the use of neuromorphic chips, which provide a more effective hardware underpinning for neural networks and can, to a certain extent, reconfigure themselves at the hardware level. Both have been applied to develop an autonomous learning capability, i.e. machine learning (I am simplifying, maybe over-simplifying, but the goal is to provide a mile-high overview of where we are).

Machine learning is great, and it is getting better. The basic idea is that rather than writing software directly (rules, as in Prolog, or tree data structures, as in Lisp), one can write software that a machine uses to learn on its own, provided it has access to plenty of data and interactions (these are essential to self-evaluate the progress made). Indeed, as more and more data become available, both in databases (like historical records of games played by professional players of Chess, Go, ...) and through sensors (like data derived from computer vision), machine learning has a good starting point. On these data the software starts by applying very coarse inferences and evaluating the results. By changing the inference rules and focussing on the best results, the software learns, not very differently (as seen from the outside) from what happens with babies, children and, less so, grown-ups.
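To make the "apply a coarse inference, evaluate, keep what improves" loop concrete, here is a deliberately tiny sketch. The data set, the threshold "rule" and the numbers are all made up for illustration; real machine learning systems adjust millions of parameters with far more sophisticated methods, but the shape of the loop is the same: propose a change, self-evaluate against the data, keep only improvements.

```python
import random

# Toy training data: hours of study -> exam passed (1) or failed (0).
# The "inference rule" to be learned is a single threshold on the input.
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]

def accuracy(threshold):
    """Self-evaluation: how often does the current rule match the data?"""
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

random.seed(0)                 # reproducible run
threshold = 0.0                # start from a very coarse guess
best = accuracy(threshold)

for _ in range(1000):
    candidate = threshold + random.uniform(-2, 2)  # change the rule a little
    score = accuracy(candidate)
    if score > best:                               # keep only improvements
        threshold, best = candidate, score

print(f"learned threshold: {threshold:.2f}, accuracy: {best:.0%}")
```

After a few hundred blind trials the threshold settles between the failing and passing examples, with no human ever writing the rule explicitly: the rule was found by evaluation against data.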
The advantage of machine learning over human learning lies in processing speed and in the capability to access huge data sets (hence tapping into more “experience”, and/or gaining experience faster). Notice that a piece of software can clone itself as many times as needed and start parallel computations. Here the availability of multi-core and multi-processor systems gives a significant boost.
Is machine learning cheap? Well, not exactly! A recent article in MIT Technology Review (June 6th, 2019) points out that training a single AI model can lead to CO2 emissions equivalent to the total emissions of five cars throughout their life cycles.
I am mentioning this to stress how the progress we are seeing in AI today is fuelled by the increased capability in number crunching and data access, all of which requires power. It is also a sobering reality check on how lousy we are at creating intelligence when comparing artificial intelligence with the brain. Our grey machine is so much more efficient at making sense of the world and sparking intelligence!