Artificial Intelligence has seen a number of cycles of enthusiasm followed by disillusion, with a major peak in interest in the sixties and a major trough in the eighties. Since the turn of the century, interest in AI has kept growing, fuelled by an approach that leverages massive data availability and massive processing capabilities. Over the last five years interest has been growing fast thanks to the practical application of AI.
The 2018 Artificial Intelligence Index report makes for very interesting reading. It contains several graphs showing the status of AI research and adoption today versus that of ten years ago, and they reveal not just significant growth but also a shift in focus. Most important, one can see that the whole ecosystem is growing in a synergistic way: applications of AI are multiplying, hence more job openings are becoming available, and this pulls more people into AI education.
More students (and professors) in the AI area mean more papers being published, more attention drawn to AI, and more capital fuelling AI start-ups. It is the perfect storm that is now powering the rapid progress of AI.
This is the main difference with respect to the sixties, when AI was a dream that seemed within reach: creating a machine intelligence matching the human one. This proved difficult, and the many avenues explored all led to stumbling blocks; hence the disillusion of the eighties. Now we are seeing a renaissance of interest in AI, driven no longer by a dream but by concrete applications.
We are not, yet, at the point of a machine matching human intelligence, although in some areas machines are better than humans, including areas like image recognition that once seemed just too difficult for a machine. They haven’t reached AGI, Artificial General Intelligence, but their intelligence is now good enough to be applied in effective ways to a variety of problems (and areas).
The pervasiveness of AI is what is actually driving its renaissance in this decade, and this is creating an avalanche effect, the perfect storm I mentioned before, leading to an exponential growth that may eventually lead to AGI. I don’t think we will find ourselves at a point in time where we will say: now we have AGI. Rather, we will look back and see that over the span of some ten years machines have become able to do what we do in so many areas that in the end we will recognise they have achieved AGI.
It is a matter of prevalence and coverage rather than the crossing of a line. I am on average able to do what most people are doing, like reading and understanding a book, but of course there will be someone (many, actually) who is better at reading books than I am; I can run about as well as the average person can, but of course there are many who can run longer and faster than me…
It is similar for machine intelligence: when a machine is on a par with our average intelligence we can feel it is on a par with us, but of course there might be a few (many) people smarter in a certain area, just as there might be a few (many) machines smarter in another area.
Although it seems perfectly reasonable to talk about human intelligence, there is no clear boundary inside which there is human intelligence and outside of which there is not. Likewise for machines. So let’s prepare for a time when we will have to face smart as well as stupid machines, and of course debate (along with them) what is actually smart and what is stupid.