Nobody really knows when machine intelligence will surpass human intelligence, but most scientists and researchers agree that it will happen within this century (you are welcome to state your estimate in the comment area of this post).
Part of the difficulty in estimating a timeframe is that “intelligence” is not well defined. Actually, most of the time the answer to “what is intelligence?” turns out to be a list of types of intelligence. Howard Gardner, an American developmental psychologist, beginning in 1983 described nine types: naturalist (nature smart), musical (sound smart), logical/mathematical (number/reasoning smart), existential (life smart), interpersonal (people smart), bodily/kinesthetic (body smart), intrapersonal (self smart), spatial (picture smart), and linguistic (word smart). Which probably just complicates the question.
Computers need not acquire all kinds of intelligence (although computers impersonating humans might need them all), and our competition with computers probably does not involve the emotional kinds of intelligence, which may actually prove detrimental in certain situations.
So, if computers really do get smarter than us, what should we do? (Of course, some say we must stop them while we still can… but we can’t!)
If you can’t beat them, join them!
This is basically the strategy I am proposing, and I am not alone. Actually, this might be the way to ensure humans will keep the upper hand.
We have got smarter, augmented by the tools we invented. With computers we have become smarter in the cognitive area as well: over these last few years we have been using them to augment our cognitive capabilities. As they have become more powerful, our cognitive capabilities have improved with them.
In the future we might aim at a symbiotic relation with computers, just as we live in a symbiotic relation with our bacterial colonies. We can eat certain foods because we hand over their digestion to bacteria… Couldn’t we augment our thinking and reasoning capabilities by off-loading some thinking and some reasoning to computers?
In symbiotic systems intelligence becomes an emergent property: it is not “located” in any specific subsystem, even though some subsystems may contribute in specific, recognisable ways.
If we can manage to create a state of symbiosis, rather than one of opposition, with machines, we stand to benefit.
Of course the issue is much more complex. As single individuals we can be in symbiosis with an environment, or with certain machines, but we cannot be in symbiosis with everything. Will there be someone who takes advantage of super intelligence to dominate those who cannot access it?
Besides, super intelligence is a fuzzy space: some super intelligences are going to be superior to others. And how can you measure which one is superior? You can see some immediate effects, but it may be difficult, if not impossible, to understand long-term effects. Imagine a situation where one super intelligence proposes a solution: re-engineering some bacteria to get rid of pollution. That’s great. Then suppose another super intelligence says: no way, that is not good. Maybe it is not good because it would create a chain reaction that eventually turns out to be harmful. But how can we know which one is right, if both are beyond our intelligence level?
We are just starting to explore the various ethical questions along an uncharted path.