Machine over man
Extending our sphere of knowledge by using machines as tools is a grand idea…as long as we’re on the winning side. But are we truly still in command of our intelligent machine creations? If the answer to this is “no,” we may find that we have a lot to lose.
That human jobs are being lost to machines is nothing surprising or new. The revolution in agricultural machine technologies has multiplied yields, making it possible to feed 7 billion people better, on average, than 1 billion people were fed two centuries ago. There’s no going back now – if we were to turn off the machines that produce fertilizers and insecticides, and that make it possible for one person to do the harvesting work of 10, the penalties would be famine and, quite possibly, mass starvation.
If robots have eaten human jobs like candy, then autonomous vehicles could be like setting a glutton loose at an all-you-can-eat buffet. From this perspective, we’re losing to machines, because they perform better than we do at certain things. The digital transformation promises to be even worse. During the late industrial revolution, when machines stepped in to take our jobs, it was all about automation. With the digital revolution, jobs aren’t being replaced by machines; they’re simply disappearing. There’s no longer a need for automated paper shuffling because there’s no paper anymore.
Sure, AI can replace a few brains; more critically, it makes those brains redundant. Think about self-driving, autonomous vehicles. You no longer need operators – which equates to the potential loss of 3 million or more driving jobs in the U.S. alone – but you also no longer require as many vehicles. Today, we buy a car and then leave it parked along the sidewalk or in a garage 90 percent of the time. With their commoditization, a mere 40 percent of today’s vehicles would be ample to satisfy all of our transportation needs. This translates into reduced manufacturing demand, along with fewer red lights, traffic signs, and police.
A world permeated by machines performing in the knowledge space – a world customized for machines – is quite different from the one we know now. The corollary is that we lose along several fronts, and they win.
But hang on a sec – it’s clear that in such a world we would lose…but doesn’t winning require some sort of awareness or sentience? Would a machine ever be motivated by winning to pursue a strategy that would allow it to do so? It might seem far-fetched, but we’re now seeing the first examples of this motivation leading machines to devise winning strategies on their own.
For example, DeepMind’s AlphaGo did just that, leading to its defeat of world Go champion Lee Sedol. I am also fairly certain that militaries worldwide are building smart weapon systems capable of pursuing their (assigned) goals using strategies of their own making. Trading bots in the financial markets are likewise finding new means of meeting their “programmed” targets.
Now, we can take some comfort from the “programmed” part of this narrative; however, notice that I put “programmed” in quotation marks. How long can we trust an intelligent autonomous system to play within human-programmed boundaries? This isn’t just about software bugs or hacking. The very concept of “autonomous and intelligent” means these machines are taking on lives of their own. And in many areas, a fully performing autonomous system needs leeway to wholly exploit its intelligence and logic capabilities.
It’s a sort of catch-22: the more intelligence and autonomy you allow a machine, the better it will perform and the more useful it will be. But the more leeway you grant, the less control you have.
As noted at the start of this piece, we’ve already reached a turning point where humans alone are no longer able to extract all potential value from cyberspace; we can do that only through the use of machines. And for machines to be able to do that, they need to be, well, better than us. So we’re already in that catch-22 situation.
Man and machines together
So far, the division between humans and machines has been clear – I’m here, the machine is there – but that boundary is getting fuzzier. Smart prosthetics fuse seamlessly with our bodies, making up for lost limbs or providing additional strength, stability, or resilience, as seen in exoskeletons donned by assembly line workers.
We use our smartphones symbiotically, but what if they were integrated directly into our bodies? Think a smartphone in the form of a contact lens capable of transparently delivering augmented reality images straight to the brain. Think it sounds like science fiction? Think again. The first prototypes have already been built.
Soon, brain-computer interfaces could become seamless as well, creating a new synergistic relationship between the cloud and us. At that point, the question of “who knows what” would be moot: you ask me a question and I know the answer. Sometimes that answer will be stored in my own neural circuitry, but most of the time it will come from the connection of my neurons to the web.
Of course, the real problem is not where the knowledge is stored, as long as it is seamlessly accessible. The real problem lies in where the decision-making process takes place! The answer is complex; it’s already an issue today and, truthfully, it has been an issue for centuries.
Think about it. Our brains’ decision processes are shaped by the way they have been “educated” by their cultural context. These external factors influence our decisions to the point that, in certain situations, we can legitimately claim the influence has been so strong that our brains can’t be held accountable for the choices made.
The point I’m trying to make is that we humans are in symbiosis with our cultural environment, and the tools – both physical and conceptual – that we have been taught to use.
In a way, it will be no different in the coming decades. Our context will change, becoming permeated by intelligent machines. Much as we do today with our fellow humans, we’ll have to contend with and negotiate our decisions with these smarter mechanized constructs. And just as with humans, there will be advantages and disadvantages alike.
My guess is that the transformation will be subtle. We’ll neither realize it is happening, nor that it has happened. How deeply do we comprehend that our decision to buy a certain brand of toothpaste was influenced by a commercial that we saw a month ago and have since forgotten?
This isn’t a win or lose situation. We’re going to wind up as a partner to our smarter machines, and that partnership will be fostered by our augmentation through technology. Machines will play an essential role in this augmentation and, as with any successful technology, they will fall below our level of perception. In the end, the revolution will be silent and invisible.
- Philosophers have been debating “free will” for millennia. In the future, the debate will have to take machines into consideration as well, specifically the ones in symbiosis with us.
- As intelligence is likely to be shared (in a way, we are already facing this problem as we confront fake news), what happens to responsibility and accountability? When machines are involved in a decision process today (e.g., landing an aircraft under IFR) and it fails, we look for the human responsibility: who checked the autopilot last, who designed it, who maintained the radio beacon forming the glide path… Where are we going to look when machines are “self-designed”? The human factor may vanish from the equation.
- Emotions are a human characteristic, or so we say. Yet we know that our emotions are conditioned, sometimes even created, by chemicals, to the point of being outside our control. Deep Brain Stimulation may become, along with other technologies yet to be invented, a way for machines to become symbiotic with us. At that point, would we fall in love as the result of a machine deciding it is the right thing to do?
- Transhumanism, when it happens, will leverage human-machine symbiosis. Part of this symbiosis may occur before the birth of human 2.0, as a result of the redesign of the genome by a machine – possibly the most effective way of creating a human-machine symbiosis since, in a way, those humans will be partly machine as well.