Technology is neutral: it is neither good nor bad. This has become a sort of mantra for scientists and researchers, allowing them to explore any avenue and leave it to others to decide how the results of their endeavours are used.
Indeed, anything I can think of has some upside and some downside. The point is to:
- foresee what the downsides might be (this is far more complicated than it may look at first glance)
- create a context in which those downsides can be controlled and avoided
It is often said that whoever invented the boat also invented the shipwreck and the castaway. In a way that is true. At the same time, we are better off for the invention of the boat.
Today we are creating “stuff” whose potential downsides are more difficult to foresee, and even more difficult to control. I am not talking about nuclear power or weaponry (even though they raise tremendous issues). I am talking about software in general (with a focus on artificial intelligence) and genomic engineering. To a certain extent the two areas present similarities and are becoming more and more intertwined. Scientists are using lots of software, machine learning and AI, to sequence the genome and simulate the effects of variations in the genotype on the phenotype. Like artificial intelligence, the genome takes on a life of its own, and it is difficult to predict where it will take us.
And, as technology gets better, the potential issues get worse.
OpenAI has created a text generator they say is so good that its output could pass for text written by a person. Now, in general this is good news: think of the many things you could do with such a program. You could attach it to a bot roaming the internet, reading articles in many languages from many newspapers and then preparing a summary that synthesises the gist. Rather than writing a document, you might speak to this program, providing it with the basics of the content you want to communicate, and then let the program write the document in polished prose. I should start right away with this blog, actually!
That program could be used to write brand new books, tailored just for you, based on the type of story you like or expect. That would be a revolution in book authoring.
At the same time, and this is the concern of OpenAI, that program could create biased articles and documents, filling them with fake (but credible) news to make a point. Today there is the feeling that, for whatever you read, there is a person accountable (which does not necessarily mean that such a person can be identified and held accountable). With artificially generated text it gets much more difficult to tie it to a source.
The problem, as I see it, is that even if OpenAI keeps their text generator under control and does not release it, AI will continue to progress, and sooner or later we will find smart text generators inside the Word or Pages applications we use, just as we have a spelling checker embedded in them today. We will probably love the possibility of writing a synopsis or jotting down a few points and seeing the application convert that into a document. At the same time, someone else might twist this nice feature into a fake news generator, and we will probably get more “engaging” scam mail with more credible schemes for making (actually losing) money…
As I mentioned, I see a parallel between AI and genomic engineering. Today I feel the former is trickier, since it has a lower barrier to entry. Being able to leverage a “second brain” might be a blessing, but it may turn out to be a curse.