Aaron Elster was a Holocaust survivor. He had a concern, expressed in several interviews, that once he and all the others who lived through the Holocaust were gone, the memory of that tragedy would fade away. Yes, it is engraved in books, but it is one thing to read about it in a book, quite another to talk to a person who lived through that nightmare. Now 75 years have gone by since the end of World War 2 and very few witnesses are left. In a few years there will be none. Last week Aaron was interviewed by Lesley Stahl on her 60 Minutes show. Fact is, Aaron passed away almost two years ago, on April 12th, 2018.
The Aaron who participated in Lesley's show and answered her questions was a holographic image of Aaron, created several years ago to interact with visitors to the Illinois Holocaust Museum. At that time Aaron answered some 2,000 questions on his life, his beliefs and, of course, his experience. Those answers can now be heard, and seen, by visitors to the museum (watch the clip).
To host Aaron on her show, Lesley used an AI program that could interpret her questions and search the available answers for the ones that fit best.
I had the experience of a similar application when I was at the Future Center in Venice, where we showcased a virtual Einstein. The application drew on thousands of recorded phrases that Einstein spoke or wrote during his life. By interpreting the visitor's question, the application was able to return, most of the time, a sentence that Einstein actually said, possibly in quite a different context. The answer was delivered by a black and white image of Einstein's face, animated to match the sentence. The effect, I should say, was quite impressive as well.
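The core mechanism in both applications is the same: match an incoming question against the prompts of pre-recorded answers and play back the closest one. A minimal sketch of that idea, using a toy bag-of-words cosine similarity (the real systems rely on far more sophisticated language understanding, and all names and data below are hypothetical):

```python
# Toy sketch: pick the pre-recorded answer whose prompt best matches a question.
# Bag-of-words cosine similarity stands in for real language understanding.
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_answer(question: str, recorded: dict) -> str:
    """Return the recorded answer whose prompt scores highest against the question."""
    q = Counter(question.lower().split())
    scored = [(cosine_similarity(q, Counter(prompt.lower().split())), answer)
              for prompt, answer in recorded.items()]
    return max(scored)[0:2][1]

# Hypothetical fragment of a question/answer archive.
recorded = {
    "where were you born": "I was born in Sokolow Podlaski, Poland.",
    "how did you survive the war": "I hid in an attic for almost two years.",
}
print(best_answer("Where were you born?", recorded))
# → I was born in Sokolow Podlaski, Poland.
```

The interesting engineering is in making the match robust to paraphrase, so that a question asked in words the interviewee never recorded still finds the right clip.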
Technology, and AI in particular, makes it possible to outlive our body. Progress in AI can create a truly credible version of ourselves in terms of interaction (the visual part is covered by holography or similar technologies). This is both exciting and scary. It is exciting because we can leave part of ourselves to the world (starting with relatives and friends, and extending to working teams). It is scary because this disembodiment generates many questions and ethical issues:
- is that really me who will be interacting?
- what if that “avatar” of myself is going to say something that I would have never said?
- the “avatar” will start to diverge from “me” as soon as it starts interacting with other people, since it will accrue experiences I never had; over time its “database” will diverge from mine…
- should the “avatar” be grumpy, as I sometimes am, or should it always be nice?
What is happening is an acceleration: visualisation tech is generating ever more credible images, and AI is managing the interaction in a way that is progressively indistinguishable from interacting with a real human (the Turing test fulfilled).
The advent of personal digital twins and their absorption in the digital thread of our life experiences will make this ever more “credible”, and because of that ever more ethically challenging.