
Post-Pandemic Scenarios – VII


The Vuzix M400 add-on transforms normal glasses into smart glasses to support a variety of application fields, such as telemedicine: a practitioner or nurse at the patient's home, guided remotely by a doctor. Image credit: Vuzix

Senses Augmentation

We already have a number of “tools” that let us explore the “reality” out there beyond the possibilities offered by our senses. Think of the telescopes that have opened up the vision of other galaxies, or, even better, of the radio telescopes that have allowed us to look at the cosmos through electromagnetic radiation beyond the detection capabilities of our senses.

Of course, these tools are not for every person, nor for everyday use. More recently, cameras detecting infrared wavelengths, and goggles based on them, have augmented our vision (although they are designed, and used, for very specific applications, like firefighting).

A variety of augmented reality glasses is available, allowing digital information to be overlaid on the environment perceived by our eyes. They usually come with a microphone and earphones to extend our aural perception, or to feed information through sound.

The FTI’s report foresees a transition

from hands-on to heads-up mobile computing

in other words, the fading away of the smartphone, to be replaced by other devices that will seamlessly interact with our senses without requiring our “attention”. Communication and awareness will be achieved through the augmentation of our senses. That is quite a statement!

Interestingly, the report makes a point of suggesting that the transition has already begun in the form of smart glasses that provide augmented “sound”, not vision. It is referring to the Amazon Echo Frames, released on the market at the end of 2020. These glasses are sold with non-prescription plastic lenses that can easily be replaced with prescription ones from any optometrist. They embed microphones and speakers, keeping you seamlessly in touch with Alexa. It is just a tiny step towards augmentation, but an interesting one precisely because it is achieved seamlessly.

In this decade, according to the report, we will see an increase in device performance, with smart eyeglasses becoming better and better at displaying artificial images overlaid on the environment (AI will play a crucial role in this area), whilst head-mounted displays (HMDs) will become lighter, easier to wear and easier to adapt to (today they can cause dizziness…). This might result in a convergence of the two by the end of this decade. Several players are chasing smart eyeglasses as the future replacement for smartphones; Apple has been rumoured to be one of them.

Again, let me stress that the keyword is “seamless”: these devices need to be seamless in:

  • wearing – you should not perceive them
  • cost – comparable to that of a (good) smartphone
  • intrusiveness – other people should not see you as a cyborg

These devices create, or let the wearer enter, a new reality. This is an issue per se: our social life rests on the assumption that we are all living in, and perceiving, the same reality; this is what makes social life possible. We already see the effect on social interaction today: our culture, roots, education (you name it) have a profound impact on our interpretation of reality. Now imagine interaction when the perceived reality is mediated by devices. There will be people using them and people who will not; among those using them the software will differ, and, more than that, it is likely that the software (possibly mediated by each person’s digital twin) will create different realities, each better fitting a specific person, but at the same time creating a perception different from the one created in another person.

This evolution is another example of a double-edged sword: lots of good possibilities along with lots of new issues. However, another interesting point raised in the report is the possibility to see (perceive) the world through the eyes of other species. Augmented glasses and software could be set to capture the same wavelengths as a bee, to visualise sound waves (like a bat) or electromagnetic fields (like a shark), or more simply a subset of our eyes’ capabilities, to see through the eyes of our pet… According to the report, this will bring a new form of empathy, letting us experience the world by putting ourselves in “the shoes” of a different species.
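As a toy illustration of how such cross-species perception might be simulated, here is a minimal sketch (my own, not from the report): it linearly remaps a wavelength from another species’ visible band into the human-visible band, so that, say, ultraviolet light a bee can see lands at the violet end of what we can see. The band limits and the linear mapping are illustrative assumptions, not how any real product works.

```python
def remap_wavelength(wl_nm, src=(300.0, 650.0), dst=(380.0, 700.0)):
    """Linearly remap a wavelength (in nm) from a source species'
    visible band into the human-visible band.
    Illustrative assumption: bees ~300-650 nm, humans ~380-700 nm."""
    s_lo, s_hi = src
    d_lo, d_hi = dst
    t = (wl_nm - s_lo) / (s_hi - s_lo)  # 0..1 position within source band
    return d_lo + t * (d_hi - d_lo)

# A 300 nm UV wavelength (invisible to us) maps to the violet end:
print(remap_wavelength(300.0))  # 380.0
```

A real renderer would of course work on whole multispectral images, channel by channel, but the idea of “compressing” an alien spectral band into our own is the same.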

It is not just augmented glasses or augmented hearing: holograms have made significant progress, with the possibility of seeing a life-size person using a phone-booth-like setting (watch the second clip, it is impressive!). It will take a few more years for hologram tech to become affordable and, here again, seamless.

“Spatial display” is another technology that is emerging. Sony is pioneering it: it can provide a really good 3D spatial perception. The drawback is that, in order to deliver a stereoscopic image, the system needs to track the viewer’s eyes; hence it can serve only one viewer at a time.

We are already used to panorama videos, videos in which you can look around by moving your mouse, or just your phone (which picks up spatial information from the embedded gyroscope). 360° videos will become much more common as 360° video cameras become affordable and as easy to use as our smartphone cameras. Whilst today special optics are required, in the coming years normal wide-angle lenses placed at the four edges of the smartphone (we already have two of them) will be used, with software taking care of stitching the four streams into a single 360° video stream. In addition, volumetric video will become affordable (also thanks to LIDAR and AI) and will be combined with 360° video to deliver an immersive experience: in a 360° video you can look around you; in a volumetric video you can move around an object in the video to see it from different perspectives. Expect these processing-intensive technologies to be in your hands by the end of this decade.
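The “look around with your phone” part is conceptually simple: 360° video is commonly stored as an equirectangular frame (longitude mapped to the horizontal axis, latitude to the vertical one), and the gyroscope’s yaw and pitch select which part of that frame to show. The sketch below (an illustrative assumption, not any particular player’s code) maps an orientation to the pixel at the centre of the viewport:

```python
def view_center_pixel(yaw_deg, pitch_deg, width, height):
    """Map a device orientation (yaw: -180..180 deg, pitch: -90..90 deg)
    to the centre pixel of the viewport in an equirectangular 360 frame.
    Yaw 0 / pitch 0 = looking straight ahead at the horizon."""
    x = (yaw_deg + 180.0) / 360.0 * (width - 1)   # longitude -> column
    y = (90.0 - pitch_deg) / 180.0 * (height - 1)  # latitude -> row
    return round(x), round(y)

# Looking straight ahead in a 3840x1920 equirectangular frame:
print(view_center_pixel(0.0, 0.0, 3840, 1920))  # (1920, 960)
```

A real player then warps a rectangular window around that centre to undo the projection’s distortion, but the orientation-to-frame mapping is the heart of it.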

According to the report, by the end of this decade we can expect a significant portion of web content to become available in AR and VR form. This will transform our web experience from “looking at it” into “becoming part of it”. Spatial audio, relating a sound to both its origin and your position, will further improve the perception of “being there”. This is what is starting to be known as life in a metaverse: digital reality becoming reality.
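To make “relating a sound to its origin and your position” concrete, here is a deliberately simplified sketch: it derives a left/right gain pair from the source’s position relative to the listener, combining inverse-distance attenuation with constant-power panning. Production spatial audio uses head-related transfer functions (HRTFs) and head tracking; the function below is only an illustrative assumption.

```python
import math

def spatial_gains(listener_xy, source_xy, ref_dist=1.0):
    """Toy spatial-audio model: inverse-distance attenuation plus
    constant-power left/right panning from the horizontal angle of
    the source relative to the listener (facing +y).
    Returns (left_gain, right_gain)."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    atten = ref_dist / max(dist, ref_dist)  # 1.0 at/inside ref distance
    angle = math.atan2(dx, dy)              # 0 = straight ahead
    pan = angle / math.pi                   # -1 (left) .. +1 (right)
    left = atten * math.cos((pan + 1.0) * math.pi / 4.0)
    right = atten * math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right

# A source two metres straight ahead: equal gains, halved by distance.
print(spatial_gains((0.0, 0.0), (0.0, 2.0)))
```

Constant-power panning keeps left² + right² constant as a sound moves across the stereo field, which is why it uses cosine/sine rather than a linear fade.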

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines, and 14 books.