
AR now understands 3D objects

The cat in these photos is an artefact, a digital cat. In the first photo the cat is “pasted” onto the top layer of the image because, so far, AR did not understand depth. No longer: in the second photo the cat, still an artefact, is partly hidden behind the couch. Image credit: Google

Current software libraries for AR, the ones programmers use to speed up their work, support the creation of a layer where one can insert artefacts, digital entities, overlaying the real objects captured, for example, by the camera of your smartphone.

You point your smartphone at a building and you see words overlaid on it indicating the kinds of shops you can find there; point it at a restaurant menu in Japanese and your smartphone overlays the English translation. In many situations this works just fine. In others… no!

To transform the overlay of artefacts into actually placing them in the real world, you need (the software needs) to understand the real world in 3D. As in the photos above, the software needs to understand that there is a couch, a 3D object that can hide part of the cat. Indeed, if you look at the two photos, the first one feels strange: the cat looks flat, maybe jumping off the couch. The second one, on the contrary, feels natural: the cat is partly hidden behind the couch.
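To make the idea concrete, here is a minimal sketch, in Kotlin with illustrative names of my own, of the test that depth-aware rendering performs for every pixel: draw the artefact's pixel only if the artefact is closer to the camera than the real surface measured at that pixel. Real AR engines run this comparison per-fragment on the GPU; this only shows the logic.

```kotlin
/** Depth of the real scene at each pixel, in metres (row-major). Illustrative type. */
typealias DepthMap = Array<FloatArray>

/**
 * The core occlusion test: a virtual pixel is visible only when the
 * artefact is closer to the camera than the real surface behind it.
 * At a pixel covering the couch, the cat (further away) is culled.
 */
fun isVirtualPixelVisible(
    realDepth: DepthMap,
    x: Int,
    y: Int,
    artefactDepthMetres: Float
): Boolean = artefactDepthMetres < realDepth[y][x]
```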

This is now possible thanks to new software released by Google as part of their ARCore augmented reality platform, called the ARCore Depth API. You can see a few animations explaining the technology in the clip below.
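For the programmers among you, opting in is a small change. The sketch below shows the ARCore calls involved (checking device support, enabling depth, acquiring the per-frame depth image); the app scaffolding around it, session creation and the render loop, is omitted, and the exact method names may have evolved in later SDK releases.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

fun enableDepthIfSupported(session: Session) {
    val config = Config(session)
    // Not every phone supports the Depth API; check before enabling it.
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

fun sampleDepth(frame: Frame) {
    try {
        // A DEPTH16 image: each 16-bit pixel encodes the distance to the
        // nearest real surface in millimetres, used for occlusion tests.
        frame.acquireDepthImage().use { depthImage ->
            val buffer = depthImage.planes[0].buffer
            // ... compare per-pixel depths against the artefact's depth ...
        }
    } catch (e: NotYetAvailableException) {
        // Depth needs a few frames of camera motion before it is ready.
    }
}
```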

In order to place an artefact in a way that makes sense, i.e. one that takes into account the physical objects in the real world, the software has to identify those objects and create a 3D map of their positions. This is quite complex. I haven't had the opportunity to play with the new software, so, as an example, I do not know whether it can detect object transparency, like a glass vase, and alter the artefact behind the vase to take its optical properties into account; nor do I know whether it understands that you cannot squeeze a cat between a couch and the wall if the space between the two is just 2 cm. I suspect that real-life situations make the interplay of artefacts with the layout of existing objects quite complex (like managing shadows and reflections that lead to a change in the appearance of the artefact).
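Just to illustrate the kind of reasoning I mean with the couch-and-wall example, here is a toy check, entirely of my own invention and not something the Depth API is documented to do: given estimated distances to the wall and to the couch front, refuse to place an artefact that is thicker than the free gap.

```kotlin
/** Depth readings in metres, e.g. sampled from the depth image. Hypothetical. */
data class SceneDepths(val wallMetres: Float, val couchFrontMetres: Float)

/**
 * Hypothetical clearance test: the artefact fits behind the couch only
 * if the gap between couch and wall is at least as deep as the artefact.
 */
fun fitsBehindCouch(
    scene: SceneDepths,
    couchThicknessMetres: Float,  // would itself have to be estimated
    artefactDepthMetres: Float    // how "thick" the virtual cat is
): Boolean {
    val gap = scene.wallMetres - scene.couchFrontMetres - couchThicknessMetres
    return gap >= artefactDepthMetres
}

// A 2 cm gap cannot host a 40 cm-deep cat:
// fitsBehindCouch(SceneDepths(2.5f, 2.0f), 0.48f, 0.40f) == false
```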

Nevertheless, this is a quite significant evolution, bringing AR a bit closer to being perceived as a seamless part of reality.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.