How Google’s ARCore Depth API Tackles Occlusion with a Single Camera

AR · Artificial Intelligence · Apr 16, 2020

Google has pushed augmented reality a step further in perceiving 3D space and delivering convincing AR experiences without dedicated hardware such as depth-sensing cameras: Google’s ARCore [1] Depth API can create depth maps with just a single camera.

For those who don’t know, ARCore is Google’s platform for augmented reality: a runtime plus an SDK for Android and iOS. With it, developers get positional tracking, surface detection, and various other estimations out of the box. As you move a phone with a single camera, the Depth API [2] captures multiple images of your surroundings and compares them to estimate a distance for every pixel, building a 3D structure of the scene that makes it possible to place virtual objects convincingly into the current frame.

ARCore visualizations
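To make this concrete, here is a minimal Kotlin sketch of opting into depth with the ARCore SDK. `Config.DepthMode.AUTOMATIC` and `Frame.acquireDepthImage()` are the SDK’s own names; the function names and error handling are our illustration, and the snippet assumes a device that supports depth (a capability check appears later in the article).

```kotlin
import android.media.Image
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Opt the session into depth-from-motion. AUTOMATIC works with a single
// RGB camera; this sketch assumes the device supports depth.
fun enableDepth(session: Session) {
    val config = session.config
    config.depthMode = Config.DepthMode.AUTOMATIC
    session.configure(config)
}

// Each tracked frame can then expose a depth map as an android.media.Image
// in DEPTH16 format: one 16-bit sample per pixel, distance in millimeters.
fun depthImageOrNull(frame: Frame): Image? =
    try {
        frame.acquireDepthImage() // caller must close() the Image when done
    } catch (e: NotYetAvailableException) {
        null // depth needs a few frames of camera motion before it is ready
    }
```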

How the Magic Happens

Google says, “Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera [3]. The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel [5].” A depth map essentially tells you how far every real surface is from the camera, so you can decide whether a virtual object placed in the scene should appear in front of or behind the real objects around it. That decision is exactly what solves the long-standing AR problem of virtual objects unnaturally overlapping real ones.
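As a rough sketch of that near-or-far decision, the snippet below reads one sample out of a DEPTH16 depth image (the format ARCore’s depth images use, with depth in millimeters in the low 13 bits of each 16-bit value) and compares it against a virtual object’s distance. The helper names are ours, for illustration only.

```kotlin
import android.media.Image
import java.nio.ByteOrder

// Read one depth sample from a DEPTH16 image. The low 13 bits of each
// 16-bit value hold depth in millimeters; the top 3 bits are reserved
// for confidence, so mask them off.
fun depthMillimetersAt(depthImage: Image, x: Int, y: Int): Int {
    val plane = depthImage.planes[0]
    val byteIndex = x * plane.pixelStride + y * plane.rowStride
    val buffer = plane.buffer.order(ByteOrder.nativeOrder())
    return buffer.getShort(byteIndex).toInt() and 0x1FFF
}

// Illustrative occlusion test (our naming): hide the virtual object's pixel
// whenever a real surface is closer to the camera than the object is.
fun isVirtualPixelOccluded(depthImage: Image, x: Int, y: Int,
                           virtualDepthMm: Int): Boolean {
    val realDepthMm = depthMillimetersAt(depthImage, x, y)
    return realDepthMm in 1 until virtualDepthMm // 0 means "no estimate"
}
```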

The picture below illustrates the main idea behind the ARCore Depth API. On the left, a cat (a virtual object) is drawn on top of both the sofa and the floor (real-world objects), which could never happen in a real scene. On the right, the Depth API resolves the occlusion correctly, giving the impression of a virtual object that genuinely inhabits the real world.

The projections with real objects

The main features that stand out in the latest implementation are:

  • It requires no special hardware such as depth-sensing cameras or sensors.
  • It works across the wide range of devices that already support ARCore.
  • Devices with specialized cameras or time-of-flight sensors are likely to get better, more accurate results and a richer experience.

Supported Devices

The Depth API does not just support single-camera devices; it also anticipates that time-of-flight [4] (ToF) sensors will enable better, faster, and more dynamic occlusion [6], hiding parts of virtual objects behind real ones in real time. As mentioned before, the Depth API works without conventional depth sensors, and that is the key to unlocking a vast range of computer vision applications. Many of the latest iPhones ship with front-facing depth sensors, and a few recent Samsung devices have started adding them as well; given what these sensors enable, other manufacturers are expected to follow soon.
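In code, that whole spectrum of devices collapses into a single capability check. The sketch below uses the SDK’s `isDepthModeSupported()`; the same call returns true on single-camera phones (depth-from-motion) and on ToF-equipped ones, where the hardware sensor simply sharpens the estimates. The fallback choice is our own illustration.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Session

// One capability check covers single-camera and ToF-equipped devices
// alike; the DISABLED fallback below is our own illustration.
fun chooseDepthMode(session: Session): Config.DepthMode =
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC))
        Config.DepthMode.AUTOMATIC
    else
        Config.DepthMode.DISABLED // plain AR without occlusion
```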

Practical Applications

Google says, “We will begin making occlusion available in Scene Viewer, the developer tool that powers AR in Search, to an initial set of over 200 million ARCore enabled Android devices today.” Occlusion is simply the ability of virtual objects to blend correctly into the space around real-world objects: whenever a real object sits closer to the camera, the virtual object appears behind it.

Houzz, a company focused on home renovation and interior design, used Google’s ARCore Depth API to build the “View in My Room 3D” feature for its customers [8]. Sally Huang, Visual Technologies Lead at Houzz, said, “Using the ARCore Depth API, people can see a more realistic preview of the products they’re about to buy, visualizing our 3D models right next to the existing furniture in a room. Doing this gives our users much more confidence in their purchasing decisions.”

Below is a sample GIF of the Houzz app, where customers can place furniture virtually and visualize it before buying the real thing. The user taps the spot where they want the table and then moves the camera to view it from different angles; thanks to the Depth API, the table never overlaps the chair in front of it.

The Depth Map
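That tap-to-place interaction maps naturally onto ARCore’s hit-test API. Below is a hedged Kotlin sketch of the flow; `hitTest()`, `Plane`, and `createAnchor()` are real SDK calls, while the function name and the plane-filtering policy are our own choices.

```kotlin
import android.view.MotionEvent
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane

// Sketch of "tap to place furniture": hitTest() intersects the tap with
// geometry ARCore has detected and returns hits sorted by distance.
fun placeOnTap(frame: Frame, tap: MotionEvent): Anchor? {
    val hit = frame.hitTest(tap).firstOrNull { result ->
        // Keep only hits on detected planes, and only inside their bounds.
        (result.trackable as? Plane)?.isPoseInPolygon(result.hitPose) == true
    } ?: return null
    // The anchor keeps the virtual table fixed in world space while the
    // user walks around it to inspect it from different angles.
    return hit.createAnchor()
}
```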

The Endless Implementations

Depth data is what powers the occlusion feature, but 3D depth data enables much more: surface interaction, path planning, realistic physics, and similar techniques. For the games industry, the Depth API enables interactive mechanics such as letting players hide behind real-world objects. Combining these capabilities, you can build experiences in which virtual objects bounce and splash realistically across the environment.
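As one concrete building block for such experiences, here is a sketch of unprojecting a single depth sample into a 3D point in camera space using the standard pinhole camera model. It assumes intrinsics (available in ARCore via `Camera.getImageIntrinsics()`) that have already been rescaled to the depth image’s resolution; the helper itself is our illustration, not SDK code.

```kotlin
import com.google.ar.core.CameraIntrinsics

// Sketch (our own helper, not SDK code): lift one depth sample into a 3D
// point in camera space with the pinhole model. Assumes the intrinsics
// have been rescaled to the depth image's resolution.
fun depthPixelToPoint(intrinsics: CameraIntrinsics, x: Int, y: Int,
                      depthMm: Int): FloatArray {
    val f = intrinsics.focalLength    // [fx, fy] in pixels
    val c = intrinsics.principalPoint // [cx, cy] in pixels
    val z = depthMm / 1000f           // depth in meters
    return floatArrayOf(
        (x - c[0]) / f[0] * z, // X: right of the optical axis
        (y - c[1]) / f[1] * z, // Y: down, following image conventions
        z                      // Z: forward, away from the camera
    )
}
```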

Occlusion is only a first taste of what’s possible with the Depth API, and Google plans to build it out in a more polished, feature-rich form. Developers who want access to the Depth API and wish to contribute ideas to the feature can register through Google’s collaborators form [9].


References

  1. https://developers.google.com/ar/discover
  2. https://developers.google.com/depthmap-metadata
  3. https://en.wikipedia.org/wiki/Depth_map
  4. https://en.wikipedia.org/wiki/Time-of-flight_camera
  5. https://en.wikipedia.org/wiki/Ambient_occlusion
  6. https://developers.googleblog.com/2019/12/blending-realities-with-arcore-depth-api.html
  7. https://uploadvr.com/arcore-depth-api/
  8. https://www.androidauthority.com/arcore-depth-api-1064324/
  9. https://www.androidpolice.com/2019/12/09/arcore-depth-api/
  10. https://www.xda-developers.com/google-arcore-depth-api-create-depth-maps-using-single-camera/
