Augmented Reality Extended: Developing Enterprise Applications with Zebra Mobile Computers

By Pat Narendra, PhD

Innovator, Emerging Technologies

Zebra Technologies

Summary

Zebra’s TC52, TC57, TC72 and TC77 mobile computers are Augmented Reality enabled, as introduced in the companion Zebra YourEdge blog. This opens a vista of new use cases across key enterprise verticals—retail, warehousing, manufacturing, transportation and logistics, and healthcare—that Zebra’s ISV partners and developers can leverage to grow their business. In this article, we’ll introduce how to develop enterprise applications with augmented reality frameworks.

Zebra is working with its developer and customer ecosystems to help create impactful augmented reality (AR) applications for these markets. Zebra also offers webinars, APPFORUM sessions, and examples that illustrate these verticals.

Smartphone AR – A Quick Review

Creating an AR experience in the “camera viewfinder” means overlaying virtual 3D objects in the camera’s field of view: the virtual objects are projected onto the camera’s image plane in real time and superimposed on the live camera video.

To do this, we need to:

  1. Develop a 3D model of the environment around the camera.
  2. Track the camera pose (position and orientation) relative to this 3D model.
  3. Anchor the virtual object to the 3D model from step 1.
  4. Render the virtual object to the camera plane every frame and superimpose it on the camera feed (see the code sketch after this list).
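To make these steps concrete, here is a minimal Kotlin sketch assuming Google's ARCore as the underlying framework on an Android device; the function name anchorObjectAtTap and the tap-driven flow are illustrative, not part of any Zebra API. It covers steps 2 and 3: it checks that motion tracking is active and then anchors a virtual object to a detected plane under a screen tap, leaving the per-frame rendering of step 4 to your rendering layer.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.TrackingState

// Illustrative helper: anchor a virtual object to the detected plane under a
// screen tap. The framework's per-frame update loop supplies the Frame;
// rendering the anchored object each frame (step 4) is handled by your renderer.
fun anchorObjectAtTap(frame: Frame, tapX: Float, tapY: Float): Anchor? {
    // Only anchor while the framework is actively tracking the environment (step 2).
    if (frame.camera.trackingState != TrackingState.TRACKING) return null

    // Hit-test the tap against the mapped environment (the 3D model of step 1).
    for (hit in frame.hitTest(tapX, tapY)) {
        val trackable = hit.trackable
        if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
            // The anchor ties the virtual object's pose to the mapped world (step 3),
            // so it stays registered to the scene as the camera moves.
            return hit.createAnchor()
        }
    }
    return null
}
```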

The game changer in recent smartphone AR frameworks is a mechanism to do steps 1 and 2 above with a single camera, utilizing the device's Inertial Measurement Unit (IMU, the combination of a 3-axis accelerometer and a 3-axis gyroscope). Let us briefly dive into how it works (caveat: this is just an intuitive explanation, and the implementation on different platforms will of course vary).

Imagine we had two stereoscopic cameras with a known baseline (distance between them) and relative orientation. By matching corresponding scene elements seen by the two cameras, we can get a depth map to various matched points (i.e., a point cloud) in the scene, using triangulation. Since this point cloud is nominally fixed in space, we can estimate the relative change in position and orientation of the camera at each instant.
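As a concrete, textbook-style illustration of that triangulation (an idealized simplification, not any particular framework's math): for two parallel pinhole cameras with focal length f and baseline B, a scene point whose matched image positions differ by a disparity d lies at depth Z ≈ f·B / d. Real systems solve a more general version with arbitrary relative poses, but the principle is the same: a known baseline plus matched image points yields depth.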

But how is this done with just one camera? The answer lies in the gyroscope (which measures the angular rotation rate around three axes) and the accelerometer (which measures the linear acceleration along three axes), sampled at upwards of 1,000 samples per second. Now, if we took two video frames separated by, say, 1/10 of a second (3 frames apart), assumed the camera had moved a few centimeters laterally in the meantime, and performed the stereo matching as above, we could get the depth map, right? But what about the unknown baseline (the distance between the two camera views) and orientation differences? Simplistically, we can reconstruct this baseline and alignment between the two looks from the accelerometer data (on the order of 100 samples over that 0.1 seconds), integrated twice, and the gyro data, integrated once.

This is precisely why, at the beginning of every smartphone AR session, you are instructed to slowly wave the phone (in a lateral motion) pointing at the ground and other objects until you get feedback that the device is discovering and mapping its environment (such as the ground plane). This is also why each individual phone needs to be precisely calibrated to account for the placement of the IMU sensors and the camera relative to one another.
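As a toy illustration of that double-integration idea (deliberately naive, and not how any production framework actually works), the Kotlin sketch below integrates a list of hypothetical IMU samples taken between two video frames to approximate the baseline translation and rotation change; real systems fuse these measurements with visual features, estimate sensor biases, and account for gravity.

```kotlin
// Toy IMU sample between two video frames; the fields and units are assumptions
// for illustration (gravity already removed from the acceleration).
data class ImuSample(
    val dt: Float,           // seconds since the previous sample
    val accel: FloatArray,   // linear acceleration in m/s^2
    val gyro: FloatArray     // angular rate in rad/s
)

// Naively integrate the accelerometer twice (baseline translation) and the
// gyro once (orientation change) over the samples between the two frames.
fun estimateBaseline(samples: List<ImuSample>): Pair<FloatArray, FloatArray> {
    val velocity = FloatArray(3)   // first integral of acceleration
    val position = FloatArray(3)   // second integral: the "baseline" translation
    val rotation = FloatArray(3)   // single integral of angular rate
    for (s in samples) {
        for (i in 0..2) {
            velocity[i] += s.accel[i] * s.dt
            position[i] += velocity[i] * s.dt
            rotation[i] += s.gyro[i] * s.dt
        }
    }
    return position to rotation
}
```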

Why is specific AR enablement necessary to support AR applications?

AR frameworks typically require calibration of the device's sensors (camera and IMU). This is primarily due to the precision required for motion tracking, which combines the camera image with the motion sensor input to estimate the relative pose of the device over time. Each device type undergoes testing for compliance on the IMU, camera and platform features, including the processor and memory.

In typical smartphone AR frameworks, the heavy lifting of steps 1-4 above is implemented in the framework itself (usually as a service or a fragment that your app can extend or invoke).
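For example, assuming ARCore as the underlying framework on an Android-based Zebra device (an assumption for illustration; see the companion blog for the officially supported configuration), an app can verify at runtime that the device is AR enabled and that the AR runtime service is installed before starting a session:

```kotlin
import android.app.Activity
import com.google.ar.core.ArCoreApk
import com.google.ar.core.exceptions.UnavailableException

// Returns true only when the device is supported by the framework and the
// AR runtime service is installed and up to date.
fun ensureArAvailable(activity: Activity, userRequestedInstall: Boolean): Boolean {
    val availability = ArCoreApk.getInstance().checkAvailability(activity)
    if (availability.isTransient) {
        // The availability check is still in progress; re-check shortly.
        return false
    }
    if (!availability.isSupported) {
        return false // the device is not AR enabled
    }
    return try {
        // May prompt the user to install or update the AR service.
        ArCoreApk.getInstance().requestInstall(activity, userRequestedInstall) ==
            ArCoreApk.InstallStatus.INSTALLED
    } catch (e: UnavailableException) {
        false
    }
}
```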

AR-enabled Zebra Devices

Now might be a good time to review the companion enterprise AR blog, where you will see a detailed overview of the use cases as well as the announcement of AR enablement on the Zebra portfolio.