
Apple released several new augmented reality tools and technologies for software makers during its annual WWDC conference this week. These technologies could be vital if Apple in fact releases an augmented reality headset or glasses in coming years.

Apple has never confirmed plans to release augmented reality hardware, but could reportedly announce a headset as soon as this year. Facebook, Snap, and Microsoft are also working on devices that can understand the world around them and display information in front of the user’s eyes.

In order to succeed with an augmented reality device, Apple will have to come up with strong reasons for people to use it — and that comes down to useful software, just as apps like Maps, Mail, YouTube, and the mobile Safari browser helped spur adoption of the original iPhone. Getting developers on board to build augmented reality software now increases the chance of one or more “killer apps” being available at launch.

Apple did not spend much time on augmented reality during its WWDC keynote on Monday, but the updates it announced in the conference’s more technical sessions show that AR remains an important long-term initiative for the company. CEO Tim Cook has said AR is the “next big thing.”

“From a high level, this year, and maybe even next year’s WWDC event, will amount to a calm before an Apple innovation storm,” Loup Ventures founder and longtime Apple analyst Gene Munster wrote in an email this week. “Out of view today is Apple’s intense ongoing development related to new product categories around augmented reality wearables and transportation.”

What Apple announced

During the week-long conference, Apple briefed developers on its rapidly improving tools: software that can build 3D models, use a device’s camera to understand hand gestures and body language, and add quick AR experiences to the web, along with a heavily Apple-backed standard for 3D content and an intriguing new sound technology that works like surround sound for music and other audio.

Here are some of the AR announcements Apple made and how they are paving the road for its larger ambitions:

Object Capture. Apple has introduced application programming interfaces, or software tools, that enable apps to create 3D models. 3D models are essential for AR because they are what the software places in the real world. If an app doesn’t have an accurately detailed file for a shoe, it can’t use Apple’s machine vision software to place that shoe on a table.

Object Capture isn’t an app. Instead, it’s a technology that takes several photographs of an object, shot with a camera like the iPhone’s, and stitches them together into a 3D model that can be used inside software within minutes. Previously, detailed object scanning required precise and pricey camera setups.
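Under the hood, developers drive Object Capture through a new RealityKit class called PhotogrammetrySession, which at launch runs on Macs. A minimal sketch of a reconstruction, with the file paths as illustrative placeholders:

```swift
import Foundation
import RealityKit

// Folder of overlapping photos of the object, shot from many angles.
// (Path is a placeholder for illustration.)
let photos = URL(fileURLWithPath: "/tmp/ObjectPhotos", isDirectory: true)

// Create a photogrammetry session over the photo folder.
let session = try PhotogrammetrySession(input: photos)

// Request a .usdz model at medium detail; other levels range from
// .preview up to .raw for professional pipelines.
let request = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "/tmp/shoe.usdz"),
    detail: .medium
)

// Watch the session's output stream for progress and completion.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fractionComplete):
            print("Progress: \(Int(fractionComplete * 100))%")
        case .processingComplete:
            print("Model written to /tmp/shoe.usdz")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}

try session.process(requests: [request])
```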

Eventually, third-party developers like Unity, a top AR engine maker, will include it in their software. For now, Object Capture will likely be used heavily in e-commerce.

RealityKit 2. Object Capture is just one part of a significant update to RealityKit, Apple’s suite of software tools for building AR experiences. Aside from Object Capture, RealityKit 2 brings a lot of smaller improvements to make app makers’ lives easier, including improved rendering options, a way to organize images and other assets, and new tools for building player-controlled characters inside augmented reality scenes.
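Among those additions is a new entity-component-system API in RealityKit 2, which lets developers attach custom per-entity data and update it every frame, the kind of plumbing a player-controlled character needs. A minimal sketch, where SpinComponent and SpinSystem are hypothetical names for illustration:

```swift
import RealityKit

// Custom data attached to an entity: how fast it should spin.
struct SpinComponent: Component {
    var speed: Float = 1.0  // radians per second
}

// A System runs once per frame over every entity matching its query.
class SpinSystem: System {
    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        let query = EntityQuery(where: .has(SpinComponent.self))
        for entity in context.scene.performQuery(query) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            // Rotate around the y-axis, scaled by the frame's elapsed time.
            let angle = spin.speed * Float(context.deltaTime)
            entity.transform.rotation *= simd_quatf(angle: angle, axis: [0, 1, 0])
        }
    }
}

// Register once at app launch, before any scene is loaded:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
```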

Apple’s new city navigation feature in Apple Maps. (Source: Apple)

ARKit 5. ARKit is another set of software tools for making AR experiences, but one focused more narrowly on figuring out where to place digital objects in the real world. This is the fifth major version of the software since it first came out in 2017.

This year’s version includes expanded support for “location anchors,” which let software makers pin AR experiences to specific map locations in London, New York, Los Angeles, San Francisco, and a few other U.S. cities. In a video session for developers, Apple said it is using the tool to create AR direction overlays in Apple Maps, a potentially useful scenario for a head-mounted AR device.
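In code, a location anchor is created with ARKit’s geo-tracking configuration. A minimal sketch, where the coordinate (near San Francisco’s Ferry Building) is an illustrative placeholder:

```swift
import ARKit
import CoreLocation

func startGeoTracking(with session: ARSession) {
    // Geo tracking only works on certain devices and in supported cities.
    guard ARGeoTrackingConfiguration.isSupported else { return }

    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        guard available else { return }
        session.run(ARGeoTrackingConfiguration())

        // Pin an anchor to a real-world coordinate; any content attached
        // to it will appear at that spot in the camera view.
        let coordinate = CLLocationCoordinate2D(latitude: 37.7956,
                                                longitude: -122.3935)
        session.add(anchor: ARGeoAnchor(coordinate: coordinate))
    }
}
```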

AI for understanding hands, people, and faces. While Apple’s machine learning and artificial intelligence tools aren’t directly tied to augmented reality, they represent capabilities that will be important for a computer interface that works in 3D space. Apps can call Apple’s Vision framework to detect people, faces, and body poses through the iPhone’s camera. Apple’s computer vision software can now also identify objects inside images, including text on signs, and lets users search for things inside photos, like a dog or a friend.
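As a concrete example, here is a minimal sketch of reading the text on a sign out of a photo with the Vision framework; the function name and the CGImage input are assumptions for illustration:

```swift
import Vision
import CoreGraphics

// Recognize text (e.g., on a street sign) inside an image.
func recognizeText(in image: CGImage) throws {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // Each observation offers ranked candidate strings.
            if let best = observation.topCandidates(1).first {
                print("\(best.string) (confidence: \(best.confidence))")
            }
        }
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```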

Combined with Apple’s other tools, these AI capabilities can apply effects similar to Snap’s filters. One session at this year’s WWDC even covers how the software can identify how a hand is posed or moving, laying the groundwork for the advanced hand gestures that are a big part of the interface in current AR headsets like Microsoft’s HoloLens.
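A rough sketch of that hand-pose capability, using Vision to check a single camera frame for a thumb-to-index “pinch” (the function name and the 0.05 distance threshold are assumptions for illustration):

```swift
import Vision
import CoreVideo

// Returns true if the thumb and index fingertips are nearly touching.
func detectPinch(in pixelBuffer: CVPixelBuffer) throws -> Bool {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])

    guard let hand = request.results?.first else { return false }

    // Joint positions arrive in normalized image coordinates (0 to 1),
    // each with its own confidence score.
    let thumbTip = try hand.recognizedPoint(.thumbTip)
    let indexTip = try hand.recognizedPoint(.indexTip)
    guard thumbTip.confidence > 0.5, indexTip.confidence > 0.5 else { return false }

    let dx = thumbTip.location.x - indexTip.location.x
    let dy = thumbTip.location.y - indexTip.location.y
    return (dx * dx + dy * dy).squareRoot() < 0.05  // assumed pinch threshold
}
```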