Apple Reportedly Throws in the Towel on AR Glasses

Reports from Taiwan indicate that Apple has disbanded its augmented reality hardware project

A recent report indicates that Apple has stopped work on an augmented reality headset. The news comes from DigiTimes Taiwan (paywalled), whose reports owe their credibility to journalists with access to information about Apple’s supply chain. The DigiTimes report stated that Apple has disbanded the team building its AR headset for now, and speculated that the project may be totally kaput.

Apple never discusses the products it is working on, and that may be the safer play with an infant technology as uncertain as augmented reality glasses. Apple is an extremely profitable company with a huge amount of cash at its disposal, so it can buy up smaller companies whose work accelerates its own research and development projects. During Steve Jobs’s two stints at the helm, Apple earned a reputation for delivering mostly hit products, with a few misses now and again. Jobs was a product fanatic; Tim Cook comes from the logistics world of supply chain management, where meeting deadlines is the standing order of the day. All this may mean is that Cook would be more likely than Jobs to hit a deadline with a slightly less-perfect product.

Tim Cook raised eyebrows last year when he discussed augmented reality in a positive light, hinting that an augmented reality headset or glasses could be Apple’s next hit product. Apple also released ARKit, an API for developing augmented reality apps for the iPhone, and has been working in several areas related to 3D technology as it pursues its next big product ideas. (Image courtesy of imore.com.)

Apple Likely Concluded that the Technology Isn’t There Yet: Motion-to-Photon Latency Issues Are Perhaps the Biggest Stumbling Block for AR Devices

Motion-to-photon latency refers to the time it takes for a user’s movement to be fully reflected on an augmented reality head-mounted display, such as DAQRI Smart Glasses or Microsoft HoloLens.

Current augmented reality headset designs rely on some impressive engineering maneuvers to reduce overall motion-to-photon latency. To see the problem, start with the fact that in augmented reality, optical see-through (OST) systems overlay digital content directly onto the real world. The real world, viewed through the optics, arrives with no latency at all, so any lag in the overlaid virtual content is far more noticeable than the same lag in the fully occluded virtual environment of VR. Add the fact that mobile AR headsets lack the computing power of a tethered workstation or laptop, and it is easy to see why latency is a tougher engineering challenge for AR than for VR. It is a complicated problem, but the limitations of the displays used by DAQRI and Microsoft are a key part of it.
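
A rough latency budget makes the challenge concrete. The following sketch sums the stages of a typical AR rendering pipeline; the stage names and millisecond figures are illustrative assumptions, not measurements of any shipping device.

```python
# Illustrative motion-to-photon latency budget for a see-through AR headset.
# Every stage name and timing here is an assumed, round-number value for
# illustration, not a measurement of any shipping device.
pipeline_ms = {
    "IMU sampling and sensor fusion": 2.0,
    "pose prediction and application logic": 3.0,
    "GPU rendering": 8.0,
    "late-stage warp (reprojection)": 1.0,
    "display transfer and color-sequential scan-out": 6.0,
}

total_ms = sum(pipeline_ms.values())
for stage, ms in pipeline_ms.items():
    print(f"{stage:<48}{ms:5.1f} ms")
print(f"{'total motion-to-photon latency':<48}{total_ms:5.1f} ms")
```

Every stage has to shrink at once: shaving render time helps little if scan-out on a color-sequential display still takes several milliseconds, which is why the display itself is such a stubborn part of the budget.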

Liquid crystal on silicon (LCoS) displays are currently used in Microsoft HoloLens and DAQRI Smart Glasses (pictured above) and are only capable of showing one color at a time. To show a full RGB (red, green, blue) frame, the LCoS display matrix receives data for just the red pixel components before the red LED is turned on to flash the red image. The red LED is then turned off, and the same process is repeated for blue and green, depending on the RGB data attributes. To refresh the display without unwanted artifacts and the color slippage known as the “rainbow effect,” complicated sequencing and “late-warping” techniques are needed. (Image courtesy of DAQRI.)

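The color-sequential loop described in the caption above can be sketched in code. This is a simplified software rendition, assuming hypothetical hardware callbacks (read_head_pose, load_lcos, set_led) and a 60 Hz frame rate; real display controllers implement this sequencing in hardware.

```python
import time

FRAME_HZ = 60
FIELDS = ("red", "green", "blue")          # one color field shown at a time
FIELD_S = 1.0 / (FRAME_HZ * len(FIELDS))   # duration of each color field

def late_warp(field_pixels, latest_pose):
    """Re-project a color field using the newest head pose ("late warping"),
    so each field lands where the head is now, not where it was when the
    frame was rendered. Placeholder transform for illustration."""
    return field_pixels

def show_frame(frame, read_head_pose, load_lcos, set_led):
    """Drive one field-sequential frame. `frame` maps a color name to that
    channel's pixel data; the three callbacks are hypothetical hardware hooks."""
    for color in FIELDS:
        field = late_warp(frame[color], read_head_pose())
        load_lcos(field)               # write the field to the LCoS matrix
        set_led(color, on=True)        # flash only that color's LED
        time.sleep(FIELD_S)            # hold for one field period
        set_led(color, on=False)       # dark gap before the next field
```

Because the red, green and blue fields are shown at slightly different moments, a fast head turn can smear them apart; re-warping each field with the freshest pose is what keeps the colors registered.
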
Striking the right balance among requirements like high throughput, low latency and low power is tricky enough, but connecting a GPU to an LCoS display still requires proprietary display protocols; there are no standards that would make component integration easier. Perhaps recent Apple acquisition Akonia has made strides in this area, but it is hard to say with any real degree of certainty. Without standard GPU-to-LCoS protocols, proprietary extensions that improve mobile AR support from the GPU to the LCoS display will be necessary to reduce motion-to-photon latency.

But a closer look at Akonia reveals research on what could turn out to be an entirely new type of augmented reality display: one that transforms holography’s ability to store information into a better augmented reality lens, and one that may be capable of meeting Apple’s high standards for product design.

Apple’s Acquisitions in the AR Space

In 2013, Apple bought PrimeSense, the company that developed the 3D depth-sensing technology behind Microsoft’s Kinect. As a result, iPhone X users now have Face ID, which would likely be a key component of a working augmented reality wearable from Apple. (Image courtesy of Microsoft.)

In 2015, Apple acquired Metaio for its augmented reality SDK, which is now the basis of ARKit, Apple’s augmented reality API for iOS 11 and the impending iOS 12. (Image courtesy of Apple.)

In 2017, Apple stepped up its acquisition of companies with potentially valuable augmented reality assets. In a single year, it bought Vrvana for its augmented reality head-mounted display expertise, SensoMotoric Instruments for its eye-tracking software and hardware, Regaind for its computer vision technology and InVisage Technologies for its quantum dot-based image sensor manufacturing technologies.

In 2018, Apple bought Akonia Holographics, acquiring its augmented reality display manufacturing capabilities. With Akonia, Apple gained access to more than 200 patents covering thin, transparent smart-glass lenses with full color and a wide field of view.

Apple is the world’s most valuable company, with a market value that recently crossed the trillion-dollar mark. It can buy startups and spend years, and hundreds of millions of dollars, on research and development for its next hit product. The iPhone, Apple’s flagship product, is over ten years old, and the Apple Watch is not nearly the cultural phenomenon the iPhone is. Just because Apple may have shelved its AR project for now doesn’t mean it won’t pick it up later.

Akonia, the recent acquisition that sent AR enthusiasts into a tizzy of speculation, was founded with a mission of using light to record information far more efficiently.

Akonia developed a fascinating technique for recording bits: it starts with a single laser split into two beams, a data-carrying signal beam and a reference beam. A spatial light modulator then encodes data into the signal beam, converting electrical signals of 0s and 1s “into an optical checkerboard pattern of over one million light and dark bits.” When the signal beam and the reference beam intersect in the recording material, a hologram is formed and the data is stored.
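
To make the encoding step concrete, here is a minimal sketch of arranging a bit stream into one such checkerboard “data page.” The 1024-by-1024 page size and zero-padding scheme are assumptions chosen so that a single page holds just over one million bits, matching the figure quoted above.

```python
def bits_to_page(bits, side=1024):
    """Arrange a bit sequence into a square "data page" of light (1) and
    dark (0) pixels, zero-padded to fill the page. A 1024 x 1024 page
    holds just over one million bits per exposure. Page size and padding
    are illustrative assumptions, not Akonia's actual format."""
    capacity = side * side
    if len(bits) > capacity:
        raise ValueError("data exceeds one page; split it across pages")
    padded = list(bits) + [0] * (capacity - len(bits))
    return [padded[row * side:(row + 1) * side] for row in range(side)]

page = bits_to_page([1, 0, 1, 1, 0, 1])
print(len(page), "x", len(page[0]), "=", len(page) * len(page[0]), "bits per page")
```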

Holographic storage achieves faster transfer rates than traditional storage in much the same way that a deep learning GPU outruns a single-threaded CPU: where the GPU uses its innate parallelism to process many small matrix calculations simultaneously rather than one at a time, the holographic laser records more than one million bits of data in a millisecond instead of one bit at a time.
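
That claim is easy to quantify. Taking the figures from the paragraph above as given (one million bits per exposure, one exposure per millisecond):

```python
bits_per_page = 1_000_000   # "over one million light and dark bits" per page
exposure_s = 1e-3           # one page recorded per millisecond (from the text)

throughput_bps = bits_per_page / exposure_s
print(f"{throughput_bps:.0e} bits/s")            # 1e+09 bits/s, i.e. ~1 Gbit/s
print(f"~{throughput_bps / 8 / 1e6:.0f} MB/s")   # roughly 125 MB/s per beam pair
```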

It turns out that holography has looked promising as a data storage technology for quite some time; the real engineering challenge was finding a suitable storage medium. So what does this have to do with the engineering bottlenecks of augmented reality?

Akonia’s “two-chemistry” photopolymer holographic storage material is the product of decades of research and development. It yielded a highly photosensitive material, suitable for AR lenses, that scores high in optical flatness, stability and manufacturability. That research also gave the company the ability to combine many holograms in a single layer of polymer, allowing more control over the quality and size of the images projected toward a user’s eye. Akonia dubbed this transformation of holographic storage into an augmented reality lens its “HoloMirror” technology.

If Akonia succeeded in getting its HoloMirror lens into a form that is easy to manufacture, inexpensive enough to sell by Apple’s standards, and capable of a large field of view (perhaps 60 to 90 degrees) with the full color spectrum and minimal motion-to-photon latency, via a proprietary GPU-to-lens connection that could serve as an industry standard, then perhaps an AR device could be produced to Apple’s high standards.

Bottom Line

Apple’s disbanding of the AR project is, in a way, a good sign: it demonstrates a clear point of view that AR glasses are not yet ready for prime time. The longstanding obstacles are challenging, but they do not seem insurmountable. An inability to synthesize the necessary components (things like 5G, a wide field of view and ultra-low latency) into a device worth bringing to market as the successor to the iPhone today does not mean it will not happen a few years from now. Perhaps some combination of augmented reality and neural interface technology will be worth bringing to market five to ten years from now.