AR has mostly been envisioned in smart-glasses form, and more recently has adapted to the larger scale of smartphones. But there are alternate modalities such as audio, and other hardware categories that will evolve into AR-enabled devices.
One such “mobile” hardware device is your car. Amid the flood of automotive-related activity coming out of CES, NVIDIA and WayRay are making a push toward in-car AR. Instead of glasses and smartphones, AR graphics will take over car windshields to inform or entertain passengers.
Going beyond existing heads-up displays (HUDs), true in-car AR makes sense for a few reasons. When autonomous vehicles reach ubiquity, an important question is what we’ll do with our time in the car. AR (and VR, for that matter) are good candidates for new utilities and media during that time.
More importantly, AR and autonomous driving share some technological underpinnings, so combining them in the same device (the vehicle) makes sense. AR’s area mapping (the ‘M’ in SLAM) will benefit from the computer vision that self-driving cars use to “see” their surroundings.
In addition to NVIDIA, other CES activity points to data-informed vehicles that will dovetail with AR in this way. Ford and Qualcomm are working on better IoT connectivity, and Intel is building road-mapping data collection that could feed into the AR cloud (more on that in a bit).
Stepping back, AR will generally benefit from the technology being developed for autonomous vehicles. The R&D invested by a deep-pocketed and highly motivated auto industry will refine the computer vision that can spin out and fuel AR’s area-mapping capabilities — in cars or in general.
As for actual in-car AR experiences, they could include windshield-projected information about routing, nearby attractions, educational content or other graphical “layers” that can be customized for passengers. That customization could be either manual or AI-fueled.
One challenge will be depth perception for graphical overlays. Given a moving vehicle and different points of view within the car, lightfield technology might be needed (e.g., Avegant, Magic Leap) to highlight objects outside of the car. This is where in-car AR advances well beyond most HUDs.
The front-end UI could be some combination of graphical and audio. And the inputs will have to adapt to something that’s native and natural to an in-car context. That could mean simple gestural controls, or voice. Google is already planting seeds for the latter, as is Amazon.
Speaking of Google, there will be monetization potential when AR joins the in-car stack. If it can maintain its position at the front-end, a la Android Auto, Google can apply its algorithmic muscle to deliver info to yet another outlet. And some of that will be ad-supported (think: gas stations).
Head in the Cloud
The unsung hero in all of this will be the AR cloud. That will be the case with most AR, but is especially applicable when information has to be delivered to moving vehicles. The ability to deliver the right geo-tagged information and overlays will hinge on a robust cloud data bank.
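To make the geo-tagged delivery idea concrete, here is a minimal sketch of the kind of proximity lookup an AR cloud backend would run before pushing overlays to a moving vehicle. All names and records here are hypothetical illustrations, not any vendor’s actual API; a real system would use spatial indexing rather than a linear scan.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geo-tagged overlay records in the cloud "data bank"
OVERLAYS = [
    {"name": "Gas Station", "lat": 37.7749, "lon": -122.4194, "layer": "fuel"},
    {"name": "Museum", "lat": 37.8000, "lon": -122.4500, "layer": "attractions"},
]

def nearby_overlays(lat, lon, radius_m, layers=None):
    """Return names of overlays within radius_m of the vehicle,
    optionally filtered to the passenger's chosen layers."""
    hits = []
    for o in OVERLAYS:
        if layers and o["layer"] not in layers:
            continue
        if haversine_m(lat, lon, o["lat"], o["lon"]) <= radius_m:
            hits.append(o["name"])
    return hits
```

The layer filter stands in for the per-passenger customization mentioned above: the same data bank serves different overlays depending on what each rider has enabled.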
Returning to Google as an example, its in-car AR would tap several assets it has assembled. Google Maps, Waze, Street View imagery, Google Lens and others would converge to form an informational backbone for in-car AR. The AR cloud could develop on a similar principle, but more open.
Of course, in-car AR is a longer-term reality, but the vision is starting to materialize. We often talk about AR’s timeline as smartphones first, then smart glasses circa 2020 (potentially via Apple). In-car AR could be the evolutionary step that follows, or one that develops in parallel.
Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.