XR Talks is a weekly series that features the best presentations and educational videos from the XR universe. It includes embedded video, as well as narrative analysis and top takeaways. This week, we look back to AWE given its impending arrival next month (discount codes here).
AR’s charter is to fuse digital and physical worlds. And that mostly works in one direction: rendering digital content onto the physical world. IoT does the opposite, bringing physical items into the digital realm. But the real magic happens when you combine the two in a sort of virtuous cycle.
“The combination of IoT and AR… when you put those together and put them to work in an enterprise, very interesting things happen,” said PTC CEO Jim Heppelmann at June’s AWE conference. Though the talk is nearly a year old, his insights remain relevant.
For example, in-car navigation systems force us to process signals from a 2D screen. That requires mentally translating those signals into physical actions in 3D space. The translation effort is known as cognitive load, and the gap between the screen and the road ahead is cognitive distance. Neither is ideal.
“You have to memorize it, because you’re going to switch your gaze over to the physical world and try to interpret all that,” said Heppelmann. “So there’s a big problem of cognitive load and cognitive distance as you move between these modes of interacting with your car.”
AR is valuable in reducing that cognitive load and distance by rendering graphical directions in positionally accurate ways — like 3D wayfinding signals on a windshield. But AR can’t do that without content from the car’s diagnostics and navigation systems. In other words, IoT data.
“AR is really the counterpart to IoT,” he said. “AR isn’t very interesting without content to augment. And IoT isn’t useful if it produces complex information that people can’t interpret. But if you connect them together, amazing things happen in an ongoing circular flow of information.”
This applies beyond cars, Heppelmann says. Our homes are filled with appliances with varying levels of IoT data. That presents a fragmentation challenge: systems that aren’t interconnected and don’t speak the same language. So how do you bring AR into the picture?
“The amount of variability out there is just incredible,” said Heppelmann. “So how do you get data from many different things, process it and map it back onto many different things in a sort of AR experience? The answer is the digital twin.”
By digital twin, he means 3D models that can be ingested into AR systems, then projected outward by AR devices back onto the physical objects they represent. When the digital twin and its physical counterpart are synced spatially, the pairing reduces the mental “translation” and, with it, cognitive load.
“Any manufactured thing that’s come out of a factory in the last 20 years has a 3D model,” he said. “So we can have a digital understanding of any physical thing. We can take this data, map it to the digital twin, then transpose it from the digital twin onto the physical counterpart.”
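To make that flow concrete, here is a minimal sketch of the idea in Python. It is not PTC’s implementation; all names (`DigitalTwin`, `Anchor`, the coffee-maker example) are hypothetical. The twin holds 3D anchor points in model space, takes in IoT readings keyed by component, and emits AR overlay cues transposed into physical (world) space via a simple alignment offset.

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    # Position of a component in the twin's model space (meters); hypothetical
    x: float
    y: float
    z: float

class DigitalTwin:
    """Hypothetical digital twin: maps IoT readings onto 3D anchors,
    then transposes them into world space for an AR overlay."""

    def __init__(self, anchors):
        self.anchors = anchors  # component name -> Anchor

    def overlay(self, readings, pose):
        """readings: component name -> sensor value.
        pose: (dx, dy, dz) offset that spatially syncs the twin's
        model space to the tracked physical object (a full pose would
        be a rotation + translation; a plain offset keeps this short)."""
        dx, dy, dz = pose
        cues = []
        for component, value in readings.items():
            anchor = self.anchors.get(component)
            if anchor is None:
                continue  # no anchor for this sensor: nothing to augment
            cues.append({
                "label": f"{component}: {value}",
                "world_pos": (anchor.x + dx, anchor.y + dy, anchor.z + dz),
            })
        return cues

# Usage: a coffee-maker twin with one anchored temperature sensor
twin = DigitalTwin({"boiler_temp": Anchor(0.1, 0.3, 0.0)})
cues = twin.overlay({"boiler_temp": "96C"}, pose=(1.0, 0.0, 2.0))
```

The design choice worth noting is the separation of concerns Heppelmann describes: the IoT side only produces readings, the twin only knows geometry, and the spatial sync (the `pose`) is what lets the AR device project the result back onto the physical counterpart.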
Of course, all of this takes on new meaning when you move from car navigation and kitchen appliances to enterprise manufacturing and maintenance. That’s where the digital twin function can really shine, given that reducing cognitive load means saving time and thus money.
But Heppelmann warns enterprises not to do AR for AR’s sake. Go into it with clear goals, which will not only inform the “if” but the “how,” including decisions about the various flavors of AR that will be most effective. This involves asking a series of questions (jump to that part in the video).
See the full address below, including a live demo of a maintenance process using a digital twin, which helps visualize the possibilities. You can jump straight to that part of the video here, and stay tuned for lots more written and multimedia insights as we keep a close eye on all of this.
Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor has it received payment for its production. Disclosure and ethics policy can be seen here.