This technology lets us simulate “what ifs”
In this interview, Frédéric Kaplan – the head of EPFL’s Digital Humanities Laboratory – talks with us about the current focus of his work: digital twins and the so-called “mirror world,” or the virtualization of reality.
What is a digital twin?
A digital twin is a virtual double of something that exists in the real world. It can be an object, a machine, a city or even an entire country. Sometimes, the term is also applied to abstract processes such as production planning. Put simply, it’s a model containing data on all of an object’s past “states,” plus a set of operations and rules to simulate its behavior. It’s helpful to think of a digital twin as a digital machine – one that’s both a data model and a simulation.
As far as possible, digital twins should be kept “in sync” with the physical world, using data from sensors and systems that capture information about the object being modeled. The massive expansion in bandwidth, especially for mobile devices, is opening up incredible new possibilities on this front.
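That dual nature – a data model of past states plus rules for simulation, kept in sync with the real object – can be sketched in a few lines of code. The example below is a loose illustration, not an industrial implementation: it assumes a hypothetical machine whose only monitored state is its temperature, synchronized from a sensor reading and projected forward with a simple cooling rule (the `cooling_rate` parameter is invented for the sketch).

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Minimal digital twin: a history of past states plus a simulation rule."""
    history: list = field(default_factory=list)  # all recorded past states
    cooling_rate: float = 0.1                    # hypothetical model parameter

    def sync(self, sensed_temp: float) -> None:
        """Keep the twin in step with the physical object (sensor reading)."""
        self.history.append(sensed_temp)

    def simulate(self, steps: int, ambient: float = 20.0) -> list:
        """Project future states forward from the latest synchronized state."""
        temp = self.history[-1]
        projection = []
        for _ in range(steps):
            temp += self.cooling_rate * (ambient - temp)  # drift toward ambient
            projection.append(temp)
        return projection
```

Calling `sync()` keeps the twin aligned with the real machine; calling `simulate()` runs the model forward from the last known state.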
Another way we can use this technology is to model what might happen in the future – in other words, to simulate “what ifs.” These kinds of projections can prove invaluable when it comes to making decisions or shaping discussions in areas like architecture and urban planning.
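A “what if,” in this sense, is simply the same simulation run under alternative assumptions. The toy sketch below – with entirely hypothetical population and growth figures – compares two urban-planning scenarios side by side:

```python
def simulate_population(current: float, years: int, growth_rate: float) -> list:
    """Project a population forward under a given annual growth rate."""
    history = [current]
    for _ in range(years):
        history.append(history[-1] * (1 + growth_rate))
    return history

# Two hypothetical "what if" scenarios for a planning decision:
# does building a new transit line change the city's growth trajectory?
scenarios = {"baseline": 0.01, "new_transit_line": 0.025}
projections = {name: simulate_population(100_000, 10, rate)
               for name, rate in scenarios.items()}
```

Comparing the resulting trajectories is what turns a twin from a mirror of the present into a decision-making tool.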
When was the concept invented and for what purpose?
The digital twin concept has become increasingly popular over the past decade, but it’s been around for much longer. Digital twins had their first real-world application half a century ago with the Apollo 13 mission. When an explosion in the service module crippled the spacecraft, the three astronauts inside the capsule couldn’t see the damage that had been caused. So NASA engineers had to diagnose and resolve the issue from Earth, hundreds of thousands of miles away.
Thankfully, the ground team had simulators that allowed them to model the behavior of the capsule’s key systems. These simulators were controlled by a network of computers. For instance, there were four computers for the command module, and three more for the lunar module. Because these simulators could be synchronized with data coming from the spacecraft, they were effectively its digital twin. So keeping data flowing between the craft and the ground-based systems was absolutely essential.
It was largely thanks to these simulators that the engineers on the ground and in space were able to work together to diagnose the problem and, ultimately, bring the crew home safely. What makes this first application so remarkable is that it didn’t involve just one digital twin, but rather a network of digital twins interacting with one another, each modeling the spacecraft’s behavior using a different simulation system.
The digital twin concept as we understand it today has its roots in a 1991 book by David Gelernter. In it, he talks about a “mirror world,” where technology is used to build a virtual model of a city – an entire world, even – complete with interacting digital twins. Gelernter’s idea stood as an alternative to the web: not a set of interconnected documents, but a fully-fledged double of the real world.
Digital twins are a scaled-down version of Gelernter’s vision. The concept started to gain traction in the manufacturing industry in the 2000s, influenced in part by the work of Michael Grieves, who posited using connected technology to model and predict the behavior of physical objects. Basically, the idea is to take a piece of machinery – or even an entire factory – and create a virtual double to monitor how it operates in real time. The resulting insights can prove useful across the object’s entire life cycle, from cradle to grave.
How are digital twins used today?
These days, digital twins have a wide range of applications, from healthcare to smart cities. Historians are even using them to reshape how we think and talk about the past. Despite their differences, these fields all share something in common: they fit naturally into a 4D multi-scale representation of the world – a double containing its past, present and future states.
Where is further progress required?
One of the key issues is scale. Digital twins represent objects of all shapes and sizes: machines, buildings, neighborhoods, cities, regions and whole countries. And in each case, the modeling and simulation methods are different. The real challenge is combining these twins – each at a different scale – into a multi-scale system. What’s more, scale isn’t just a spatial thing. It has a temporal dimension too, because the input data comes from layers of operations and processes happening over markedly different time scales.
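One way to picture the multi-scale challenge is to nest twins that update at different rates. The sketch below is purely illustrative, not an established architecture: a fine-grained machine twin steps many times for every coarse step of a city-level twin that aggregates it (the class names and the `ratio` parameter are invented for the example).

```python
class MachineTwin:
    """Fine-grained twin: updates on a fast time scale (e.g., once per second)."""
    def __init__(self):
        self.output = 0.0

    def step(self):
        self.output += 1.0  # one unit produced per fine-grained tick

class CityTwin:
    """Coarse-grained twin: aggregates machine twins on a slow time scale."""
    def __init__(self, machines, ratio):
        self.machines = machines
        self.ratio = ratio  # fine-grained ticks per coarse-grained step
        self.total_output = 0.0

    def step(self):
        # Advance every nested twin through a full coarse interval,
        # then aggregate their states into the city-level view.
        for m in self.machines:
            for _ in range(self.ratio):
                m.step()
        self.total_output = sum(m.output for m in self.machines)
```

Even this toy version hints at the real difficulty: each level needs its own modeling method, yet the levels must stay consistent with one another in both space and time.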
Are digital twins here to stay?
It’s been a bumpy ride for digital twins and the mirror world, with periods of great promise punctuated by major setbacks. But in recent years, the march of digital technology has picked up pace. In many industries, process modeling has become almost as important as the processes themselves. Drones and self-driving cars don’t just deliver goods – they model their environments in real time, building a virtual picture of the world that’s edging ever closer to Gelernter’s vision. And the more they do this, the more efficient and autonomous they become. Globally, humanity is coming to recognize this positive feedback loop. The path ahead seems clear.
What are the current limitations of this technology?
The risk is that digital twins fail to achieve their core purpose: faithfully representing reality. If the map in your car’s GPS system is out of date, the algorithm will end up directing you the wrong way. If a simulator doesn’t accurately reflect how a machine works, the calculations will come out wrong and operators will make erroneous decisions. And if patients’ medical records are incomplete, healthcare system representations will be skewed.
Unfortunately, there’s more to it than merely keeping systems synchronized and models updated. There are deeper, more nuanced questions about simulating an increasingly complex world. Can the mirror world really reflect reality? If not, what phenomena can it not fully capture, and at what scales? The inexorable march of artificial intelligence is constantly pushing the boundaries of predictability. The flip side is that, in some cases, we’re losing our grasp on the processes underlying the models. Now more than ever, we need to ask ourselves whether we can really distill the world around us to a series of mathematical equations – in theory and in practice. And on a related note, these questions should prompt us to think about building models that allow for multiple, diverging pasts and futures, or perhaps even parallel universes.