Digital twins are sometimes hyped to the point of impossibility, and the reality is often much more down-to-earth. While it may be suggested that a digital twin can model even the smallest detail of a product, in practice digital twins are designed to model only the properties of interest, as efficiently as possible.
Digital twins are live, virtual representations of a physical product. They integrate multiple models and are updated using sensor data to represent the current reality. Earlier modelling methods often meet only one of these conditions. For example, predictive maintenance updates machine models with vibration data, but uses only a single mechanical model.
Similarly, multiphysics modelling combines mechanical, fluid flow and electromagnetic models, but these are not updated with real-time sensor data. Digital twins build on these methods, combining the integrated modelling of multiple elements and dynamics with live updates from sensor data.
The growth of Industry 4.0 technologies is facilitating sensor deployment using Internet of Things (IoT) devices. Digital twins often use empirical ‘black box’ models – which map inputs to outputs without modelling the underlying physics. Big data analytics and artificial intelligence can help to create these models.
BLACK BOXES
When modelling a simple mechanical system whose performance is fully understood, analytical equations may be used to transform inputs into outputs. This idea can be understood using a simple example of a gearbox for which the sizes of both gears are known. Dividing the size of the input gear by that of the output gear gives the gear ratio. Multiplying the input shaft speed by this ratio gives the output shaft speed.
Now suppose that we know the shafts are connected by some gears with a fixed ratio, but we don’t know the actual sizes of the gears or the ratio between them. In this case we know that the output shaft speed is equal to the input shaft speed multiplied by a constant factor ‘a’. If the input and output shaft speeds are measured at some point in time, it is then easy to directly calculate the gear ratio. This is a very simple example of using sensor data to update a model.
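To make this concrete, here is a minimal sketch in Python (all names and numbers are illustrative, not from any real system): it first computes the output speed from known gear sizes, then estimates the unknown ratio 'a' from a pair of measured shaft speeds.

```python
# Minimal sketch: a fixed-ratio gearbox model, first with known
# geometry, then updated from sensor data (values illustrative).

def output_speed(input_speed_rpm: float, ratio: float) -> float:
    """Output shaft speed for a fixed-ratio gearbox."""
    return input_speed_rpm * ratio

# Case 1: gear sizes are known, so the ratio is computed directly.
input_gear_teeth, output_gear_teeth = 40, 20
known_ratio = input_gear_teeth / output_gear_teeth   # = 2.0
print(output_speed(1500.0, known_ratio))             # 3000.0 rpm

# Case 2: the ratio 'a' is unknown, so it is estimated from one
# simultaneous measurement of both shaft speeds.
measured_input_rpm, measured_output_rpm = 1500.0, 2998.5
estimated_ratio = measured_output_rpm / measured_input_rpm

# The updated model can now predict output speed for other inputs.
print(output_speed(1200.0, estimated_ratio))
```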
Now imagine that the mechanical system being modelled is a non-linear spring. The input is a force F and the output is a change in the length of the spring Δx. Since the spring is non-linear, this relationship is not characterised by a simple scaling factor. We could apply a number of different forces to the spring, measure the change in length, and plot this on a graph. It would then be possible to fit a curve to the data points, using regression to find a mathematical model that transforms the input force into an output displacement. Perhaps an elastomeric spring is being used, for which this function changes over time and which is also dependent on temperature. Continuous monitoring of force and displacement might be used to determine the correct spring curve in real-time. Plotting the parameters that determine the spring curve against the temperature could yield a more complex function that includes temperature.
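As an illustration of this kind of curve fitting, the sketch below uses least-squares regression to fit an assumed cubic model of displacement against force to simulated sensor readings; the model form and the data are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate model: displacement as a non-linear function of force.
# The cubic term is an assumed form for this illustration.
def spring_model(force, a, b):
    return a * force + b * force**3

# Simulated sensor readings: applied forces (N) and measured
# displacements (mm), with a little measurement noise added.
forces = np.linspace(0.0, 100.0, 20)
rng = np.random.default_rng(0)
displacements = 0.05 * forces + 2e-6 * forces**3 \
    + rng.normal(0.0, 0.05, forces.size)

# Regression: find the parameters that best fit the measurements.
params, _ = curve_fit(spring_model, forces, displacements)
a_fit, b_fit = params

# The fitted curve predicts displacement for any force, and can be
# re-fitted in real time as new force/displacement data arrives.
print(spring_model(75.0, a_fit, b_fit))
```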
The examples so far have used conventional mathematics such as simple algebra and linear regression. Models for simple systems can be updated in this way. However, for complex systems with many different input and output parameters, this approach can be very challenging. This is the type of problem that deep learning algorithms handle really well. Deep learning, and neural networks in general, can be thought of as regression but with more variables. If there are just two variables, such as a single input and single output, then it is easy to visualise the relationship as a curve on a graph. Standard candidate equations can then be selected for regression, such as a polynomial or exponential. If there are three variables it is also possible to visualise the relationship, as a surface plot on a graph with three axes.
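To show how candidate equations might be compared in practice, the sketch below fits both a polynomial and an exponential to the same single-input, single-output data and keeps whichever leaves the smaller residual; the data and candidate forms are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative single-input, single-output data.
x = np.linspace(0.0, 5.0, 30)
rng = np.random.default_rng(1)
y = 2.0 * np.exp(0.5 * x) + rng.normal(0.0, 0.2, x.size)

# Candidate 1: quadratic polynomial, fitted by least squares.
poly_coeffs = np.polyfit(x, y, deg=2)
poly_residual = np.sum((np.polyval(poly_coeffs, x) - y) ** 2)

# Candidate 2: exponential, fitted by non-linear least squares.
def exp_model(x, a, b):
    return a * np.exp(b * x)

(a_fit, b_fit), _ = curve_fit(exp_model, x, y, p0=(1.0, 0.1))
exp_residual = np.sum((exp_model(x, a_fit, b_fit) - y) ** 2)

# Select whichever candidate equation fits the data better.
best = "exponential" if exp_residual < poly_residual else "polynomial"
print(best, poly_residual, exp_residual)
```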
When there are larger numbers of inputs and outputs, but there is some knowledge of how they might be related, optimisation can be used. Sometimes, however, there is no fundamental understanding of how the outputs are related to the inputs. There may also be very large numbers of variables involved, making it highly complex to create an analytical model. In these cases, deep learning algorithms can look at historical trends and predict the likely outputs for a given set of inputs – creating a truly black-box model. These algorithms use a type of artificial neural network that mathematically mimics the neural networks found in biological brains. Sets of numerical inputs and outputs are linked by a network of connections.
Adjustable weightings at hidden nodes between the input and output layers control how easily information flows along a given pathway. Information is processed by the way it flows from the input nodes, through the weighted hidden nodes, to the outputs. Information is stored by adjusting the weightings of the nodes. This is generally referred to as ‘training the network’, rather than storing information, since specific data is not directly entered into the network. Instead, training algorithms automatically adjust nodes to strengthen a pathway when the network gives a correct output, and weaken it when the network makes a mistake. Training a deep learning algorithm requires many different examples of sets of inputs with corresponding known correct outputs. The more variables being considered, the more examples are required. This is a major limitation of deep learning – it only works when there are very large data sets available.
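As a rough sketch of training such a black-box model, the example below uses scikit-learn's MLPRegressor (any neural-network framework would serve) to learn an invented non-linear relationship from example input-output pairs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training data: 3 inputs, 1 output. The non-linear
# relationship stands in for a complex physical system whose
# underlying physics is unknown to the network.
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(5000, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]

# A small feed-forward network: two hidden layers of weighted
# nodes between the input and output layers.
model = MLPRegressor(hidden_layer_sizes=(32, 32),
                     max_iter=2000, random_state=0)

# 'Training' adjusts the weightings so that known inputs map to
# known correct outputs; no data is stored in the network directly.
model.fit(X, y)

# The trained network predicts likely outputs for unseen inputs.
print(model.predict(np.array([[0.3, -0.5, 0.8]])))
```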
LINKING MODELS
Digital twins don’t just update models using sensor data, they also link multiple models together. Complex systems can be modelled by linking together models of individual components or machines. For example, models of bearings and motors are linked to model machines; models of machines are linked to model production lines; and these can be linked to create a digital twin of an entire factory. Similarly, models of building services can be linked to create a digital twin of a building, and building models can be connected with models of utilities and roads to form a digital twin of a whole city.
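A purely illustrative sketch of this linking idea is shown below: component models expose a common interface, so they can be composed into a machine model and then into a production-line model. The class names and health calculations are invented for the example.

```python
# Illustrative sketch: linking component models into higher-level
# models. Each model exposes the same small interface, so models
# nest: components -> machine -> production line.

class BearingModel:
    def __init__(self, wear: float):
        self.wear = wear  # updated from vibration sensor data

    def health(self) -> float:
        return 1.0 - self.wear

class MotorModel:
    def __init__(self, winding_temp_c: float):
        self.winding_temp_c = winding_temp_c  # from a temperature sensor

    def health(self) -> float:
        return 1.0 if self.winding_temp_c < 90.0 else 0.5

class MachineModel:
    """Links component models: here the machine is only as
    healthy as its weakest component (an invented rule)."""
    def __init__(self, components):
        self.components = components

    def health(self) -> float:
        return min(c.health() for c in self.components)

class ProductionLineModel:
    """Links machine models in the same way."""
    def __init__(self, machines):
        self.machines = machines

    def health(self) -> float:
        return min(m.health() for m in self.machines)

machine = MachineModel([BearingModel(wear=0.1),
                        MotorModel(winding_temp_c=70.0)])
line = ProductionLineModel([machine])
print(line.health())   # 0.9
```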
It’s important to only model and link the things that really matter. If you try to model everything, then it can quickly become very costly and time-consuming to run the complex simulations. (One way to mitigate this issue is to run a complex simulation a number of times with different input parameter values. The results can then be used as a lookup table that can efficiently produce the required output in real-time.)
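The sketch below illustrates this mitigation, with a cheap stand-in function playing the role of the expensive simulation: the 'simulation' is sampled over a grid of inputs offline, and an interpolated lookup then answers queries in real time.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stand-in for an expensive simulation (invented for illustration):
# say, output temperature as a function of load and ambient temp.
def expensive_simulation(load, ambient):
    return 20.0 + 0.8 * load + 0.5 * ambient + 0.01 * load * ambient

# Offline: run the simulation over a grid of input values.
loads = np.linspace(0.0, 100.0, 21)
ambients = np.linspace(-10.0, 40.0, 11)
grid = np.array([[expensive_simulation(load, amb) for amb in ambients]
                 for load in loads])

# Build the lookup table once.
lookup = RegularGridInterpolator((loads, ambients), grid)

# Online: answer queries in real time by interpolation, without
# re-running the full simulation.
print(lookup([[62.5, 18.0]]))
```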
EXAMPLES
In my last article on digital twins (see also https://is.gd/adeyid), I described a system developed by Iotics for the rail industry. It leveraged existing engine performance models to build a look-up table of engine performance at differing levels of air filter blockage. This was linked to a simplified weather model that predicted each train's pollen exposure based on temperature, wind direction, pollen count, and dynamic monitoring of the train's position in the network. Iotics used this combined model to optimise scheduled maintenance, taking into account the location of trains at the end of each day.
Digital models of buildings and other structures are used to perform offline programming of inspection drones. However, these structures may change over time, meaning that the offline programs don’t always work correctly. If scan data from previous inspections is used to update the 3D model, this model provides much more useful information and can be considered a digital twin.