At the time of the launch in February, Aveva value chain optimisation lead David Bleackley said: “Data is the new currency of the industrial world, and an effective predictive asset monitoring strategy is predicated on the ability to continuously transform massive amounts of sensor data into clear and actionable results. Operationalising a predictive monitoring program at scale has never been easier than it is now with Aveva’s PI System and latest predictive analytics software release.”
The idea is for the software to become integrated into the plant, according to Michael Reed, who runs the Aveva AI Centre of Excellence. “You want to incorporate this as part of the programme, not instead of the programme. Customers use our software because it works, and it’s incorporated into their different monitoring programmes. And the more they use it, the more confidence there is in the system.”
One main purpose of the system is to provide early warning of potential problems, he says, by matching the signal to known patterns of failure. “When we go live, we’re going to be taking live snapshots of the data as it’s occurring, and then trying to match that behaviour to known good behaviour from the database in real time. If it [matches], then we have a good correlation between current behaviour and past behaviour. But if we start seeing deviations from this with either individual sensors or overall, that is an indication of an early warning.”
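As a rough illustration of that matching idea, the sketch below compares a live snapshot of sensor readings against a simple per-sensor baseline built from known good data and flags anything that deviates. The sensor names, baseline statistics and threshold are invented for the example; Aveva's actual models are far richer than a mean-and-deviation check.

```python
# Minimal sketch: compare a live snapshot of sensor readings against a baseline
# of known good behaviour and flag deviations as early warnings. The baseline is
# a simple per-sensor mean/standard deviation, used purely for illustration.

from statistics import mean, stdev

def build_baseline(history):
    """history: {sensor_name: [readings during known good operation]}"""
    return {name: (mean(vals), stdev(vals)) for name, vals in history.items()}

def check_snapshot(snapshot, baseline, z_limit=3.0):
    """Return sensors whose live reading deviates from the good-behaviour baseline."""
    warnings = {}
    for name, value in snapshot.items():
        mu, sigma = baseline[name]
        z = (value - mu) / sigma if sigma else 0.0
        if abs(z) > z_limit:                 # deviation large enough to warrant a look
            warnings[name] = round(z, 1)
    return warnings

history = {"bearing_temp_C": [62, 63, 61, 64, 62, 63],
           "vibration_mm_s": [2.1, 2.0, 2.2, 2.1, 2.0, 2.2]}
baseline = build_baseline(history)
print(check_snapshot({"bearing_temp_C": 71.0, "vibration_mm_s": 2.1}, baseline))
# bearing temperature deviates from past behaviour; vibration still matches
```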
The first thing the system does is identify that a signal is anomalous, and significant enough to warrant a look. The software contains a classification scheme of faults and compares current behaviour against them. Reed compares this to a dashboard light coming on in a car, indicating an issue. He says: “If you are driving and the light isn’t on, you are not typically worrying about whether the engine is working well.” But a light popping up focuses the mind on potential mechanical issues. You might notice that the oil temperature is high. At a garage, a technician will plug in a diagnostics system, which will indicate which sensors are out of range. This will show that discharge pressure is high and pressure at the injectors is low. Based on the technician’s understanding of how engines work, he or she would probably conclude that the fuel filter is clogged. This, in essence, is how Aveva predictive analytics works.
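The garage analogy can be captured in a few lines. The sketch below matches which sensors are out of range, and in which direction, against a small library of fault signatures. The signature library and sensor names are invented here; the real classification scheme is of course far larger and more sophisticated.

```python
# Illustrative sketch of the garage-diagnostics analogy: given which sensors are
# out of range (and in which direction), match against a small library of known
# fault signatures. The signatures below are invented for the example.

FAULT_SIGNATURES = {
    "clogged fuel filter": {"discharge_pressure": "high", "injector_pressure": "low"},
    "lube oil supply problem": {"bearing_temp": "high", "lube_oil_pressure": "low"},
}

def classify(symptoms):
    """symptoms: {sensor_name: 'high' | 'low'} for sensors that are out of range."""
    matches = []
    for fault, signature in FAULT_SIGNATURES.items():
        hits = sum(1 for sensor, direction in signature.items()
                   if symptoms.get(sensor) == direction)
        matches.append((fault, hits / len(signature)))   # fraction of signature matched
    return sorted(matches, key=lambda m: m[1], reverse=True)

print(classify({"discharge_pressure": "high", "injector_pressure": "low"}))
# -> [('clogged fuel filter', 1.0), ('lube oil supply problem', 0.0)]
```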
WHICH TO BUY?
Which of the many Aveva offerings a customer chooses will “depend on where people are in their digital journey,” states Reed. Do they produce data but not capture it, or do they capture data that they don’t know how to deal with? In either case, Aveva can examine a company’s pain points, for example operational efficiency, or sustainability issues such as water use or greenhouse gas emissions. “Once you actually get this data store and start manipulating it, you can look for ways to reduce negative impacts and increase positive impacts. And that’s by utilising a multi-tool set on top of that base information.”
In a brief product run-through, then, the first step is ‘historianisation’ of the data (discussed below), whether at the local level or across the enterprise (where it is called PI Historian). That feeds into the base Aveva software, Predictive Analytics (previously Prism), which builds models. It mostly deals with reliability-based monitoring of sensor data. It performs minor calculations and compares inputs, and the output gives situationally based outcomes: not fixed conditions, but dynamic ones that depend on the operation.
The next layer is condition-based monitoring, built on top of the Historian data. The level after that is predictive analytics, which he describes as, “I can see where I am now, and I can put thresholds around [it]. Now I want to see what is developing just in front of me, so I can start to take action if things are starting to happen before you reach the asset analytics thresholds.” That is predictive analytics, and it requires performance calculations.
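To illustrate the distinction Reed draws between fixed thresholds and predictive analytics, the toy function below fits a straight-line trend to recent readings and projects when a limit will be crossed, rather than waiting for the crossing itself. The linear fit, the numbers and the sampling interval are illustrative assumptions, not Aveva's method.

```python
# Sketch of the difference between a fixed threshold and a predictive view:
# rather than waiting for a reading to cross a limit, fit a simple trend to
# recent values and estimate how long until the limit is reached.

def hours_until_threshold(recent, threshold, interval_h=1.0):
    """Fit a linear trend to recent readings and project the time to the threshold."""
    n = len(recent)
    xs = range(n)
    x_mean, y_mean = (n - 1) / 2, sum(recent) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, recent)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None                        # not trending toward the limit
    return (threshold - recent[-1]) / slope * interval_h

# Bearing temperature creeping upward, with an alarm threshold at 95 °C
print(hours_until_threshold([80.0, 80.6, 81.1, 81.9, 82.4], threshold=95.0))
# -> roughly 20 hours of margin: time enough to plan an intervention
```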
Another product combines predictive analytics with dynamic simulation software, taking the outputs of the dynamic simulation and integrating them as inputs into a predictive model.
The final offering takes the asset performance optimisation software, previously called Romeo, and creates a digital twin – a model, simulation and optimisation – from it. Those outputs are then fed into predictive analytics, to obtain predictive performance modelling and high dynamic range modelling, he says.
THE BUILD
If the Aveva offering is a house, it is built on a foundation of real-world data: sensor data from around a given asset, such as a gas turbine, a pump, a compressor or a boiler. “Over time, you build up all of the data and information about a piece of equipment during operational conditions,” says Reed. Ideally this database holds at least a year of historical data, processed using automated and manual means to identify periods of good behaviour in past performance.
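A hedged sketch of what assembling that ‘known good behaviour’ set might involve is shown below: scan the historical records and keep only periods when the asset was running normally, excluding idle time and windows around logged faults. The field names, load cut-off and record layout are assumptions made for this example.

```python
# Sketch: filter a year of historical records down to known good behaviour by
# dropping idle periods and anything inside a window around a logged fault.

from datetime import datetime

def select_good_behaviour(records, fault_windows, min_load_pct=20.0):
    """records: dicts with 'timestamp', 'load_pct' and sensor values.
    fault_windows: (start, end) datetime pairs around known fault events."""
    good = []
    for rec in records:
        if rec["load_pct"] < min_load_pct:          # asset offline or idling
            continue
        if any(start <= rec["timestamp"] <= end for start, end in fault_windows):
            continue                                # exclude known bad periods
        good.append(rec)
    return good

records = [
    {"timestamp": datetime(2023, 6, 1, 10), "load_pct": 85.0, "bearing_temp_C": 63.0},
    {"timestamp": datetime(2023, 6, 2, 10), "load_pct": 5.0,  "bearing_temp_C": 30.0},
]
faults = [(datetime(2023, 7, 1), datetime(2023, 7, 3))]
print(len(select_good_behaviour(records, faults)))  # -> 1 (the idle record is excluded)
```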
While anyone can use software to look for correlations in this data, the value lies in knowing what to look for – and in knowing the failure modes – and that is where subject matter experts come in.
In the first instance, Aveva validates the model by testing it with past anomalies. As Reed explains, “We use a data tool, a predictive analytics tool, to play back the data against the model we created. Anything that we cleaned out of the model from past behaviour will show up as an anomaly. We can see the categorisation of those anomalies. Then we go back and talk to customers and say, ‘this looks like an IGV problem, this is a combustion issue; here’s a bearing problem’. They can look them up and verify.” If the customer can identify them, all well and good: the system has successfully identified an issue, and that is evidence it will catch the same issue if it happens again.
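That playback step can be pictured as a simple backtest: replay historical snapshots through the trained baseline and check that the periods previously cleaned out of the training set do show up as anomalies. The sketch below assumes the simple per-sensor baseline of the earlier examples, with a z-score test standing in for whatever the real product does.

```python
# Illustrative backtest of the playback validation: replay historical snapshots
# through a trained baseline and confirm that periods excluded from training
# (known past faults) are flagged. The {sensor: (mean, std)} baseline format
# and the z-score test are assumptions for this sketch.

def is_anomalous(snapshot, baseline, z_limit=3.0):
    """Flag a snapshot if any sensor deviates strongly from the baseline."""
    for name, value in snapshot.items():
        mu, std = baseline[name]
        if std and abs(value - mu) / std > z_limit:
            return True
    return False

def playback(snapshots, baseline, known_fault_times):
    """snapshots: list of (timestamp, {sensor: value}) pairs from the historian.
    Compare the times the model flags against the faults the customer can verify."""
    flagged = {t for t, snap in snapshots if is_anomalous(snap, baseline)}
    return {"caught": sorted(flagged & set(known_fault_times)),
            "missed": sorted(set(known_fault_times) - flagged),
            "unexpected": sorted(flagged - set(known_fault_times))}
```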
A similar process is used afterwards, in normal operation, when it’s important to distinguish between an anomaly that is a one-time blip and one that indicates a bigger problem. Take, for example, Reed says, a case where temperatures are rising. Is it because the data was collected during a heat wave? Maybe the training data didn’t include operations running during conditions above 30ºC ambient: now that the thermometer has reached 35ºC, conditions have changed. “We will look at those conditions, and if they classify, we can live with them; they aren’t anomalous conditions, and we can then go back and retrain that into the model.” By contrast, where new problems crop up, the system can also create a new fault diagnostic to capture that failure.
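In code terms, that review step might look something like the sketch below, again assuming the simple per-sensor baseline: records from a benign new condition are folded back into the training data and the baseline rebuilt, while a genuine new failure is captured as a fault signature instead. The function and field names are hypothetical.

```python
# Sketch of the retrain-or-record decision once a flagged anomaly is reviewed:
# benign new operating conditions (e.g. hot-weather running) are retrained into
# the model; genuine new failure modes are added to the fault library instead.

from statistics import mean, stdev

def rebuild_baseline(history):
    """history: {sensor: [readings]} -> {sensor: (mean, std)}"""
    return {name: (mean(vals), stdev(vals)) for name, vals in history.items()}

def review_anomaly(verdict, flagged_readings, history, fault_library, new_signature=None):
    if verdict == "benign_new_condition":
        for name, values in flagged_readings.items():
            history.setdefault(name, []).extend(values)   # e.g. >30 °C ambient data
        return rebuild_baseline(history)                   # retrained baseline
    if verdict == "new_failure_mode" and new_signature:
        fault_library.append(new_signature)                # capture it for next time
    return None
```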
In other words, the Aveva system constantly compares the current performance of the asset to the model of the asset. The system quantifies the fidelity of the match between observed and simulated data with a parameter called overall model residual, or OMR. Above a certain threshold, an alert goes off. “It’s akin to a dash light. If you know the OMR is off, you can look for sensors that are contributing to that variance. You can look at the behaviour of a sensor compared to that sensor’s prediction of where it should be.”
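The article does not spell out how OMR is calculated, but the idea of aggregating per-sensor residuals into a single dash-light figure can be sketched as follows; the formula, scales and alert threshold here are illustrative assumptions only.

```python
# Sketch of an overall-model-residual style check, treating OMR as an aggregate
# of normalised per-sensor residuals between observed values and the model's
# predictions. The maths, scales and threshold are illustrative, not Aveva's.

def sensor_residuals(observed, predicted):
    """Per-sensor difference between what the asset is doing and what the model expects."""
    return {name: observed[name] - predicted[name] for name in observed}

def overall_model_residual(observed, predicted, scale):
    """Aggregate the normalised residuals into a single figure, like a dash light."""
    res = sensor_residuals(observed, predicted)
    return sum((r / scale[name]) ** 2 for name, r in res.items()) ** 0.5

observed  = {"bearing_temp_C": 71.0, "vibration_mm_s": 2.3, "discharge_bar": 14.2}
predicted = {"bearing_temp_C": 63.0, "vibration_mm_s": 2.1, "discharge_bar": 14.0}
scale     = {"bearing_temp_C": 2.0,  "vibration_mm_s": 0.2, "discharge_bar": 0.5}

omr = overall_model_residual(observed, predicted, scale)
if omr > 3.0:                                    # illustrative alert threshold
    worst = max(observed, key=lambda n: abs(observed[n] - predicted[n]) / scale[n])
    print(f"OMR {omr:.1f} above threshold; largest contributor: {worst}")
```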
Every sensor has a preprocessor. If a sensor reads outside the expected range (based on data from the historian), or beyond what has been measured as normal behaviour, users can flag it as faulty. They can also tell if it is flatlining, like a heartbeat. A recent feature of the system is that if a faulty sensor is identified, the software can pull its data out of the digital model to prevent any skew or deviation. When the sensor’s behaviour returns to normal, the model automatically brings the data back in, saving users from having to remember to do so.
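A minimal sketch of that preprocessing step, with invented ranges and tolerances: check each sensor's recent readings against its expected range, detect flatlining, and drop a faulty sensor from the model inputs until it reads normally again.

```python
# Sketch of per-sensor preprocessing: range check, flatline detection, and
# automatic exclusion/reinstatement of sensors in the model's input set.
# Ranges, window lengths and the flatline tolerance are illustrative.

def sensor_status(recent, expected_range, flatline_tol=1e-6):
    lo, hi = expected_range
    if max(recent) - min(recent) < flatline_tol:
        return "flatlining"                         # heartbeat gone flat
    if any(v < lo or v > hi for v in recent):
        return "out_of_range"
    return "ok"

def usable_inputs(latest_windows, expected_ranges):
    """Return only the sensors fit to feed the model; faulty ones drop out
    automatically and come back once their behaviour returns to normal."""
    return {name: window[-1]
            for name, window in latest_windows.items()
            if sensor_status(window, expected_ranges[name]) == "ok"}

windows = {"bearing_temp_C": [62.0, 63.1, 62.4],
           "vibration_mm_s": [2.10, 2.12, 2.09],
           "flow_m3_h":      [410.0, 410.0, 410.0]}   # stuck reading
ranges  = {"bearing_temp_C": (40.0, 90.0),
           "vibration_mm_s": (0.0, 8.0),
           "flow_m3_h":      (300.0, 500.0)}
print(sorted(usable_inputs(windows, ranges)))         # the flatlining flow sensor drops out
```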
Guiding users through the process is a fault diagnostics system, a repository of known failures. Reed says: “If I’m looking at a mechanical model, I’m looking at a bearing being hot, bearing vibration, a bearing failure problem, a lube oil supply problem; I’m looking for a thrust bearing problem or axial position issue. I’m looking at certain modes of failure and I can see where that pattern has matched, because [the value is declining] looking at the individual sensors, and I have a percentage of confidence that it’s matching this pattern. That fault diagnostic describes what this potential fault is and what the consequences are.”
Another element of the Aveva offering is the prescriptives section, which suggests what actions to take to solve a given problem. Reed explains: “It could be as easy as, check that the machinery is operating within its normal operational bounds; check the instruments are in calibration. Do a physical check. The actions get gradually more invasive. You might need to schedule a maintenance intervention at a certain point.”
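Taken together, a fault-diagnostic entry and its prescriptive actions might be represented along the lines below: a sensor pattern to match, a confidence score for the match, the consequence, and actions ordered from least to most invasive. The entry is invented for illustration and is not Aveva's data model.

```python
# Hedged sketch of a fault-diagnostic entry with its prescriptive actions.
# The pattern, consequence and actions below are invented for the example.

THRUST_BEARING_FAULT = {
    "name": "thrust bearing / axial position problem",
    "pattern": {"bearing_temp": "rising", "axial_position": "drifting",
                "lube_oil_pressure": "falling"},
    "consequence": "progressive bearing damage, risk of rotor contact",
    "actions": [                                    # increasingly invasive
        "check the machine is within normal operating bounds",
        "check instrument calibration and do a physical inspection",
        "schedule a maintenance intervention on the thrust bearing",
    ],
}

def match_confidence(observed_trends, fault):
    """Fraction of the fault's sensor pattern seen in the observed trends."""
    pattern = fault["pattern"]
    hits = sum(1 for sensor, trend in pattern.items()
               if observed_trends.get(sensor) == trend)
    return hits / len(pattern)

trends = {"bearing_temp": "rising", "axial_position": "drifting"}
conf = match_confidence(trends, THRUST_BEARING_FAULT)
print(f"{conf:.0%} match: {THRUST_BEARING_FAULT['name']}")
print("first action:", THRUST_BEARING_FAULT["actions"][0])
```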
KEEPING IT REAL
As sophisticated as these digital models are, they have an important connection to the real world, Reed states. “I was an operator back in the early ’90s, and when we did it, it was a lot of manual work: we got dirty, and we learned. That knowledge base, as we’ve aged and come out of the workforce, has been replaced by a newer workforce that is more digital and less hands-on.” In its predictive maintenance software, Aveva claims to have captured that knowledge so that less-experienced technicians have the benefit of the old hands.
BOX: USE CASE: ONTARIO POWER GENERATION
“We’ve just deployed the first set of online pattern recognition monitoring on our large transformers, so we can predict degradation while optimising performance and maintenance programmes. In the past, engineers would need to walk around to each transformer to manually download and analyse this data. These data sets are piped directly into our PI System network, so that we can build models using the information. We’ve also incorporated our own auto-diagnostic calculations using Aveva Predictive Analytics. This enables us to better understand and predict which failure mechanisms are occurring, which enables true condition-based maintenance,” says Nazgol Shahbandi, data scientist, OPG Monitoring & Diagnostic Centre.
BOX: SAD ORIGINS
In 2003, seven astronauts were killed when the space shuttle Columbia disintegrated on re-entry. Insulation foam shed from the external fuel tank during launch had knocked off vital heat shielding on one wing, and that damage ultimately caused the craft to break apart as it re-entered Earth’s atmosphere. In the aftermath, NASA wanted to know whether there had been any way to foresee that the shuttle would break up. In response, engineers and technicians developed algorithms to detect such issues, documented in what Reed called ‘beautiful reports’. These became the base algorithms feeding Aveva’s predictive analytics engines.