Analysing alarm and trend data: from insight to action
Thursday, 15 May, 2008
When analysing the cause of process disturbances, or when trying to improve productivity in the process, the comparison of alarm and trend data can be quite revealing. While sophisticated analysis tools are available in data historians, an intuitive visualisation tool that sits in the SCADA system itself could deliver actionable insight to operators faster and give them the immediate ability to modify process conditions for the desired results.
Is the system self-defeating?
Process control and automation systems are only as useful as the ‘actionable insight’ they provide. This is because the collection of extensive data holds no meaning unless it is organised and accessible in a manner that delivers insight to the user. That much is obvious.
However, what is not so obvious is the manner in which users go about deriving ‘actionable insight’ from their process control systems. After all, the reality of the modern plant floor environment is that there are several barriers to actually finding answers to key questions.
To start with, the size and complexity of process control operations often mean that operators and process control/improvement engineers can sometimes get caught up in simply ‘managing’ (or coping with) the data overload, rather than analysing and deriving relevant insight from such data. Despite robust tracking and archiving tools, 'peeling off the layers' is at times too difficult a task for operators or engineers looking to identify and remedy a problem.
In effect, the data-rich nature of the SCADA environment tends to prevent users from leveraging the system to its maximum potential. Benefits that could be realised from even simple analyses are forfeited amidst the overwhelming volume of data collected and archived.
Most often, the challenge is not a lack of data but the linking of diverse data points stored in different locations and formats, which, if brought together, could reveal underlying patterns and the causes of problems.
It is all about the context
Comparing one set of alarm data with another set of alarm data can throw up useful insight, especially in the constant effort to minimise alarms or the impact of an alarm flood (an excessive number of alarms annunciated during a process disturbance). However, what is even more valuable is comparing alarm data with plant and process trend data.
This comparison helps in several ways, as illustrated by the sketch after this list:
- Identifying any ‘process drift’ towards an abnormality which could eventually lead to breakdown or process failure.
- Linking alarm spikes to specific process conditions, changes in instrumentation or new or changed control system configurations.
- Analysing operator response to alarms (rather than focusing only on alarm data) to detect poor alarm system design.
- Isolating consequential/source alarms as well as nuisance alarms/'bad actors'.
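As a rough illustration of what such a comparison looks like in practice, the sketch below plots a trend tag and marks alarm activations on the same time axis. It is written in Python with pandas and matplotlib purely for illustration; the tag name (TIC101), the synthetic data and the alarm messages are assumptions, not output from any particular SCADA package.

```python
# Minimal sketch: overlaying alarm events on a process trend.
# Tag names, values and alarm messages are illustrative only; a real
# SCADA package would expose its own historian and alarm interfaces.
import pandas as pd
import matplotlib.pyplot as plt

# Trend samples: timestamp plus a process variable (a synthetic upward drift)
trend = pd.DataFrame({
    "timestamp": pd.date_range("2008-05-15 08:00", periods=120, freq="1min"),
    "TIC101_PV": 75 + pd.Series(range(120)) * 0.1,
})

# Alarm log: timestamp, tag and message (again, purely illustrative)
alarms = pd.DataFrame({
    "timestamp": pd.to_datetime(["2008-05-15 08:45", "2008-05-15 09:20"]),
    "tag": ["TIC101", "TIC101"],
    "message": ["HIGH", "HIGH HIGH"],
})

fig, ax = plt.subplots()
ax.plot(trend["timestamp"], trend["TIC101_PV"], label="TIC101 PV")

# Mark each alarm as a vertical line so the shape of the trend at the
# moment of annunciation is immediately visible.
for _, row in alarms.iterrows():
    ax.axvline(row["timestamp"], color="red", linestyle="--")
    ax.annotate(row["message"], (row["timestamp"], trend["TIC101_PV"].max()),
                rotation=90, va="top", color="red")

ax.set_xlabel("Time")
ax.set_ylabel("Process value")
ax.legend()
plt.show()
```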
And the twain shall never meet
To be able to compare alarm data with trend data, typically the operator or engineer has to locate alarm logs or printouts and then compare them with trend screens.
Traditionally, these sets of data have been located and displayed in different locations or formats. To draw useful insight, the operator or engineer must cross-reference the two data sets by mentally linking cause and effect, an uncertain and imprecise exercise at best.
Additionally, if the approach used is to dump alarm and trend data into an Excel worksheet and then study that for linkages, the process takes considerable time and is fraught with difficulty. In most cases, users have to resort to programming help to actually get a workable comparison of alarm and trend data.
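For comparison, the snippet below sketches what that spreadsheet exercise boils down to: joining each alarm record to the trend sample that was current when the alarm was raised. The file names and column names are assumptions for illustration; a real historian or alarm server would have its own export formats.

```python
# A minimal sketch of aligning an alarm log with trend samples by timestamp,
# the kind of join that is tedious to do by hand in a spreadsheet.
# File names and column names are assumptions for illustration.
import pandas as pd

trend = pd.read_csv("trend_export.csv", parse_dates=["timestamp"])
alarms = pd.read_csv("alarm_log.csv", parse_dates=["timestamp"])

# Both frames must be sorted on the key used by merge_asof.
trend = trend.sort_values("timestamp")
alarms = alarms.sort_values("timestamp")

# For each alarm, pick up the most recent trend sample at or before it,
# so every alarm row carries the process values that were current
# when the alarm was raised.
combined = pd.merge_asof(alarms, trend, on="timestamp", direction="backward")

print(combined.head())
```

The backward direction is deliberate: it attaches the last known process state before the alarm, which is usually what an engineer wants when looking for the condition that triggered it.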
While this may be just 'the way it is', it is clearly an example of the limitations of the ‘atomistic’ principle.
The 'atomistic' principle and problem solving
The atomistic principle, applied to problem solving, suggests that any working out of a problem can be reduced to steps or parts of the problem that can be computed independently of the problem as a whole.
This is in direct contrast to the well-known ‘Gestalt theory’ which emphasises the holistic approach to problem solving by humans and their self-organising approach that establishes links and ‘joins the dots’ between different components of a problem.
An everyday example of taking the atomistic approach is that of a child picking up a single piece of a jigsaw puzzle and trying unsuccessfully to understand its meaning in isolation. In contrast, a child taking the ‘Gestalt’ approach would place all jigsaw pieces out on the floor and then connect seemingly unrelated pieces to make a meaningful whole.
In a SCADA environment, the atomistic principle in action would scatter different sets of data across different locations, to be viewed in separate displays or formats. A Gestalt approach, on the other hand, would overlay the different sets of data in a holistic view, allowing the operator or engineer to intuitively establish linkages and ‘join the dots’ of the problem.
In effect, the Gestalt approach places a higher value on the ability of the operator or engineer to establish context, identify linkages and get to the root of a problem.
'Joining the dots' in one view, on a single screen
Considering the difficulties of arriving at clear linkages between process conditions and alarms when such data is housed and viewed separately, it does make sense to seek a solution that can ‘join the dots’ in one view and on a single screen.
How does this help?
Operators and engineers can intuitively spot linkages when alarm and trend data are brought together in an integrated display. Apart from the analysis of process disturbances, alarm and trend data overlays can help greatly in improving productivity.
This means that, although an alarm trend overlay is typically used for post-event analysis, an effective visualisation of such an overlay can also help operators or engineers ‘go forward’: historical events provide benchmarks that indicate what to expect from process parameters under upcoming process conditions. By sliding forward in time and loading trend tags with theoretical ‘future’ data (based on historical data gathered by sliding back in time, plus real data collected in real time), operators and engineers can monitor target tags to see what a process variable ought to look like under upcoming conditions.
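A minimal sketch of that ‘time shift’ idea, under the assumption that last week’s run is a reasonable benchmark for the coming one, might look like the following. The tag name, file names and one-week offset are illustrative only and not a feature of any specific product.

```python
# Sketch of the 'time shift' idea: take last week's trend for a tag and
# shift it forward to act as an expected profile for the coming run.
# Tag names, file names and the one-week offset are assumptions.
import pandas as pd

history = pd.read_csv("trend_export.csv", parse_dates=["timestamp"])
history = history.set_index("timestamp").sort_index()

# Project the comparable historical window forward by seven days
# so it lines up with the upcoming period as an 'expected' trace.
expected = history["TIC101_PV"].copy()
expected.index = expected.index + pd.Timedelta(days=7)
expected.name = "TIC101_PV_expected"

# Live data collected so far can then be compared against the projection.
live = pd.read_csv("live_trend.csv", parse_dates=["timestamp"]).set_index("timestamp")
comparison = live.join(expected, how="left")
comparison["deviation"] = comparison["TIC101_PV"] - comparison["TIC101_PV_expected"]
print(comparison.tail())
```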
In SCADA systems with different clusters, an alarm trend overlay of one cluster can be used to benchmark or compare with that of other clusters to help in maintenance and even in expansion of the system.
If the visualisation tool is able to adjust various data sets for time zones and factor in daylight saving, then operators or engineers can compare different data sets for different machines, process lines or sites across time zones without the hassle of a potential time mismatch.
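One way to remove that time mismatch, sketched below with assumed site names, files and time zones, is to normalise every data set to UTC before comparing; the time zone database then handles daylight saving automatically.

```python
# Sketch of normalising trend timestamps from two sites to UTC so that
# data sets can be compared without time zone or daylight-saving skew.
# Site names, file names and zones are illustrative assumptions.
import pandas as pd

site_a = pd.read_csv("site_sydney.csv", parse_dates=["timestamp"])
site_b = pd.read_csv("site_perth.csv", parse_dates=["timestamp"])

# Attach each site's local zone (DST rules included), then convert to UTC.
site_a["timestamp"] = (site_a["timestamp"]
                       .dt.tz_localize("Australia/Sydney")
                       .dt.tz_convert("UTC"))
site_b["timestamp"] = (site_b["timestamp"]
                       .dt.tz_localize("Australia/Perth")
                       .dt.tz_convert("UTC"))

# With a common clock, the two trends can be aligned and compared directly.
aligned = pd.merge_asof(site_a.sort_values("timestamp"),
                        site_b.sort_values("timestamp"),
                        on="timestamp", suffixes=("_sydney", "_perth"))
print(aligned.head())
```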
To some extent, even potential failures can be avoided: when alerted to a potential problem, the operator can rapidly scroll back in time to see the alarm trend links of the recent past and spot anomalies or root causes that can be acted upon before any escalation. In this way, ‘troubleshooting’ takes on a much more proactive connotation than the typical scenario of reacting to a full-blown failure.
If the visualisation tool presents not only the alarm and trend data, but also the operator’s response to those alarms, then the analysis is fed with a whole new level of insight into operator effectiveness as well as areas of attention in system design.
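As an indication of how operator response might be quantified, the sketch below pairs each alarm activation with its acknowledgement and reports the response time per tag. The event log layout (a ‘state’ column holding ACTIVE or ACK) is an assumption about the export format, not a standard.

```python
# Sketch of measuring operator response: pair each alarm activation with
# its acknowledgement and compute the time taken to acknowledge.
# Assumed columns: timestamp, tag, state ("ACTIVE" when raised, "ACK" when acknowledged).
import pandas as pd

events = pd.read_csv("alarm_events.csv", parse_dates=["timestamp"])

active = events[events["state"] == "ACTIVE"].sort_values("timestamp")
acked = events[events["state"] == "ACK"].sort_values("timestamp").copy()
acked["ack_time"] = acked["timestamp"]

# For each activation, find the first acknowledgement on the same tag
# that follows it, then compute how long the operator took to respond.
paired = pd.merge_asof(active, acked[["timestamp", "tag", "ack_time"]],
                       on="timestamp", by="tag", direction="forward")
paired["response"] = paired["ack_time"] - paired["timestamp"]

# Average and worst-case response per tag highlights where attention is needed.
print(paired.groupby("tag")["response"].agg(["mean", "max"]))
```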
Finally, if the visualisation tool offers preconfigured templates, saved views or favourite views, then operators do not have to repeatedly customise the common views they return to most often.
From insight to action
Ultimately, an effective visualisation tool should make analysis easier, quicker and more intuitive for SCADA operators and process control engineers. And while some data historians (data repositories) do come with sophisticated analytic features, it is not often that a SCADA package itself provides such a tool.
Without such a tool as part of a SCADA package, an analyst, removed from the actual plant floor environment, would typically analyse data from the historian and derive what they consider useful insight. Often this insight does not filter down to the operator to become ‘actionable insight’. And if the operator does not have the same visualisation tool the analyst has, the chances of the two ‘understanding and being understood’ are lower.
Having the same visualisation tool sitting inside the SCADA package itself, available to both operator and analyst, provides them with:
- First-hand visibility of alarm trend data.
- Immediacy of insight and control over what data to view or analyse.
- The ability to respond appropriately.
- The ability to reduce response time.
Thinking through the process challenges
In the complex and demanding environment that most SCADA system users operate in, real insight comes from asking and answering the right questions.
To summarise, the following questions may prove relevant:
- Is my process data organised, accessible and usable?
- When I say ‘accessible’, does that mean accessible to the operator on the plant floor?
- Can I compare alarm data with trend data in a single display?
- Can I use such a display to ‘time shift’ and derive insight that helps to debug the process or raise productivity?
- Can I make queries against the data without requiring programming help?
- Can I retain flexibility in the level of granularity I want for analysis?
- Can I pull up data on an operator’s response to alarms to add to the alarm trend overlays?
- How easily can such display data be exported and distributed to other stakeholders?
- Is there such a visualisation tool that resides in my SCADA package so that the operator has access to, and control over, the analysis?
- Would my team have access to tutorials or training videos to show us how best to leverage such a tool?
Finding answers to such questions will take operators and process control engineers down the happy path of leveraging process data more effectively and intuitively.
This article appears courtesy of Frost & Sullivan
Citect Pty Ltd
www.citect.com