Liberating stranded data via the IIoT


By Emerson Automation Solutions
Monday, 12 February, 2024



Modern edge-to-cloud IIoT solutions can make it easier to access and use stranded data.

Automation systems are in service almost everywhere, controlling and monitoring machines, industrial manufacturing, infrastructure, and other facilities. This activity generates volumes of data that could be used to improve operations and business outcomes — but only if the data can be accessed, managed and analysed effectively. Unfortunately, this valuable stranded data often remains inaccessible for many technical and commercial reasons.

Newer architectures are changing this predicament by combining flexible, capable edge computing with a cloud computing model, making it not only feasible but practical to analyse this data, gain new insights and make the results available to users.

How traditional infrastructures strand data

Until recent years, most manufacturing data was sourced from PLCs, HMIs, SCADA and historian systems running in the OT domain. These systems are built using industrial automation products, which maintain a tight focus on achieving control, visibility, production and performance to maximise operational efficiency and uptime. As such, accessing and analysing the associated data beyond immediate production goals was a secondary concern.

Designers architected and scaled the OT infrastructure correspondingly, leading to design choices like:

  • selecting proprietary protocols meeting performance requirements, but without supporting flexibility and cross-vendor interoperability;
  • minimising control and sensor data collected to maximise system reliability and simplicity;
  • implementing localised on-premise architectures to minimise cybersecurity threats;
  • implementing vendor lockout schemes to protect intellectual property and promote reliable machine operation, often at the expense of connectivity.

While these data sources appear to be open within the OT environment, they are difficult to access from applications outside it, where the data could be more easily analysed. In addition, many potentially valuable sources of data — such as environmental conditions, condition-monitoring information and utility consumption — are simply not needed for production or equipment control and are therefore ignored or not collected by these automation systems.

Traditionally, industrial automation systems were installed strictly on-premises and largely constrained by the specific hardware and software technologies available. OT devices were often installed with ‘just enough’ processing power, and therefore lacked the computing and storage elasticity needed to realise the full potential of the data with the new class of analysis tools. Most of the familiar OT-centric software is not designed from the ground up for cloud connectivity, and certain features, and even basic security requirements, may be implemented inconsistently as add-ons. In many cases, IT-centric software is not attuned to the always-on, low-latency and high-volume data needs of an OT environment, and fails to offer a broad set of vendor connectivity options. Security as an afterthought is not good enough; it must be built in at all levels of the solution.

Pulling together ill-suited software to create an IIoT solution is problematic, can require kludgy custom scripts and is hard to support. With these needs and challenges in mind, some users may question how to recognise stranded data so they can begin taking steps to liberate it.

Types of stranded data

Stranded data exists in many forms. It originates at machines, the factory floor and other systems throughout a facility, often as part of the OT but also associated with other support and utility systems. This data can be as granular as a single temperature reading, or as extensive as a historical data log identifying the number of times an operator acknowledged an alarm.

The variety of data types, source devices and communications protocols adds to the difficulty of building a comprehensive solution. Below are five categories of stranded data sources.

Isolated assets

Isolated assets are those within a facility with no network access to any OT or IT system. This is the most straightforward case, but not necessarily the easiest to solve. Consider a standalone temperature transmitter with 4-20 mA connectivity or even Modbus capability. It needs to connect with some type of edge device — PLC, edge controller, gateway or other — to make this data stream accessible. In many cases, such data is not critical to machine control, so it is not available through traditional legacy PLC/SCADA data sources.
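As an illustration of that first step of liberation, the minimal sketch below shows how an edge device might convert the raw 4–20 mA loop current from such a transmitter into engineering units before passing it on. The 0–150°C range is an assumption chosen purely for the example.

```python
def current_to_temperature(current_ma: float,
                           eng_low: float = 0.0,
                           eng_high: float = 150.0) -> float:
    """Scale a 4-20 mA loop reading to engineering units.

    Assumes a linear transmitter where 4 mA maps to eng_low and
    20 mA maps to eng_high (0-150 degC here, purely illustrative).
    """
    if not 3.8 <= current_ma <= 20.5:
        raise ValueError(f"Loop current {current_ma} mA outside plausible range")
    return eng_low + (current_ma - 4.0) / 16.0 * (eng_high - eng_low)

# 12 mA is mid-scale, so 75 degC with the assumed range
print(current_to_temperature(12.0))
```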

Ignored assets

Ignored assets are connected with OT systems and generating data, but that data is not being consumed. Many intelligent edge devices provide basic and extended data. A smart power monitor can easily provide basic information like volts, amps, kilowatts and kilowatt hours using hardwired or industrial communication protocols, but deeper data sets, such as total harmonic distortion (THD), may not be transmitted. The data is there, just never accessed.
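A minimal sketch of the idea, assuming a hypothetical register map (real addresses vary by vendor and come from the device manual): the extended values are read with the same mechanism as the basic ones, they simply have to be requested.

```python
# Hypothetical register addresses for a smart power monitor, for
# illustration only; consult the vendor's register map in practice.
BASIC_REGISTERS = {"volts": 100, "amps": 102, "kw": 104, "kwh": 106}
EXTENDED_REGISTERS = {"thd_voltage_pct": 210, "thd_current_pct": 212}

def poll(read_register, include_extended=True):
    """Read basic and optionally extended values.

    `read_register` stands in for whatever single-register read call
    the chosen Modbus or fieldbus library provides.
    """
    registers = dict(BASIC_REGISTERS)
    if include_extended:
        registers.update(EXTENDED_REGISTERS)
    return {name: read_register(addr) for name, addr in registers.items()}

# Simulated reader so the sketch runs without hardware attached
print(poll(lambda addr: round(addr * 0.1, 1)))
```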

Under-sampled assets

Some assets generate data but are sampled at an insufficient rate. Even when a smart device is supplying data to supervisory systems via some type of communication bus, the sampling rate may be too low, the latency too great or the data set so large that the results are not obtained in a usable fashion. Sometimes the data may be summarised before it is published, resulting in a loss of data fidelity.
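The toy example below, using invented values, shows how summarising at the edge can hide exactly the events worth analysing: a one-second temperature spike disappears entirely when only a one-minute average is published.

```python
# Sixty one-second samples with a single brief spike (values are invented)
raw = [20.0] * 60
raw[30] = 95.0  # transient excursion lasting one sample

published_average = sum(raw) / len(raw)  # what an under-sampled consumer sees
actual_peak = max(raw)                   # detail available at full fidelity

print(f"average seen upstream: {published_average:.1f}")  # ~21.3, spike invisible
print(f"actual peak at the edge: {actual_peak:.1f}")      # 95.0
```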

Inaccessible assets

There may be assets generating data (often non-process, yet still important for things like diagnostics), but in a generally inaccessible format or not available via traditional industrial systems. Some smart devices have on-board data like error logs that may not be communicated via standard bus protocols but nonetheless would be very useful when analysing events that resulted in downtime.

Non-digitalised assets

An example of non-digitalised data is personnel generating data manually on paper, clipboards, whiteboards and the like, which misses the opportunity to capture this information digitally. For many companies, workers complete test and inspection forms and other similar documents in a physical paper format, without any provisions for integrating this information with digital records.

Whose job is the IIoT?

Industrial operations — especially in remote locations — typically have limited personnel available for administering and managing specialised OT/IT systems.

Another challenge revolves around which parties ‘own’ and ‘need’ the data. OT personnel administer the PLC/HMI/SCADA systems, while enterprise and IT personnel have a greater interest in carrying out IIoT initiatives and have the knowledge to manage large data warehouses or data lake projects.

Companies need a way to coordinate OT and IT efforts. Normally, the experts who understand the IT cloud infrastructure and have good ideas about what to do with available data lack the expertise to locate and interpret OT-sourced data, connect to it, and bring it to cloud computing platforms. On the other hand, the people who understand the OT data and connectivity challenges well generally don’t understand the cloud infrastructure or the potential it offers. Systems that make both the data access and the cloud transfer simple to set up are the critical missing link. While traditional automation platforms with the right features certainly play an important role in IIoT data projects, it is equally important to consider including technologies that can form an IIoT data-gathering path parallel to the existing automation.

Gaining value from edge-sourced data in the cloud

Connecting stranded data, especially to high-level on-site and cloud-based enterprise IT systems, is needed so that the many types of edge data can be historised and analysed to achieve deeper and longer-term analytical results, far beyond what is typically performed for near-term production-oriented goals. When an organisation — whether an end user or an OEM — can liberate stranded data from traditional data sources, and transmit this data to cloud-hosted applications and services, many possibilities are enabled, including AI model training for predictive diagnostics, proactive maintenance, long-term analytics, asset management, insights into production bottlenecks, efficiency and sustainability improvements.

Solving the challenges of stranded data, and effectively connecting edge data to the cloud where it can be analysed, is ultimately the purpose and strength of IIoT initiatives. IIoT solutions incorporate hardware technologies in the field, software running at both the edge and the cloud, and communications protocols, all effectively integrated and architected together to securely and efficiently transmit data for analysis and other uses.

Creating an edge solution

Digital transformation is necessary at the edge to liberate all forms of stranded data, and to transport this data to the cloud for analysis. Edge solutions can be an integral part of automation systems, or they can be installed in parallel to monitor data not needed by the automation systems. Many users prefer the latter approach because they can obtain the necessary data without impacting systems that are already operating in production. However, the key is that these new digital capabilities can connect with all previously identified forms of stranded data.

Edge connectivity solutions take many forms. Below are a few popular examples:

  • Compact or large PLCs ready to connect with industrial PCs (IPCs) running SCADA or edge software suites.
  • Edge controllers (‘edge-enabled’ PLCs) running SCADA or edge software suites.
  • IPCs running SCADA or edge software suites.

Hardware deployed at the edge may need wired I/O, or industrial communication protocol capabilities, or both, to interact with all sources of edge data. Once the data is obtained, it may need to be pre-processed or at least organised by adding context. Finally, the data must be transmitted to higher-level systems using protocols like MQTT or OPC UA.
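As a rough sketch of that last step, the snippet below publishes one contextualised reading over MQTT using the paho-mqtt client (version 2.x assumed); the broker address and topic are placeholders, not part of any particular product.

```python
import json
import time
import paho.mqtt.client as mqtt  # assumes paho-mqtt >= 2.0

BROKER = "broker.example.com"                        # placeholder address
TOPIC = "site1/packaging/line3/bearing_temperature"  # placeholder topic

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

payload = json.dumps({
    "name": "line3_bearing_temp",
    "value": 71.4,
    "units": "degC",
    "timestamp": time.time(),
})
client.publish(TOPIC, payload, qos=1)

client.loop_stop()
client.disconnect()
```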

For best usability and to minimise any subsequent processing efforts (and errors), OT data must be cleaned, transformed, structured and conveyed with its context. This means including naming conventions, engineering units and scaling values so the data stream self-describes the content. A seamless solution is needed to transport data from edge sources to cloud computing resources so the enterprise can fully take advantage by transforming the data into actionable information. Sending individual sensor information directly to the cloud is a first step but is not in itself an optimal method, compared with routing data through a proper OT automation system or a parallel installation of an edge-capable solution that will help provide semantic meaning.
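One way to picture a self-describing record is sketched below, with an illustrative field layout rather than any formal schema: the tag name, engineering units and scaling travel with the value instead of being implied downstream.

```python
import json
import time

def contextualise(tag_path, raw_counts, scale, offset, units):
    """Wrap a raw reading in a self-describing record (field names illustrative)."""
    return {
        "name": tag_path,                      # hierarchical naming convention
        "value": raw_counts * scale + offset,  # scaled engineering value
        "raw": raw_counts,
        "scale": scale,
        "offset": offset,
        "units": units,
        "timestamp": time.time(),
    }

record = contextualise("site1/utilities/compressor2/discharge_pressure",
                       raw_counts=16500, scale=0.0005, offset=0.0, units="bar")
print(json.dumps(record, indent=2))
```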

Maintaining context is particularly important in manufacturing environments where there are hundreds or thousands of discrete sensors monitoring and driving mechanical and physical machinery actions. Modern automation software systems help preserve the relative relationships and context. Today’s OT/IT standards are developing in a way that ensures the consistency and future flexibility of data and communications. It is important for any solution to be flexible yet standards-compliant — as opposed to custom setups that will be impossible to maintain long-term.
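One common way to preserve those relationships is a consistent hierarchical namespace of the enterprise/site/area/line/asset/measurement kind. The sketch below is illustrative and its level names are assumptions, not a mandated standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TagAddress:
    """Hierarchical address that keeps a sensor's place in the plant explicit."""
    enterprise: str
    site: str
    area: str
    line: str
    asset: str
    measurement: str

    def path(self) -> str:
        return "/".join([self.enterprise, self.site, self.area,
                         self.line, self.asset, self.measurement])

tag = TagAddress("acme", "plant2", "packaging", "line3", "filler", "motor_current")
print(tag.path())  # acme/plant2/packaging/line3/filler/motor_current
```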

Connecting edge to cloud

Once an edge solution is in place and can get the data, the next step is to make it accessible to higher-level IT systems, with seamless communications to cloud-hosted software. With clean, structured data in proper context and readily accessible in the cloud, data scientists and other analytics experts can apply big data principles to gain new insights from the data.

With the cloud, users can store data in scalable virtual storage without having to worry about losing important information to local PC or server failures, or about running out of storage capacity. Users also gain the ability to share data with authorised people.

For these and other related reasons, a cloud architecture fits well with the needs of organisations when implementing IIoT data projects.

Technology to bridge data between the edge and the cloud

IIoT connectivity solutions come in many sizes; indeed, there is great flexibility in choosing a right-sized implementation for each application.

Modern SCADA and control systems now incorporate IIoT and cloud connectivity, so process control systems can perform edge computing and serve as robust data sources for the IIoT. They move beyond traditional HMI/SCADA and IIoT solutions by offering extensive openness, scalability, security and integrated connectivity.

In some cases, depending on the need, the software can be hosted on an edge controller, a site-located PC or server, or a cloud computing resource. And because the software runs on an operating system like Linux or Windows, it is possible to scale application deployments from small, embedded edge devices up to larger server-based systems.

Figure 1: In some cases integration between on-premise systems and the cloud can be achieved with a single controller.

Built-in data routing and gateway functions make it easy to supply data to cloud and other IT systems. The technology makes it easy to securely collect and publish data on the cloud, manage business information flows towards ERP/MES business managerial systems or simply connect field devices to software applications.

Another approach is to use IIoT connectivity technology that runs on edge hardware in parallel with an existing PLC/SCADA system, handling data gathering, analytics and other services. Users can create digital transformation projects for monitoring machine health and condition, tracking energy efficiency, improving throughput and other uses.

Modern automation software connects OT and IT

Stranded data is an all-too-common reality at manufacturing sites and production facilities everywhere. It is the unfortunate result of legacy technologies incapable of handling the data, and traditional design philosophies focusing on basic functionality at the expense of data connectivity. Only recently has the importance and value of big data analytics become mainstream, so end users are working to build this capability into new systems and add it to existing operations. Edge-to-cloud data connectivity delivers value in many forms of visualisation, logging, processing and deeper analysis.

Any IIoT solution for bridging data between OT and IT relies on digital capabilities that can interface with traditional automation elements like PLCs, or can connect directly to the data sources in parallel to any existing systems. These edge resources must be able to preprocess the data to a degree and add context, and then transmit it up to cloud systems for further analysis.

