Data integration for Industry 4.0: achieving open and standardised cloud connectivity

By Sven Goldstein, TwinCAT Product Manager, Beckhoff Automation
Tuesday, 11 October, 2016


One of the problems that must be solved in order to take advantage of the promise of Industry 4.0 is the integration of process data with IT systems.

As information technology and automation technology continue to converge, cloud-based communication and data services are increasingly used in industrial automation projects. Beyond the scope of conventional control tasks, applications such as big data, data mining and condition or power monitoring enable the implementation of superior, forward-looking automation solutions.

Industry 4.0 and Internet of Things (IoT) strategies place strict requirements on the networking and communication capabilities of devices and services. Viewed in terms of the traditional communication pyramid (Figure 1), these implementations require large quantities of data to be exchanged between field-level sensors and higher-level layers. However, horizontal communication between PLC control systems also plays a critical role in modern production facilities. PC-based control technologies provide universal capabilities for horizontal communication and have become an essential part of present-day automation projects for exactly this reason. Today, new IoT-compatible I/O components are becoming available that enable easy-to-configure, seamless integration into public and private cloud applications.

Definition of business objectives for increasing the competitive edge

Industry 4.0 and IoT applications do not start with just the underlying technology. In reality, the work begins much earlier than this. It is critically important when implementing IoT projects to first examine the corporate business objectives, establishing the benefits the company stands to gain from such projects. From an automation provider's perspective, two distinct categories of customer can be defined: machine manufacturers and their end customers, that is, the end users of the automated machines.

In the manufacturing sector in particular, there is an obvious interest in reducing in-house production costs, both through efficient and reliable production control and also by reducing the number of rejects produced. The traditional machine manufacturer pursues very similar objectives and above all is interested in reducing the cost of the machine while maintaining or even increasing production quality. Optimising the machine's energy consumption and production cycles, as well as enabling predictive maintenance and fault diagnostics, can also be rewarding goals. The last two points in particular offer the machine manufacturer a solid basis to establish services that can be offered to end customers as an additional revenue stream. Of course, what both customer categories ultimately want is to make the machine or product more attractive and thereby increase competitiveness in the marketplace.

Figure 1: Communication pyramid.

Collecting, aggregating and analysing process data

The process data used during production provides the foundation for creating added value and achieving the above-mentioned business objectives. This includes the machine values that are recorded by a sensor and transmitted via a fieldbus to the PLC. Using integrated condition monitoring libraries, this data can be analysed directly on the controller to monitor the status of a system, thereby reducing downtime and maintenance costs.
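
To illustrate the principle, the following minimal sketch shows the kind of check such a library performs. It is written in Python rather than a PLC language, and the sample window and alarm limit are invented values, not figures from any real system.

```python
import math

def rms(samples):
    """Root-mean-square of one acquisition window of sensor values."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# Hypothetical window of vibration samples read from a fieldbus input
window = [0.02, -0.15, 0.31, -0.28, 0.12, -0.05]
RMS_LIMIT = 0.25  # hypothetical alarm limit

if rms(window) > RMS_LIMIT:
    print("Condition monitoring alarm: vibration RMS above limit")
```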

However, where there are several distributed controllers in production areas, analysing data from a single controller may not be sufficient. The aggregated data from multiple or even all controllers in a production system, or from a specific machine type, is often needed to perform adequate data analysis and make an accurate analytical statement about the overall system. This, of course, requires the corresponding IT infrastructure. Previous implementations focused on a central server system within the machine or corporate network that was equipped with data storage, often in the form of a database system. Analysis software could then access the aggregated data directly in the database to perform the corresponding evaluations (Figure 2).
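
As a minimal sketch of this classic pattern, the following uses SQLite as a stand-in for the central database server; the table, controller and signal names are invented for illustration.

```python
import sqlite3
import time

# SQLite stands in here for the central database server described above
db = sqlite3.connect("production.db")
db.execute("""CREATE TABLE IF NOT EXISTS process_data
              (controller TEXT, signal TEXT, value REAL, ts REAL)""")

# Each controller pushes its cyclically recorded values into the shared store
db.execute("INSERT INTO process_data VALUES (?, ?, ?, ?)",
           ("plc-01", "spindle_temp", 61.3, time.time()))
db.commit()

# Analysis software then queries the aggregated data across all controllers
for controller, avg in db.execute(
        "SELECT controller, AVG(value) FROM process_data GROUP BY controller"):
    print(controller, avg)
```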

Although this approach to data aggregation and analysis in production facilities certainly worked well, it presented a number of problems, since the required IT infrastructure had to be made available first. High hardware and software costs for the server system are the most obvious consequence, but the costs with respect to personnel should not be overlooked either: given the increasing complexity of networking production systems, especially across large numbers of distributed production locations, skilled personnel are needed to carry out the implementation at all. To complicate matters, the scalability of such a solution is very low. Sooner or later the physical limits of the server system are reached, be it the amount of memory available, the CPU power or the performance and memory size required for analysis. Adding new machines or controllers therefore often meant extensive manual conversion work, since the central server system had to grow along with the installation to capably handle and process the additional data volume.

Figure 2: Analysis on the controller or server.

The path to the public cloud

Cloud-based communication and data services now avoid the aforementioned disadvantages by providing the user with an abstract view of the underlying hardware and software systems. ‘Abstract’ in this context means that users do not have to give any thought to the respective server system when using a service; they only need to consider how the service itself is used. All maintenance and update work on the IT infrastructure is performed by the provider of the cloud system. Such cloud systems can be divided into public and private clouds.

Public cloud service providers, such as Microsoft Azure or Amazon Web Services, provide users with a range of services from their own data centres. This starts with virtual machines, where the user retains control of the operating system and the applications installed on it, and extends to abstracted communication and data services that can be integrated into an application. The latter include, for example, access to machine learning algorithms, which can make predictions and perform classifications regarding specific data states on the basis of machine and production information. The algorithms obtain the necessary content with the aid of the communication services.

Such communication services are usually based on communication protocols that follow the publish/subscribe principle. This offers definite advantages through the resulting decoupling of all applications that communicate with one another. On the one hand, the various communication participants no longer need to know each other, so the time-consuming exchange of address information is reduced: all applications communicate via the central cloud service. On the other hand, data communication with the cloud service via the message broker (Figure 3) involves a purely outgoing communication connection from the perspective of the terminal device, regardless of whether data is sent (publish) or received (subscribe). The advantages this offers for configuring the IT infrastructure are immediately clear: no incoming communication connections have to be configured, eg, in firewalls or other network termination devices. This significantly reduces IT infrastructure set-up time and maintenance costs.

Transport protocols used for data communication, such as MQTT and AMQP, are exceptionally lean and standardised. Various security mechanisms can also be anchored here, eg, encryption of data communication and authentication with respect to the message broker. The standardised communication protocol OPC UA has likewise recognised the added value of a publish/subscribe-based communication scenario and has taken steps to integrate this communication principle into the specification. An additional standard besides MQTT and AMQP is consequently available as a transport mechanism to the cloud.
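
To make the pattern concrete, the sketch below publishes one value over MQTT using the open-source paho-mqtt client (1.x API); the broker address, credentials and topic are placeholders, not details from any particular product. Note that only an outgoing, TLS-encrypted connection is opened:

```python
import json
import paho.mqtt.client as mqtt  # third-party client: pip install paho-mqtt

BROKER = "broker.example.com"  # placeholder: your cloud or local message broker

client = mqtt.Client()                         # paho-mqtt 1.x constructor
client.tls_set()                               # encrypt the connection (TLS)
client.username_pw_set("machine01", "secret")  # authenticate with the broker

client.connect(BROKER, 8883)  # outgoing connection only; no inbound firewall rule
client.loop_start()           # run the network loop in a background thread

info = client.publish("plant/line1/plc01/temperature",
                      json.dumps({"value": 61.3}), qos=1)
info.wait_for_publish()       # block until the message has been handed over

client.loop_stop()
client.disconnect()
```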

The private cloud

Such publish/subscribe mechanisms are not limited to public cloud systems, however: they can also be used in the company or machine network. In the case of MQTT and AMQP, the infrastructure required for this purpose can be installed and made available easily on any PC in the form of a message broker. This means that M2M scenarios can be implemented and terminal devices, such as smartphones, can be connected to the controller. Moreover, access to these devices is further secured by means of firewall systems (Figure 3). The extensions of the OPC UA specification with regard to publish/subscribe will also simplify the configuration and use of 1:N communication scenarios within a machine network in the future.
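
A matching subscriber against such a local broker, for example an Eclipse Mosquitto instance on a PC in the machine network, might look like the following sketch; the topic names are again invented.

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Any device in the machine network can react to published values
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # local broker, eg, Mosquitto on a plant PC
client.subscribe("plant/line1/#")  # wildcard: all topics below line1
client.loop_forever()              # process incoming messages
```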

Figure 3: Publish/subscribe communication in the machine network.

Technologies for Industry 4.0 and IoT

Automation vendors are now beginning to provide users with components for simple and standardised integration into cloud-based communication and data services. For example, the IoT products within Beckhoff’s TwinCAT 3 automation software platform offer varied functionalities for exchanging process data by means of standardised publish/subscribe-based communication protocols and for accessing special data and communication services of public cloud service providers. Corresponding services can be hosted in public cloud systems, such as Microsoft Azure or Amazon Web Services, but can be used just as effectively in private cloud systems.

Using the standardised OPC UA communication protocol for data export means that data can be exported from the technologies of multiple vendors. Extended mechanisms also need to be provided, such as local buffering of I/O data in the event of an interrupted internet connection, as well as a monitoring function for connected fieldbuses.
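
Such local buffering follows the familiar store-and-forward idea. The sketch below illustrates the principle only; it is a generic Python outline using invented names, not the implementation of any particular product.

```python
from collections import deque

class BufferedPublisher:
    """Store-and-forward: queue samples while offline, flush on reconnect."""

    def __init__(self, publish, maxlen=10000):
        self.publish = publish              # callable that sends one sample upstream
        self.buffer = deque(maxlen=maxlen)  # bounded local buffer of unsent samples

    def send(self, sample):
        self.buffer.append(sample)
        self.flush()

    def flush(self):
        # Drop a sample from the buffer only after it was sent successfully,
        # so no data is lost while the internet connection is interrupted
        while self.buffer:
            try:
                self.publish(self.buffer[0])
                self.buffer.popleft()
            except ConnectionError:
                break  # still offline: keep the data and retry on the next call
```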

Analytics and machine learning

Once the data has been sent to a public or private cloud service, the next question is how that data can be processed. As previously mentioned, many public cloud providers offer various analytics and machine learning services that can be used for further examination of process data. Automation technology vendors may also offer their own analytics platforms, providing relevant mechanisms for data analysis and making it possible to record all process-related machine data precisely and cyclically.

Depending on requirements, this data can be stored for evaluation either locally on the machine processor or within a public or private cloud solution.
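
As a deliberately simple example of such an evaluation (pure Python, with invented cycle-time figures), outliers in aggregated process values can be flagged with a basic statistical threshold before more sophisticated machine learning services are brought in:

```python
import statistics

def anomalies(values, k=2.0):
    """Flag samples more than k standard deviations from the mean."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > k * sd]

# Invented cycle times (in seconds) aggregated from several controllers
cycle_times = [1.02, 0.98, 1.01, 0.99, 1.03, 1.71, 1.00]
print(anomalies(cycle_times))  # -> [1.71], a candidate fault event
```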

Conclusion

Industry 4.0 and the IoT are on everyone's minds, and these concepts become important wherever the realisation of innovative new business models places requirements on the underlying infrastructure. This drives the increased convergence of IT and automation technologies. Cloud-based data services can help implement such automation projects, as they save the machine manufacturer or end customer from having to provide the corresponding IT expertise and infrastructure. Hardware and software products are now available for integrating such cloud-based data services quickly and easily into a control project, facilitating comprehensive analysis of the recorded process data.
