Edge computing for industrial AIoT applications
The advent of the Industrial Internet of Things (IIoT) has allowed a wide range of businesses to collect massive amounts of data from previously untapped sources and explore new avenues for improving productivity. By obtaining performance and environmental data from field equipment and machinery, organisations now have even more information at their disposal to make informed business decisions. Unfortunately, there is far too much IIoT data for humans to process alone, so most of this information goes unanalysed and unused [1]. Consequently, it is no wonder that businesses and industry experts are turning to AI and machine learning (ML) solutions for IIoT applications to gain a holistic view and make smarter decisions more quickly.
Most IIoT data goes unanalysed
The staggering number of industrial devices being connected to the internet continues to grow year after year and is expected to reach 41.6 billion endpoints in 2025, according to a 2019 report by IDC. What’s even more mind-boggling is how much data each device produces. In fact, manually analysing the information generated by all the sensors on a manufacturing assembly line could take a lifetime [2]. It’s no wonder that less than half of an organisation’s structured data is actively used in making decisions, and less than 1% of its unstructured data is analysed or used at all. In the case of IP cameras, only 10% of the nearly 1.6 exabytes of video data generated each day gets analysed, according to Intel. These figures indicate a staggering oversight in data analysis despite our ability to collect more and more information. This inability of humans to analyse all of the data we produce is precisely why businesses are looking for ways to incorporate AI and ML into their IIoT applications.
Imagine if we relied solely on human vision to manually inspect tiny defects on golf balls on a manufacturing assembly line for eight hours each day, five days a week. Even if you could afford a whole army of inspectors, each person is still naturally susceptible to fatigue and human error. Likewise, manual inspection of high-voltage powerlines and substation equipment exposes personnel to safety risks.
Combining AI with IIoT
The ‘Artificial Intelligence of Things’ (AIoT) refers to the adoption of AI technologies in IoT applications for the purposes of improving operational efficiency, human-machine interactions, and data analytics and management [3]. In industrial applications, the AIoT offers the ability to reduce labour costs, reduce human error and optimise preventive maintenance. But how exactly does AI fit into the IIoT?
Since AI is such a broad discipline, the following discussion focuses on how computer vision and AI-powered video analytics, subfields of AI often used in conjunction with ML, are used for classification and recognition in industrial applications. From data reading in remote monitoring and preventive maintenance, to identifying vehicles for controlling traffic signals in intelligent transportation systems, to agricultural drones and outdoor patrol robots, to automatic optical inspection of tiny defects in golf balls and other products, computer vision and video analytics are unleashing greater productivity and efficiency for industrial applications.
Moving AI to the IIoT edge
As previously mentioned, the proliferation of IIoT systems is generating massive amounts of data. For example, the multitude of sensors and devices in a large oil refinery generates 1 TB of raw data per day. Immediately sending all this raw data back to a public cloud or private server for storage or processing would consume considerable bandwidth and power and place heavy demands on network availability. In many industrial applications, especially highly distributed systems located in remote areas, constantly sending large amounts of data to a central server is not possible. Mission-critical industrial applications must be able to analyse raw data as quickly as possible.
In order to reduce latency, reduce data communication and storage costs, and increase network availability, IIoT applications are moving AI and machine learning capabilities to the edge of the network to enable more powerful pre-processing capabilities directly in the field. More specifically, advances in edge computing processing power have enabled IIoT applications to take advantage of AI decision-making capabilities in remote locations. Indeed, by connecting your field devices to edge computers equipped with powerful local processors and AI, you no longer need to send all of your data to the cloud for analysis.
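To make the pattern concrete, here is a minimal sketch (in plain Python, with hypothetical names and thresholds) of pre-processing at the edge: the device consumes raw sensor samples locally and forwards only compact summaries and alerts upstream, rather than streaming every reading to the cloud.

```python
import statistics

# Hypothetical illustration only: aggregate raw vibration samples at the edge
# and publish a small summary per window instead of every raw reading.
VIBRATION_ALERT_MM_S = 7.1        # example alarm limit; tune for the real asset
WINDOW_SIZE = 1000                # samples per summary

def summarise_window(samples):
    """Reduce a window of raw samples to a compact payload for the cloud."""
    peak = max(samples)
    return {
        "mean": statistics.fmean(samples),
        "peak": peak,
        "stdev": statistics.pstdev(samples),
        "alert": peak > VIBRATION_ALERT_MM_S,
    }

def process_at_edge(raw_stream, publish):
    """Consume raw samples locally; publish() sends only summaries upstream."""
    window = []
    for sample in raw_stream:
        window.append(sample)
        if len(window) >= WINDOW_SIZE:
            publish(summarise_window(window))
            window.clear()

# Example usage with simulated data and a print-based publisher:
if __name__ == "__main__":
    import random
    samples = (random.gauss(3.0, 1.5) for _ in range(5000))
    process_at_edge(samples, publish=print)
```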
Choosing the right edge computer for industrial AIoT
When it comes to bringing AI to your industrial IoT applications, there are several key issues you need to consider. Even though most of the work involved with training your AI models still takes place in the cloud, you’ll eventually need to deploy your trained inferencing models in the field using AIoT edge computing. In order to effectively run AI models and algorithms, industrial AIoT applications require a reliable hardware platform at the edge. To choose the right hardware platform for your AIoT application, consider the following factors.
Processing requirements for different phases of AI implementation
Generally speaking, processing requirements for AIoT computing are concerned with how much computing power you need and whether you need a CPU or accelerator. Since each of the following three phases in building an AI edge computing application uses different algorithms to perform different tasks, each phase has its own set of processing requirements.
Data collection
The goal of this phase is to acquire large amounts of information to train the AI model. Raw, unprocessed data alone is not helpful because the information could contain duplications, errors and outliers. Pre-processing the collected data at this initial phase to identify patterns, outliers and missing information also allows you to correct errors and biases. Depending on the complexity of the data collected, the computing platforms used for data collection are typically based on Arm Cortex or Intel Atom/Core processors. In general, I/O and CPU specifications (rather than the GPU) are more important for performing data collection tasks.
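As a rough illustration of this kind of pre-processing, the sketch below uses pandas to deduplicate records, fill short gaps and flag outliers in a batch of collected readings; the file and column names are assumptions made for the example, not part of any specific product.

```python
import pandas as pd

# Pre-processing sketch for collected sensor data (hypothetical CSV and columns).
df = pd.read_csv("sensor_log.csv", parse_dates=["timestamp"])

# Remove exact duplicate records, e.g. from retransmissions.
df = df.drop_duplicates()

# Fill short gaps in readings by carrying the last valid value forward.
df["temperature"] = df["temperature"].ffill()

# Flag (rather than silently drop) values outside the 1st-99th percentile range
# so they can be reviewed for sensor faults or genuine anomalies.
low, high = df["temperature"].quantile([0.01, 0.99])
df["is_outlier"] = ~df["temperature"].between(low, high)

df.to_csv("sensor_log_clean.csv", index=False)
```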
Training
Training your AI model involves selecting a machine learning model and training it on your collected and pre-processed data. Advanced neural networks and resource-hungry machine learning or deep learning algorithms demand powerful processing capabilities, such as GPUs that support parallel computing, to analyse large amounts of training data. During this process, you also need to evaluate and tune the parameters to ensure accuracy. Many training models and tools are available for you to choose from, and training is usually performed on designated AI training machines or cloud computing services.
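A minimal training sketch in PyTorch (one of the frameworks discussed later) might look like the following; the random placeholder data, tiny network and hyperparameters are illustrative assumptions, not a recommended model.

```python
import torch
from torch import nn

# Placeholder data standing in for collected, pre-processed samples:
# 1024 samples with 16 features each, and two classes (e.g. pass/defect).
X = torch.randn(1024, 16)
y = torch.randint(0, 2, (1024,))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):                      # full-batch training for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(X.to(device)), y.to(device))
    loss.backward()
    optimizer.step()

# Evaluate; in practice use a held-out validation split, not the training data.
with torch.no_grad():
    accuracy = (model(X.to(device)).argmax(dim=1) == y.to(device)).float().mean()
print(f"final loss {loss.item():.3f}, accuracy {accuracy.item():.2%}")
```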
Inferencing
The final phase involves deploying the trained AI model on the edge computer so that it can make inferences and predictions based on newly collected and pre-processed data quickly and efficiently. Since the inferencing stage generally consumes fewer computing resources than training, a CPU or lightweight accelerator may be sufficient for your AIoT application. Nonetheless, you will need a conversion tool, such as the Intel OpenVINO toolkit or NVIDIA CUDA, to adapt the trained model to run on specialised edge processors and accelerators. Inferencing also involves several different edge computing levels and requirements, which are discussed in the following section.
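One common conversion route (an assumption for illustration, not a workflow prescribed here) is to export the trained model to the ONNX format, which toolkits such as OpenVINO can then optimise for the target edge processor. The model architecture and file names below are hypothetical.

```python
import torch
from torch import nn

# Rebuild the (hypothetical) trained network and load its saved weights.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.load_state_dict(torch.load("defect_classifier.pt"))  # hypothetical weights file
model.eval()

# Export to ONNX; a dummy input defines the expected input shape.
dummy_input = torch.randn(1, 16)
torch.onnx.export(
    model,
    dummy_input,
    "defect_classifier.onnx",
    input_names=["features"],
    output_names=["logits"],
)
# Toolkits such as OpenVINO can then convert and optimise the .onnx file for the
# specific edge CPU, GPU or VPU it will run on.
```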
Edge computing levels
Although AI training is still mainly performed in the cloud or on local servers, data collection and inferencing necessarily take place at the edge of your network. Moreover, since inferencing is where your trained AI model does most of the work to accomplish the application objectives, you need to determine which of the following levels of edge computing you need in order to choose the appropriate processor.
Low edge computing level
Transferring data between the edge and the cloud is not only expensive but also time-consuming. With low edge computing, you only send a small amount of useful data to the cloud, which reduces latency, bandwidth usage, data transmission fees, power consumption and hardware costs. An Arm-based platform without accelerators can be used on IIoT devices to collect and analyse data and make quick inferences or decisions.
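For example, quick accelerator-free inference on an Arm CPU could look like the sketch below, which assumes the model has been converted to TensorFlow Lite (one lightweight option among several); the model file, input values and threshold are hypothetical.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # CPU-only runtime suited to Arm boards

# Load a (hypothetical) quantised anomaly-detection model on the edge device.
interpreter = Interpreter(model_path="anomaly_detector.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One pre-processed sensor sample (placeholder values and shape).
reading = np.array([[21.3, 0.42, 7.8]], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], reading)
interpreter.invoke()
score = float(interpreter.get_tensor(output_details[0]["index"])[0][0])

# Forward only the verdict (a few bytes), not the raw data, to the cloud.
if score > 0.5:
    print("anomaly detected: sending alert upstream")
```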
Medium edge computing level
Medium edge computing covers a wide range of data complexities, depending on the AI model and the performance requirements of the use case. This level of inference can handle multiple IP camera streams for computer vision or video analytics at sufficient frame rates. Most industrial edge computing applications also need to factor in a limited power budget or a fanless design for heat dissipation. A high-performance CPU, entry-level GPU or VPU may be suitable at this level. For instance, Intel Core i7 series CPUs combined with the OpenVINO toolkit and software-based AI/ML accelerators offer an efficient computer vision solution that can perform inference at the edge.
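A simplified frame-processing loop for such a camera stream is sketched below; the RTSP address is a placeholder, and run_inference() stands in for whatever compiled vision model (OpenVINO, GPU or otherwise) is actually deployed.

```python
import cv2

def run_inference(frame) -> bool:
    """Placeholder for a compiled vision model; returns True if a defect is detected."""
    return False

cap = cv2.VideoCapture("rtsp://192.168.1.10/stream1")   # hypothetical IP camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break                                           # stream ended or dropped
    frame = cv2.resize(frame, (640, 384))               # match the model's input size
    if run_inference(frame):
        cv2.imwrite("defect_frame.jpg", frame)          # keep evidence locally
cap.release()
```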
High edge computing level
High edge computing involves processing heavier loads of data for AI expert systems that use more complex pattern recognition. High edge computing generally relies on accelerators such as a high-end GPU, VPU, TPU or FPGA, which consume more power (200 W or more) and generate excess heat. Since the necessary power consumption and heat generated may exceed the limits at the far edge of the network, high edge computing systems are often deployed in near-edge sites to perform these tasks.
Development tools
Several tools are available for various hardware platforms to help speed up the application development process and improve the overall performance of AI and machine learning algorithms.
Deep learning frameworks
Consider using a deep learning framework, which is an interface, library or tool that allows you to build deep learning models more easily and quickly, without getting into the details of the underlying algorithms. Deep learning frameworks provide a clear and concise way of defining models using a collection of pre-built and optimised components. The three most popular are PyTorch, TensorFlow and Caffe.
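As an illustration of how concise these frameworks can be, the sketch below assembles a small image classifier from pre-built Keras (TensorFlow) layers; the input size, layer widths and two-class output are arbitrary placeholders.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small image classifier built entirely from pre-built, optimised components.
model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(96, 96, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```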
Hardware-based accelerator toolkits
AI accelerator toolkits are available from hardware vendors and are specially designed to accelerate artificial intelligence applications, such as machine learning and computer vision, on their platforms.
Examples include the Open Visual Inference and Neural Network Optimization (OpenVINO) toolkit from Intel — designed to help developers build robust computer vision applications on Intel platforms — and the Compute Unified Device Architecture (CUDA) from NVIDIA for enabling high-performance parallel computing for GPU-accelerated applications.
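As a brief sketch of the OpenVINO side, a converted model can be loaded and run in a few lines; the example below assumes a model already converted to OpenVINO's IR format and uses the 2022-era Python runtime API (which may differ in other toolkit versions), with a hypothetical file name and input shape.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_classifier.xml")          # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")   # or an Intel GPU/VPU device

# Placeholder input tensor in NCHW layout; real code would feed camera frames.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = compiled([frame])
logits = results[compiled.output(0)]
print("predicted class:", int(np.argmax(logits)))
```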
Environmental considerations
Last but not least, you also need to consider the physical location where your application will be deployed. Industrial applications operating outdoors or in harsh environments, such as oil and gas, mining, power or outdoor robotics, should have a wide operating temperature range and appropriate heat dissipation mechanisms to ensure reliability in blistering hot or freezing cold weather conditions. Certain applications also call for industry-specific designs or approvals, such as fanless operation, explosion-proof construction and vibration resistance. And since many real-world applications are deployed in space-limited cabinets and subject to size limitations, small form factor edge computers are preferred. Highly distributed industrial applications in remote sites may also require communications over a reliable cellular or Wi-Fi connection.
Conclusion
Enabling AI capabilities at the edge allows you to effectively improve operational efficiency and reduce risks and costs for your industrial applications. Choosing the right computing platform for your industrial AIoT application should also address the specific processing requirements at the three phases of implementation: data collection, training and inference. For the inference phase, you also need to determine the edge computing level (low, medium or high) so that you can select the most suitable type of processor.
By carefully evaluating the specific requirements of your AIoT application at each phase, you can choose the best-suited edge computer to sufficiently and reliably perform industrial AI inferencing tasks in the field.
References
1. IEEE 2020, “IoT Data Needs Artificial Intelligence”, IEEE Innovation at Work, <https://innovationatwork.ieee.org/iot-data-artificial-intelligence>
2. Stack T 2018, “Internet of Things (IoT) Data Continues to Explode Exponentially. Who Is Using That Data and How?”, Cisco Blogs, <https://blogs.cisco.com/datacenter/internet-of-things-iot-data-continues-to-explode-exponentially-who-is-using-that-data-and-how>
3. TechTarget 2019, “Artificial Intelligence of Things (AIoT)”, IoT Agenda, <https://internetofthingsagenda.techtarget.com/definition/Artificial-Intelligence-of-Things-AIoT>