Transforming industrial processes with AI-based anomaly detection
Industrial processes and machinery are relied upon to be predictable and precise. Unexpected patterns in sensor data, known as anomalies, may indicate a problem such as a faulty component or a degraded sensor. AI-based anomaly detection helps engineers identify these potential problems early, which enables them to optimise maintenance schedules and improve process efficiency. With 86% of manufacturing executives recognising that smart factories will drive competitiveness in the next five years, AI is set to play an increasingly important role in manufacturing.
As machine complexity has increased in modern factories, traditional anomaly detection methods have proven insufficient. Engineers and technicians used to rely on manual data inspection or on automated alerts triggered when sensor values crossed defined thresholds. Neither approach scales: engineers cannot analyse thousands of sensors at once, and simple thresholds inevitably miss anomalies that manifest as complex, hidden patterns across many sensors.
Because of these challenges, today’s engineers in the manufacturing industry are using AI to improve the scale and accuracy of anomaly detection. AI algorithms can be trained on massive amounts of data from thousands of sensors to pinpoint complex anomalies that humans cannot identify by eye. By combining the scale of AI with the contextual domain knowledge of engineers, manufacturing organisations can create a comprehensive anomaly detection solution.
Designing an AI-based anomaly detection solution
Designing an AI-based anomaly detection solution is a comprehensive process, from conceptualisation and data gathering to deployment and integration. Engineers must have a deep understanding of both algorithm development and the operational environment to develop a solution that can effectively identify potential issues.
Planning and data gathering
Designing an AI-based anomaly detection solution begins with defining the problem. This involves assessing the available sensor data, the components or processes to be monitored, and the types of anomalies that could occur. For organisations new to AI, it is important to start with a tightly scoped proof-of-concept project whose success will provide clear value to the organisation before moving on to larger initiatives.
High-quality data is crucial for AI systems. Engineers must first define what constitutes an anomaly and the conditions that categorise data as anomalous. Gathering data involves using sensors to continuously monitor equipment and processes, supplemented by manual checks to ensure data accuracy.
Data exploration and preprocessing
Data for anomaly detection typically comes from sensors measuring temperature, pressure, vibration, voltage, and other quantities collected over time. It may also include related information such as environmental data, maintenance logs, and operational parameters. The first step in designing an anomaly detection algorithm is organising and preprocessing the data so that it is suitable for analysis. This may include reformatting and restructuring the data, extracting the pieces relevant to the problem, handling missing values, and removing outliers, as in the sketch below.
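The following MATLAB sketch shows what these preprocessing steps might look like using core functions such as retime, fillmissing, rmoutliers, and normalize. The timetable sensorData and its variable names (Temperature, Pressure, Vibration) are hypothetical placeholders, not part of any specific workflow described here.

% Minimal preprocessing sketch. Assumes "sensorData" is a timetable of
% raw readings with hypothetical variables Temperature, Pressure, and
% Vibration, sampled at irregular intervals.

% Restructure: resample onto a regular one-minute time grid
sensorData = retime(sensorData, 'regular', 'linear', 'TimeStep', minutes(1));

% Keep only the variables relevant to this problem
sensorData = sensorData(:, {'Temperature', 'Pressure', 'Vibration'});

% Fill any remaining missing values by linear interpolation
sensorData = fillmissing(sensorData, 'linear');

% Remove gross outliers (e.g., sensor glitches) before training
sensorData = rmoutliers(sensorData);

% Normalise each channel to zero mean and unit variance
sensorData{:, :} = normalize(sensorData{:, :});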
The next step is to select an anomaly detection technique, which requires assessing the characteristics of the data, the nature of the anomalies, and the available computational resources.
Model selection and training
There are many approaches to training an AI model for anomaly detection, and it is important to experiment with different techniques to see what works best for a specific dataset. At a high level, AI techniques can be divided into supervised and unsupervised learning approaches depending on the type of data available.
1. Supervised learning
Supervised learning is used for anomaly detection when chunks of the historical data can be clearly labelled as normal or anomalous. Labelling is often done manually by engineers, who cross-reference the data with maintenance logs or historical observations. By training on this labelled dataset, the supervised learning model learns the relationships between patterns in the data and their corresponding labels. Tools like the Classification Learner app in MATLAB help engineers experiment with multiple machine learning methods at once to see which model performs best, and the same workflow can also be scripted, as sketched below. A trained model can then predict whether a new chunk of sensor data is normal or anomalous.
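As a rough illustration of the scripted equivalent, this sketch trains one candidate model, a bagged tree ensemble, on labelled feature windows. The variables features and labels are hypothetical, and fitcensemble is only one of the many methods the Classification Learner app can compare.

% Hypothetical inputs: "features" is an N-by-P matrix with one row per
% windowed chunk of sensor data; "labels" is an N-by-1 categorical
% vector with values "normal" or "anomalous" from maintenance records.

% Hold out 20% of the labelled windows for later testing
cv = cvpartition(labels, 'HoldOut', 0.2);
XTrain = features(training(cv), :);  yTrain = labels(training(cv));
XTest  = features(test(cv), :);      yTest  = labels(test(cv));

% Train a bagged ensemble of decision trees (one of many possible models)
mdl = fitcensemble(XTrain, yTrain, 'Method', 'Bag');

% Predict labels for unseen windows and estimate test accuracy
yPred = predict(mdl, XTest);
accuracy = mean(yPred == yTest);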
2. Unsupervised learning
Many organisations do not have the labelled anomalous data required for a supervised learning approach. This may be because anomalous data has not been archived, or because anomalies do not occur often enough for a large training dataset. When most or all of the training data is normal, unsupervised learning is needed.
In an unsupervised learning approach, the model is trained solely on normal data to learn its characteristics, and any new data that falls outside the normal range is flagged as an anomaly. Unsupervised models can analyse sensor data to identify unusual patterns that may indicate a problem, even if that type of failure has never been encountered or labelled before.
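One common unsupervised technique is the isolation forest, available in Statistics and Machine Learning Toolbox (R2021b or later). The sketch below is a minimal example; XNormal is a hypothetical matrix of feature windows collected during normal operation, XNew holds new windows to score, and the contamination fraction is an assumed tuning value.

% Fit an isolation forest to data assumed to be (mostly) normal
[forest, ~, scores] = iforest(XNormal, 'ContaminationFraction', 0.01);

% Flag incoming windows whose anomaly score exceeds the learned threshold
[isAnomalous, newScores] = isanomaly(forest, XNew);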
3. Feature engineering
Although some AI models are trained on raw sensor data, it is often more effective to first extract useful quantities from the data, a process called feature engineering. Well-chosen features help AI models learn the underlying patterns more efficiently. Experienced engineers often already know which features are important to extract from the sensor data. Predictive Maintenance Toolbox provides interactive tools for extracting and ranking the most relevant features in a dataset to enhance the performance of supervised or unsupervised AI models.
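For illustration, the sketch below computes a few common statistical features over fixed-length windows of a vibration signal. The window length and the choice of statistics are assumptions, and kurtosis requires Statistics and Machine Learning Toolbox; in practice, the Diagnostic Feature Designer app in Predictive Maintenance Toolbox can generate and rank such features interactively.

winLen = 1024;                                  % samples per window (assumed)
nWin   = floor(numel(vibration) / winLen);
features = zeros(nWin, 4);

for k = 1:nWin
    w = vibration((k-1)*winLen + 1 : k*winLen);
    features(k, :) = [mean(w), std(w), ...      % central tendency, spread
        sqrt(mean(w.^2)), kurtosis(w)];         % energy, impulsiveness
end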
Some types of data, such as images or text logs, benefit from deep learning approaches that can extract patterns automatically without requiring explicit feature extraction. These deep learning approaches are powerful, but also require larger training datasets and computational resources.
4. Validation and testing
Validating and testing AI models ensures their reliability and robustness. Typically, the data is split into three parts: training, validation, and test sets. The training and validation data are used to tune the model parameters during the training phase, and the test data is used after training to measure performance on unseen data. Engineers can also evaluate the model using performance metrics such as precision and recall, and fine-tune it to meet the needs of the specific anomaly detection problem. These metrics are especially informative here because anomalies are rare, so overall accuracy alone can be misleading.
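Continuing the hypothetical supervised example, this sketch derives precision and recall from a confusion matrix, treating "anomalous" as the positive class.

% Rows of C are true classes, columns are predicted classes
[C, order] = confusionmat(yTest, yPred);
idx = order == 'anomalous';            % position of the anomaly class

tp = C(idx, idx);                      % correctly flagged anomalies
fp = sum(C(~idx, idx));                % normal windows flagged anomalous
fn = sum(C(idx, ~idx));                % missed anomalies

precision = tp / (tp + fp);            % how trustworthy each alarm is
recall    = tp / (tp + fn);            % how many anomalies are caught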
5. Deployment and integration
A trained and tested AI model becomes useful once it is deployed in operation and begins making predictions on new data. When selecting a deployment environment, engineers consider factors such as computational needs, latency, and scalability. Options range from edge devices close to the manufacturing process to on-premises servers and cloud platforms that offer nearly unlimited computational power at the cost of higher latency. Deployment tools like MATLAB Compiler and MATLAB Coder enable engineers to generate standalone applications and code that can be integrated into other software systems.
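As one sketch of the MATLAB Coder route, the example below saves the earlier hypothetical model and wraps prediction in an entry-point function from which C code can be generated. File and function names are illustrative, not prescribed.

% Save the trained model in a code-generation-compatible format
saveLearnerForCoder(compact(mdl), 'anomalyModel');

% --- detectAnomaly.m (separate file): entry point for code generation ---
function label = detectAnomaly(x) %#codegen
    % Load the saved model and classify one preprocessed feature window
    mdl = loadLearnerForCoder('anomalyModel');
    label = predict(mdl, x);
end

% Generate C code from the MATLAB command line (feature width assumed 4):
%   codegen detectAnomaly -args {zeros(1, 4)}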
Integration requires developing APIs for access to the model’s predictions and establishing data pipelines to ensure the model receives properly formatted and preprocessed input. Integration ensures the model works with other components of the application or system and delivers its full value.
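A data pipeline can be as simple as a scoring function that reproduces the training-time preprocessing before calling the model. The sketch below is hypothetical glue code in that spirit; mu and sigma are assumed to be the normalisation parameters saved from training, and the feature set matches the earlier feature-engineering sketch.

function label = scoreWindow(rawWindow, mdl, mu, sigma)
    % Apply the normalisation learned during training, not new statistics
    w = (rawWindow - mu) ./ sigma;

    % Recompute the same features the model was trained on (assumed set;
    % kurtosis requires Statistics and Machine Learning Toolbox)
    x = [mean(w), std(w), sqrt(mean(w.^2)), kurtosis(w)];

    % Return the deployed model's prediction for this window
    label = predict(mdl, x);
end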
Conclusion
AI-based anomaly detection is a significant advancement in the quest for manufacturing efficiency and cost-effectiveness. AI coupled with the expertise of engineers and the latest technological advancements enables manufacturers to significantly reduce the incidence of defects, optimise maintenance schedules, and enhance overall productivity. Integrating AI into manufacturing processes may be complex, but the potential rewards in terms of efficiency, cost savings, and competitive advantage are immense. As the manufacturing industry evolves, the role of AI in driving innovation and operational excellence will undoubtedly continue to grow.