Machine learning and industrial automation in manufacturing promise to overcome many of the industry's most pressing challenges, including diminishing contribution margins and an expected skilled labor shortage. By increasing output while maintaining quality standards, these technologies have the capacity to make production more efficient. With continued advances in algorithms, computing power, and data availability, machine learning use cases in manufacturing are quickly emerging.
As industrial automation plays an ever larger role in manufacturing, the deep insights machine learning can offer are crucial for production optimization. But before manufacturers can introduce a machine learning platform, they must first understand how these solutions operate in a production environment, and how to choose the right one for their needs.
There’s data from machines and sensors on the factory floor, operator input data, and data from ERP, MES, and quality systems. Traditionally, engineers have spent a significant amount of time gathering and transferring data from these disparate sources in order to analyze outcomes, identify issues, and recommend process improvements.
Factory data is interconnected: product runs, work orders, and batches with process parameters and offline quality metrics all inform each other. When data exists in isolation and there’s a problem—a quality failure or a machine breakdown—precious time is wasted gathering data.
It can take days or even weeks to conduct a root cause analysis or offer process recommendations because teams have to manually compare data from siloed sources. Delays in obtaining accurate analysis and insights can lead to extended downtime, material waste, increased scrap rates, and ultimately lower contribution margins for manufacturers.
Advancements in cloud computing, however, have made it easier to both store and process tremendous amounts of data. The advent of machine learning technologies has made it easier to analyze data from various disparate sources and determine actionable insights.
Machine learning applications in manufacturing leverage data gathered at different machines and sensors, and at different points in the manufacturing process along with real-time operator data, offline quality data, and data from historians and MES and ERP systems.
Machine learning for production optimization, quality control, and other applications allows manufacturers to identify and solve problems more quickly. Predictive quality analytics help manufacturers predict and prevent problems, while prescriptive analytics maximize quality and output by enabling faster action on insights.
Manufacturers can prevent quality failures and reduce extensive scrap rates with machine learning based predictive quality applications. By combining live production data with offline quality and MES/ERP system data, machine learning in manufacturing can identify patterns and relationships between different variables. For instance, an algorithm can determine the impact of melt pressure on product density and diameter—if pressure is too low, it can negatively affect product quality and increase scrap rates. Furthermore, models can be leveraged to identify leading indicators for quality failures.
Predictive alerts can notify operators any time such leading indicators are identified, such as when melt pressure is trending towards below normal thresholds. Factory personnel are provided the lead time necessary to take corrective action, fixing the melt pressure problem and avoiding extensive production of defective products. Machine learning technologies can also provide recommendations for settings that will allow production increases while maintaining quality standards without increasing scrap.
Consistency is key to maintaining a cohesive ecosystem that enables data-driven decision making. Historical data and live production floor data must be aligned. All data must be cleaned, formatted, contextualized, and organized into a taxonomy in order for machine learning technology to operate effectively.
A taxonomy—a hierarchy of labels and classifications derived from knowledge of the domain—provides context to the data. For example, while there might be separate metrics for machine temperature and material temperature, a taxonomy helps identify that these are both types of temperature metrics and also indicates which part of the process each controls or affects.
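As an illustrative sketch (the metric names, labels, and structure here are assumptions, not drawn from any specific platform), a taxonomy can be represented as a mapping from raw metric names to a type classification and the process stage they affect:

```python
# Hypothetical taxonomy: maps raw sensor/metric names to a classification
# (metric type and subtype) and the process stage they control or affect.
TAXONOMY = {
    "barrel_temp_zone_1": {"type": "temperature", "subtype": "machine",  "stage": "extrusion"},
    "melt_temp":          {"type": "temperature", "subtype": "material", "stage": "extrusion"},
    "melt_pressure":      {"type": "pressure",    "subtype": "material", "stage": "extrusion"},
    "line_speed":         {"type": "speed",       "subtype": "machine",  "stage": "haul-off"},
}

def metrics_of_type(metric_type):
    """Return all raw metric names classified under a given type."""
    return [name for name, info in TAXONOMY.items() if info["type"] == metric_type]

# Both machine and material temperatures are grouped under one type,
# even though they arrive as separate raw metrics.
print(metrics_of_type("temperature"))
```

A lookup like this is what lets downstream analytics treat machine temperature and material temperature as comparable quantities while still knowing which process stage each belongs to.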
It is also critical to align metrics with metadata such as the product, machine or quality state in order to make any analysis meaningful. For example, if a product tested offline is defective, but you don’t know which line or shift that product was produced on, it’s impossible to go back and find what other products might have been affected by the quality failure.
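A minimal sketch of that alignment, assuming records keyed by a shared batch identifier (all field names and values here are illustrative), shows how a failed offline test can be traced back to the line and shift that produced it:

```python
# Hypothetical production metadata and offline quality results,
# linked by a shared batch identifier.
production_runs = {
    "B-1042": {"line": "Line 3", "shift": "night", "machine": "EXT-2"},
    "B-1043": {"line": "Line 1", "shift": "day",   "machine": "EXT-1"},
}

quality_results = [
    {"batch": "B-1042", "passed": False, "defect": "diameter out of spec"},
    {"batch": "B-1043", "passed": True,  "defect": None},
]

def trace_defects(results, runs):
    """Attach line/shift/machine context to each failed quality result."""
    traced = []
    for r in results:
        if not r["passed"]:
            traced.append({**r, **runs.get(r["batch"], {})})
    return traced

for d in trace_defects(quality_results, production_runs):
    print(d["batch"], d["line"], d["shift"], d["defect"])
```

Without the batch-to-run link, the defective result in this example would be an orphan: there would be no way to identify the other products made on Line 3 during that night shift.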
In this step, different types of algorithms perform conversions, transformations, and complex calculations on the contextualized data. These calculations might be simple, like converting from Fahrenheit to Celsius, or more complex, such as computing windowed temporal aggregations.
One scenario from cable manufacturing is measuring the diameter of a product. You might measure the diameter from top to bottom (x) and then from left to right (y). The algorithm averages these two measurements and checks the result against an acceptable tolerance. For example, the cable should be 10 centimeters but may go up to 10.15 centimeters or down to 9.85 centimeters.
Measure the diameter of the cable: given the two perpendicular measurements x (top to bottom) and y (left to right), the average diameter is (x + y) / 2.
Computing a lagging mean and standard deviation: take the mean and the standard deviation over the last 5 minutes, giving control bands of mean ± 3 standard deviations.
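The calculations above can be sketched in a few lines of Python. The 10 cm ± 0.15 cm tolerance and the 5-minute window come from the examples; the one-sample-per-second rate used to size the window is an assumption:

```python
import statistics
from collections import deque

def avg_diameter(x, y):
    """Average the two perpendicular diameter measurements: (x + y) / 2."""
    return (x + y) / 2

def within_tolerance(d, target=10.0, tol=0.15):
    """Check the 10 cm spec: acceptable from 9.85 cm up to 10.15 cm."""
    return target - tol <= d <= target + tol

class RollingStats:
    """Lagging mean and standard deviation over a fixed sample window.

    Assuming one sample per second, window=300 covers the last 5 minutes.
    """

    def __init__(self, window=300):
        self.samples = deque(maxlen=window)

    def add(self, value):
        self.samples.append(value)

    def bands(self, k=3):
        """Return (mean - k*std, mean + k*std) over the current window."""
        m = statistics.fmean(self.samples)
        s = statistics.stdev(self.samples) if len(self.samples) > 1 else 0.0
        return m - k * s, m + k * s

print(avg_diameter(10.1, 9.9))   # average of the two measurements
print(within_tolerance(10.2))    # outside the 10.15 cm upper limit
```

The `deque(maxlen=...)` handles the "lagging" part automatically: each new sample pushes the oldest one out of the window.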
To leverage machine learning in manufacturing, you also need accessible and immediate insights through easy-to-understand formats such as charts, graphs, and other visuals.
Software platforms that leverage machine learning technology offer data visualizations that support the results via interactive dashboards. A comprehensive dashboard gives a snapshot of factory performance, including:
Output by product, line or shift
Products with the largest changeover
Reasons for unscheduled downtime
Machine learning builds models that are trained to address several production scenarios. An algorithm will analyze live and historical production data to identify patterns in behavior that have previously led to issues on the factory floor (including quality failures or unplanned downtime). It will then establish relationships, ranging from strong correlations to cause-and-effect constraints, between different process parameters and outcomes. Several algorithms and types of models, along with their parameter settings, can be used during this process.
The trained models are then validated against unseen data, specifically data that is known to exhibit the issues the model was trained to capture. This allows tuning of algorithm performance and generalization of the model, ensuring it can operate in a real production setting. Models are then deployed into a production operational environment, running against live data either in the cloud or on the edge. Real-time process models look for specific patterns or sets of conditions that indicate a quality or machine failure may happen in the future.
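As a deliberately simplified sketch of this train-then-validate flow (the run data and the single-threshold "model" are assumptions; real platforms fit far richer models over many parameters), one might learn from historical runs the line speed at or below which quality failures occurred, then check that rule against held-out runs known to include failures:

```python
# Historical runs: (line_speed, had_quality_failure). Values are illustrative.
train = [(52, False), (50, False), (48, True), (45, True), (51, False), (47, True)]

def fit_speed_threshold(runs):
    """'Train': find the highest line speed at which a failure was observed."""
    failure_speeds = [speed for speed, failed in runs if failed]
    return max(failure_speeds)

def predict_failure(speed, threshold):
    """Flag a run as at-risk if its speed is at or below the learned threshold."""
    return speed <= threshold

threshold = fit_speed_threshold(train)

# Validate against unseen runs that are known to include failures.
holdout = [(49, False), (46, True), (53, False)]
correct = sum(predict_failure(s, threshold) == failed for s, failed in holdout)
print(f"threshold={threshold}, holdout accuracy={correct}/{len(holdout)}")
```

The point of the holdout step is exactly what the text describes: the rule is judged on data it never saw, so a threshold that merely memorized the training runs would be exposed before deployment.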
Once trained, the models identify patterns in key process parameters whose behavior has previously led to problems on the factory floor, often serving as leading indicators.
When models are run against real-time production data, predictive analytics look for a specific set of conditions that indicate a quality or machine failure may happen. For example, machine learning models can predict that a quality failure will occur in 10 minutes because in the past when line speed has dropped, products have not met quality standards.
Models are validated by measuring them against historical and current performance data. They can also be validated “live,” in real time as production happens. Validation provides insight into how well models generalize: for example, whether past errors could have been avoided or outcomes improved, thereby informing current production.
Predictive alerts are then deployed to help factory personnel proactively take action and avoid problems. Alerts are generated when the model identifies patterns in the key parameters that indicate quality failures. For example, melt pressure needs to stay within a certain range for optimal quality; any time it goes above or below that range, an alert is generated.
Alerts can be customized based on factory floor conditions or combinations of metrics and their temporal behavior. An alert can be triggered if line speed drops five times over ten minutes but not if it only drops once. This prevents redundant or irrelevant alerts that are ignored or cause operators to stop production unnecessarily.
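The “five drops in ten minutes” rule above can be sketched as a simple debounced alert. Timestamps are in seconds, and the sliding-window event count is one possible implementation, not a description of any particular product:

```python
from collections import deque

class DebouncedAlert:
    """Fire only when at least `min_events` occur within `window` seconds."""

    def __init__(self, min_events=5, window=600):
        self.min_events = min_events
        self.window = window
        self.events = deque()

    def record(self, timestamp):
        """Record a line-speed drop; return True if the alert should fire."""
        self.events.append(timestamp)
        # Discard drops that fall outside the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.min_events

alert = DebouncedAlert()
print(alert.record(0))            # False: a single drop is ignored
for t in (60, 120, 180, 240):
    fired = alert.record(t)
print(fired)                      # True: fifth drop within ten minutes
```

A single drop never fires, which is exactly the behavior that prevents operators from being flooded with redundant alerts or stopping production unnecessarily.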
Machine learning applications in manufacturing go beyond predictive to prescriptive to help optimize production. By analyzing available data, machine learning technology is able to identify the best- and worst-performing segments, as well as the key variables that impact quality, performance, and utilization. Prescriptive analytics then recommends machine and component settings, along with performance targets for process parameters, to consistently replicate the most efficient runs while maintaining high production quality.
Automated reports highlight key differences between good and bad runs, along with the key parameters involved, helping operators maintain better process control across lines and across factories.
Coupling these prescriptive recommended settings with live monitoring and predictive models generates alerts that provide early warning of departures from optimized settings, allowing factory personnel to proactively address production issues.
For example, one manufacturer increased order efficiency by implementing machine learning recommended settings adjustments so that it could produce more in less time while maintaining quality—saving over 200 production hours. This allowed them to lower operating costs associated with individual orders as well as fill subsequent orders faster by reducing the opportunity cost of machine time.
The importance of assessing the various machine learning solutions available today before you make a commitment cannot be overstated.
To that end, here are the key elements to look for when choosing a machine learning application for your manufacturing environment.
The best of both worlds—a cloud-edge hybrid offers the advantages of cloud storage and elastic computation, along with local accessibility. This combination offers the utmost in flexibility along with the choice of deploying models where you need them.
Cloud offers greater computing power and economies of scale for data storage and machine learning model training, which require large-scale data analysis, the building of sophisticated models, and hyperparameter optimization.
Edge offers hyper-local storage, which reduces latency and removes dependency on external connectivity—meaning high reliability and quicker insights. This is important for situations where business continuity is crucial.
New machine learning platforms are flexible and can be integrated into existing infrastructures, which is great news for manufacturers who don’t want to create an intelligent factory from scratch.
The optimal machine learning solution will have the flexibility to work with your current applications, and help you quickly realize value from new products and versions. It will also be cost-effective for your business use cases, and quickly provide return on your investment.
Also, remember that you’ll need the right kind of data, and enough of it, to run any machine learning platform and get value from it.