LNS Research recently released the Analytics That Matter report, centered on the top three reasons industrial analytics fail. Number one on that list was data quality issues. That finding isn't surprising, given that analytics and data science teams spend an estimated 70-80% of their time on data preparation and cleaning.
Aside from cleanliness, core data requirements include that the data is in the correct format, has the right context (metadata), and is actually available when it's needed. Even a manufacturer with a data science team is likely missing at least one of these three requirements.
Required formatting is highly specific to job role, line, product, and more. This is why it's important to have a mechanism that lets many people easily build reports without relying on IT, systems integrators, or data engineers. In addition, a true advanced industrial analytics solution should enable deep-dive drill-down capabilities, presented in the correct format for the specific problem at hand.
Contextual data (sometimes called intersectional data or metadata) is the linking of siloed data sources with time-series data. For example, say you wanted to compare the run speed of a line against quality measurements, and look at any variances between shifts over the last week. If you had the data infrastructure set up to bring those siloed sources together, this would be a short query.
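To make the "short query" concrete, here is a minimal sketch of that shift comparison using pandas. All table and column names are hypothetical; in practice the run-speed and quality data would come from a historian and a quality system rather than inline samples, but the join-and-group logic is the same:

```python
# Hypothetical sketch: linking siloed data sources on shared keys.
# All names (columns, lines, shifts, values) are illustrative only.
import pandas as pd

# Time-series run speed, e.g. from a process historian (sample data)
speed = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-08 06:00", "2024-01-08 14:00",
                                 "2024-01-09 06:00", "2024-01-09 14:00"]),
    "line": "Line 1",
    "run_speed_ft_min": [410, 395, 420, 388],
})

# Quality measurements, e.g. from a quality/LIMS system (sample data)
quality = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-08 06:00", "2024-01-08 14:00",
                                 "2024-01-09 06:00", "2024-01-09 14:00"]),
    "line": "Line 1",
    "defect_rate_pct": [1.2, 2.8, 1.1, 3.0],
})

# Shift schedule, e.g. from a business system (sample data)
shifts = pd.DataFrame({"shift": ["Day", "Night"], "start_hour": [6, 14]})

# Link the siloed sources: merge on shared keys (timestamp, line),
# then tag each row with its shift based on the hour it started
merged = speed.merge(quality, on=["timestamp", "line"])
merged["shift"] = merged["timestamp"].dt.hour.map(
    dict(zip(shifts["start_hour"], shifts["shift"]))
)

# Compare average run speed and defect rate between shifts
by_shift = merged.groupby("shift")[["run_speed_ft_min", "defect_rate_pct"]].mean()
print(by_shift)
```

With contextual data already linked like this, the whole comparison is a merge and a group-by; without it, each source has to be exported, aligned, and reconciled by hand.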
This has been challenging for manufacturing companies because it's very difficult to match data across different process, production, and business systems without the right partners. As a result, drilling down into problems or surfacing product-line opportunities is extremely resource-intensive and expensive.
Fortunately, Oden automatically cleans, enriches, and transforms raw machine data into contextual reports, making it fast and efficient to get the insights you and your team need.
Accessibility and availability are key to maximizing the impact of data. Delays caused by time-intensive data cleansing can mean thousands of pounds of scrap produced and decreased utilization. Most manufacturers find that stakeholders are served too much irrelevant or overly complicated data for teams to act on. The data is often presented in a way that is difficult to understand, or it simply isn't trusted.
So why do these data challenges happen? Creating a common data model that enables data cleanliness, accessibility, and formatting is extremely expensive. Getting legacy systems to "speak the same data language" is a huge undertaking that many teams don't have time for amid their day-to-day firefighting. There are also many ongoing costs to consider, including data engineering, maintenance, and hosting. If your manufacturing company doesn't have a large IT organization and is under $10 billion in annual revenue, it's unlikely that building this out will ever be cost-effective.
And trying to build advanced industrial analytics on existing tools, such as machine control and ERP systems, or through systems integrators, isn't an effective path forward. The amount of configuration, custom work, and ongoing maintenance quickly spirals out of control. This is a complicated problem that truly requires a specialized partner.