Embracing AI to increase equipment effectiveness

Since data is the new oil, manufacturing companies are sitting on a real oil well. The machines they use to make their products also generate a wealth of data. Using data science and artificial intelligence, they can unlock this hidden treasure and dramatically improve their operations.

Almost all electronic devices today are equipped with one or more sensors. They collect a wealth of data - about how the systems are doing, what they are doing and in what conditions they are doing it. All this data can be of great value to your business. However, most of this potential is currently left untapped. This is a shame, because this data gives a company a unique opportunity to improve what is known as overall equipment effectiveness (OEE).

OEE is a measure of how well your production process is utilized in terms of facilities, time and materials, compared to its full potential, during the periods it is scheduled to run. It identifies the percentage of production time that is actually productive. An OEE of 100 percent means you are making flawless products (100 percent quality), at maximum speed (100 percent performance), without interruption (100 percent availability). By measuring OEE and the underlying losses, you gain important insights on how to systematically improve your production process.
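The three factors and their product can be computed directly from basic production counts. As a small sketch (the shift numbers below are hypothetical):

```python
def oee(planned_time_min: float, run_time_min: float,
        ideal_cycle_time_min: float, total_count: int, good_count: int) -> float:
    """Compute OEE as availability x performance x quality."""
    availability = run_time_min / planned_time_min                      # share of planned time actually running
    performance = (ideal_cycle_time_min * total_count) / run_time_min  # actual vs. ideal production speed
    quality = good_count / total_count                                  # share of output without defects
    return availability * performance * quality

# Hypothetical 8-hour shift: 480 min planned, 432 min actually running,
# ideal cycle time of 1 min/part, 410 parts produced, 400 of them good.
print(round(oee(480, 432, 1.0, 410, 400), 3))  # -> 0.833, i.e. an OEE of about 83%
```

Because the factors multiply, even modest losses compound: 90 percent on each factor already yields an OEE below 73 percent.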

Setting up machine monitoring

Measuring OEE means pulling data from your systems and sending it to the cloud for analysis. This comes with a number of challenges. One major obstacle is sentiment: perhaps there is a feeling in your company that you would rather not allow your data to be stored outside your own organization, especially data that could reveal business-sensitive information. However, with the right security measures, the risk of your data falling into the wrong hands can be minimized, while anonymization can adequately address any privacy concerns. And you can always decide not to disclose the data that really gives you a competitive advantage.

Once this sentiment has been addressed, there is still the challenge of setting up the data processing pipeline. Your machine park likely contains systems from different vendors, each with their own control and communication technology, of varying ages, from new with modern capabilities to decades old with limited control and connectivity. At first, it will take some effort to align your systems and their data flows and bring them together into one environment where you can assess your OEE.

Once you have the infrastructure in place and your pipeline operational, there is the challenge of making sense of all the data that comes from it. This may look like a mountain of work, taking a lot of time and effort to climb. But with specialized help and intelligent use of smart tools, you can actually start small and manageable.

Connection to the network

Creating a data processing pipeline starts with connecting your machine park to your corporate network. Next, the data is extracted from the machine control systems and sent to the cloud, where it is securely stored for further analysis. There is a wide range of data analytics tools available that can be used to turn your data into valuable information, fed back to a dashboard on your personal computer.

The newer systems in your machine park are likely equipped with sensors and wireless connectivity that allow remote access to the machine's sensor data. Older machines may need to be retrofitted with such sensors and network connectivity. The hardware required for this is inexpensive and easy to install.

Your systems may not be equipped to provide all the data you need to adequately assess your OEE. For example, they may not be measuring an important characteristic or the sampling rate may be too low. Retrofitting sensors provides a simple way to address such shortcomings.

Pre-processing of data flows

Different systems use different communication protocols, such as Modbus, MTConnect or OPC UA. In order to correlate data from different sources, the different data streams must be structured in a uniform way. Here, artificial intelligence (AI) plays an important role. Machine learning (ML) algorithms can be developed that analyze the raw data stored in the cloud for (recurring) patterns and structure it accordingly. For example, unsupervised learning infers the organization of the data and constructs a model without pre-existing labeling and without human supervision. However, it is usually more efficient to have a service engineer assist in interpreting the data.
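A prerequisite for any such analysis is mapping the vendor-specific payloads onto one common record format. The sketch below illustrates the idea; the machine names, metric names and scale factor are hypothetical, and real Modbus and OPC UA integrations would use the respective client libraries:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    machine_id: str
    metric: str
    value: float       # value in engineering units
    timestamp: float   # Unix time in seconds

def from_modbus(machine_id: str, raw_register: int, timestamp: float,
                metric: str = "temperature_c", scale: float = 0.1) -> Reading:
    # Modbus registers often carry scaled integers; the scale factor
    # comes from the machine's documentation.
    return Reading(machine_id, metric, raw_register * scale, timestamp)

def from_opcua(machine_id: str, node: dict, timestamp: float) -> Reading:
    # An OPC UA value typically arrives with metadata already attached.
    return Reading(machine_id, node["metric"], node["value"], timestamp)

r1 = from_modbus("press-01", 845, 1700000000.0)  # raw 845 becomes 84.5 °C
r2 = from_opcua("lathe-02", {"metric": "spindle_rpm", "value": 1200.0}, 1700000000.0)
```

Once every stream produces the same record type, readings from different vendors can be stored, correlated and analyzed together.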

Once structured, the data will need to be cleaned and prepared for analysis. There may be some noise mixed in, such as sensor hiccups and other one-off events that are not relevant to your OEE assessment. Some ML techniques can identify and filter out these meaningless quirks so that your data is ready for the heavy lifting.

The volume of data produced by machines can be a problem in itself, exceeding the capacity of your network connection to the cloud. In this case, an additional "edge computing" layer is placed between the machine park and the cloud to pre-process the data and reduce the volume that is moved to the cloud. Edge computing also lets you gain insight into the operation of the machine park sooner, and keeps monitoring available when your internet connection goes down.
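As a sketch of the kind of reduction an edge layer can perform (the bucket size and summary statistics are just example choices), raw samples can be aggregated into per-minute summaries before upload:

```python
from collections import defaultdict

def downsample(samples: list, bucket_s: int = 60) -> dict:
    """Aggregate (timestamp, value) samples into per-bucket (min, mean, max)."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts) // bucket_s].append(value)
    return {b: (min(vs), sum(vs) / len(vs), max(vs))
            for b, vs in sorted(buckets.items())}

# Four samples in the first minute collapse into a single summary tuple.
samples = [(0.0, 20.0), (15.0, 22.0), (30.0, 21.0), (45.0, 25.0)]
print(downsample(samples))  # -> {0: (20.0, 22.0, 25.0)}
```

Even this simple aggregation cuts the upload volume by an order of magnitude while preserving the extremes that matter for loss analysis.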

Extracting information from data

For more powerful analysis, ML and other AI algorithms can be unleashed on the data lake in the cloud. They extract valuable information from the stored, structured and cleansed data. For example, they can look for what are known as the "six big losses" of OEE: production rejects and rejects at startup (quality), minor stops and speed losses (performance), and planned downtime and outages (availability). The results, presented in the dashboard, allow you to determine the appropriate course of action.

For example, anomaly detection can identify rare events that may be suspicious because they deviate significantly from the majority of the data. Typically, these anomalous events translate into some sort of problem, such as a machine ending up in a state that is inherently different from what is typical for a given operation. Detecting such outliers can indicate a potential quality problem with a part being manufactured or a performance problem with the machine itself.
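A minimal illustration of the idea (production deployments would use more robust methods, such as isolation forests) is z-score-based detection, which flags samples that lie far from the mean:

```python
import statistics

def detect_anomalies(values: list, threshold: float = 3.0) -> list:
    """Return indices of samples more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # a perfectly constant signal has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Twenty normal cycle times and one extreme outlier at index 20:
cycle_times = [10.0] * 20 + [100.0]
print(detect_anomalies(cycle_times))  # -> [20]
```

A flagged cycle time like this could point at a jammed part (quality) or a degrading machine component (performance).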

Cloud service providers offer all kinds of automated tools for building ML applications. These out-of-the-box products are easy to use but very generic. If you want to get the most out of your application, you will probably have to develop your own models. This usually requires an in-depth knowledge of what makes a good dataset for your application, as well as specialized expertise in data science.

Deployment of data scientists

Thus, in these AI-supported transformation and decision-making processes, data scientists play a key role. They unite statistics, data analysis, machine learning and their related methods to understand and analyze actual phenomena with data. Instead of just seeing numbers, a data scientist understands what they mean and how to use the AI toolbox to get the desired information.

You can hire a data scientist or you can buy the service from a consulting firm. The main reason for hiring one yourself is that an in-house specialist can quickly master your domain. In addition, hiring your own specialist ensures continuity and prevents the process from stalling once a project ends. It is also much easier to use data internally than to share it with an external party with whom trust still has to be established.

Getting started

With specialized expertise and adequate tooling, it is quite easy to turn your machinery into a data factory. By connecting your production systems and having their sensor readings automatically processed and presented, you can gain valuable insights into the quality, performance and availability of your production. You will have actionable information that will enable you to tune these parameters and increase your OEE.

All the necessary expertise and tooling is readily available. You can start small and keep it that way, bringing in outside help to get things up and running and having a periodic audit. Or you can make it as big as you want, by setting up your own data science machinery. Either way, you can benefit greatly by tapping into your data properly and using its resources to lubricate your production process.

Members of the High Tech Software Cluster can help you with specialized expertise and essential tooling. Contact us for more information.



The High Tech Software Cluster is a partnership of over 30 innovative software companies, research organizations and educational institutions that support you in making the digitization of your business affordable and practical.
