
How to turn your factory into a learning factory with machine learning

In the past few years, everybody has been building so-called dashboards. In factories, people keep track of production quantities, wasted materials, unscheduled downtime, utilization of key equipment and many other data items, over shorter or longer periods. But why?


Key takeaways:
- To really improve production processes, we need data from different sources.
- Dashboards are needed to get insight into this data and convert it into information.
- Machine learning can help us analyze the data, and if applied in the right way it may even show unexpected correlations.

Numbers like these are key performance indicators (KPIs). Based on these KPIs, management decides what should change in the factory. That often means a deep dive is required into what's actually going on. So, to help improve factories, we need to facilitate the right changes being made, and to do that, we need to make sure the source of the data underlying the KPIs is clear. If a section of a factory is suffering from unscheduled downtime, for example, that number by itself doesn't mean much if no underlying data about the cause is available. Understaffing, machine breakage, material shortage - these could all be underlying reasons that should be tracked as well.
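As a minimal sketch of what "tracking the underlying reasons" could look like in practice (the record structure and cause names here are hypothetical), each downtime event carries its cause, so the aggregate KPI can always be broken down:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DowntimeEvent:
    # Hypothetical record: every unscheduled stop is tagged with its cause,
    # so the aggregate downtime KPI can be traced back to underlying reasons.
    section: str
    minutes: int
    cause: str  # e.g. "understaffing", "machine breakage", "material shortage"

def downtime_by_cause(events):
    """Break the raw downtime KPI down per cause."""
    totals = Counter()
    for e in events:
        totals[e.cause] += e.minutes
    return dict(totals)

events = [
    DowntimeEvent("packaging", 45, "machine breakage"),
    DowntimeEvent("packaging", 30, "material shortage"),
    DowntimeEvent("assembly", 20, "machine breakage"),
]
print(downtime_by_cause(events))
# {'machine breakage': 65, 'material shortage': 30}
```

The point is not the code itself but the shape of the data: a bare "95 minutes of downtime" answers the KPI question, while the per-cause breakdown answers the management question.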

It's interesting to see how this works in a smart factory. Our goal is to use the data in a factory or group of factories to improve the production and logistics processes automatically. That means not only putting the collected data in a dashboard but also having an automated system interpret it and suggest improvements. The mechanism for this is machine learning.

"My vision is to feed the data we collect at various levels (machines, processes, production numbers, material flow, etc.) into a machine learning algorithm, or set of algorithms, aimed at optimizing the factory."

The learning factory

It's my vision to be able to feed the data we collect on different levels (machines, processes, production numbers, material flow, and so on) to a machine learning algorithm, or set of algorithms, focused on optimizing the factory. The scope of such algorithms may vary based on the plant's needs: to improve production rates, reduce material losses or make the processes more sustainable. The challenge here is collecting the right data. Well, actually, it's not. The challenge is allowing the system itself to collect as much data as possible.

Let's look at that from the human perspective. Say we have a KPI dashboard that tells us that production rates drop every fourth week of the month. To find the cause, we have to search the underlying data for a trigger. If our KPI is directly based on the number of finished products leaving the factory and the downtime of the final packaging department, for example, that gives us a clue where to look but not what to look for. It could be that maintenance is scheduled for that fourth week in a way that affects production processes. It could also be that too many operators take time off at the same time in the last week of the month, or that the ordering process is such that in that fourth week certain parts or materials are no longer in stock. This simple example already shows how hard it is to predict which data needs to be collected and combined to analyze what needs to be done.
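To make that concrete with made-up numbers: even a plain Pearson correlation over candidate series already hints at which one moves together with the production dip. The figures and series names below are purely illustrative, not real factory data:

```python
# Toy comparison: weekly production figures against two candidate
# explanations (operators on shift, parts in stock). Whichever series
# correlates strongly with production is worth a closer look.
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

production = [980, 1010, 995, 700, 990, 1005, 1000, 690]  # dips every 4th week
operators  = [12, 12, 11, 12, 12, 11, 12, 12]             # roughly flat
stock      = [500, 480, 450, 120, 510, 470, 440, 100]     # also dips

print(f"operators vs production: {pearson(operators, production):+.2f}")
print(f"stock vs production:     {pearson(stock, production):+.2f}")
```

In this invented example stock correlates strongly with production while staffing barely does - but note that we had to pick the candidate series ourselves, which is exactly the limitation the next paragraphs address.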

Machine learning, at first glance, doesn't appear to solve the problem. How can we define an algorithm that does the work for us? Don't algorithms need input parameters to do their job? Luckily, with machine learning, that's not entirely true, in the sense that the parameters may not have to be known upfront.

"There are several machine learning approaches to solving problems: the supervised and the unsupervised approach."

There are different machine learning approaches for solving problems. In the supervised approach, the algorithm is given a labeled set of data to learn from: the type of data it gets is predefined, and it uses that knowledge in the analysis. This allows it to predict the effect of certain predefined parameters on, for example, the production rate - similar to a human analyst looking at a dashboard.
In the unsupervised approach, unlabeled data is provided, without a predefined meaning attached. This allows the algorithm to correlate patterns in different sets of the data, even if it doesn't know what they mean. Of course, the meaning has to be added by human users later on, which may also be difficult.
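The difference can be sketched with a toy, stdlib-only example (all numbers invented): the "supervised" part fits a labeled relation, while the "unsupervised" part groups unlabeled cycle times into regimes without being told what the groups mean:

```python
def fit_line(xs, ys):
    """Supervised: least-squares fit y = a*x + b on labeled (x, y) pairs,
    e.g. maintenance hours (x) against resulting downtime (y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def two_means(values, iters=10):
    """Unsupervised: 1-D k-means with k=2. It finds two clusters in the
    data, but attaching meaning to them is left to a human."""
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted((c1, c2))

a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])       # labeled data
centres = two_means([10.1, 9.8, 10.3, 24.9, 25.2, 25.0])  # unlabeled cycle times
```

The clustering step illustrates the caveat from the paragraph above: it will happily report two regimes around 10 and 25 seconds, but whether those are "normal" and "degraded" is something a human still has to decide.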

Combining the two, semi-supervised learning is an interesting path to investigate for smart factories. The idea is to feed an algorithm data from a factory, label the known elements (downtime, maintenance, logistics numbers) and have an algorithm correlate that with other data to discover patterns we haven't seen yet. A possible, seemingly far-fetched, example: how does traffic on Thursday affect the material intake of a factory on that day, and indirectly the production rate on Friday?
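A self-training loop, one common semi-supervised scheme, can be sketched in a few lines (stdlib only; the cycle times and labels are invented): fit on the few labeled points, pseudo-label the unlabeled bulk, then refit on everything:

```python
# A handful of cycle times are labeled "normal"/"degraded"; the bulk is
# unlabeled. Class means are fitted on the labeled points first, then the
# unlabeled points are pseudo-labeled and folded in, so the unlabeled data
# sharpens the decision boundary.

labeled = [(10.2, "normal"), (9.9, "normal"), (24.8, "degraded")]
unlabeled = [10.0, 10.4, 25.1, 24.6, 25.3]

def class_means(pairs):
    """Mean value per class label."""
    means = {}
    for lbl in {l for _, l in pairs}:
        vals = [v for v, l in pairs if l == lbl]
        means[lbl] = sum(vals) / len(vals)
    return means

def predict(v, means):
    """Assign the label whose class mean is closest."""
    return min(means, key=lambda lbl: abs(v - means[lbl]))

means = class_means(labeled)                          # step 1: labeled only
pseudo = [(v, predict(v, means)) for v in unlabeled]  # step 2: pseudo-label
means = class_means(labeled + pseudo)                 # step 3: refit on all
```

Real factory data would of course need a proper model rather than class means, but the shape of the loop - label the known elements, let the algorithm extend those labels across the rest of the data - is the same.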

Data retrieval must not disrupt the process

Machine learning would really make a factory smart, but it has some preconditions. We need data, we need data sources and we need to make sure we collect data in a non-intrusive way. We won't be allowed to change the machinery, the computer software and the machine controllers in the factory to gather this seemingly 'random' data.
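What a non-intrusive collector might look like, as a sketch: the `read_sensor` callable below is a hypothetical stand-in for an existing read-only source (a log file, a passive bus tap), and the key property is that the collector only reads and timestamps - it never sends anything back to the machine or its controller:

```python
import time

def collect(read_sensor, n_samples, interval=0.0):
    """Passively sample an existing data source; no commands are sent,
    nothing on the machine side is modified."""
    samples = []
    for _ in range(n_samples):
        samples.append((time.time(), read_sensor()))
        if interval:
            time.sleep(interval)
    return samples

# Stand-in source for illustration only: a counter pretending to be a sensor.
counter = iter(range(100))
data = collect(lambda: next(counter), 3)
```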

Our challenge is clear. We're on it.


- Don't underestimate the importance of data collected from even the smallest component in a production environment.
- Machine learning is not the holy grail, but if applied correctly it may give very interesting results.


Delphino consultancy

For all services where you need a specialist in software and systems architecture.
