You are reading Flux Capacitor, the company weblog of Fluxicon.
Here, we write about process intelligence, development, design, and everything that scratches our itch. Hope you like it!


Data Quality Problems in Process Mining and What To Do About Them — Part 11: Data Validation Session with Domain Expert

Expert interview

This is the eleventh article in our series on data quality problems for process mining. You can find an overview of all articles in the series here.

A common and unfortunate process mining scenario goes like this: You present a process problem that you have found in your process mining analysis to a group of process managers. They look at your process map and point out that this can’t be true. You dig into the data and find out that, actually, a data quality problem was the cause for the process pattern that you discovered.

The problem with this scenario is that, even if you then go and fix the data quality problem, the trust that you have lost on the business side can often not be won back. They won’t trust your future results either, because “the data is all wrong”. That’s a pity, because there could have been great opportunities in analyzing and improving this process!

To avoid this, we recommend planning a dedicated data validation session with a process or domain expert before you start the actual analysis phase in your project. To manage expectations, communicate that the purpose of the session is explicitly not yet to analyze the process, but to ensure that the data quality is good before you proceed with the analysis itself.

You can ask both a domain expert and a data expert to participate in the session, but especially the input of the domain expert is needed here, because you want to spot problems in the data from the perspective of the process owner for whom you are performing the analysis (you can book a separate meeting with a data expert to walk through your data questions later). Ideally, your domain expert has access to the operational system during the session, so that you can look up individual cases together if needed.

To organize the data validation session with the domain expert, you can do the following:

You may find that the domain expert brings up questions about the process that are relevant for the analysis itself. This is great and you should write them down, but do not get side-tracked by the analysis: steer the session back to your data quality questions to make sure you achieve the goal of this meeting, which is to validate the data quality and uncover any issues with the data that might need to be cleaned up.

After the validation session, follow up on all of the discovered data problems and investigate them. Also, keep track of which of your original process questions may be affected by the data quality issues that you found. Document the actions that you have taken, or intend to take, to fix them.

Data Quality Problems in Process Mining and What To Do About Them — Part 10: Missing Timestamps For Activity Repetitions

This is the tenth article in our series on data quality problems for process mining. You can find an overview of all articles in the series here.

Last week, we were looking at missing activities and missing timestamps. Today, we will discuss another common data quality problem that I am sure most of you will encounter at some point.

Take a look at the following data snippet (you can click on the image to see a larger version). In this data set, you can see three cases (Case ID 1, 2, and 3). If you compare this data set below with a typical process mining data set, you can see the following differences:

Event Log in Excel (click to enlarge)

When you encounter such a data set, you will have to re-format it into the process mining format in the following way (see screenshot below):

Transformed Event Log (click to enlarge)
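If you script this transformation rather than doing it by hand, a minimal sketch in Python with pandas could look like the following (the column names and values are made up for illustration and are not taken from the data set above):

```python
import pandas as pd

# Hypothetical column-based data set: one row per case, one timestamp column per activity
wide = pd.DataFrame({
    "Case ID": [1, 2, 3],
    "A": ["2016-01-04 09:00", "2016-01-05 10:30", "2016-01-06 08:15"],
    "B": ["2016-01-04 11:20", "2016-01-05 13:00", "2016-01-06 09:40"],
    "C": ["2016-01-05 09:10", "2016-01-06 10:05", "2016-01-07 11:30"],
})

# melt() turns every activity column into one row per event (the row-based format)
events = wide.melt(id_vars="Case ID", var_name="Activity", value_name="Timestamp")
events["Timestamp"] = pd.to_datetime(events["Timestamp"])

# Sort by case and timestamp so that the events appear in execution order
events = events.sort_values(["Case ID", "Timestamp"]).reset_index(drop=True)
print(events)
```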

However, the important thing to realize here is that this is not purely a formatting problem. The column-based format is not suitable for capturing event data about your process, because it inherently loses information about activity repetitions.

For example, imagine that after performing process step D the employee realizes that some information is missing. They need to go back to step C to capture the missing information and only then continue with process step E. The problem with the column-based format shown in the first data snippet is that there is no place where these two timestamps for activity C can be captured. So, what happens in most situations is that the first timestamp of activity C is simply overwritten and only the latest timestamp of activity C is stored.

You might wonder why people store process data in this column-based format in the first place. Typically, you find this kind of data in places where process data has been aggregated, for example in a data warehouse, a BI system, or an Excel report. It’s tempting, because in this format it seems easy to measure process KPIs. For example, do you want to know how long it takes between process step B and E? Simply add a formula in Excel to calculate the difference between the two timestamps.1

People often implicitly assume that the process goes through the activities A-E in an orderly fashion. But in reality processes are complex and messy. As long as the process isn’t fully automated, there is going to be some rework. And by pressing your data into such a column-based format you lose information about the real process.

So what can you do if you encounter your data in such a column-based format?

How to fix:

First of all, you should use the data that you have and transform it into a row-based format as shown above. However, in the analysis you need to be aware of the limitations of the data and know that you can encounter some distortions in the process because of them (see an example below).

If the process is important enough, you might want to go back in the next iteration and find out where the original data that was aggregated in the BI tool or Excel report comes from. For example, it might come from an underlying workflow system. You can then get the full history data from the original system to fully analyze the process with all its repetitions.

To understand what kind of distortions you can encounter, let’s take a look at the following data set, which shows the steps that actually happened in the real process before the data was aggregated into columns. You can see that:

Real Event Log (click to enlarge)

Now, when you first import the data set that was transformed from the column-based format to the row-based format into Disco, you get the following simplified process map (see below).

Discovered Process Transformed Event Log

The problem is that if a domain expert looked at this process map, they might see some strange and perhaps even impossible process flows due to the distortions from the lost activity repetition timestamps. For example, in the process map above it looks like there was a direct path from activity B to activity D at least once.

However, in reality this never happened. You can see the discovered process map from the real data set (where all the activity repetitions are captured) below. There was never a direct succession of the process steps B and D, because in reality activity C happened in between.

Discovered Process Real Event Log

So, use the data that you have, but be aware that such distortions can happen and understand what causes them.

The process maps above were simplified process maps (see this guide on simplifying complex process models to learn more about the different simplification strategies). If you are curious to see the full details of each map to make sure there was really no path from activity B to activity D, you can find them below:

Full Process Transformed Event Log
Full Process Real Event Log

  1. Another danger of this approach is that if the two steps are not in the expected order, you will actually end up with a negative duration.  
Data Quality Problems in Process Mining and What To Do About Them — Part 9: Missing Timestamps

Missing timestamps

This is the ninth article in our series on data quality problems for process mining. You can find an overview of all articles in the series here.

Earlier in this series, we have talked about how missing data can be a problem. We looked at missing events, missing attribute values, and missing case IDs. But what do you do if you have missing activities, or missing timestamps for some activities?

There are two scenarios for missing timestamps.

1. Missing activities

Some activities in your process may not be recorded in the data. For example, there may be manual activities (like a phone call) that people perform at their desk. These activities occur in the process but are not visible in the data.

Of course, the process map that you discover using process mining will not show you these manual activities. What you will see is a path from the activity that happened before the manual activity to the activity that happened after the manual activity.

For example, in the process map below you see the sandbox example in Disco. There is a path from activity Create Request for Quotation to Analyze Request for Quotation. However, it could be that there was actually another activity that took place between these two process steps, which is not visible in the data.

Manual activities are not visible in your process map  (click to enlarge)

How to fix:

There is not much you can do here. What is important is to be aware that these activities take place even though you cannot see them in the data. Process mining cannot be performed without proper domain knowledge about the process you are analyzing. Make sure you talk to the people working in the process to understand what is happening.

You can then take this domain knowledge into account when you interpret your results. For example, in the process above you would know that not all the 21.7 days are actually idle time in the process. Instead, you know that other activities are taking place in between, but you can’t see them in the data. It’s like a blind spot in your process. Typically, with the proper interpretation you are just fine and can complete your analysis based on the data that you have.

However, sometimes the blind spot becomes a problem. For example, you might find that your biggest bottlenecks are in this blind spot and you really need to understand more about what happens there. In this situation, you may choose to go back and collect some manual data about this part of the process either through observation or by asking the employees to document their manual activities for a few weeks. Make sure to record the case ID along with the activities and the timestamps in this endeavor. Afterwards, you can combine the manually collected data with the IT data to analyze the full process, but now with visibility on the blind spot.
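The merging step itself is straightforward. Below is a minimal sketch in Python with pandas, assuming both logs are CSV files that share the columns ‘Case ID’, ‘Activity’, and ‘Timestamp’ (the file names are hypothetical):

```python
import pandas as pd

# Hypothetical file names; both files are assumed to have the columns
# 'Case ID', 'Activity', and 'Timestamp'
system_log = pd.read_csv("system_events.csv", parse_dates=["Timestamp"])
manual_log = pd.read_csv("manual_observations.csv", parse_dates=["Timestamp"])

# Stack the two logs and sort the events of each case by time
combined = pd.concat([system_log, manual_log], ignore_index=True)
combined = combined.sort_values(["Case ID", "Timestamp"])

# Write the combined event log so it can be imported into the process mining tool
combined.to_csv("combined_event_log.csv", index=False)
```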

2. Missing timestamps for some activities

In a second scenario you actually have information about which activities were performed, but for some of the activities you simply don’t have a timestamp.

For example, in the data snippet from an invoice handling process (see screenshot below – click on the image to see a larger version) we can see that in some of the cases an activity Settle dispute with supplier was performed. In contrast to all the other activities, this activity has no associated timestamp. It might simply not have been recorded by the system, or the information about this activity comes from a different system.

Some activities don't have a timestamp  (click to enlarge)

The problem with a data set where some events have a timestamp and others don’t is that the process mining tool cannot infer the sequence of the activities. Normally, the events are ordered based on the timestamps during the import of the data. So, what can you do?

There are essentially three options.

How to fix:

1. Ignoring the events that have no timestamp. This will allow you to analyze the performance of your process but omit all activities that have no timestamp associated (see example below).

2. Importing your data without a timestamp configuration. This will import all events based on the order of the activities from the original file. You will see all activities in the process map, but you will not be able to analyze the waiting times in the process (see example below).

3. You can “borrow” the timestamps of a neighbouring activity and re-use them for the events that do not have any timestamps (for example, the timestamp of their successor activity). This data pre-processing step will allow you to import all events and include all activities in the process map, while preserving the possibility to analyze the performance of your process as well.
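As an illustration of option 3, here is a minimal sketch in Python with pandas. It assumes that the events of each case are already in the correct order in the file; the activity names and timestamps are made up for illustration:

```python
import pandas as pd

# Hypothetical event log in which the dispute activity has no timestamp
log = pd.DataFrame({
    "Case ID":   [1, 1, 1, 1],
    "Activity":  ["Receive invoice", "Settle dispute with supplier",
                  "Approve invoice", "Pay invoice"],
    "Timestamp": ["2016-03-01 10:00", None, "2016-03-04 09:30", "2016-03-07 16:45"],
})
log["Timestamp"] = pd.to_datetime(log["Timestamp"])

# "Borrow" the timestamp of the successor event within the same case (backward fill);
# use ffill() instead if you prefer to borrow from the predecessor
log["Timestamp"] = log.groupby("Case ID")["Timestamp"].bfill()
print(log)
```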

Let’s look at what options 1 and 2 look like based on the example above.

First, we can import the data set in the normal way. When the timestamp column is selected, Disco gives you a warning that the timestamp pattern does not match all rows in the data (see screenshot below). The reason for this mismatch is that the timestamp fields of the Settle dispute with supplier activity are empty.

Activities without timestamp will not be imported  (click to enlarge)

When you go ahead and import the data anyway, Disco will import only the events that have a timestamp (and sort them based on the timestamps to determine the event sequence for each case). As a result, you get a process map without the Settle dispute with supplier activity (see screenshot below). You can now fully analyze your process also from the performance perspective, but you have a blind spot (similarly to the example scenario discussed at the beginning of the article).

Dispute activity not shown in process map  (click to enlarge)

Let’s say we now want to include the Settle dispute with supplier activity in our process map. For example, we would like to visualize how many cases have a dispute in the first place.

To do this, we import the data again but make sure that no column is configured as a Timestamp in the import screen. For example, we can change the configuration of the ‘Complete Timestamp’ column to an Attribute (see screenshot below). As a result, you will see a warning that no timestamp column has been defined, but you can still import the data. Disco will now use the order of the events in the original file to determine the activity sequences for each case. You should only use this option if the activities are already sorted correctly in your data set.

To include events without timestamps, do not configure a timestamp during import  (click to enlarge)

As a result, the Settle dispute with supplier activity is now displayed in the process map (see screenshot below). We can see that 80 out of 412 cases went through a dispute in the process.

The activities without timestamp will be shown based on their sequence, but without performance information  (click to enlarge)

We can further analyze the process map along with the variants, the number of steps in the process, etc. However, because we have not imported any timestamps, we will not be able to analyze the performance of the process, for example, the case durations or the waiting times in the process map.

To analyze the process performance, and to keep the activities without timestamps in the process map at the same time, you will have to add timestamps for the events that currently don’t have one in your data preparation.

Data Quality Problems in Process Mining and What To Do About Them — Part 8: Different Clocks

Mission Control

This is the eighth article in our series on data quality problems for process mining. You can find an overview of all articles in the series here.

In previous articles we have seen how wrong timestamps can mess up everything in process mining: The process flows, the variants, and time measurements like case durations and waiting times in the process map.

One particularly tricky reason for timestamp errors is that the timestamps in your data set may have been recorded by multiple computers that run on different clocks. For example, in this case study at a security services company operators logged their actions when they arrived on-site, identified the problem, etc. on their hand-held devices. These mobile devices sometimes had different local times from the server as well as from each other.

If you look at the scenario below you can see why that is a problem: Let’s say a new incident is reported at the headquarters at 1:30 PM. Five minutes later, a mobile operator responds to the request and indicates that they will go to the location to fix it. However, because the clock on their mobile device is running 10 minutes late, the recorded timestamp indicates 1:25 PM.

When you then combine all the different timestamps in your data set to perform a process mining analysis, you will actually see the response of the operator show up before the initial incident report. Not only does this create incorrect flows in your process map and variants, but when you try to measure the time between the raising of the incident and the first response it will actually give you a negative time.

Process mining scenario with different clocks

So, what can you do when you have data that has this problem?

First, investigate the problem to see whether the clock drift is consistent over time and which activities are affected. Then, you have the following options.

How to fix:

1. If the clock difference is consistent enough you can correct it in your source data. For example, in the scenario above you could add 10 minutes to the timestamps from the local operator (see the sketch after this list).

2. If an overall correction is not possible, you can try to clean your data by removing cases that show up in the wrong order. Note that the Follower filter in Disco also allows you to remove cases where more or less than a specified amount of time has passed between two activities. This way, you can separate minor clock drift glitches (typically the differences are just a few seconds) from cases where two activities were indeed recorded with a significant time difference. Make sure that the remaining data set is still representative after the cleaning.

3. If nothing helps, you might have to go back to your data collection system and set up a clock synchronization mechanism to constantly measure the time differences between the networked devices and get the correct timestamps while recording the data along the way.
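As a sketch of option 1, correcting a consistent offset could look like this in Python with pandas (the device name, the 10-minute offset, and the data are made up for illustration):

```python
import pandas as pd

# Hypothetical event log in which the device "mobile-07" runs 10 minutes behind the server clock
log = pd.DataFrame({
    "Case ID":   [101, 101],
    "Activity":  ["Incident reported", "Operator responds"],
    "Device":    ["server", "mobile-07"],
    "Timestamp": ["2016-05-12 13:30", "2016-05-12 13:25"],
})
log["Timestamp"] = pd.to_datetime(log["Timestamp"])

# Correct the consistent offset for the affected device
log.loc[log["Device"] == "mobile-07", "Timestamp"] += pd.Timedelta(minutes=10)

# Sanity check: within each case (in recorded order), no event should jump back in time anymore
backwards = log.groupby("Case ID")["Timestamp"].diff() < pd.Timedelta(0)
print(log)
print("Events still out of order:", int(backwards.sum()))
```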

Data Quality Problems in Process Mining and What To Do About Them — Part 7: Recorded Timestamps Do Not Reflect Actual Time of Activities

Cleaning up

This is the seventh article in our series on data quality problems for process mining. You can find an overview of all articles in the series here.

Last year, a Dutch insurance company completed the process mining analysis of several of their processes. For some processes, it went well and they could get valuable insights out of it. However, for the bulk of their most important core processes, they realized that the workflow system was not used in the way it was intended to be used.

What happened was that the employees took the dossier for a claim to their desk, worked on it there, and put it in a pile with other claims. At the end of the week, they then went to the IT system and entered the information, essentially documenting the work they had done earlier.

This way of working has two problems:

  1. It shows that the system is not supporting the case worker in what they have to do. Otherwise they would want to use the system to guide them along. Instead, the documentation in the system is an additional, tedious task that is delayed as much as possible.
  2. Of course, this also means that the timestamps that are recorded in the system do not represent the actual time when the activities in the process really happened. So, doing a process mining analysis based on this data is close to useless.

The company is now working on improving the system to better support their employees, and to eventually be able to restart their process mining initiative.

You might encounter such problems in different areas. For example, a doctor may be walking around all day, speaking with patients, writing prescriptions, etc. Then, at the end of the day, she sits down in her office and writes up the performed tasks in the administrative system. Another example is that the timestamps of a particular process step are provided manually and people make typos when entering them.

So, what can you do if you find that your data has the problem that the recorded time does not reflect the actual time of the activities?

How to fix:

First of all, you need to become aware that your data has this problem. That’s why the data validation step is so important (more on data validation sessions in a later article).

Once you can make an assessment of the severity of the gap between the recorded timestamps in your data and the actual timestamps of the recorded activities, you need to decide whether (a) the problem is localized or predictable, or (b) all-encompassing and too big to analyze the data in any useful way.

If the problem is only affecting a certain activity or part in your process (localized), you may choose to discard these particular activities for not being reliable enough. Afterwards, you can still analyze the rest of the process.

If the offset is predictable and not too big (like the doctor writing up her activities at the end of the day), you can choose to perform your analysis on a more coarse-grained scale. For example, you will know that it does not make sense to analyze the activities of the doctor in the hospital on the hour- or minute-level (even if the recorded timestamps technically carry the minutes). But you can still analyze the process on a day-level.
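If you prepare the data yourself, you can make this day-level view explicit by truncating the timestamps before the analysis. A minimal sketch in Python with pandas, with made-up data:

```python
import pandas as pd

# Hypothetical log where the recorded times only reflect the end-of-day documentation,
# so only the date is reliable
log = pd.DataFrame({
    "Case ID":   [1, 1],
    "Activity":  ["Consultation", "Write prescription"],
    "Timestamp": pd.to_datetime(["2016-02-03 17:42", "2016-02-03 17:45"]),
})

# Keep only the day-level information so that nobody reads meaning into the minutes
log["Timestamp"] = log["Timestamp"].dt.normalize()
print(log)
```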

Finally, if the problem is too big and you don’t know when any of the activities actually happened (like in the example of the insurance company), you may have to decide that the data is not good enough to use for your process mining analysis at the moment.

Data Quality Problems in Process Mining and What To Do About Them — Part 6: Different Timestamp Granularities

Different Granularities

This is the sixth article in our series on data quality problems for process mining. Make sure you also read the previous articles on formatting errors, missing data, Zero timestamps, wrong timestamp configurations, and same timestamp activities. You can find an overview of all articles in the series here.

In the previous article on same timestamp activities we have seen how timestamps that do not have enough granularity can cause problems. For example, if multiple activities happen on the same day for the same case, they cannot be brought into the right order, because we don’t know in which order they were performed. Another timestamp-related problem you might encounter is that your dataset has timestamps of different granularities.

Let’s take a look at the example below. The file snippet shows a data set with six different activities. However, only activity ‘Order received’ contains a time (hour and minutes).

Data Sample Process Mining  (click to enlarge)

Note that in this particular example there is no issue with fundamentally different timestamp patterns. However, a typical reason for different timestamp granularities is that these timestamps come from different IT systems. Therefore, they will also often have different timestamp patterns. You can refer to the article How To Deal With Data Sets That Have Different Timestamp Formats to address this problem.

In this article, we focus on the problems that different timestamp granularities can bring. So, why would this be a problem? After all, it is good that we have some more detailed information on at least one step in the process, right? Let’s take a look.

When we import the example data set in Disco, the timestamp pattern is automatically matched and we can pick up the detailed time 20:07 for ‘Order received’ in the first case without a problem (see screenshot below).

Data Import Timestamp Pattern  (click to enlarge)

The problem only becomes apparent after importing the data. We see strange and unexpected flows in the process map. For example, how can it be that in the majority of cases (1587 times) the ‘Order confirmed’ step happened before ‘Order received’?

Discovered Process Map shows unexpected pattern  (click to enlarge)

That does not seem possible. So, we click on the path and use the short-cut Filter this path… to keep only those cases that actually followed this particular path in the process (see screenshot below).

Diving into the process path  (click to enlarge)

We then go to the Cases tab to inspect some example cases (see screenshot below). There, we can immediately see what happened: Both activities ‘Order received’ and ‘Order confirmed’ happened on the same day. However, ‘Order received’ has a timestamp that includes the time while ‘Order confirmed’ only includes the date. For activities that only include the date (like ‘Order confirmed’) the time automatically shows up as “midnight”. Of course, this does not mean that the activity actually happened at midnight. We just don’t know when during the day it was performed.

Inspecting example cases  (click to enlarge)

So, clearly ‘Order confirmed’ must have taken place on the same day after ‘Order received’ (so, after 13:10 in the highlighted example case). However, because we do not know the time of ‘Order confirmed’ (a data quality problem on our end) both activities show up in the wrong order.

How to fix:

If you know the right sequence of the activities, it can make sense to ensure they are sorted correctly (Disco will respect the order in the file for same-time activities) and then initially analyze the process flow on the most coarse-grained level. This helps you avoid being distracted by those wrong orderings and gives you a first overview of the process flows on that level.

You can do that by leaving out the hours, minutes and seconds from your timestamp configuration during import in Disco (see an example below in this article).

Later on, when you go into the detailed analysis of parts of the process, you can bring the level of detail back up to the more fine-grained timestamps to see how much time was spent between these different steps.

To make sure that ‘Order confirmed’ activities are not sometimes recorded multiple days earlier (which would indicate other problems), we filter out all other activities in the process and look at the Maximum duration between ‘Order confirmed’ and ‘Order received’ in the process map (see screenshot below). The maximum duration of 23.3 hours confirms our assessment that this wrong activity order appears because of the different timestamp granularities of ‘Order received’ and ‘Order confirmed’.

Confirming Data Problem  (click to enlarge)

So, what can we do about it? In this particular example, the additional time that we get for ‘Order received’ activities does not help that much and causes more confusion than it is worth. To align the timestamp granularities, we choose to omit the time information even when we have it.

Scaling back the granularity of all timestamps to just the date is easy: You can simply go back to the data import screen, select the Timestamp column, press the Pattern… button to open the timestamp pattern dialog, and then remove the hour and minute component by simply deleting them from the timestamp pattern (see screenshot below). As you can see on the right side in the matching preview, the timestamp with the time 20:07 is now only picked up as a date (16 December 2015).

Solution: Import Timestamp Pattern with lower granularity  (click to enlarge)

When the data set is imported with this new timestamp pattern configuration, only the dates are picked up and the order of the events in the file is used to determine the order of activities that have the same date within the same case (refer to our article on same timestamp activities for strategies about what to do if the order of your activities is not right).

As a result, the unwanted process flows have disappeared and we now see the ‘Order received’ activity show up before the ‘Order confirmed’ activity in a consistent way (see screenshot below).

Granularity Problem Solved  (click to enlarge)

Scaling back the granularity of the timestamp to the most coarse-grained time unit (as described in the example above) is typically the best way to deal with different timestamp granularities if you have just a few steps in the process that are more detailed than the others.

If your data set, however, contains mostly activities with detailed timestamps and then there are just a few that are more coarse-grained (for example, some important milestone activities might have been extracted from a different data source and only have a date), then it can be a better strategy to artificially provide a “fake time” to these coarse-grained timestamp activities to make them show up in the right order.

For example, you can set them to 23:59 if you want them to go last among the process steps on the same day. Or you can give them a time that reflects when this activity typically occurs.
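A minimal sketch of this “fake time” pre-processing in Python with pandas could look like this (the data is made up for illustration; the date-only event has been parsed as midnight and is pushed to 23:59 so that it sorts after the detailed events of the same day):

```python
import pandas as pd
import datetime

# Hypothetical log: 'Order confirmed' comes from another source and only carries a date,
# so it has already been parsed as midnight (00:00) of that day
log = pd.DataFrame({
    "Case ID":   [1, 1],
    "Activity":  ["Order received", "Order confirmed"],
    "Timestamp": pd.to_datetime(["2015-12-16 20:07", "2015-12-16 00:00"]),
})

# Give the date-only events a fake time of 23:59 so that they sort after
# the detailed events of the same day
date_only = log["Timestamp"].dt.time == datetime.time(0, 0)
log.loc[date_only, "Timestamp"] += pd.Timedelta(hours=23, minutes=59)

print(log.sort_values(["Case ID", "Timestamp"]))
```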

Be careful if you do this and thoroughly check the resulting data set for problems you might have introduced through this change. Furthermore, keep in mind when interpreting the durations between activities in your analysis that you have made up this time.

Automation Platforms and Process Mining: A Powerful Combination


When you need to replace a legacy system with a modern IT system, process mining can help you to capture the full process with all its requirements to ensure a successful transition.1 However, once you have moved the process to the new system, you can continue to use process mining to identify process improvement opportunities.

This is exactly what Zig Websoftware has been doing. Zig creates digital solutions for housing associations. But once their automation platform is running, it also collects data about the executed processes. Based on this data, process mining can be used to analyze the process and substantiate the gut feeling of the process managers with hard data. The beauty of the application of process mining in an automation platform environment is that the insights can be immediately used to make further changes in the process.

Time is Money

One of the first customers for whom Zig has performed a process mining analysis is the Dutch housing association WoonFriesland. With approximately 20,500 rental apartments in the province of Friesland, WoonFriesland wants to offer its tenants good services in addition to good and affordable housing. An optimal and efficient allocation of housing is an important part of this service.

Every day that a rental property is vacant costs a housing association money. Through process mining, Zig Websoftware zoomed in on the offering process of WoonFriesland. Some of the questions they wanted to answer were: How long does each step in the allocation process of a property take? What takes longer than necessary, and why? What can be more efficient so that the property can eventually be assigned and rented more quickly? In short: What can be improved, and what could be faster? After all, time is money.

The Analysis: Bottlenecks

During the process mining analysis Zig found that much time was lost in the following three areas of the process:

1. The relisting of a property, see (1) in Figure 1
2. The time a house hunter gets to refuse, see (2) in Figure 1
3. The number of times an offer is refused, see (3) in Figure 1

Process Mining Analysis  (click to enlarge)
Figure 1: The time loss is visible in the relisting of a property (1), the reaction time of a house hunter (2), and the number of times a property is refused (3).

The process map above shows that it takes an average of 16.4 hours to launch a new offer, which has occurred 1622 times. In addition, each offer takes an average of 6 days to be refused. In the meantime, nothing happens with the property and the corporation cannot continue either.

The Solution: Housing Distribution System

To address these problems, WoonFriesland chose to further automate the digital offering process in their system. When a property becomes available, a new offer is automatically launched. This reduces the waiting period from 16.4 hours to 64 minutes (see Figure 2). The ability to offer the property manually remains active, so that WoonFriesland can create new offerings both in the old and in the new way.

Before and after Process Mining Analysis - Bottleneck 2
Figure 2: The automatic offering shortens the waiting time from 16.4 hours to 64 minutes (click on the image to see a larger version).

In addition to the automatic offering, WoonFriesland has also chosen to provide house hunters the option to register their interest in a rental apartment through the website. Once an apartment is offered to a candidate, they can let the housing association know whether they want it or not within three days. This allows WoonFriesland to shorten each refusal by at least 3 days (see Figure 3). Furthermore, the website-based process saves WoonFriesland a lot of time because they do not need to call back every candidate to see if they are still interested.

Before and after Process Mining Analysis - Bottleneck 2  (click to enlarge)
Figure 3: In the old situation a refusal lasted an average of 6 days. Now a house hunter is required to indicate whether there is interest within 3 days (click on the image to see a larger version).

Overall, the new solution has ensured that — with less time and effort — WoonFriesland has a faster turnaround and assigns its properties on average 7 days faster than before. A great result!

This results in significant savings in vacancy costs:

The results of the use of automatic digital offering in the first half year were that, on average, the duration of the advertised 583 properties was approximately 7 days shorter. We are talking about a total of 4000 days. In addition, we have new insights in which areas we could improve the process even more.

— Steffen Feenstra, Information Specialist at WoonFriesland.


WoonFriesland knew there were aspects of the housing allocation process that could be done faster, but they could not precisely tell where the main problem was.

The process mining software Disco allowed Zig Websoftware to substantiate the gut feeling of WoonFriesland with facts and hard figures. The results of the process mining analysis justified the investment in the optimization and further automation of various processes in the apartment allocation of WoonFriesland. As a result, they could significantly reduce their vacancy rate, which allowed WoonFriesland to realize considerable cost savings.


Download Case Study: Automation Platforms and Process Mining - A Powerful Combination

You can download this case study as a PDF here for easier printing or sharing with others.

  1. Read this interview about how Process mining helped to replace a legacy system at a large Australian government authority and this example based on AS/400 IBM systems.  
Process Miner of the Year 2016!

The Process Miner of the Year Trophy Admired by Eindhoven's cows

Process Mining Camp on 10 June was amazing. More than 210 process mining practitioners from 165 different companies and 20 (!) countries came together to learn from each other. If you could not make it, sign up for the camp mailing list to receive the presentations and video recordings once they become available here.

At the end of the day, we had the pleasure to hand out the very first Process Miner of the Year award. There are now so many more applications of process mining than there were just a few years ago. With the Process Miner of the Year competition, we wanted to stimulate companies to showcase their greatest projects and get recognized for their success.

We received many outstanding submissions, and it was very difficult to choose a winner.

The Winner

Our goal with the Process Miner of the Year awards is to highlight process mining initiatives that are inspiring, captivating, and interesting. Projects that demonstrate the power of process mining, and the transformative impact it can have on the way organizations go about their work and get things done. We hope that learning about these great process mining projects will inspire all of you and show newcomers to the process mining field how powerful process mining can be.

In the end, we decided to give this year’s award to Veco’s Joris Keizers, who — together with five colleagues — had submitted their case. You can watch the video recording of the awards ceremony above.

The reasons why we chose Veco are:

  1. It is inspiring to see a manufacturing process analyzed with process mining — Most of the process mining projects today are performed for service processes,
  2. Their analysis had a huge impact — The lead time of their core production process was cut in half,
  3. The fact that they performed a Measurement System Analysis — Ensuring data validity is very important, and in the process mining space we can learn from the best practices in existing data analysis approaches and methodologies, and
  4. Most importantly, they demonstrated the power of leveraging human knowledge with process mining in a beautiful way in this case — Key people who work in the process but are not necessarily statistically versed could be involved in the analysis and contribute.

You can read the full case study here and watch Joris’ presentation at last year’s Process Mining Camp here.

We congratulate Joris and the whole Veco team for their achievement!

The Award

The Process Miner of the Year 2016 Trophy

To signify the achievement of winning the Process Miner of the Year award, we commissioned a one-of-a-kind trophy. The Process Miner of the Year 2016 trophy is sculpted from two joined, solid blocks of plum and robinia wood, signifying the raw log data used for process mining. A horizontal copper inlay points to the value that process mining can extract from that log data, like a lode of ore embedded in the rocks of a mine.

It’s a unique piece of art, and we could not think of a better reminder of the wonderful possibilities that process mining opens up for all of us every day.

Joris received the Process Miner of the Year 2016 trophy on behalf of his team during the awards ceremony at camp.

Process Miner of the Year 2016 Award Ceremony

Submit your own project next year!

We would like to thank all the other process miners who submitted great work as well. And we hope that you will all submit your projects next year, because there will be a new Process Miner of the Year!

Process Mining Does Not Remove Jobs — It Creates New Ones


People who have witnessed process mining for the first time sometimes feel threatened by the idea that their jobs will go away. They currently model and discover processes manually, in workshops and interviews, in the traditional way. So, if you can now automate that process discovery, then you don’t need the people anymore who guide those process discovery workshop sessions, right?


Process mining is much more than automatically constructing a process map. If you think that is all it does, then you have not understood process mining and how it works in practice.

From Human Computers to Calculators to Spreadsheets

Think back to the time before computers, when computers were actually humans (typically women) who undertook long and often tedious calculations as a team: The replacement of the human computers paved the way for the millions of programmers that we have today. Or think back to the calculator: The calculator was essentially a little computer that you could hold in your hand. Before spreadsheets were around, people had to calculate everything manually, with a calculator. But once they had access to spreadsheets, they were able to do much more than that. They were not simply doing the same things they had done before, just in an automated way. Instead, they could now run projections based on compound interest for 10 or 20 years into the future, which simply would not have been feasible by hand.1

The thing is that process mining allows you to look at your processes at a much more detailed level. In a workshop or interview-based setup, you typically get a good overview of the main process — the happy flow. But the big improvement potential typically lies in the 20% that do not go so well. Process mining allows you to get the complete picture and analyze the full process in much more detail. And once you have implemented a change in the process, you can simply re-run the analysis to see how effective your improvement has actually been.

In many ways, process mining is as revolutionary for processes as spreadsheets were for numbers.

Process Mining Requires Skills

Process mining is not an automated, push-of-a-button exercise. Not at all. It requires a smart analyst who knows how to prepare the data, how to ensure data quality, and who can interpret the results — together with the business.

That’s why the workshops with the business stakeholders are not going away either. As a consultant or in-house analyst you will need their input, because they know the process much better than you do. And you want them to participate and build up ownership of whatever comes out of the project — they are the ones who have to implement the changes after all.

It is one of the most powerful aspects of the traditional workshops that people from different areas get together and realize that they have different and incomplete views of the process, and that they start building a shared understanding. Process mining can be used in exactly the same way. You can run an interactive workshop with the relevant stakeholders at the table and come out with improvement ideas in a very short time. You will just make better use of their time: Rather than taking weeks to discover how the process works, you can focus on why things are being done the way they are done. And you can dig much deeper.

Process mining takes skills and is not an automated thing. All of you in the business of helping people to understand and improve their processes should start building those skills, because you will deliver more value and you won’t be any less busy.

  1. In fact, Dan Bricklin tells exactly such a story in this Business Of Software talk. Back when he was working on VisiCalc, he came into his business school class with a case analysis that was unbelievably detailed and basically impossible to do manually. 
Watch Wil van der Aalst at Process Mining Camp 2015

The preparations for this year’s Process Mining Camp are running at full speed! If you can’t attend but would like to receive the presentations and video recordings afterwards, sign up for the Camp mailing list here.

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Léonard Studer from the City of Lausanne, Willy van de Schoot from Atos International, Joris Keizers from Veco, Mieke Jans from Hasselt University, Bart van Acker from Radboudumc, Edmar Kok from DUO, and Anne Rozinat from Fluxicon.

The last event at Process Mining Camp 2015 was a Fireside chat interview with Prof. Wil van der Aalst from Eindhoven University of Technology. Anne and Wil discussed the success of the Process Mining MOOC on Coursera, why people are struggling with the case ID notion in process mining, how process mining fits into data science in general, and how the process mining field has evolved over time.

Do you want to see the “godfather” of process mining answer our, and the camp audience’s, questions? Watch Wil’s fireside chat now!
