
Meet The Process Miners of the Year 2017!

At the end of Process Mining Camp this year, we had the pleasure of handing out the annual Process Miner of the Year award for the second time. Carmen Lasa Gómez (left in the photo at the top) from Telefónica received the award on behalf of her co-author Javier García Algarra (middle in the photo at the top) and the whole team.

Congratulations to the team at Telefónica!

The winning contribution from the Telefónica team was a case study about how they discovered operational drifts in their IT service management processes with process mining. Operational drifts are slow changes in the informal culture of groups that are not dramatic enough to produce a sharp impact on quality of service. They are not easy to detect, even for experienced analysts, because they do not change the overall process map.

Learn more about how Carmen and Javier managed to discover these operational drifts in the case study here.

To signify the achievement of winning the Process Miner of the Year award, we commissioned a one-of-a-kind trophy. The Process Miner of the Year 2017 trophy is sculpted from two joined, solid blocks of plum and robinia wood, signifying the raw log data used for process mining. A vertical copper inlay points to the value that process mining can extract from that log data, like a lode of ore embedded in the rocks of a mine.

It’s a unique piece of art, and we can think of no better reminder of the wonderful possibilities that process mining opens up for all of us every day.

Become the Process Miner of the Year 2018!

There are now so many more applications of process mining than there were just a few years ago. With the Process Miner of the Year competition, we want to stimulate companies to showcase their greatest projects and get recognized for their success.

Will you be the Process Miner of the Year 2018? Learn more about how to submit your case study here!

Data Quality Problems In Process Mining And What To Do About Them — Missing Complete Timestamps for Ongoing Activities

This is the 13th article in our series on data quality problems for process mining. You can find an overview of all articles in the series here.

If you have ‘start’ and ‘complete’ timestamps in your data set, then you can sometimes encounter situations where the ‘complete’ timestamp is missing for those activities that are currently still running.

For example, take a look at the data snippet below. Two process steps were performed for case ID 1938. The second activity that was recorded for this case is ‘Analyze Purchase Requisition’. It has a ‘start’ timestamp but the ‘complete’ timestamp is empty, because the activity has not yet completed (it is ongoing).

Missing Complete Timestamp

In principle, this is not a problem. After importing the data set, you can simply analyze the process map and the variants, etc., as you would usually do. When you look at a concrete case, then the activity duration for the activities that have not completed yet is shown as “instant” (see the history for case ID 1938 in the screenshot below).

Activity duration is instant

However, this does become a problem when you analyze the activity duration statistics (see the screenshot below). The “instant” activity durations influence the mean and the median duration of the activity. So, you want to remove the activities that are still ongoing from the calculation of the activity duration statistics.

The activity duration statistics are affected by this

How to fix:

  1. Import your data set again and only configure the complete timestamp as a ‘Timestamp’ column (keep the start timestamp column as an attribute via the ‘Other’ configuration). This will remove all events where the complete timestamp is missing.
  2. Export your data set as a CSV file and import it again into Disco, now with both the start and the complete timestamp columns configured as ‘Timestamp’ column.

Your activity duration statistics will now only be based on those activities that actually have both a start and a complete timestamp.
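Disco does all of this through its import dialog, but if you want to pre-check the effect on your own data, the same idea can be sketched in a few lines of pandas. This is just an illustration; all column names and values below are made up:

```python
import pandas as pd

# Hypothetical event log with 'start' and 'complete' timestamp columns.
df = pd.DataFrame({
    "case_id":  [1938, 1938],
    "activity": ["Create Purchase Requisition", "Analyze Purchase Requisition"],
    "start":    pd.to_datetime(["2017-01-10 09:00", "2017-01-12 14:00"]),
    "complete": pd.to_datetime(["2017-01-10 11:30", None]),  # ongoing: no complete timestamp
})

# Keep only activities that have both timestamps, mirroring the
# two-step re-import described above.
finished = df.dropna(subset=["start", "complete"])

# Duration statistics are now based on finished activities only.
durations = finished["complete"] - finished["start"]
print(durations.mean(), durations.median())
```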

Dealing With Parallelism in Your Process Maps

Last week, we saw how you can differentiate between active time and passive time if you have a start and an end timestamp in your data set.

If you do have a start and end timestamp in your data, it can also happen that some of the activities are running at the same time. Disco detects parallelism if two activities overlap in time (see illustration below).

In the example above, you can see that activity C starts two hours before activity B has ended. Therefore, both activities are shown in parallel in the process map (see the top left). You can also see that, for processes with parallel activities, the frequencies do not add up to 100% anymore. For example, after activity A both the path to activity B and the path to activity C are followed, and their frequencies (1 + 1) do not add up to the frequency of the previous activity as they would if there was a choice between them.1

Furthermore, the waiting times in the process are now calculated with respect to the previous activities — not the ones that are running in parallel (see top right).
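To build some intuition for the overlap rule, here is a minimal pandas sketch that flags parallel activities, using made-up timestamps that mirror the example above (an illustration of the general idea, not Disco's actual implementation):

```python
import pandas as pd

# Hypothetical log of one case with start/complete timestamps per activity.
events = pd.DataFrame({
    "activity": ["A", "B", "C"],
    "start":    pd.to_datetime(["2017-03-01 09:00", "2017-03-01 10:00", "2017-03-01 12:00"]),
    "complete": pd.to_datetime(["2017-03-01 09:30", "2017-03-01 14:00", "2017-03-01 16:00"]),
})

# Two activities overlap (run in parallel) if each starts
# before the other completes.
def overlaps(a, b):
    return a["start"] < b["complete"] and b["start"] < a["complete"]

for i in range(len(events)):
    for j in range(i + 1, len(events)):
        a, b = events.iloc[i], events.iloc[j]
        if overlaps(a, b):
            print(a["activity"], "and", b["activity"], "overlap -> parallel")
```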

If you have a parallel process, then this is typically what you want. For example, the screenshot below shows a project management process.

You can see that there are several milestones in the process, such as ‘Install in test environment’. To reach a milestone in this process, several activities need to be completed beforehand, but they can be completed in parallel. In the example below, we can see that not all the parallel activities are always performed. For example, a ‘Project risk review’ was only performed for 11 out of the 120 cases.

When you switch to the performance view for this process, you can analyze the times of the different parallel paths to perform a Critical Path Analysis. A critical path analysis is only applicable to parallel processes and lets you see which of the parallel branches, if delayed, would also delay the next milestone.

Challenges with Parallel Processes

In most situations, if you have parallelism in your process, this is exactly what you want to see. However, there can be some problems related to parallelism as well. For example:

Fortunately, if you find yourself in one of these situations, there is a simple way to get around the parallelism problem: You can import your data set again and configure only one of your timestamps as a ‘Timestamp’ column in Disco (you can keep the other one as an attribute). If you have only one timestamp configured, Disco always shows you a sequential view of your process. Even if two activities have the same timestamp they are shown in sequence with ‘instant’ time between them.

Looking at a sequential view of your process is a great way to investigate the process map and the process variants without being distracted by parallel process parts. You can then always go back and import the data with two timestamps again if you want to analyze the activity durations and the parallel flows.
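As a small illustration of this workaround (with made-up data), dropping down to a single timestamp per activity turns every case into a plain sequence, even where the two-timestamp view would show parallelism:

```python
import pandas as pd

# Hypothetical log; B and C overlap in the two-timestamp view.
df = pd.DataFrame({
    "case_id":  [7, 7, 7],
    "activity": ["A", "B", "C"],
    "start":    pd.to_datetime(["2017-03-01 09:00", "2017-03-01 10:00", "2017-03-01 12:00"]),
    "complete": pd.to_datetime(["2017-03-01 09:30", "2017-03-01 14:00", "2017-03-01 16:00"]),
})

# Use only one of the timestamps: sorting by it yields a strictly
# sequential ordering of the steps in each case.
sequential = df.sort_values(["case_id", "start"])
print(sequential[["case_id", "activity", "start"]])
```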


  1. If you run the animation for this process, you will also see that one token splits into two tokens for the parallel part of the process and then they merge again.  
Understanding the Meaning of Your Timestamps

In earlier articles of this series, we already discussed how you can change your perspective on the process by configuring your case ID and activity columns during the import step, by combining multiple case ID fields, and by bringing additional attribute dimensions into your process view.

All of these articles were about changing how you interpret your case and your activity fields. But you can also create different perspectives with respect to the third data requirement for process mining: your timestamps.

There are two things that you need to keep in mind when you look at the timestamps in your data set:

1. The Meaning of Your Timestamps

Even if you have just one timestamp column in your data set, you need to be really clear about what exactly the meaning of these timestamps is. Does the timestamp indicate that the activity was started, scheduled or completed?

For example, if you look at the following HR process snippet, then it looks like the ‘Process automated’ step is a bottleneck: A median delay of 4.8 days is shown at the big red arrow (see screenshot below).1

In fact, however, the timestamps in this data set indicate when an activity became available in the HR workflow tool. This means that, at the moment one activity is completed, the next activity is automatically scheduled (and the timestamp is recorded for the newly scheduled activity).

This shifts the interpretation of the bottleneck back to the activity ‘Control request’, which is a step that is performed by the HR department: At the moment that the ‘Control request’ activity was completed, the ‘Process automated’ step was scheduled. So, the big red path shows us the time from when the step ‘Control request’ became available until it was completed.

You can see how knowing that the timestamp in the data set has the meaning of ‘scheduled’ rather than ‘completed’ shifts the interpretation of which activity is causing the delay from the target activity (the activity the path leads to) to the source activity (the activity from which the path starts).

2. Multiple Timestamp Columns

If you have a start and a complete timestamp column in your data set, then you can include both timestamps during your data import and distinguish active and passive time in your process analysis (see below).

However, sometimes you have even more than two timestamp columns. For example, let’s say that you have a ‘schedule’, a ‘start’ and a ‘complete’ timestamp for each activity. In this case you can choose different combinations of these timestamps to take different perspectives on the performance of your process.

For the example above you have three options.

Option a: Start and Complete timestamps

If you choose the ‘start’ and ‘complete’ timestamps as Timestamp columns during the import step, you will see the time between ‘start’ and ‘complete’ as the activity duration and the times between ‘complete’ and ‘start’ as the waiting times in the performance view (see above).

Option b: Schedule and Complete timestamps

If you choose the ‘schedule’ and ‘complete’ timestamps as Timestamp columns during the import step, you will see the time between ‘schedule’ and ‘complete’ as the activity duration and the times between ‘complete’ and ‘schedule’ as the waiting times in the performance view (see above). So, it shows the time between when an activity became available until it was completed rather than focusing on the time that somebody was actively working on a particular process step.

Option c: Schedule and Start timestamps

If you choose the ‘schedule’ and ‘start’ timestamps as Timestamp columns during the import step, you will see the time between ‘schedule’ and ‘start’ as the activity duration and the times between ‘start’ and ‘schedule’ as the waiting times in the performance view (see above). Here, the activity durations show the time between when an activity became available until it was started.

All of these views can be useful and you can import your data set in different ways to take these different views and answer your analysis questions.
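To make the three options concrete, here is a small worked example with made-up timestamps for a single activity:

```python
import pandas as pd

# Hypothetical timestamps for one activity.
schedule = pd.Timestamp("2017-05-02 09:00")
start    = pd.Timestamp("2017-05-02 11:00")
complete = pd.Timestamp("2017-05-02 15:30")

print("Option a (start -> complete):   ", complete - start)     # active work time
print("Option b (schedule -> complete):", complete - schedule)  # available until done
print("Option c (schedule -> start):   ", start - schedule)     # available until picked up
```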

Conclusion

Timestamps are really important in process mining, because they determine the order of the event sequences on which the process maps and variants are based. And they can bring all kinds of problems (see also our series on data quality problems for process mining here).

But the meaning of your timestamps also influences how you should interpret the durations and waiting times in your process map. So, in summary:


  1. Learn more about how to perform a bottleneck analysis with process mining here.  
Combining Attributes into Your Process View

Previously, we discussed how you can take different perspectives on your data by choosing what you want to see as your activity name, case ID, and timestamps.

One of the ways in which you can take different perspectives is to bring an additional dimension into your process map by combining more than one column into the activity name. You can do this in Disco by simply configuring more than one column as ‘Activity’ (learn how to do this in the Disco user guide here).

By bringing in an additional dimension, you can “unfold” your process map in a way that does not only show which activities took place in the process, but also in which department, for which problem category, or in which location each activity took place. For example, by bringing in the agent position from your call center data set, you can see which activities took place in the first-level support team and differentiate them from the steps that were performed by the back office workers, even if the activity labels for their tasks are the same.

You can experiment with bringing in all kinds of attributes into your process view. When you do this, you can observe two different effects.

1. Comparing Processes

When you bring in a case-level attribute that does not change over the course of the case, you will effectively see the processes for all values of your case-level attribute next to each other — in the same process map. For example, the screenshot below shows a customer refund process for both the Internet and the Callcenter channel next to each other.

Seeing two or more processes side by side in one picture can be an alternative to filtering the process in this dimension. Of course, you can still apply filters to compare only a few of the processes at once.

2. Unfolding Single Activities

When you have an attribute that is only filled for certain events, bringing this attribute into your activity name will only unfold the activities for which it is filled.

For example, a document authoring process may consist of the steps ‘Create’, ‘Update’, ‘Submit’, ‘Approve’, ‘Request rework’, ‘Revise’, ‘Publish’, and ‘Discard’ (performed by different people such as authors and editors). Imagine that in this document authoring process, you have additional information in an extra column about the level of required rework (major vs. minor) in the ‘Request rework’ step.

If you just use the regular process step column as your activity, then ‘Request rework’ will show up as one activity node in your process map (see image below).

However, if you include the ‘Rework type’ attribute in the activity name, then two different process steps ‘Request rework – major’ and ‘Request rework – minor’ will appear in the process map (see below).
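Disco builds the combined activity name for you when you configure multiple ‘Activity’ columns. For intuition, here is a minimal pandas sketch of the same unfolding; the column names and values are made up:

```python
import pandas as pd

# Hypothetical document authoring log; 'rework_type' is only
# filled for the 'Request rework' step.
df = pd.DataFrame({
    "case_id":     [1, 1, 1, 1],
    "activity":    ["Create", "Submit", "Request rework", "Revise"],
    "rework_type": [None, None, "major", None],
})

# Unfold only the activities for which the attribute is filled;
# all other activities keep their plain label.
df["activity_view"] = df["activity"]
mask = df["rework_type"].notna()
df.loc[mask, "activity_view"] = (
    df.loc[mask, "activity"] + " - " + df.loc[mask, "rework_type"]
)
print(df["activity_view"].tolist())
# ['Create', 'Submit', 'Request rework - major', 'Revise']
```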

This can be handy in many other processes. For example, think of a credit application process that has a ‘Reject reason’ attribute that provides more information about why the application was rejected. Unfolding the ‘Reject’ activity in the ‘Reject reason’ dimension will enable you to visualize the different types of rejections right in the process map in a powerful way.

Conclusion

So, it is worth thinking about how you can best structure your attribute data already while you are preparing your data set.

As a rule of thumb:

Combining Multiple Columns as Case ID

In a previous article, we discussed how you can take different perspectives on your data by choosing what you want to see as your activity name, case ID, and timestamps.

One of the examples was about changing the perspective of what we see as a case. The case determines the scope of the process: Where does the process start and where does it end?

You can think of a case as the object that moves through the process. For example, the travel ticket in the picture above might go through the steps ‘Purchased’, ‘Printed’, ‘Scanned’ and ‘Validated’. If you want to look at the process flow of travel tickets, you would choose the travel ticket number as your case ID.

In the previous article we saw how you can change the focus from one case ID to another. For example, in a call center process you can look at the process from the perspective of a service request or from the perspective of a customer. Both are valid views and offer different perspectives on the same process.

Another option you should keep in mind is that, sometimes, you might also want to combine multiple columns into the case ID for your process mining analysis.

For example, if you look at the call center data snippet below, you can see that the same customer contacts the helpdesk about different products. So, while we want to analyze the process from a customer perspective, perhaps it would be good to distinguish between those cases for the same customer?

Let’s look at the effect of this choice based on the example. First, we only use the ‘Customer ID’ as our case ID during the import step. As a result, all activities that relate to the same customer are combined in the same case (‘Customer 3’).

If we now want to distinguish cases, where the same customer got support on different products, we can simply configure both the ‘Customer ID’ and the ‘Product’ column as case ID columns in Disco (you can see the case ID symbol in the header of both columns in the screenshot below):

The effect of this choice is that both fields’ values are concatenated (combined) in the case ID value. So, instead of one case ‘Customer 3’ we now get two cases: ‘Customer 3 – MacBook Pro’ and ‘Customer 3 – iPhone’ (see below).

There are many other situations where combining two or more fields into the case ID can be necessary. For example, imagine that you are analyzing the processing of tax returns at the tax office. Each citizen is identified by a unique social security number. This could be the case ID for your process, but if you have data from multiple years, then you also need the year to separate the returns of the same citizen across the years.

To create a unique case identifier, you can simply configure all the columns that should be included in the case ID as a ‘Case’ column like shown above, and Disco will automatically concatenate them for the case ID.
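If you prepare your data outside of Disco, the same concatenation is a one-liner in pandas; a minimal sketch with made-up values:

```python
import pandas as pd

# Hypothetical call center snippet: one customer, two products.
df = pd.DataFrame({
    "customer_id": ["Customer 3", "Customer 3"],
    "product":     ["MacBook Pro", "iPhone"],
    "activity":    ["Inbound call", "Inbound call"],
})

# Combine both fields into one case identifier, mirroring what happens
# when several columns are configured as 'Case' in Disco.
df["case_id"] = df["customer_id"] + " - " + df["product"]
print(df["case_id"].unique())
# ['Customer 3 - MacBook Pro' 'Customer 3 - iPhone']
```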

As before, there is no single right or wrong way to configure your data import; it depends on how you want to look at your process and which questions you want to answer. Often, you will end up creating multiple views, and all of them are needed to get the full picture.

When Incomplete Cases Shouldn’t Be Removed

This is the fourth and last article in our series on how to deal with incomplete cases in process mining. You can find an overview of all articles in the series here.

There are also situations in which you should not remove incomplete cases from your data set. Here are two examples:

Finally, do not forget to assess the representativeness of your data set after you have removed your incomplete cases. For example, if it appears that 80% of your cases are incomplete then it would be very dangerous to base your process analysis on the remaining 20%!

If you do not have enough completed cases in your data set, you may need to go back and request a larger data sample from a longer time period to be able to get representative results.

The Different Meanings of “Finished”

This is the third article in our series on how to deal with incomplete cases in process mining. You can find an overview of all articles in the series here.

Once you have determined what your start points and end points are, you still need to think about what “finished” or “completed” actually means for your process.

Multiple interpretations are possible and the differences can be subtle, but you will need to use different filters depending on the meaning that you want to apply. The results will be different and you need to be clear about which meaning is right for your data set.

Here are four examples of how you can filter incomplete cases. None of them is better or more appropriate than the others in general. Instead, which one is right depends on your process and on the meaning of “finished” that you want to choose.

Ended In

Perhaps the most common meaning of “finished” is to look at which activities have occurred as the very last activity (for end points) or as the very first activity (for start points) in a case.

This corresponds to the dashed lines that you see in the process map and you can use the Endpoints Filter in Discard cases mode to filter all cases that start or end with a particular set of activities (see Figure 1).

Figure 1: Use the Endpoints Filter in Discard cases mode to filter all cases that start or end with a particular set of activities.

When you add this filter, only the activities that occurred as the very first event in any of the cases are shown in the ‘Start event values’ on the left and only activities that occurred as the very last event in any of the cases are shown in the ‘End event values’ on the right.

You can then select only the regular start and end activities that you have identified in the previous step to focus on your completed cases. For example, if we only select the ‘Order completed’ activity as a regular end point for our refund process, then the remaining data set will only contain the 333 cases that actually ended with ‘Order completed’. If you use the shortcut ‘Filter for this start/end activity’ after clicking on a dashed line in the process map, Disco will automatically add a pre-configured Endpoints filter to your data set.

To use your filtered data set as the new reference point for your further analysis, you can enable the checkbox ‘Apply filters permanently’ after pressing the ‘Copy and filter’ button. The outcome of applying the filter will be the same (the same 333 cases remain), but the applied filter will be consolidated in a new data set, so that successive analyses use this new baseline as the new 100% of cases.
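The Endpoints Filter does all of this for you in Disco; to make the ‘Ended In’ semantics precise, here is a minimal pandas sketch (all data is made up):

```python
import pandas as pd

# Hypothetical refund log, one row per event.
df = pd.DataFrame({
    "case_id":   [1, 1, 2, 2],
    "activity":  ["Order created", "Order completed",
                  "Order created", "Invoice modified"],
    "timestamp": pd.to_datetime(["2012-01-02", "2012-01-05",
                                 "2012-01-03", "2012-01-20"]),
}).sort_values(["case_id", "timestamp"])

regular_end_points = {"Order completed"}

# 'Ended In': keep only cases whose very last event is a regular end point.
last_activity = df.groupby("case_id")["activity"].last()
finished = last_activity[last_activity.isin(regular_end_points)].index
print(df[df["case_id"].isin(finished)])  # case 2 is discarded
```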

Reached Milestone

Sometimes, the very last activity that happened in a case is not the best way to determine whether a case has been completed or not.

For example, after completing an order there might be back-end activities such as archiving or other documentation steps that occur later. In these cases, ‘Order completed’ will not be the very last step in the process (so, the case would not be picked up if you use the Endpoints filter).

Figure 2: Use the Attribute Filter in Mandatory mode to filter cases that have passed a certain milestone in the process.

If you are mainly concerned with whether one or more milestone activities that indicate the completion of your process have occurred, you can use the Attribute Filter in Mandatory mode (see Figure 2). This way, you keep all cases where any of the selected activities has happened, but you don’t care whether it was the very last step in the process or whether other activities were recorded afterwards.

Instead of manually adding this filter, you can also use the shortcut Filter this activity… after clicking on the activity in the process map. Disco will automatically add a pre-configured Attribute Filter in Mandatory mode to your data set with the right activity already selected.
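In the same sketch style as before (made-up data, not Disco's implementation), the ‘Reached Milestone’ semantics keeps a case as soon as the milestone occurs anywhere in its history:

```python
import pandas as pd

# Hypothetical refund log; case 1 has a back-end step after the milestone.
df = pd.DataFrame({
    "case_id":   [1, 1, 1, 2, 2],
    "activity":  ["Order created", "Order completed", "Warehouse",
                  "Order created", "Invoice modified"],
    "timestamp": pd.to_datetime(["2012-01-02", "2012-01-05", "2012-01-08",
                                 "2012-01-03", "2012-01-20"]),
})

milestone = "Order completed"

# Keep every case in which the milestone occurred, regardless of
# what was recorded afterwards.
milestone_cases = df.loc[df["activity"] == milestone, "case_id"].unique()
print(df[df["case_id"].isin(milestone_cases)])  # keeps case 1 only
```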

If we apply this meaning of “finished” based on the milestone activity ‘Order completed’ for the refund process, we get a slightly different outcome compared to the Endpoints Filter before. Instead of 333 cases, there now remain 334 cases after applying the filter and we can see that the additional case ended with the activity ‘Warehouse’ (see Figure 3).

Figure 3: One additional case remains after changing the meaning of the finished cases from the Ended In to the Reached Milestone semantics.

If we now click on this dashed line leading from the ‘Warehouse’ activity and use the shortcut to investigate this case in more detail, we can see in the history of the case that the activity ‘Order completed’ did indeed occur. However, it occurred in the middle of the process, after the order was initially rejected. Then, the case got picked up again and the refund was actually granted (see Figure 4).

Figure 4: The additional case did perform the step ‘Order completed’, but ‘Order completed’ was not the very last step in the process.

Cut Off

In another scenario, you might be analyzing the refund process from a customer perspective: This is the process that the customers of an electronics manufacturer go through after the product that they purchased broke and they now want to get their money back. So, from the customer’s point of view, the process is “finished” as soon as they have received their refund.

To analyze the data from this perspective, we can focus on the three payment activities ‘Payment issued’, ‘Refund issued’ and ‘Special Refund issued’ (see Figure 5).

Figure 5: From the customer’s perspective the process is finished as soon as one of the payment activities has occurred.

If we search for these activities in the process map, then we can see that there are several activities that happen afterwards. Sometimes, the delays in the back-end processing can be quite long (for example, 7.5 days on average after the ‘Payment issued’ step), but from the customer’s perspective this delay is not relevant.

So, to focus our analysis on the part of the process that is relevant for the customer, we can use the Endpoints Filter in Trim longest mode (see Figure 6).

Figure 6: Use the Endpoints Filter in Trim longest mode to focus on a segment of the process.

When we change the Endpoints Filter mode from Discard cases to Trim longest, then all of the activities become available as ‘Start event values’ on the left and as ‘End event values’ on the right. We can now select only the three payment activities as the customer endpoints in our process.

As a result, everything that happened after any of these three payment activities is cut off. We can see that the customer payments now appear as the endpoints in our process map (see Figure 7).

Figure 7: We have created three new endpoints for the process segment that we want to focus on.

The cases that remain in the data set after applying the filter are the same ones as if we had used the Attribute filter in ‘Mandatory’ mode. But cutting off all activities after the payments enables us to focus our process analysis on the part of the process that is relevant from the customer’s perspective.
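Again as a rough sketch of the idea (made-up data; Disco's Trim longest mode is the convenient way to do this in practice), cutting off everything after the payment step could look like this:

```python
import pandas as pd

# Hypothetical refund case with back-end steps after the payment.
df = pd.DataFrame({
    "case_id":   [1, 1, 1, 1],
    "activity":  ["Order created", "Refund issued",
                  "Archive", "Order completed"],
    "timestamp": pd.to_datetime(["2012-01-02", "2012-01-06",
                                 "2012-01-10", "2012-01-13"]),
})

payment_activities = {"Payment issued", "Refund issued", "Special Refund issued"}

# Find the last payment event per case and drop everything after it.
is_payment = df["activity"].isin(payment_activities)
last_payment = df[is_payment].groupby("case_id")["timestamp"].max()
df["cutoff"] = df["case_id"].map(last_payment)
trimmed = df[df["cutoff"].isna() | (df["timestamp"] <= df["cutoff"])]
print(trimmed[["case_id", "activity", "timestamp"]])
```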

Open for longer than X

There might be activities in your process that can be considered an endpoint if there has been a certain period of inactivity afterwards (see also Reason No. 3 at the beginning of this series). For example, we can request missing information (like the purchase receipt) from a customer to handle their refund order but the customer might not get back to us.

If we want to focus on cases where the activity ‘Missing documents requested’ was the last step in the process but nothing has happened for a month, we can use a combination of filters in the following way.

First, we add an Endpoints filter as shown in Figure 8.

Figure 8: To filter out cases that have been open for a certain time, we first add an Endpoints Filter.

Then, we add a second filter by clicking the ‘click to add filter…’ button again and we add a Timeframe filter on top of it (see Figure 9).

Figure 9: Then, we add a Timeframe filter that focuses on cases that have had a certain period of inactivity since the last step.

By adapting the selected timeframe in such a way that the past month is not covered, we will only keep those cases that did end with ‘Missing documents requested’ and where that last step took place more than one month ago.
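In pandas terms, this combined filter boils down to two conditions on the last event of each case; a minimal sketch with made-up data:

```python
import pandas as pd

# Hypothetical log; case 1 has been waiting for documents for months.
df = pd.DataFrame({
    "case_id":   [1, 1, 2, 2],
    "activity":  ["Order created", "Missing documents requested",
                  "Order created", "Missing documents requested"],
    "timestamp": pd.to_datetime(["2012-01-02", "2012-01-03",
                                 "2012-03-01", "2012-03-02"]),
}).sort_values(["case_id", "timestamp"])

log_end = df["timestamp"].max()               # end of the data set
cutoff  = log_end - pd.Timedelta(days=30)     # "open for longer than a month"

last_events = df.groupby("case_id").last()
stuck = last_events[
    (last_events["activity"] == "Missing documents requested")
    & (last_events["timestamp"] < cutoff)
]
print(stuck.index.tolist())  # -> [1]
```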

How To Determine The Start and End Points For Your Process

This is the second article in our series on how to deal with incomplete cases in process mining. You can find an overview of all articles in the series here.

Once you start analyzing your data set for incomplete cases, you need to determine what the expected start and end points in your process are. Typically, you do this by looking at which activities appear to be the last step in the process (look at the dashed lines in your process map) and by using your domain knowledge about the process.

In the refund process, we have already identified one possible regular endpoint in the activity ‘Order completed’. But are there other regular end points as well? For example, by digging deeper in the data we find that there is another activity ‘Cancelled’ that also appears as the last step in the process. From the name ‘Cancelled’ we can guess what this step means (the processing of the refund order has been stopped). The question is whether we consider ‘Cancelled’ a regular end point in the process, or whether we would rather remove cancelled cases from our process analysis?

The answer to this question depends on the questions that you want to answer in your process mining analysis. Furthermore, you typically need domain knowledge to definitively clarify how the process end points should be interpreted. It is fine for you as the process analyst to make some initial guesses, but it is critical that you document your assumptions along the way and verify them with a domain expert later on (see Data Validation Session).

If you have no idea at all which activities could be candidates for a start or end point in your process, there are two tricks you can try out to see if they help:

  1. Work from the process map and click on one of the dashed lines leading to the endpoint (see Figure 1). If the case frequency is the same as the end frequency (or very close) then this is a hint that the activity might be an end point in the process, because there is never anything happening afterwards. The same can be done with the start activities by clicking on the dashed lines leading from the start point.

    Figure 1: Click on the dashed line and press the Filter for this end activity… button to investigate the cases that end in a certain place.

    To investigate some example cases with a particular end point in more detail, click on the shortcut ‘Filter for this end activity…’ and apply the pre-configured Endpoints filter that Disco has added.

    a) If you decide that this activity is a regular end point in the process, remove the filter again from the filter stack, apply the updated filter settings, and continue looking at the next dashed line in the process map.

    b) If you decide that cases that end with this activity are incomplete, invert the selection of the Endpoints filter and apply it to remove all cases that end there. Then, continue looking at the remaining data set and click on the next dashed line in the process map.

    By gradually removing end points that you consider incomplete, more and more end points that are currently hidden due to the low ‘Paths’ slider will appear until you have investigated all endpoints (keep pulling up the ‘Paths’ slider until you have seen them all) and have decided which to keep and which to remove.

  2. The second trick only works if you have data covering a large enough timeframe compared to the case durations in your data set. But if you do, try to apply a Timeframe filter before investigating the start and end points as described above in the following way:

    To investigate the process end points, add a Timeframe filter that covers only the first half of the timeframe (see Figure 2). As a result, only cases that had no further activity during the latter half of your data set’s timeframe remain. Therefore, the end activities that are revealed through the dashed lines leading to the end point in the process map are much more likely to be actual end points in the process. In a way, you can think of it as having excluded those cases that just performed some kind of intermediary step yesterday, or a few days before the end of the data set (a small sketch of this trick follows after this list).

    Figure 2: Filter for cases that have been inactive for a certain amount of time.

    To investigate the process startpoints, you can do the same but configure the Timeframe filter in such a way that it covers only the latter half of the timeline. This way, start points that emerge only because cases have been started shortly before the start of the data set timeframe will be excluded.
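Here is the promised pandas sketch of the second trick (made-up data): by keeping only cases whose entire history falls into the first half of the log timeframe, the end points you see are much less likely to be data-collection artifacts:

```python
import pandas as pd

# Hypothetical log spanning roughly half a year.
df = pd.DataFrame({
    "case_id":   [1, 1, 2, 2],
    "activity":  ["Order created", "Order completed",
                  "Order created", "Invoice modified"],
    "timestamp": pd.to_datetime(["2012-01-02", "2012-01-05",
                                 "2012-06-20", "2012-06-23"]),
})

start, end = df["timestamp"].min(), df["timestamp"].max()
midpoint = start + (end - start) / 2

# Keep only cases with no activity in the latter half of the timeframe:
# nothing more happened to them for months, so their last activities
# are likely to be real end points.
last_seen = df.groupby("case_id")["timestamp"].max()
settled = last_seen[last_seen <= midpoint].index
print(df[df["case_id"].isin(settled)])  # keeps case 1 only
```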

How To Deal With Incomplete Cases in Process Mining

[This article previously appeared in the Process Mining News – Sign up now to receive regular articles about the practical application of process mining.]

Before you start with your process mining analysis, you need to assess whether your data is suitable for process mining and check your data for data quality problems (see also our Data Quality series here). Afterwards, one of the next steps is to understand how you can differentiate between complete and incomplete cases in your process.

An ‘incomplete case’ is a case where either the start or the end of the process is missing. There can be different reasons for why a case is incomplete, such as:

  1. Your data extraction method has retrieved only events in a certain timeframe. For example, let’s say that you have extracted all the process steps that were performed in a particular year. Some cases may have actually started in the previous year (before January). Furthermore, some cases may have started in the year that you are looking at but continued until the next year (after December). In this situation, you will only see the part of these cases that took place in the year that you are analyzing.
  2. Some cases have not finished yet. Even if you have extracted all the data there is, some of the cases may not have finished yet. This means that, if you are extracting your process mining data today, some of the cases may have started recently and have not yet progressed to the end of the process. They are still “somewhere in the middle”. If you waited a few weeks before extracting the data, these cases would probably be finished, but then there might be new ones that have just recently started!
  3. Some cases might never finish. You may have a clear picture of how your process should go. But a customer might not get back to you as you expected, a supplier might never send you the data that was needed to sign them up, or a colleague might close a case in an unexpected phase because an error, a duplicate, or another problem was detected. These cases do not end at any of the expected end points, and they will never be finished even if you waited for ages. The same can be true for the start points.

Looking for incomplete cases is a standard step that you should always take before you dive into your actual process mining analysis. In this four-part series, we will give you clear guidelines for how to deal with incomplete cases.

The following topics will be covered:

Let’s get started!

Why Incomplete Cases Can Be Problematic

At first, it might not be obvious why incomplete cases are a problem in the first place. This is what the data shows, so my process mining analysis should show what actually happened, right?

Wrong. At least as far as incomplete cases are concerned: If your data has incomplete cases because of Reason No. 1 or Reason No. 2 (see above), then these missing start or end points do not reflect the actual process; they occur due to the way that the data was collected.

Take a look at the customer refund process picture below: The dashed lines leading to the endpoint (the square symbol at the bottom of the process map) indicate which activities happened as the very last step in the process. For example, for 333 cases ‘Order completed’ was the very last step that was recorded – See (1) in Figure 1. This seems to be a plausible end point for the process. However, there were also 20 cases for which the activity ‘Invoice modified’ was the very last step that was observed – See (2) in Figure 1. This does not seem like an actual end point of the process, does it?

Figure 1: Cases ending with Order completed (1) seem to be finished, but cases where Invoice modified was the last step that happened (2) might still be ongoing?

If we look up an example case that ends with ‘Invoice modified’ (see Figure 2), then we can see that the ‘Invoice modified’ step indeed happened just before the end of the data set. It occurred on 20 January 2012 and the data set ends on 23 January 2012. What if we had data until June 2012? Would there have been any steps after ‘Invoice modified’ then?

Figure 2: If an incomplete case stops at a particular point, it could just mean that we have not yet observed the next step.

So, we can see that not all end points in the data are necessarily meaningful end points in the process. Some cases can be incomplete simply because we are missing the end or the beginning of what actually happened, either because of how the data was extracted or because we don’t know yet what is going to happen with cases that are still ongoing. When you look at the process map, or the variants, for a data set that includes incomplete cases, the map and the variants do not show you the actual start and end points in your process but the start and end points in your data.

Another problem with incomplete cases is that their case duration can be misleading. The process mining tool does not know which cases are finished and which are incomplete. Therefore, it always calculates the case duration as the time between the very first and the very last event in the case.

As a result, the case durations of incomplete cases appear shorter in the process mining tool than the actual throughput times of the cases they represent. Let’s take a look at another example case in the process to understand what this means (see Figure 3). Case 72 shown here seems to be very fast: There were just two steps in the process so far (‘Order created’ and ‘Missing documents requested’) and they took just 3 minutes.

However, when you consider that ‘Missing documents requested’ is not the actual end point of this process (we are just in an intermediate state, waiting for the customer to send us some additional information) and we look at the timeline of where this case sits, then we can see that this case has been open for more than 1 month. So, the true throughput time of this case (so far) should be at least 1 month and 3 minutes!

Figure 3: Incomplete cases can appear much faster than they really are.

If you simply leave incomplete cases in your data set, then calculations like the average or median case duration in the statistics view of your process are influenced by these shorter durations. So, not only the process map and the variants but also your performance measurements are impacted by incomplete cases.
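To see the effect on the numbers, here is a minimal sketch (made-up data) that computes each case duration as the time between its first and last event, once with and once without the incomplete case:

```python
import pandas as pd

# Hypothetical log: case 1 is finished, case 2 is still waiting.
df = pd.DataFrame({
    "case_id":   [1, 1, 2, 2],
    "activity":  ["Order created", "Order completed",
                  "Order created", "Missing documents requested"],
    "timestamp": pd.to_datetime(["2012-01-02", "2012-01-31",
                                 "2012-01-20", "2012-01-20 00:03"]),
})

spans = df.groupby("case_id")["timestamp"].agg(["min", "max"])
spans["duration"] = spans["max"] - spans["min"]
print("median with incomplete case:  ", spans["duration"].median())

finished = df.groupby("case_id")["activity"].last() == "Order completed"
print("median of finished cases only:", spans.loc[finished, "duration"].median())
```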

Therefore, you need to investigate incomplete cases in your data before you start with your actual analysis. You want to understand what kind of incomplete cases you have and how many there are. Then, you want to remove them from your data set before you analyze your process in more detail. You can do all this right in Disco and in the remainder of this series we will show you how to do it.

Finally, some data sets may be extracted in such a way that there are no incomplete cases in it. For example, you may have received a data set from your IT department that only contains closed orders. So, any orders that are still open do not show up in your data.

In this situation, you don’t need to remove incomplete cases anymore. However, you should realize that you have no visibility into how representative your data set is with respect to the whole population of orders (remember that understanding how many cases remain after removing incomplete cases is an important step). Be aware of this limitation and consider requesting the set of open cases from the same period in addition to your current data set, to be able to check them and make sure you get the full picture.
