
Privacy, Security and Ethics in Process Mining — Part 2: Responsible Handling of Data

This is the second article in our series on privacy, security and ethics in process mining. You can find an overview of all articles in the series here.

As with any other data analysis technique, you must be careful with the data once you have obtained it. In many projects, nobody thinks about data handling until the security department brings it up. Be the person who thinks about the appropriate level of protection and has a clear plan in place before the data is even collected.

Do:

Don’t:

Privacy, Security and Ethics in Process Mining — Part 1: Clarify Your Goal

[This article previously appeared in the Process Mining News – Sign up now to receive regular articles about the practical application of process mining.]

When I moved to the Netherlands 12 years ago and started grocery shopping at one of the local supermarket chains, Albert Heijn, I initially resisted getting their Bonus card (a loyalty card for discounts), because I did not want the company to track my purchases. I felt that using this information would help them to manipulate me by arranging or advertising products in a way that would make me buy more than I wanted to. It simply felt wrong.

The truth is that no data analysis technique is intrinsically good or bad. It is always in the hands of the people using the technology to make it productive and constructive. For example, while supermarkets could use the information tracked through the loyalty cards of their customers to make sure that we have to take the longest route through the store to get our typical items (passing by as many other products as possible), they can also use this information to make the shopping experience more pleasant, and to offer more products that we like.

Most companies have started to use data analysis techniques to analyze their data in one way or another. These analyses can bring enormous opportunities for the companies and for their customers, but with the increased use of data science the question of ethics and responsible use also becomes more prominent. Initiatives like the Responsible Data Science seminar series [1] take on this topic by raising awareness and encouraging researchers to develop algorithms that have concepts like fairness, accuracy, confidentiality, and transparency built in [2].

Process mining can provide you with amazing insights about your processes and fuel your improvement initiatives with inspiration and enthusiasm, if you approach it in the right way. But how can you ensure that you use process mining responsibly? What should you pay attention to when you introduce process mining in your own organization?

In this article series, we provide you with four guidelines that you can follow to prepare your process mining analysis in a responsible way.

1. Clarify Goal of the Analysis (this article)
2. Responsible Handling of Data
3. Consider Anonymization (coming soon)
4. Establish a Collaborative Culture (coming soon)

1. Clarify Goal of the Analysis

The good news is that, in most situations, process mining does not need to evaluate personal information, because it usually focuses on the internal organizational processes rather than, for example, on customer profiles. Furthermore, you are investigating overall process patterns. For example, a process miner typically looks for ways to organize the process in a smarter way to avoid unnecessary idle times, rather than trying to make people work faster.

However, as soon as you would like to better understand the performance of a particular process, you often need to know more about other case attributes that could explain variations in process behavior or performance. And people might become worried about where this will leave them.

Therefore, already at the very beginning of the process mining project, you should think about the goal of the analysis. Be clear about how the results will be used. Think about what problem you are trying to solve and what data you need to solve this problem.

Do:

Don’t:


  [1] Responsible Data Science (RDS) initiative: http://www.responsibledatascience.org
  [2] Watch Wil van der Aalst’s presentation on Responsible Data Science at Process Mining Camp 2016: https://www.youtube.com/watch?v=ewQbmINuXeU
Meet The Process Miners of the Year 2017!

At the end of Process Mining Camp this year, we had the pleasure of handing out the annual Process Miner of the Year award for the second time. Carmen Lasa Gómez (left in the photo at the top) from Telefónica received the award on behalf of her co-author Javier García Algarra (middle in the photo at the top) and the whole team.

Congratulations to the team at Telefónica!

The winning contribution from the Telefónica team was a case study about how they discovered operational drifts in their IT service management processes with process mining. Operational drifts are slow changes in the informal culture of groups that are not dramatic enough to produce a sharp impact on quality of service. They are not easy to detect, even for experienced analysts, because they do not change the overall process map.

Learn more about how Carmen and Javier managed to discover these operational drifts in the case study here.

To signify the achievement of winning the Process Miner of the Year award, we commissioned a one-of-a-kind trophy. The Process Miner of the Year 2017 trophy is sculpted from two joined, solid blocks of plum and robinia wood, signifying the raw log data used for process mining. A vertical copper inlay points to the value that process mining can extract from that log data, like a lode of ore embedded in the rocks of a mine.

It’s a unique piece of art that reminds us, every day, of the wonderful possibilities that process mining opens up for all of us.

Become the Process Miner of the Year 2018!

There are now so many more applications of process mining than there were just a few years ago. With the Process Miner of the Year competition, we want to stimulate companies to showcase their greatest projects and get recognized for their success.

Will you be the Process Miner of the Year 2018? Learn more about how to submit your case study here!

Data Quality Problems In Process Mining And What To Do About Them — Missing Complete Timestamps for Ongoing Activities

This is the 13th article in our series on data quality problems for process mining. You can find an overview of all articles in the series here.

If you have ‘start’ and ‘complete’ timestamps in your data set, then you can sometimes encounter situations where the ‘complete’ timestamp is missing for those activities that are currently still running.

For example, take a look at the data snippet below (click on the image to see a larger version). Two process steps were performed for case ID 1938. The second activity that was recorded for this case is ‘Analyze Purchase Requisition’. It has a ‘start’ timestamp but the ‘complete’ timestamp is empty, because the activity has not yet completed (it is ongoing).

Missing Complete Timestamp (click to enlarge)

In principle, this is not a problem. After importing the data set, you can simply analyze the process map, the variants, etc., as you would usually do. When you look at a concrete case, the activity duration for the activities that have not completed yet is shown as “instant” (see the history for case ID 1938 in the screenshot below).

Activity duration is instant (click to enlarge)

However, this does become a problem when you analyze the activity duration statistics (see screenshot below). The “instant” activity durations distort the mean and the median duration of the activity. So, you want to remove the activities that are still ongoing from the calculation of the activity duration statistics.

The activity duration statistics are affected by this (click to enlarge)

How to fix:

  1. Import your data set again and configure only the complete timestamp as a ‘Timestamp’ column (keep the start timestamp column as an attribute via the ‘Other’ configuration). This will remove all events where the complete timestamp is missing.
  2. Export your data set as a CSV file and import it again into Disco, now with both the start and the complete timestamp columns configured as ‘Timestamp’ column.

Your activity duration statistics will now only be based on those activities that actually have both a start and a complete timestamp.
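
If you prepare your data outside of Disco anyway, you can achieve the same effect with a few lines of pandas. Here is a minimal sketch; the column names (‘Case ID’, ‘Activity’, ‘Start’, ‘Complete’) are assumptions for illustration only:

```python
import pandas as pd

# Tiny example data set; the column names are assumed for illustration
df = pd.DataFrame({
    'Case ID':  [1938, 1938],
    'Activity': ['Create Purchase Requisition', 'Analyze Purchase Requisition'],
    'Start':    ['2017-08-01 09:00', '2017-08-02 10:00'],
    'Complete': ['2017-08-01 11:30', None],  # ongoing: no complete timestamp yet
})
df['Start'] = pd.to_datetime(df['Start'])
df['Complete'] = pd.to_datetime(df['Complete'])

# Keep only events that have both timestamps, so that ongoing
# activities do not distort the duration statistics
finished = df.dropna(subset=['Start', 'Complete'])
durations = finished['Complete'] - finished['Start']
print(durations.groupby(finished['Activity']).agg(['mean', 'median']))
```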

Dealing With Parallelism in Your Process Maps

Last week, we saw how you can differentiate between active time and passive time if you have a start and an end timestamp in your data set.

If you do have a start and end timestamp in your data, it can also happen that some of the activities are running at the same time. Disco detects parallelism if two activities overlap in time (see illustration below).

In the example above, you can see that activity C starts two hours before activity B has ended. Therefore, both activities are shown in parallel in the process map (see the top left). You can also see that, for processes with parallel activities, the frequencies no longer add up to 100%. For example, after activity A both the path to activity B and the path to activity C are followed, and their frequencies (1 + 1) do not add up to the frequency of the previous activity as they would if there were a choice between them. [1]
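
The overlap rule itself is easy to state in code. Here is a minimal sketch (the timestamps are made up for illustration): two activity instances count as parallel if each one starts before the other has completed.

```python
from datetime import datetime

def overlap(start_a, complete_a, start_b, complete_b):
    """Two activity instances run in parallel if each one
    starts before the other one has completed."""
    return start_a < complete_b and start_b < complete_a

# Activity C starts two hours before activity B has ended
b = (datetime(2017, 5, 1, 9, 0), datetime(2017, 5, 1, 14, 0))
c = (datetime(2017, 5, 1, 12, 0), datetime(2017, 5, 1, 16, 0))
print(overlap(*b, *c))  # True -> B and C are drawn in parallel
```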

Furthermore, the waiting times in the process are now calculated with respect to the previous activities — not the ones that are running in parallel (see top right).

If you have a parallel process, then this is typically what you want. For example, the screenshot below shows a project management process (click on the image to see a larger version of it).

You can see that there are several milestones in the process, such as ‘Install in test environment’. To reach a milestone in this process, several activities need to be completed beforehand, but they can be completed in parallel. In the example below, we can see that not all of the parallel activities are always performed. For example, a ‘Project risk review’ has only been done for 11 out of the 120 cases.

When you switch to the performance view for this process, you can analyze the times of the different parallel paths to perform a Critical Path Analysis. A critical path analysis is only applicable to parallel processes and lets you see which of the parallel branches would, if delayed, also delay the next milestone.

Challenges with Parallel Processes

In most situations, if you have parallelism in your process, this is exactly what you want to see. However, there can be some problems related to parallelism as well. For example:

Fortunately, if you find yourself in one of these situations, there is a simple way to get around the parallelism problem: You can import your data set again and configure only one of your timestamps as a ‘Timestamp’ column in Disco (you can keep the other one as an attribute). If you have only one timestamp configured, Disco always shows you a sequential view of your process. Even if two activities have the same timestamp they are shown in sequence with ‘instant’ time between them.

Looking at a sequential view of your process is a great way to investigate the process map and the process variants without being distracted by parallel process parts. You can then always go back and import the data with two timestamps again if you want to analyze the activity durations and the parallel flows.


  [1] If you run the animation for this process, you will also see that one token splits into two tokens for the parallel part of the process and then they merge again.
Understanding the Meaning of Your Timestamps

In earlier articles of this series, we discussed how you can change your perspective on the process through the way you configure your case ID and activity columns during the import step, by combining multiple case ID fields, and by bringing additional attribute dimensions into your process view.

All of these articles were about changing how you interpret your case and your activity fields. But you can also create different perspectives with respect to the third data requirement for process mining: your timestamps.

There are two things that you need to keep in mind when you look at the timestamps in your data set:

1. The Meaning of Your Timestamps

Even if you have just one timestamp column in your data set, you need to be really clear about what exactly the meaning of these timestamps is. Does the timestamp indicate that the activity was started, scheduled or completed?

For example, if you look at the following HR process snippet, it looks like the ‘Process automated’ step is a bottleneck: a median delay of 4.8 days is shown at the big red arrow (see screenshot below). [1]

In fact, however, the timestamps in this data set indicate when an activity became available in the HR workflow tool. This means that the moment one activity is completed, the next activity is automatically scheduled (and the timestamp is recorded for the newly scheduled activity).

This shifts the interpretation of the bottleneck back to the activity ‘Control request’, which is a step that is performed by the HR department: At the moment that the ‘Control request’ activity was completed, the ‘Process automated’ step was scheduled. So, the big red path shows us the time from when the step ‘Control request’ became available until it was completed.

You can see how knowing that the timestamp in the data set has the meaning of ‘scheduled’ rather than ‘completed’ shifts the interpretation of which activity is causing the delay from the target activity (the activity the path is going to) to the source activity (the activity the path is starting from).
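
To make this concrete, here is a small, hypothetical pandas sketch (the column names and timestamps are assumed) that computes the gap between consecutive ‘scheduled’ timestamps and credits it to the source activity:

```python
import pandas as pd

# Two events with 'scheduled' semantics; column names are assumed
events = pd.DataFrame({
    'Case ID':   [1, 1],
    'Activity':  ['Control request', 'Process automated'],
    'Scheduled': pd.to_datetime(['2017-03-01 09:00', '2017-03-06 04:12']),
})
events = events.sort_values(['Case ID', 'Scheduled'])

# The gap between two consecutive 'scheduled' timestamps is time spent
# on the *source* activity, because completing it is what schedules
# the next step
events['Time spent on previous step'] = (
    events.groupby('Case ID')['Scheduled'].diff()
)
print(events)  # shows a gap of 4 days 19:12 (roughly 4.8 days)
```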

2. Multiple Timestamp Columns

If you have a start and a complete timestamp column in your data set, then you can include both timestamps during your data import and distinguish active and passive time in your process analysis (see below).

However, sometimes you have even more than two timestamp columns. For example, let’s say that you have a ‘schedule’, a ‘start’ and a ‘complete’ timestamp for each activity. In this case you can choose different combinations of these timestamps to take different perspectives on the performance of your process.

For the example above, you have three options:

Option a: Start and Complete timestamps

If you choose the ‘start’ and ‘complete’ timestamps as Timestamp columns during the import step, you will see the time between ‘start’ and ‘complete’ as the activity duration and the times between ‘complete’ and ‘start’ as the waiting times in the performance view (see above).

Option b: Schedule and Complete timestamps

If you choose the ‘schedule’ and ‘complete’ timestamps as Timestamp columns during the import step, you will see the time between ‘schedule’ and ‘complete’ as the activity duration and the times between ‘complete’ and ‘schedule’ as the waiting times in the performance view (see above). So, it shows the time from when an activity became available until it was completed, rather than focusing on the time that somebody was actively working on a particular process step.

Option c: Schedule and Start timestamps

If you choose the ‘schedule’ and ‘start’ timestamps as Timestamp columns during the import step, you will see the time between ‘schedule’ and ‘start’ as the activity duration and the times between ‘start’ and ‘schedule’ as the waiting times in the performance view (see above). Here, the activity durations show the time from when an activity became available until it was started.

All of these views can be useful, and you can import your data set in different ways to take these different perspectives and answer your analysis questions.
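
As a minimal illustration of the three options, here is a small sketch with made-up timestamps for a single activity instance:

```python
import pandas as pd

# One activity instance with all three timestamps (names assumed)
schedule = pd.Timestamp('2017-06-01 09:00')
start    = pd.Timestamp('2017-06-01 13:00')
complete = pd.Timestamp('2017-06-02 10:00')

# Option a: time actively worked on the step
print('start -> complete:   ', complete - start)
# Option b: time from becoming available until completion
print('schedule -> complete:', complete - schedule)
# Option c: time the step waited before anyone picked it up
print('schedule -> start:   ', start - schedule)
```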

Conclusion

Timestamps are really important in process mining, because they determine the order of the event sequences on which the process maps and variants are based. And they can bring all kinds of problems (see also our series on data quality problems for process mining here).

But the meaning of your timestamps also influences how you should interpret the durations and waiting times in your process map. So, in summary:


  [1] Learn more about how to perform a bottleneck analysis with process mining here.
Combining Attributes into Your Process View

Previously, we discussed how you can take different perspectives on your data by choosing what you want to see as your activity name, case ID, and timestamps.

One of the ways in which you can take different perspectives is to bring an additional dimension into your process map by combining more than one column into the activity name. You can do this in Disco by simply configuring more than one column as ‘Activity’ (learn how to do this in the Disco user guide here).

By bringing in an additional dimension, you can “unfold” your process map in a way that shows not only which activities took place in the process, but also in which department, for which problem category, or in which location each activity took place. For example, by bringing in the agent position from your callcenter data set, you can see which activities took place in the first-level support team and differentiate them from the steps that were performed by the backoffice workers, even if the activity labels for their tasks are the same.
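
Disco does the concatenation for you when you configure multiple ‘Activity’ columns. If you prefer to prepare the unfolded activity name in your data set yourself, a sketch like the following would do the same thing (the column names are assumptions based on the callcenter example):

```python
import pandas as pd

# Assumed callcenter columns; configuring both as 'Activity' in Disco
# is equivalent to this preprocessing step
df = pd.DataFrame({
    'Activity':       ['Inbound call', 'Inbound call'],
    'Agent Position': ['First Level', 'Backoffice'],
})
df['Unfolded activity'] = df['Activity'] + ' - ' + df['Agent Position']
print(df['Unfolded activity'].tolist())
# ['Inbound call - First Level', 'Inbound call - Backoffice']
```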

You can experiment with bringing in all kinds of attributes into your process view. When you do this, you can observe two different effects.

1. Comparing Processes

When you bring in a case-level attribute that does not change over the course of the case, you will effectively see the processes for all values of your case-level attribute next to each other — in the same process map. For example, the screenshot below shows a customer refund process for both the Internet and the Callcenter channel next to each other.

Seeing two or more processes side by side in one picture can be an alternative to filtering the process in this dimension. Of course, you can still apply filters to compare only a few of the processes at once.

2. Unfolding Single Activities

When you have an attribute that is only filled for certain events, bringing this attribute into your activity name will only unfold the activities for which it is filled.

For example, a document authoring process may consist of the steps ‘Create’, ‘Update’, ‘Submit’, ‘Approve’, ‘Request rework’, ‘Revise’, ‘Publish’, and ‘Discard’ (performed by different people such as authors and editors). Imagine that in this document authoring process, you have additional information in an extra column about the level of required rework (major vs. minor) in the ‘Request rework’ step.

If you just use the regular process step column as your activity, then ‘Request rework’ will show up as one activity node in your process map (see image below).

However, if you include the ‘Rework type’ attribute in the activity name, then two different process steps ‘Request rework – major’ and ‘Request rework – minor’ will appear in the process map (see below).

This can be handy in many other processes. For example, think of a credit application process that has a ‘Reject reason’ attribute that provides more information about why the application was rejected. Unfolding the ‘Reject’ activity in the ‘Reject reason’ dimension will enable you to visualize the different types of rejections right in the process map in a powerful way.
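
If you want to prepare such a sparse unfolding in your data set yourself, a minimal sketch could look like this (column names assumed; only the rows with a filled ‘Rework type’ get the combined name):

```python
import pandas as pd

# 'Rework type' is only filled for the 'Request rework' step
df = pd.DataFrame({
    'Activity':    ['Submit', 'Request rework', 'Request rework'],
    'Rework type': [None, 'major', 'minor'],
})

# Unfold only the rows where the attribute is filled; all other
# activities keep their original name
df['Unfolded activity'] = df.apply(
    lambda row: f"{row['Activity']} - {row['Rework type']}"
                if pd.notna(row['Rework type']) else row['Activity'],
    axis=1,
)
print(df['Unfolded activity'].tolist())
# ['Submit', 'Request rework - major', 'Request rework - minor']
```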

Conclusion

So, it is worth thinking about how you can best structure your attribute data already while you are preparing your data set.

As a rule of thumb:

Combining Multiple Columns as Case ID

In a previous article, we discussed how you can take different perspectives on your data by choosing what you want to see as your activity name, case ID, and timestamps.

One of the examples was about changing the perspective of what we see as a case. The case determines the scope of the process: Where does the process start and where does it end?

You can think of a case as the object that moves through the process. For example, the travel ticket in the picture above might go through the steps ‘Purchased’, ‘Printed’, ‘Scanned’ and ‘Validated’. If you want to look at the process flow of travel tickets, you would choose the travel ticket number as your case ID.

In the previous article we saw how you can change the focus from one case ID to another. For example, in a call center process you can look at the process from the perspective of a service request or from the perspective of a customer. Both are valid views and offer different perspectives on the same process.

Another option you should keep in mind is that, sometimes, you might also want to combine multiple columns into the case ID for your process mining analysis.

For example, if you look at the callcenter data snippet below, you can see that the same customer contacts the helpdesk about different products. So, while we want to analyze the process from a customer perspective, perhaps it would be good to distinguish the different cases for the same customer?

Let’s look at the effect of this choice based on the example. First, we use only the ‘Customer ID’ as our case ID during the import step. As a result, all activities that relate to the same customer are combined into the same case (‘Customer 3’).

If we now want to distinguish cases, where the same customer got support on different products, we can simply configure both the ‘Customer ID’ and the ‘Product’ column as case ID columns in Disco (you can see the case ID symbol in the header of both columns in the screenshot below):

The effect of this choice is that both fields’ values are concatenated (combined) in the case ID value. So, instead of one case ‘Customer 3’ we now get two cases: ‘Customer 3 – MacBook Pro’ and ‘Customer 3 – iPhone’ (see below).

There are many other situations where combining two or more fields into the case ID can be necessary. For example, imagine that you are analyzing the processing of tax returns at the tax office. Each citizen is identified by a unique social security number. This could be the case ID for your process, but if you have data from multiple years, then you also need the year to separate the returns of the same citizen across the years.

To create a unique case identifier, you can simply configure all the columns that should be included in the case ID as a ‘Case’ column, as shown above, and Disco will automatically concatenate them for the case ID.
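
If you build your case identifier during data preparation instead, the equivalent step is a simple concatenation. Here is a minimal sketch with the assumed columns from the example above:

```python
import pandas as pd

# Assumed columns from the callcenter example
df = pd.DataFrame({
    'Customer ID': ['Customer 3', 'Customer 3'],
    'Product':     ['MacBook Pro', 'iPhone'],
})

# Concatenating both fields yields one case per customer/product pair,
# which mirrors configuring both columns as 'Case' in Disco
df['Case ID'] = df['Customer ID'] + ' - ' + df['Product']
print(df['Case ID'].tolist())
# ['Customer 3 - MacBook Pro', 'Customer 3 - iPhone']
```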

As before, there is not one right and one wrong answer for how you should configure your data import; it depends on how you want to look at your process and which questions you want to answer. Often, you will end up creating multiple views, and all of them are needed to get the full picture.

When Incomplete Cases Shouldn’t Be Removed

This is the fourth and last article in our series on how to deal with incomplete cases in process mining. You can find an overview of all articles in the series here.

There are also situations in which you should not remove incomplete cases from your data set. Here are two examples:

Finally, do not forget to assess the representativeness of your data set after you have removed your incomplete cases. For example, if it appears that 80% of your cases are incomplete then it would be very dangerous to base your process analysis on the remaining 20%!

If you do not have enough completed cases in your data set, you may need to go back and request a larger data sample from a longer time period to be able to get representative results.

The Different Meanings of “Finished”

This is the third article in our series on how to deal with incomplete cases in process mining. You can find an overview of all articles in the series here.

Once you have determined what your start points and end points are, you still need to think about what “finished” or “completed” actually means for your process.

Multiple interpretations are possible and the differences can be subtle, but you will need to use different filters depending on the meaning that you want to apply. The results will be different and you need to be clear about which meaning is right for your data set.

Here are four examples of how you can filter incomplete cases. It’s not that any of these is better or more appropriate than the others in general. Instead, it depends on your process and on the meaning of “finished” that you want to choose.

Ended In

Perhaps the most common meaning of “finished” is to look at which activities have occurred as the very last activity (for end points) or as the very first activity (for start points) in a case.

This corresponds to the dashed lines that you see in the process map and you can use the Endpoints Filter in Discard cases mode to filter all cases that start or end with a particular set of activities (see Figure 1).

Figure 1: Use the Endpoints Filter in Discard cases mode to filter all cases that start or end with a particular set of activities.

When you add this filter, only the activities that occurred as the very first event in any of the cases are shown in the ‘Start event values’ on the left and only activities that occurred as the very last event in any of the cases are shown in the ‘End event values’ on the right.

You can then select only the regular start and end activities that you have identified in the previous step to focus on your completed cases. For example, if we only select the ‘Order completed’ activity as a regular end point for our refund process, then the remaining data set will only contain the 333 cases that actually ended with ‘Order completed’. If you use the shortcut ‘Filter for this start/end activity’ after clicking on a dashed line in the process map, Disco will automatically add a pre-configured Endpoints filter to your data set.

To use your filtered data set as the new reference point for your further analysis, you can enable the checkbox ‘Apply filters permanently’ after pressing the ‘Copy and filter’ button. The outcome of applying the filter will be the same (the same 333 cases remain), but the applied filter will be consolidated in a new data set, so that successive analyses use this new baseline as the new 100% of cases.
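
For those who prepare their data outside of Disco: the ‘Ended In’ semantics correspond to keeping only the cases whose very last event is one of the regular end activities. A minimal pandas sketch (the log format is assumed):

```python
import pandas as pd

# Minimal event log, one row per event, already ordered by time
log = pd.DataFrame({
    'Case ID':  [1, 1, 2, 2],
    'Activity': ['Order received', 'Order completed',
                 'Order received', 'Missing documents requested'],
})

# 'Ended In' semantics: keep only cases whose very last event is a
# regular end activity
end_activities = {'Order completed'}
last = log.groupby('Case ID')['Activity'].last()
completed_ids = last[last.isin(end_activities)].index
completed = log[log['Case ID'].isin(completed_ids)]
print(sorted(completed['Case ID'].unique()))  # [1]
```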

Reached Milestone

Sometimes, the very last activity that happened in a case is not the best way to determine whether a case has been completed or not.

For example, after completing an order there might be back-end activities such as archiving or other documentation steps that occur later. In these cases, ‘Order completed’ will not be the very last step in the process (so, the case would not be picked up if you use the Endpoints filter).

Figure 2: Use the Attribute Filter in Mandatory mode to filter cases that have passed a certain milestone in the process.

If you are mainly concerned with whether one or more milestone activities that indicate the completion of your process have occurred, you can use the Attribute Filter in Mandatory mode (see Figure 2). This way, you keep all cases where any of the selected activities has happened, but you don’t care whether it was the very last step in the process or whether other activities were recorded afterwards.

Instead of manually adding this filter, you can also use the shortcut Filter this activity… after clicking on the activity in the process map. Disco will automatically add a pre-configured Attribute Filter in Mandatory mode to your data set with the right activity already selected.

If we apply this meaning of “finished” based on the milestone activity ‘Order completed’ for the refund process, we get a slightly different outcome compared to the Endpoints Filter before. Instead of 333 cases, there now remain 334 cases after applying the filter and we can see that the additional case ended with the activity ‘Warehouse’ (see Figure 3).

Figure 3: One additional case remains after changing the meaning of the finished cases from the Ended In to the Reached Milestone semantics.

If we now click on the dashed line leading from the ‘Warehouse’ activity and use the shortcut to investigate this case in more detail, we can see in the history of the case that the activity ‘Order completed’ did indeed occur. However, it occurred in the middle of the process, after the order was initially rejected. Then, the case got picked up again and the refund was actually granted (see Figure 4).

Figure 4: The additional case did perform the step ‘Order completed’, but ‘Order completed’ was not the very last step in the process.
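
In code, the ‘Reached Milestone’ semantics amount to keeping every case that contains the milestone activity anywhere in its history. A minimal sketch (the log format is assumed):

```python
import pandas as pd

# Case 2 passes the milestone but then continues with a back-end step
log = pd.DataFrame({
    'Case ID':  [1, 2, 2],
    'Activity': ['Order received', 'Order completed', 'Warehouse'],
})

# 'Reached Milestone' semantics: keep every case in which the milestone
# occurred anywhere, no matter what was recorded afterwards
milestones = {'Order completed'}
reached = log.groupby('Case ID')['Activity'].apply(
    lambda activities: not milestones.isdisjoint(activities))
print(reached[reached].index.tolist())  # [2]
```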

Cut Off

In another scenario, you might be analyzing the refund process from a customer perspective: This is a process that the customers of an electronics manufacturer go through after the product that they purchased was broken and they now want to get their money back. So, from the customer’s point of view the process is “finished” as soon as they have received their refund.

To analyze the data from this perspective, we can focus on the three payment activities ‘Payment issued’, ‘Refund issued’ and ‘Special Refund issued’ (see Figure 5).

Figure 5: From the customer’s perspective the process is finished as soon as one of the payment activities has occurred.

If we search for these activities in the process map, then we can see that there are several activities that happen afterwards. Sometimes, the delays in the back-end processing can be quite long (for example, 7.5 days on average after the ‘Payment issued’ step), but from the customer’s perspective this delay is not relevant.

So, to focus our analysis on the part of the process that is relevant for the customer, we can use the Endpoints Filter in Trim longest mode (see Figure 6).

Figure 6: Use the Endpoints Filter in Trim longest mode to focus on a segment of the process.

When we change the Endpoints Filter mode from Discard cases to Trim longest, all of the activities become available as ‘Start event values’ on the left and as ‘End event values’ on the right. We can now select only the three payment activities as the customer endpoints in our process.

As a result, everything that happened after any of these three payment activities is cut off. We can see that the customer payments now appear as the endpoints in our process map (see Figure 7).

Figure 7: We have created three new endpoints for the process segment that we want to focus on.
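
In code, this cut-off corresponds to truncating each case at the selected endpoint activities. Here is a minimal sketch (the log format is assumed; we keep each case up to its last payment activity, roughly mirroring the ‘Trim longest’ idea):

```python
import pandas as pd

# Assumed log, ordered by time within each case; everything recorded
# after the payment should be cut off
log = pd.DataFrame({
    'Case ID':  [1, 1, 1],
    'Activity': ['Order received', 'Payment issued', 'Archive'],
})
payments = {'Payment issued', 'Refund issued', 'Special Refund issued'}

def cut_off(case):
    """Keep the events up to and including the last payment activity."""
    hits = case.index[case['Activity'].isin(payments)]
    return case.loc[:hits[-1]] if len(hits) else case

trimmed = pd.concat(cut_off(case) for _, case in log.groupby('Case ID'))
print(trimmed['Activity'].tolist())  # ['Order received', 'Payment issued']
```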

The cases that remain in the data set after applying the filter are the same ones as if we had used the Attribute filter in ‘Mandatory’ mode. But cutting off all activities after the payments enables us to focus our process analysis on the part of the process that is relevant from the customer’s perspective:

Open for longer than X

There might be activities in your process that can be considered an endpoint if there has been a certain period of inactivity afterwards (see also Reason No. 3 at the beginning of this series). For example, we can request missing information (like the purchase receipt) from a customer to handle their refund order but the customer might not get back to us.

If we want to focus on cases where the activity ‘Missing documents requested’ was the last step in the process and nothing has happened for a month, we can use a combination of filters in the following way.

First, we add an Endpoints filter as shown in Figure 8.

Figure 8: To filter out cases that have been open for a certain time, we first add an Endpoints Filter.

Then, we add a second filter by clicking the ‘click to add filter…’ button again and we add a Timeframe filter on top of it (see Figure 9).

Figure 9: Then, we add a Timeframe filter that focuses on cases that have had a certain period of inactivity since the last step.

By adapting the selected timeframe in such a way that the past month is not covered, we will only keep those cases that did end with ‘Missing documents requested’ and where that last step took place more than one month ago.
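
In code, this combination of filters boils down to two conditions on the last event of each case. A minimal sketch (the log format and the reference date are assumed for illustration):

```python
import pandas as pd

# Assumed log with one timestamp per event
log = pd.DataFrame({
    'Case ID':   [1, 1, 2, 2],
    'Activity':  ['Order received', 'Missing documents requested',
                  'Order received', 'Missing documents requested'],
    'Timestamp': pd.to_datetime(['2017-05-01', '2017-05-03',
                                 '2017-07-01', '2017-07-20']),
})
cutoff = pd.Timestamp('2017-07-25') - pd.DateOffset(months=1)

# Combine both conditions: the case ends with the request, and that
# last step happened more than one month ago
last_events = log.sort_values('Timestamp').groupby('Case ID').last()
stuck = last_events[
    (last_events['Activity'] == 'Missing documents requested')
    & (last_events['Timestamp'] < cutoff)
]
print(stuck.index.tolist())  # [1]
```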
