You are reading Flux Capacitor, the company weblog of Fluxicon.
Here, we write about process intelligence, development, design, and everything that scratches our itch. Hope you like it!


Take A Deep Dive At This Year’s Process Mining Camp

Get your ticket for Process Mining Camp 2015 now!

Only two weeks to go until Process Mining Camp on 15 June! If you have not secured your ticket yet, you can do so on the Process Mining Camp Website.

Deep-Dive Talks

In past years, our camp practice talks were mostly between 15 and 25 minutes long, which is often enough to get a point across but does not leave much room to go deeper into the topic. The feedback we received from campers was that they sometimes wanted to see in more detail, and more hands-on, how the speakers achieved what they were talking about.

Therefore, this year we are introducing deep-dive talks for the first time. In a deep-dive talk, the speaker gets up to 45 or 60 minutes to really show things in more detail. We will also have a number of shorter talks focusing on a specific process mining aspect. This should give us a great mix: a wide breadth of topics as well as the necessary depth.

Today, we are happy to announce the first two speakers. It is a very special pleasure to welcome back two process mining pioneers who were with us at the very first Process Mining Camp in 2012: Léonard Studer from the City of Lausanne and Mieke Jans, back then with Deloitte and today assistant professor at Hasselt University. They will both deliver deep-dive talks that give you practical advice for your own process mining projects.

Léonard Studer, City of Lausanne, Switzerland

Léonard Studer

What it means to study a too lengthy administrative process

Administrative processes are typically based on public laws and regulations. As such, you might think that they must be quite simple and well-structured, especially when compared to customer journey or hospital processes. The truth, though, is that administrative processes can become very complicated as well.1

Léonard and his colleague Ines analyzed the construction permit process at the City of Lausanne, which is regulated by 27 different laws from Swiss federal law, cantonal law, and communal regulation. It takes an average of six months to obtain a construction permit in Lausanne, from the moment the application is filed. The administrative and technical employees already handle a heavy workload, while external clients like architects and construction businesses have put pressure on the public works department to speed up the process.

The objective of the study was to identify bottlenecks and inefficiencies in the process, of course without changing or removing any of the legally required steps. Léonard will take us on a journey through the project, with all its challenges, highlights, and findings. One of the problems was that there were no proper activity names, and Léonard will show hands-on how he used text mining to pre-process the data.

Mieke Jans, Hasselt University, Belgium

Mieke Jans

Step-By-Step: From Data to Event Log

People often ask us about how to extract process mining data from ERP systems like SAP and Oracle. Our typical recommendation is that you work with pre-existing templates from one of our technology partners specialized in those systems. But what if you want to extract the data yourself?

Apart from some quite old and dusty Master’s theses, there is very little information available. Everyone needs to start from square one, basically re-inventing the wheel over and over. The challenge in extracting process mining data lies not only in finding the right data among the thousands of business tables; there is also a whole range of other questions2 that have a direct impact on the suitability and quality of the data that you get.

Mieke created her first event log from a relational database eight years ago, as part of her PhD research, and she later built on that experience in industry. By now, she has created dozens of event logs from different relational databases (from standard ERP systems to custom-made legacy systems). In this talk, Mieke will walk you through a step-by-step approach for extracting a good event log from any relational database, an approach that has never been published before. Based on illustrations of each step, you will learn about the implications of your decisions, and you will get a unique head start for the next time you need to extract process mining data from a database. You will also get a handout with a checklist to take home, so that you don’t have to take too many notes.

Stay Tuned For More

We are just getting started, so stay tuned for further updates on the speaker line-up of this year’s Process Mining Camp! Sign up to receive notifications about camp updates here.


  1. Have you looked at the building permit processes from the four municipalities of this year’s BPI Challenge? They are insanely complex!  
  2. For example, earlier we have written about the challenge of many-to-many relationships here — but there are many more.  
Meet The Process Mining Family

Process Mining Camp 2015 -- Get your ticket now!

There are less than three weeks until we meet at Process Mining Camp on 15 June. If you have not secured your ticket yet, you can still get one on the Process Mining Camp Website. Don’t wait too long; they are going fast.

We are working hard to finalize the program, and we will start announcing the first speakers here on this blog very soon. Stay tuned!

The Hallway Track

Today, we want to spend some time on the social part of this year’s camp. If you have already registered, you were asked during registration whether you plan to attend the process mining party in the evening, and the breakfast in the morning after camp. Both the party and breakfast are new to this year’s camp. So, why did we add them and what can you expect?

Process Mining Camp is the annual meetup of process mining practitioners, where you can—once a year—meet the community of likeminded folks who are just as enthusiastic about process mining as you are. For many of you, starting to introduce process mining in your company is a new avenue, with many questions coming up along the way. The best way to fast-track your progress, and to gain more confidence in your approach, is to exchange experiences with others who are going through that same process. And if you are an experienced practitioner of many years, you normally do not know that many people who are at the same level as you, people from whom you can still learn.

Of course the Process Mining Camp program itself is designed to give you, newcomers and experts alike, lots of inspiration and tips to think about and use for your own practice. At the same time, the most valuable insights often come from hallway-track conversations with your peers, where you can share your story and get direct feedback as well. So, the actual chance to talk to other campers face to face is just as important as the program itself.

The thing is that, even if you have lots of breaks during the main conference program, everyone first needs to get up from their seat, find their way to the coffee corner, and settle in — and once you have found someone interesting to talk to… time is up!

That is why we have thought about how we can create more room for meaningful discussions between campers, besides the conversations that you will have in the breaks of the program.

Process Mining Camp Party

Process Mining Party

After the Process Mining Camp conference program is over, people will have different needs: Some of you will want to have a quick snack and get some rest (or catch up on email) in your hotel room. Others will want to join a small group over dinner in town. We can help you form groups and give tips for where to go.

After dinner, we will meet up again in a nice bar, in the city center of Eindhoven. You can expect a relaxed atmosphere, great music, and the chance to get to know each other over a refreshing beer, wine, or soft drink. We will have a special friend as a DJ over from Berlin, and we are really looking forward to a great evening rounding off the camp day together. Even if you are living in the area and do not plan to stay overnight, you should definitely come!

Process Mining Camp Breakfast

Grand Cafe Usine

For those of you who stay overnight, there will be a goodbye breakfast in a nice, spacious café (built in a former Philips factory) in the city center on Tuesday 16 June in the morning. Have as much breakfast, coffee and tea as you want, catch up with that one person who you did not get a chance to talk to yet, and get on your way home fresh and awake.

If you are participating in the co-located 2-day process mining training on 16/17 June: We will start a little later on day 1 and leave from the breakfast together.

The Timetable

So, here is a roundup of the timetable for your travel planning. We can’t wait to see you all on 15 June!

Monday 15 June

09:00 Registration and Coffee
10:00 Camp opening
18:00 Camp closing
21:00 Process Mining Party

Tuesday 16 June

09:00 — 12:00 Process Mining Breakfast

Watch Recording of Webinar on Process Mining for Customer Journeys

One really interesting application area for process mining is customer journey analysis. In customer journey mining, process mining is used to understand how customers are experiencing the interaction with a company, and how they are using their products. For example, you may want to see how your customers are navigating your website or are interacting with your web application.

We recently recorded a webinar on that topic, together with our friends at UXsuite. You can watch the recording of the customer journey process mining webinar by clicking on the video above.

If you analyze processes from a customer perspective (often across multiple channels such as phone, web, in-person appointments, etc.), you typically face a lot of diversity and complexity in the process, but also issues of data handling and automation.

Some of the typical customer journey analysis challenges are:

Watch the webinar recording to learn more about these challenges, and to see how they can be addressed through an iteration of data preparation and process analysis steps.

What you will learn:

If you have seen a process mining introduction before, you can jump directly to the customer journey examples and challenges part here.

Are you thinking about analyzing customer journeys for your company now or in the future? Request to receive the slides from the video (including step-by-step screenshots from the live demo) and information on how to get started immediately with UXsuite and Disco at http://uxsuite.com/airlift-integration/. We also offer evaluation packages that make it easy for you to try out customer journey analysis with UXsuite and Disco for your own process. Get in touch!

Process Mining Camp 2015 — Get Your Ticket!

Register for Process Mining Camp 2015!

This year’s Process Mining Camp1 takes place on Monday, 15 June, in Eindhoven, the Netherlands2. Today, we are opening registration and you can get your ticket now on the camp website.

Process Mining Camp is the only process mining conference for practitioners world-wide. Every year, process mining enthusiasts from all over the globe come to camp to hear enlightening talks, and to exchange experiences and make friends with other process miners from the community. Last year’s campers came from 16 different countries!

This year, we will focus more than ever on the core that Process Mining Camp is all about: Discussing experiences, lessons learned, successes, and also the challenges around applying process mining in practice. In our practice talks, practitioners will take the stage and lead us through their experiences. The goal is not to impress but to share and discuss with the community.

This is what you can look forward to at Process Mining Camp:

We are still working with the speakers to finalize the program, but you should already register to secure your seat. We believe that being at camp in person, and especially interacting with other campers, is essential to the camp experience and atmosphere, so there will be no live stream on the internet this year.

Don’t miss Process Mining Camp 2015: Whether you are a beginner or have been working with process mining for many years, you will go home with lots of relevant insights for your own work.

You can also sign up for the Process Mining Camp mailing list to be notified about further updates. For any questions, just get in touch.

We are very excited about this, and we can’t wait to see you all at camp on 15 June!

Anne & Christian

Anne and Christian at Process Mining Camp 2014


  1. See previous editions of Process Mining Camp from 2012, 2013, and 2014 
  2. Eindhoven is about 90 minutes by train from Amsterdam’s Schiphol airport 
BPI Challenge 2015

Join the BPI Challenge!

The BPI Challenge is an annual process mining competition, which takes place for the fifth time this year. The goal of the challenge is to give both practitioners and researchers the opportunity to do a process mining analysis on real-life data.1

In this competition, anonymized but real data is provided and can be analyzed by anyone using any tools. Submissions can be handed in until June 28, 2015 and the winner will receive a very special BPI Challenge trophy. Read more about the additional student prizes at the BPI Challenge website.

As always, we make our process mining software Disco available for anyone for the purpose of this challenge. Read on to see what this year’s challenge is about and how you can get started.

The Process

This year’s data is provided by five Dutch municipalities. The data contains all building permit applications over a period of approximately four years. There are many different activities present, denoted by both codes (attribute concept:name) and labels, both in Dutch (attribute taskNameNL) and in English (attribute taskNameEN).

The cases in the log contain information on the main application as well as objection procedures in various stages. Furthermore, information is available about the resource that carried out the task and on the cost of the application (attribute SUMleges).

The processes in the five municipalities should be identical, but may differ slightly. In particular, when changes are made to procedures, rules, or regulations, the time at which these changes are rolled out to the five municipalities may differ. Of course, over the four-year period, the underlying processes have changed.

The municipalities have a number of questions, outlined below:

  1. What are the roles of the people involved in the various stages of the process and how do these roles differ across municipalities?
  2. What are the possible points for improvement on the organizational structure for each of the municipalities?
  3. The employees of two of the five municipalities have physically moved into the same location recently. Did this lead to a change in the processes and if so, what is different?
  4. Some of the procedures will be outsourced from 2018, i.e. they will be removed from the process and the applicant needs to have these activities performed by an external party before submitting the application. What will be the effect of this on the organizational structures in the five municipalities?
  5. Where are differences in throughput times between the municipalities and how can these be explained?
  6. What are the differences in control flow between the municipalities?

The Data Set

There are five different log files available. Events are labeled with both a code and a Dutch and English label. Each activity code consists of three parts: two digits, a variable number of characters, and then three digits. The first two digits as well as the characters indicate the subprocess the activity belongs to. For instance, ‘01_HOOFD_xxx’ indicates the main process and ‘01_BB_xxx’ indicates the ‘objections and complaints’ (‘Beroep en Bezwaar’ in Dutch) subprocess. The last three digits hint at the order in which activities are executed, where the first of these digits often indicates a phase within the process.
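
If you want to slice the data by subprocess or phase during your own pre-processing, a small helper can derive both from the activity code. This is just a sketch in Python, assuming underscore-separated codes as in the examples above; it is not part of the challenge materials.

import re

# Sketch: split a BPI 2015 activity code into its three parts, assuming
# underscore-separated codes such as '01_HOOFD_010' or '01_BB_540'.
CODE_PATTERN = re.compile(r"^(\d{2})_([A-Za-z]+)_(\d{3})$")

def parse_activity_code(code):
    """Return the subprocess and phase hinted at by one activity code."""
    match = CODE_PATTERN.match(code)
    if not match:
        return {"code": code, "subprocess": None, "phase": None}
    digits, letters, sequence = match.groups()
    return {
        "code": code,
        "subprocess": digits + "_" + letters,  # e.g. '01_HOOFD' for the main process
        "phase": sequence[0],                  # first of the last three digits hints at the phase
    }

print(parse_activity_code("01_HOOFD_010"))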

Each trace and each event contain several data attributes that can be used for various checks and predictions. Furthermore, some employees may have performed tasks for different municipalities, i.e. if the employee number is the same, it is safe to assume that the same person is identified.

Further information about the challenge and how to submit can be found at http://www.win.tue.nl/bpi/2015/challenge.

Join the Challenge!

We have imported these five files for you in a Disco project file that you can simply open with the freely available demo version of Disco. The only difference you will find in this project file compared to directly importing the XES files is that we used the English activity names and sorted same-timestamp events based on the action code attribute.

You can download both the Disco project file and the raw data files here:

Download the Disco project file that can be opened with the freely available demo version of Disco

BPI-2015.dsc

Download the raw data files and the data model in a Zip file (CSV files, created from the XES files provided in the challenge)

BPIC-2015.zip



Submissions can be made through the EasyChair system.

A submission should contain a pdf report of at most 30 pages, including figures, using the LNCS/LNBIP format specified by Springer (available both as a Word and as LaTeX template). Appendices may be included, but should only support the main text.

Submission deadline: June 28, 2015

Announcement of winners: at the 11th Workshop on Business Process Intelligence (BPI 15), Innsbruck, Austria, 31st August 2015


  1. If you are looking for even more data sets, take a look at the challenges from 2011, 2012, 2013, and 2014, too.  
Process Mining + Process Modeling Combined

Anne presenting process mining in Cologne

This week we were speaking at a process practitioner conference in Germany. As usual, process mining is a topic that makes people enthusiastic very quickly.

At the same time, if you are mostly working in process modeling initiatives, you may—as people did at the conference in Germany—ask about the possibility of bringing your process mining results into a process modeling environment. The advantages of such environments are that you can change the models, add additional information about process goals and manual steps, and customize the presentation to match your corporate style.

Last Friday, we were invited to a webinar by Jürgen Pitschke from BCS, co-organizer of the BPM in Practice initiative. The topic was combining process mining with process modeling, and we showcased the new integration between Disco and the Trisotech Digital Enterprise Suite. Denis Gagné, Trisotech’s founder, gave a demo of how Disco process maps can be imported into their BPMN modeler, and how they can be used as an accelerator to quick-start the process discovery and ‘to-be’ design initiatives.

Here is the webinar recording:

If you have seen me give a process mining demo before, you might want to jump to Denis’ demo part at 38m into the webinar here.

Disco users who are looking to import mined process maps into a dedicated modeling tool to leverage their process mining results in a corporate documentation environment will be excited about this integration. Try it out for yourself using the demo versions available from the Disco and Trisotech websites.

Dealing with Many-to-Many Relationships in Data Extraction for Process Mining

Tracks converge and diverge, and so do data objects for process mining

(This article previously appeared in the Process Mining News – Sign up now to receive regular articles about the practical application of process mining.)

It can be really easy to extract data for process mining. Some systems allow you to extract the process history in such a way that you can directly import and analyze the file without any changes. However, sometimes it is not so easy and some preparation work is needed to get the data ready for analysis.

One typical problem in ERP systems is that the data is organized in business objects rather than processes. In this case you need to piece these business objects (for example document types in SAP) together before you can start mining.

The first challenge is that a common case ID must be created for the end-to-end process, so that the complete process can be analyzed with process mining. For example, the process may consist of the following phases:

  1. Sales order: traced by Sales order ID
  2. Delivery: traced by Delivery ID
  3. Invoicing: traced by Invoicing ID

To be able to analyze the complete process, all three phases must be correlated for the same case in one case ID column. For example, if a foreign key with the sales order ID reference exists in the delivery and invoicing phases, these references can be used for correlation, and the case ID of the sales order can be used as the overall case ID for the complete process.
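
As a minimal sketch of this correlation idea in Python (with hypothetical table and column names, not taken from any specific ERP system), the sales order ID can be propagated through the delivery and invoicing events before combining everything into one event log:

import pandas as pd

# Hypothetical extracts of the three phases, each traced by its own ID
sales_orders = pd.DataFrame({"sales_order_id": ["SO1"],
                             "activity": ["Create sales order"],
                             "timestamp": ["2015-01-05 09:00"]})
deliveries = pd.DataFrame({"delivery_id": ["D1"], "sales_order_id": ["SO1"],
                           "activity": ["Create delivery"],
                           "timestamp": ["2015-01-07 14:00"]})
invoices = pd.DataFrame({"invoice_id": ["I1"], "delivery_id": ["D1"],
                         "activity": ["Create invoice"],
                         "timestamp": ["2015-01-10 11:00"]})

# Resolve the invoice's delivery reference to a sales order ID first
invoices = invoices.merge(deliveries[["delivery_id", "sales_order_id"]], on="delivery_id")

# Stack all events and use the sales order ID as the common case ID
columns = ["sales_order_id", "activity", "timestamp"]
event_log = pd.concat([sales_orders[columns], deliveries[columns], invoices[columns]])
event_log = event_log.rename(columns={"sales_order_id": "case_id"})
print(event_log.sort_values(["case_id", "timestamp"]))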

A second—and somewhat trickier—challenge is that often there is not a clear one-to-one relationship between the sub case IDs. Instead, you may encounter so-called many-to-many relationships. In many-to-many relationships each object can be related to multiple objects of the other type. For example, a book can be written by multiple authors, but an author can also write multiple books.

Imagine the following situation: A sales order can be split into multiple deliveries (see illustration below on the left). To construct the event log from the perspective of the sales order, in this case both deliveries should be associated with the same case ID (see middle). The resulting process map after process mining is shown on the right.

Going down the chain, a delivery can also be split into multiple invoices, and so on. The same principle applies.

Flattening the process data from the sales order perspective with two deliveries (click to enlarge)

Conversely, it may also be the case that a delivery can combine multiple sales orders (see illustration below on the left).

In this case, again, to construct the event log from the perspective of the sales order, the combined delivery should be duplicated to reflect the right process for each case (see middle). As a result, the complete process is shown for each sales order and, for example, performance measurements between the different steps can be made (no performance measurements can be made in process mining between different cases).

The resulting process map is shown on the right.

Flattening the process data from the sales order perspective with a combined delivery for two sales orders (click to enlarge)
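
To make the duplication concrete, here is a minimal sketch in Python (hypothetical data and column names): one delivery combines two sales orders, and joining the delivery events through the link table duplicates the delivery event once per sales order, so that each case shows the complete process.

import pandas as pd

# Hypothetical many-to-many link: delivery D1 combines sales orders SO1 and SO2
order_delivery = pd.DataFrame({"sales_order_id": ["SO1", "SO2"],
                               "delivery_id": ["D1", "D1"]})
delivery_events = pd.DataFrame({"delivery_id": ["D1"],
                                "activity": ["Create delivery"],
                                "timestamp": ["2015-01-07 14:00"]})

# The join duplicates the single delivery event, once for SO1 and once for SO2
flattened = order_delivery.merge(delivery_events, on="delivery_id")
flattened = flattened.rename(columns={"sales_order_id": "case_id"})
print(flattened)  # two 'Create delivery' events in the log, although only one delivery took place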

To illustrate what would happen if the delivery were only associated with the first sales order, consider the example below.

It looks as if there was no delivery for sales order 2, which is not the case.

On the other hand, with the duplication approach shown earlier one needs to be aware that the number of delivery events in the process map may be higher than the number of deliveries that actually took place: there were not two deliveries, just one!

Wrong: Flattening the process data from the sales order perspective by associating the delivery with just one of the two sales orders (click to enlarge)

The point is that there is no way around this. Wil van der Aalst sometimes calls this “flattening reality” (like putting a 3D-world in a 2D-picture). You need to choose which perspective you want to take on your process.

What you can take away is the following:

What other challenges have you encountered when creating event logs from relational databases? Let us know in the comments!

Generate Your Own Event Log From Oracle E-Business Suite

Know your tools!

This is a guest post by Marcel Koolwijk (see further information about the author at the bottom of the page).

If you have a process mining article or case study that you would like to share as well, please contact us at anne@fluxicon.com.

Generate your own event log

To be able to do process mining you need to have some data. Data can come from many sources and some sources are better structured for generating event logs than others. ERP applications in general, and the Oracle E-Business Suite in particular, are great sources for event logs. But ERP applications do not generate event logs automatically in a way that you can use for process mining. So there is some work to be done. The challenge is to translate the existing data from the table structure of the ERP application into an event log that can be used for process mining. Because of the complexity of the table structure you will need to have in-depth knowledge about the ERP application to make this translation, and to create an extraction program that generates the event log.

However, as a first step — before you start the (often time-consuming) work of writing functional designs, technical designs, and getting your IT department involved — you typically just want to get some data from your ERP application to try out process mining on your own processes and get some hands-on experience.

This article gives you an example with step-by-step instructions for how you can quickly get some first data from your own Oracle E-Business Suite to get started.

Oracle EBS version and the tools you need

You can use the description below for an Oracle E-Business Suite Release 12.1 installation, but it probably also works fine (although not tested) for any other release of the Oracle E-Business Suite.

For generating the event log it is easiest if you have SQL query access to the database (query access alone is sufficient for now). If you do not have query access to the database, there are other options as well, but for the description below I assume that you do have SQL query access. I use SQL Developer from Oracle as the SQL query tool, but any other SQL tool should work in a similar way.

Other than the SQL query access to the database, there is no installation or setup required in the Oracle E-Business Suite in order to generate the event log.

Step-by-step Instructions

The process that we are looking at is the requisition process in Oracle iProcurement. We will extract data from the approval process for the last 1000 requisitions.

As the case ID we use the internal requisition header ID. The activity is the name of the action that Oracle stores in the table. We use the date the action was performed as the timestamp, and as the resource we use the employee ID. For now, we just add the org ID and the requisition number as additional attributes, but any further attribute can be added rather easily.

Here are the step-by-step instructions to create your first event log from your Oracle E-Business Suite:

Step 1

Log on to the database with your query account in Oracle SQL Developer.

Step 2

Run the query below:

SELECT PRH.REQUISITION_HEADER_ID AS CASE_ID ,                             -- case ID: internal requisition header ID
       'Requisition '||FLV.MEANING AS ACTIVITY_NAME ,                     -- activity: readable action name
       TO_CHAR(PAH.ACTION_DATE,'DD-MM-YYYY HH24:MI:SS') AS TIME_STAMP ,   -- timestamp: date the action was performed
       PAH.EMPLOYEE_ID AS RESOURCE_ID ,                                   -- resource: employee who performed the action
       PRH.ORG_ID AS ORG_ID,                                              -- additional attribute: operating unit
       PRH.SEGMENT1 AS REQUISITION_NUMBER                                 -- additional attribute: requisition number
FROM PO.PO_REQUISITION_HEADERS_ALL PRH
INNER JOIN PO.PO_ACTION_HISTORY PAH                                       -- approval action history of the requisition
        ON PAH.OBJECT_ID = PRH.REQUISITION_HEADER_ID
INNER JOIN FND_LOOKUP_VALUES FLV                                          -- translate action codes into readable names
        ON FLV.LOOKUP_CODE = PAH.ACTION_CODE
       AND FLV.LOOKUP_TYPE = 'APPR_HIST_ACTIONS'
       AND PAH.OBJECT_TYPE_CODE = 'REQUISITION'
       AND PAH.OBJECT_SUB_TYPE_CODE = 'PURCHASE'
       AND FLV.LANGUAGE = 'US'
WHERE PRH.REQUISITION_HEADER_ID > (SELECT MAX(REQUISITION_HEADER_ID) FROM PO_REQUISITION_HEADERS_ALL) - 1000;  -- last 1000 requisitions

The result of the query will be shown in Oracle SQL Developer:

SQL Developer result after running the query (click to enlarge)

Step 3

Export the result of the query as CSV file to your local drive.

Export Query result as a CSV file (click to enlarge)

Step 4

Start Disco and open the CSV file. Configure the columns in the following way and press “Start Import”.

Configure the case ID, activity and timestamp during import (click to enlarge)

Step 5

Start process mining!

Start process mining! (click to enlarge)


Marcel Koolwijk

Marcel Koolwijk specializes in the implementation of the logistics modules of the Oracle E-Business Suite. The in-depth functional knowledge he has gained since 1997 in successful projects with a wide range of customers can be put to use for your implementation.

More information is available at www.oracle-consultant.nl


Managing Complexity in Process Mining Part IV: Leaving Out Details

Leaving out details allows you to take a step back and obtain a bird's eye view on your process

This is the fourth part in a series about managing complexity in process mining. We recommend reading Part I, Part II, and Part III first if you have not seen them yet.

Part IV: Leaving Out Details

The last category of simplification strategies is about leaving out details to make the process map simpler. Leaving out details often allows you to take a step back and obtain a bird’s eye view on your process that you would not be able to take if you kept “on the ground” with all the details in plain sight.

Strategy 8) Removing “Spider” Activities

One way to leave out details is to look out for what we call “Spider” activities. A spider activity is a step in the process that can be performed at any point in time in the process.

If you take a look at the original service refund process data, you will notice activities such as ‘Send email’ and a few comment activities showing up in central places of the process map, because they are connected to many other activities in the process (see below).

So-called "spider activities" seem very central to the process while they are often the least important (click to enlarge)

The thing is that — although these activities are showing up in such a central (“spider”) position — they are actually often among the least important activities in the whole process. Their position in the process flow is not important, because emails can be sent and comments can be added by the service employee at any point in the process.

Because these activities sometimes happen at the beginning, sometimes at the end, and sometimes in the middle of the process, they have many arrows pointing to them and from them, which unnecessarily complicates the process map.

In fact, if we increase the level of detail by pulling up the Paths slider, the picture gets even worse (see below).

"Spider activities" have lots of arrows pointing to them and pointing away from them (click to enlarge)

You can easily remove such spider events by adding an Attribute filter and deselecting them (see below). In the standard Keep selected mode this filter will only remove the deselected events but keep all cases.

By adding an Attribute filter you can simply deselect the activities you want to remove (click to enlarge)
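
The filtering itself happens directly in Disco, but if you prepare your event log outside of the tool you can apply the same idea there as well. Here is a minimal sketch in Python (hypothetical file, column, and activity names): drop the spider activities by name while keeping all cases.

import pandas as pd

# Hypothetical event log with columns case_id, activity, timestamp
event_log = pd.read_csv("refund_process.csv")

# Remove the "spider" activities by name; all cases are kept
spider_activities = {"Send email", "Add comment"}
event_log = event_log[~event_log["activity"].isin(spider_activities)]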

The result is a much simpler process map, without these distracting “spider” activities (see below). So, the next time you are facing a spaghetti process yourself, watch out for such unimportant activities that merely complicate your process map without adding anything to your process analysis.

The process map without "Spider activities" is much simpler (click to enlarge)

Strategy 9) Focusing on Milestone Activities

Finally, the last strategy is the reverse of the “spider” activity strategy before: Instead of starting from the complete set of events in your data set and looking at where you might leave some out, take a critical look at the different types of events in your data and ask yourself which activities you want to focus on.

Just because all these different events are contained in your data set does not mean that they are all equally important. Often the activities that you get in your log are on different levels of abstraction. Especially when you have a large number of different activities, it can make sense to start by focusing on just a handful of these activities — the most important milestone activities — initially.

For example, in the anonymized data sample below you see a case with many events and detailed activities such as ‘Load/Save’ and ‘Condition received’. But there are also some other activities that look different (for example, ‘WM_CONV_REWORK’), which are workflow status changes in the process.

If you have many different activities, start by focusing on a few that show some milestone activities that have been passed (click to enlarge)

It makes a lot of sense to filter only these ‘WM_’ activities to get started with the analysis, and then to bring back more of the detailed steps in between where needed.

In Disco, you can use the Attribute filter in Keep selected mode as before, but you would deselect all values first and then select just the ones you want to keep (see below).

Milestones can also be selected with the Attribute filter: Simply start by deselecting everything and choose the ones that you want to keep (click to enlarge)
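
Again, this selection is done in Disco itself, but as a sketch of the same idea outside of the tool in Python (hypothetical file and column names): keep only the ‘WM_’ workflow status changes, while keeping every case.

import pandas as pd

# Hypothetical event log with columns case_id, activity, timestamp
event_log = pd.read_csv("event_log.csv")

# Keep only the milestone activities (here: the 'WM_' workflow status changes)
milestones = event_log[event_log["activity"].str.startswith("WM_")]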

As a result, a complex process map with many different activities …

Process map before focusing on milestones (click to enlarge)

… can quickly be simplified to showing the process flow for the selected milestone activities for all cases (and simplifying the variants along the way).

Process map after focusing on milestones in the process (click to enlarge)

If you have no idea what the best milestone activities in your process are, you should sit together with a process or data expert and walk through some example cases with them. They might not know the meaning of every single status change, but with their domain knowledge they are typically able to quickly pick out the milestone events that you need to get started.

It can also be a good idea to start the other way around: Ask your domain expert to draw up the process with only the most important 5 or 7 steps. This can be done just on a piece of paper or a whiteboard and will show you what they see as the milestone activities in their process from a business perspective. Then go back to your data and see to what extent you can find events that come close to these milestones.

Focusing on milestone activities is a great way to bridge the gap between business and IT, and it can help you get started quickly even for very complex processes and extensive data sets.

We hope this series was useful and you could pick up a trick or two. Let us know which other methods you have used to simplify your “spaghetti” maps!

Webinar on Process Mining for Customer Journeys

Webinar on Process Mining for Customer Journeys

Process mining can not only be used to analyze internal business processes, but also to understand how customers are experiencing the interaction with a company, and how they are using their products, for example, how they are navigating a website. This perspective is often called the customer journey.

If you analyze processes from a customer perspective (often across multiple channels such as phone, web, in-person appointments, etc.), you typically face a lot of diversity and complexity in the process.

On Thursday 2 April, 18:00 CET, we will hold a webinar that discusses these challenges and shows how they can be addressed through an iteration of data preparation and process analysis steps.

Agenda

  1. Challenges in Process Mining for customer journeys
  2. Putting the analyst in charge through integration of data preparation and process analysis
  3. Live demo based on UXSuite (data collection and preparation) and Disco (process analysis)
  4. Q&A

Presenters

Anne Rozinat is co-founder of Fluxicon and has more than 10 years of experience applying process mining in practice. She will introduce the topic of process mining for customer journeys with its challenges and opportunities.

Mathias Funk, co-founder of UXsuite, is a specialist in collecting, managing, and analyzing data from websites and tangible electronic devices. He will give a live demo of how UXsuite can complement the process mining analysis in Disco for customer journeys.

Are you thinking about analyzing customer journeys for your company now or in the future? Make sure you sign up for the webinar here!

[Update: You can now watch a recording of the webinar here.]
