You are reading Flux Capacitor, the company weblog of Fluxicon.
Here, we write about process intelligence, development, design, and everything that scratches our itch. Hope you like it!


BPI Challenge 2015 — Winners and Submissions

Ube Wins BPI Challenge Award 2015

As a process miner, you need access to the process manager, or another subject matter expert, to ask questions, and to validate and prioritize the analysis results that come up.

However, the very first step of any analysis is to explore the data and develop a first understanding of the process. Hypotheses are formed based on the questions that were defined together with the process owner in the scoping phase of the project.

This is exactly the step in a process mining project that the annual BPI Challenge allows you to practice:

Even after the BPI Challenge competition is over, you can still use the data sets to practice exactly that initial analysis step in a project, and to compare your approach with the other submissions.1

But of course participating in the actual competition is much more fun. And last week, the winners of this year’s BPI Challenge were announced.

The Winners!

First of all, Irene Teinemaa, Anna Leontjeva and Karl-Oskar Masing from the University of Tartu, Estonia, won the prize for the best student submission. One of the noteworthy aspects of their work was that they used a lot of different tools. They were awarded a certificate.

Winners of the BPI Challenge 2015 Student competition

In the overall competition, Ube van der Ham from Meijer & Van der Ham Management Consultants in the Netherlands won the BPI Challenge trophy.

Trophy 2015 BPI Challenge awarded to Ube van der Ham

The jury found that Ube brought many interesting insights to light that will help the municipalities in their process improvement and collaborations.

The Trophy

Like in the past two years, the trophy was developed after an original design by the artist Felix Günther. Hand-crafted from a single piece of wood, this “log” represents the log data to be mined. The shiny rectangle represents the gold that is mined from the data and this year has the shape of the famous roof of Innsbruck, where the award ceremony for the BPI Challenge took place.

BPI Challenge Trophy 2015 (artwork by Felix Günther)

The back of the trophy still features the bark of the tree, giving the whole piece a gorgeous feel and a heavy weight.

Back side of the BPI Challenge 2015 trophy

We thank Felix for this amazing work and know that Ube was very happy about not just receiving the BPI Challenge award but the trophy itself.

All Submissions

What is great about the BPI Challenge is that you can read the different reports of all participants and compare their approaches. This is a great way to learn more about process mining in practice.

Keep in mind that none of the participants had the chance to ask the actual process owners questions during their analysis. So, not every result or assumption they make is correct. Even the winner, Ube van der Ham, warns that not all observations are necessarily correct, and one of the jury members who knows the process noted some misinterpretations. Inevitably, the participants get stuck at points where they can only hypothesize and not make a definite statement.

However, your role as a process mining analyst in a real project is to collect your assumptions and hypotheses and then validate them with the process experts in the following process mining sessions and workshops. And you can learn a lot by looking at how other people approached this data set.

Here are all the submissions:

  1. Ube van der Ham. Benchmarking of Five Dutch Municipalities with Process Mining Techniques Reveals Opportunities for Improvement
  2. Irene Teinemaa, Anna Leontjeva and Karl-Oskar Masing. BPIC 2015: Diagnostics of Building Permit Application Process in Dutch Municipalities
  3. Liese Blevi and Peter Van den Spiegel. Discovery and analysis of the Dutch permitting process
  4. Scott Buffett and Bruno Emond. Using Sequential Pattern Mining and Social Network Analysis to Identify Similarities, Differences and Evolving Behaviour in Event Logs
  5. Prabhakar M. Dixit, Bart F.A. Hompes, Niek Tax and Sebastiaan J. van Zelst. Handling of Building Permit Applications in The Netherlands: A Multi-Dimensional Analysis
  6. Niels Martin, Gert Janssenswillen, Toon Jouck, Marijke Swennen, Mehrnush Hosseinpour and Farahnaz Masoumigoudarzi. An Exploration and Analysis of The Building Permit Application Process in Five Dutch Municipalities
  7. Josef Martens and Paul Verheul. Social Performance Review of 5 Dutch Municipalities: Future Fit Cases for Outsourcing?
  8. Jan Suchy and Milan Suchy. Process Mining techniques in complex Administrative Processes
  9. Hyeong Seok Choi, Won Min Lee, Ye Ji Kim, Jung Hoon Lee, Chun Hoe Kim, Yu Lim Kang, Na Rae Jung, Seung Yun Kim, Eui Jin Jung and Na Hyeon Kim. Process Mining of Five Dutch Municipalities’ Building Permit Application Process: The Value Added in E-Government

If you are short on time, I recommend reading the winning report by Ube and the work by Liese Blevi and Peter Van den Spiegel from KPMG – a close second place. Liese and Peter take a very careful and systematic approach to understanding the log data and the process behind it.


  1. Take also a look at the previous years, where you can find data sets from a hospital process (2011), a loan application process (2012), an IT Service Management process from Volvo IT (2013), and a data set from the Rabobank (2014).  
How To Deal With ‘Old Value / New Value’ Data Sets

Take a look at the following example. Instead of one Activity or Status column, you have two columns showing the “old” and the “new” status. For example, in line no. 2 the status is changed from ‘New’ to ‘Opened’ in the first step of case 1.

This is a pattern that you will encounter in some situations, for example, in some database histories or CRM audit trail tables.

The question is how to deal with log data in this format.

Solution 1

Should you use both the ‘Old value’ and the ‘New value’ column as the activity column and join them together?

This would be solution no. 1 and leads to the following process picture.

All combinations of old and new statuses are considered here. This makes sense, but it can quickly lead to inflated process maps with many different activity nodes, one for each combination.
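As a rough sketch of this joining step (the rows and column names here are hypothetical examples, not the actual data set):

```python
# Sketch: build a combined activity label from the 'Old value' and
# 'New value' columns (hypothetical example rows)
rows = [
    {"Case": 1, "Old value": "New", "New value": "Opened"},
    {"Case": 1, "Old value": "Opened", "New value": "Closed"},
]

for row in rows:
    # Join the old and new status into one activity name, e.g. "New -> Opened"
    row["Activity"] = "{} -> {}".format(row["Old value"], row["New value"])

print([row["Activity"] for row in rows])
# -> ['New -> Opened', 'Opened -> Closed']
```

Each distinct combination then becomes its own activity node in the process map, which is exactly why the map can blow up for logs with many different statuses.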

Solution 2

Normally, you would like to see the process map as a flow between the different status changes. So, what happens if you just choose the ‘Old value’ as the activity when importing your data set?

You would get the following process map.

The process map shows the process flow through the different status changes as expected, but there is one problem: You miss the very last status in every case (which is recorded in the ‘New value’ column).

For example, for case 2 the process flow goes from ‘Opened’ directly to the end point (omitting the ‘Aborted’ status it changed into in the last event).

Solution 3

You can do the same with just the ‘New value’ column as the activity column, which gives the following picture.

This way, you see all the different end points of the process. For example, some cases end with the status ‘Closed’ while others end as ‘Aborted’. But now you miss the very first status of each case (the ‘New’ status).

In this example, all cases change from ‘New’ to ‘Opened’. So, missing the ‘New’ in the beginning is less of a problem compared to missing the different end statuses. Therefore, solution 3 would be the preferred solution in this case. But in other situations, the opposite might be the case.

Filtering Based on Endpoints

Note that you can still use the values of the column that you did not use as the activity name to filter incomplete cases with the ‘Endpoints’ filter.

For example, if you used Solution 2 (see above) but want to remove all cases that ended with ‘New value’ = ‘Aborted’, you can configure the desired end status based on the ‘New value’ attribute with the Endpoints filter as shown below:

In summary, what you can take away from this is the following:

In most situations, one of the solutions above is enough and you can use your ‘Old value / New value’ data just as it is. If, however, you really need to see both the very first and the very last status in your process flow, then you would need to reformat your source data into the standard process mining format and add the missing start or end status as an extra row.
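As a sketch of that reformatting step (with hypothetical column names and example data), using the ‘New value’ as the activity and adding the otherwise missing first status as an extra row per case:

```python
# Sketch: reformat an 'Old value / New value' table into the standard
# process mining format, using 'New value' as the activity and emitting
# the first 'Old value' of each case as an extra row (hypothetical data)
rows = [
    {"Case": 1, "Timestamp": "2015-04-10 09:00", "Old value": "New", "New value": "Opened"},
    {"Case": 1, "Timestamp": "2015-04-10 10:30", "Old value": "Opened", "New value": "Closed"},
]

events = []
seen_cases = set()
for row in rows:
    if row["Case"] not in seen_cases:
        # First event of this case: also emit the initial 'Old value'
        # status (reusing the first event's timestamp) so it is not lost
        events.append({"Case": row["Case"], "Timestamp": row["Timestamp"],
                       "Activity": row["Old value"]})
        seen_cases.add(row["Case"])
    events.append({"Case": row["Case"], "Timestamp": row["Timestamp"],
                   "Activity": row["New value"]})

print([e["Activity"] for e in events])
# -> ['New', 'Opened', 'Closed']
```

This way, both the very first status and all the different end statuses appear in the process map.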

(This article previously appeared in the Process Mining News – Sign up now to receive regular articles about the practical application of process mining.)

Process Mining Trainings in Autumn 2015

Disco!

Have you dived into process mining and just started to see the power of bringing real processes to life based on data? You are enthusiastic about the possibilities and may already have impressed some colleagues by showing them a “living” process animation. Perhaps you even took the Process Mining MOOC and got some insights into the complex theory behind the process mining algorithms.

You probably realized that there is a lot more to it than you initially thought. After all, process mining is not just a pretty dashboard that you put up once, but a serious analysis technique that is so powerful precisely because it allows you to get insights into the things that you don’t know yet. It needs a process analyst to interpret the results and do something with them to get the full benefit. And as the data scientists say, 80% of the work is in preparing and cleaning the data.

So, how do you make the next step? What data quality issues should you pay attention to, and how do you structure your projects to make sure they are successful? How can you make the business case for using process mining on a day-to-day basis?

We are here to help you and have just opened our process mining training schedule for autumn 20151. In the past, we held 1-day trainings that gave a good starting point for the practical application of process mining, but there was never enough time to practice. That is why earlier this year we started to give an extended 2-day course, which runs through a complete project in small-step exercises on the second day.

The feedback so far has been great. Here are two quotes from participants of the last 2-day training:

Practical, insightful, and at times amazing.

Very useful. In two days, if one already has a little background on Process Mining, you just become an expert, or at least this is how it feels.

The course is suitable for complete beginners, but if you already have some experience, don’t be afraid that it will be boring for you. The introductory part will be quick and we will dive into practical topics and hands-on exercises right away.

The training groups are deliberately kept small and some seats have already been taken, so be quick to make sure you don’t miss your opportunity to become a real process mining expert!


  1. If the dates don’t fit or you prefer an on-site training at your company (also available in Dutch and German), contact Anne to learn more about our corporate training options.  
Data Preparation for Process Mining — Part II: Timestamp Headaches and Cures

Did you know that ‘Back to the Future’ contains an homage to the classic 1923 silent comedy ‘Safety Last!’?

This is a guest post by Nicholas Hartman (see further information about the author at the bottom of the page) and the article is part II of a series of posts highlighting lessons learned from conducting process mining projects within large organizations (read Part I here).

If you have a process mining article or case study that you would like to share as well, please contact us at anne@fluxicon.com.

Timestamps are core to any process mining effort. However, complex real-world datasets frequently present a range of challenges in analyzing and interpreting timestamp data. Sloppy system implementations often create a real mess for a data scientist looking to analyze timestamps within event logs. Fortunately, a few simple techniques can tackle most of the common challenges one will face when handling such datasets.

In this post I’ll discuss a few key points relating to timestamps and process mining datasets, including:

  1. Reading timestamps with code
  2. Useful time functions (time shifts and timestamp arithmetic)
  3. Understanding the meaning of timestamps in your dataset

Note that in this post all code samples will be in Python, although the concepts and similar functions will apply across just about any programming language, including various flavors of SQL.

Reading timestamps with code

As a data type, timestamps present two distinct challenges:

  1. The same data can appear in many different formats
  2. Concepts like time zones and daylight savings time mean that the same point in real time can be represented by entirely different numbers

To a computer, time is a continuous series. Subdivisions of time like hours, weeks, months and years are formatted representations of time displayed for human users. Many computers base their understanding of time on so-called Unix time, which is simply the number of seconds elapsed since the 1st of January 1970 (UTC). To a computer using Unix time, the timestamp of 10:34:35pm UTC April 7, 2015 is 1428446075. While you will occasionally see timestamps recorded in Unix time, it’s more common for a more human-readable format to be used.
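You can verify this with a one-line conversion, here sketched using Python’s standard library:

```python
import datetime

# Convert the Unix timestamp 1428446075 (seconds since 1 January 1970)
# back into a human-readable UTC datetime
ts = datetime.datetime.fromtimestamp(1428446075, tz=datetime.timezone.utc)
print(ts)
# -> 2015-04-07 22:34:35+00:00
```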

Converting from this human-readable format back into something that computers understand is occasionally tricky. Applications like Disco are often quite good at identifying common timestamp formats and accurately ingesting the data. However, if you work with event logs you will soon come across a situation where you’ll need to ingest and/or combine timestamps in unusual formats, for example when merging logs from systems that record local time in different time zones.

The following scenario is typical of what a data scientist might find when attempting to complete process mining on a complex dataset. In this example we are assembling a process log by combining logs from multiple systems. One system resides in New York City and the other in Phoenix, Arizona. Both systems record event logs in the local time. Two sample timestamps appear as follows:

System in New York City: 10APR2015 23.12.17:54
System in Phoenix, Arizona: 10APR2015 20.12.26:72

Such a situation presents a few headaches for a data scientist looking to use such timestamps: the format is unusual, each system records local time in a different time zone, and Arizona does not observe daylight savings time.

You can see how this can all get quite complicated very quickly. In this example we may want to write a script that ingests both sets of logs and produces a combined event log for analysis (e.g., for import into Disco). Our primary challenge is to handle these timestamp entries.

Ideally, all system admins would be good electronic citizens and run all their systems’ logging functions in UTC. Unfortunately, experience suggests that this is wishful thinking. However, with a bit of code it’s easy to quickly standardize this mess onto UTC and then move forward with any datetime analytics from a common and consistent reference point.

First we need to get the timestamps into a form recognized by our programming language. Most languages have some form of a ‘string to datetime’ function. Using such a function you provide a datetime string and format information to parse this string into its relevant datetime parts. In Python, one such function is strptime.

We start by using strptime to ingest these timestamp strings into a Python datetime format:

# WE IMPORT REQUIRED PYTHON MODULES (you may need to install these first)
import pytz
import datetime

# WE INPUT THE RAW TEXT FROM EACH TIMESTAMP
ny_date_text="10APR2015 23.12.17:54"
az_date_text="10APR2015 20.12.26:72"

# WE CONVERT THE RAW TEXT INTO A NATIVE DATETIME
# e.g., %d = day number and %S = seconds
ny_date = datetime.datetime.strptime(ny_date_text, "%d%b%Y %H.%M.%S:%f")
az_date = datetime.datetime.strptime(az_date_text, "%d%b%Y %H.%M.%S:%f")

# WE CHECK THE OUTPUT, NOTE THAT FOR A NATIVE DATETIME NO TIMEZONE IS SPECIFIED
print(ny_date)
>>> 2015-04-10 23:12:17.540000

At this point we have the timestamp stored as a datetime value in Python; however, we still need to address the time zone issue. Currently our timestamps are stored as ‘native’ time, meaning that there is no time zone information stored. Next we will define a timezone for each timestamp and then convert them both to UTC:

# WE DEFINE THE TWO TIMEZONES FOR OUR TIMESTAMPS
# NOTE: ‘ARIZONA’ TIMEZONE IS ESSENTIALLY MOUNTAIN TIME WITHOUT DAYLIGHT SAVINGS TIME
tz_eastern = pytz.timezone('US/Eastern')
tz_mountain = pytz.timezone('US/Arizona')

# WE CONVERT THE LOCAL TIMESTAMPS TO UTC
ny_date_utc = tz_eastern.localize(ny_date, is_dst=True).astimezone(pytz.utc)
az_date_utc = tz_mountain.localize(az_date, is_dst=False).astimezone(pytz.utc)

# WE CHECK THE OUTPUT, NOTE THAT THE TIMEZONE OF +0 IS ALSO NOW RECORDED
print(ny_date_utc)
>>> 2015-04-11 03:12:17.540000+00:00
print(az_date_utc)
>>> 2015-04-11 03:12:26.720000+00:00

Now we have both timestamps recorded in UTC. In this sample code we manually inputted the timestamps as text strings and then simply printed the results to a terminal screen. An example of a real-world application would be to leverage the functions above to read in raw data from a database for both logs, process the timestamps into UTC and then write the corrected log entries into a new table containing a combined event log. This combined log could then be subjected to further analytics.
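As a sketch of that conversion step wrapped into a reusable helper (the function name `to_utc` is made up for illustration; this version uses the standard-library zoneinfo module from Python 3.9+ instead of pytz, which handles the localization in one step):

```python
import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

def to_utc(ts_text, tz_name, fmt="%d%b%Y %H.%M.%S:%f"):
    """Parse a local timestamp string and return an aware datetime in UTC."""
    naive = datetime.datetime.strptime(ts_text, fmt)
    local = naive.replace(tzinfo=ZoneInfo(tz_name))
    return local.astimezone(datetime.timezone.utc)

# The two sample timestamps from above, now standardized onto UTC
print(to_utc("10APR2015 23.12.17:54", "America/New_York"))
# -> 2015-04-11 03:12:17.540000+00:00
print(to_utc("10APR2015 20.12.26:72", "America/Phoenix"))
# -> 2015-04-11 03:12:26.720000+00:00
```

Such a helper could then be applied row by row while reading the raw logs from the database, before writing the corrected entries into the combined event log.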

Useful time functions

With timestamps successfully imported, there are several useful time functions that can be used to further analyze the data. Among the most useful are time arithmetic functions that can be used to measure the difference between two timestamps or add/subtract a defined period of time to a timestamp.

As an example, let’s find the time difference between the two timestamps imported above:

# WE COMPARE THE DIFFERENCE IN TIME BETWEEN THE TWO TIMESTAMPS
timeDiff = (az_date_utc - ny_date_utc)
print(timeDiff)
>>> 0:00:09.180000

The raw output here reads a time difference of 9 seconds and 180 milliseconds. Python can also represent this as a truncated integer for a specified time unit. For example:

# WE OUTPUT THE ABOVE AS AN INTEGER IN SECONDS
print(timeDiff.seconds)
>>> 9

This shows us that the time difference between the two timestamps is 9 seconds. Such functions can be useful for quickly calculating the duration of events in an event log. For example, the total duration of a process could be quickly calculated by comparing the difference between the earliest and latest timestamp for a case within a dataset.
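A minimal sketch of such a duration calculation (the events below are hypothetical examples):

```python
import datetime

# Hypothetical event log for one case: (activity, UTC timestamp) pairs
events = [
    ("Opened", datetime.datetime(2015, 4, 11, 3, 12, 17)),
    ("Assigned", datetime.datetime(2015, 4, 11, 3, 12, 26)),
    ("Closed", datetime.datetime(2015, 4, 11, 4, 0, 0)),
]

timestamps = [ts for _, ts in events]
# Total case duration = latest timestamp minus earliest timestamp
duration = max(timestamps) - min(timestamps)
print(duration)
# -> 0:47:43
```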

These date arithmetic functions can also be used to add or subtract defined periods of time to a timestamp. Such functions can be useful when manually adding events to an event log. For example, the event log may record the start time of an automated process, but not the end time. We may know that the step in question takes 147 seconds to complete (or this length may be recorded in a separate log). We can generate a timestamp for the end of the step by adding 147 seconds to the timestamp for the start of the step:

# WE ADD 147 SECONDS TO OUR TIMESTAMP AND THEN OUTPUT THE NEW RESULT
az_date_utc_end = az_date_utc + datetime.timedelta(seconds=147)
print(az_date_utc_end)
>>> 2015-04-11 03:14:53.720000+00:00

Understanding the meaning of timestamps in your dataset

Having the data cleaned up and ready for analysis is clearly important, but equally important is understanding what data you have and what it means. Particularly for data sets that have a global geographic scope, it is crucial to first determine how timestamps have been represented in the data: Were they recorded in local time or in UTC, and do they account for daylight savings time?

Conclusion

While this piece was hardly an exhaustive look at programmatically handling timestamps, hopefully you’ve been able to see how some simple code is able to deal with the more common challenges faced by a data scientist working with timestamp data. By combining the concepts described above with a database it is possible to write an automated script to quickly ingest a range of complex event logs from different systems and output one standardized log in UTC. From there, the process mining opportunities are endless.


Nicholas Hartman

Nicholas Hartman is a data scientist and director at CKM Advisors in New York City. He was also a speaker at Process Mining Camp 2014 and his team won the BPI Challenge last year.

More information is available at www.ckmadvisors.com



  1. Note that in Disco you configure the timestamp pattern to fit the data (rather than having to provide the data in a specific format) and you can actually import merged data sets from different sources with different timestamp patterns: Just make sure they are in different columns, so that you can configure their formats independently.  
Get Actionable Information For Your Own Projects At Process Mining Camp

Register for Process Mining Camp 2015!

Today, we are happy to announce the last two practice talk speakers at this year’s Process Mining Camp! You can still get your ticket on the Process Mining Camp Website if you don’t have one yet.

With these last additions, we continue to add actionable, hands-on information that you can use for your own process mining projects. Rudi Niks, Management Consultant at O&i, must be one of the most knowledgeable process mining practitioners out there. Since 2012 he has been doing process mining projects all year round, and he will share the Dos and Don’ts that will help you set up your projects in the right way from the start. Anne Rozinat, co-founder of Fluxicon, will close the practice talks by showing you how you can actually measure your processes the right way.

Rudi Niks, O&i

Rudi Niks

From Insight To Sustainable Business Value

Rudi has worked on many different process mining projects over a range of industries in the past years. Based on that experience, he has learned that there are two main ingredients to achieve continuous improvement with process mining: First of all, you need the right understanding of the process and the performance of the process. Secondly, you need to actually make the necessary changes to benefit from these insights. But how exactly do you go about doing that? Which warning signs should you look out for in your own organization?

At camp, Rudi will give examples of what he has seen go right, and wrong. You can expect concrete advice and action points that will help you to make your process mining projects as successful as they can be.

Anne Rozinat, Fluxicon

Anne Rozinat

Process Mining Metrics

Performance measurements are part of every process improvement project. Many people working with process mining are looking for quantifiable results that they can use to compare processes, and to evaluate the effectiveness of their improvements. So, what exactly can you measure with process mining?

Rather than giving you the one magic metric — which, I am sure you have guessed already, doesn’t exist — Anne will give you a deep-dive into the world of metrics: What constitutes a good metric? What are the pitfalls? At camp, you will learn which kind of questions you can answer with process mining, how you can quantify your results, and what you should pay attention to.

See you at camp!

We are super excited about the program (see an overview of all speakers at the process mining camp website here), and we can’t wait to meet you all in Eindhoven next Monday!

If you can’t make it, you can sign up to receive notifications about future process mining camps here.

Applications from Healthcare to Government At This Year’s Process Mining Camp

Register for Process Mining Camp 2015!

Less than a week until we all meet at this year’s Process Mining Camp! If you don’t have a ticket yet, you can still get yours on the Process Mining Camp Website.

We have never had such a broad range of use cases and topics at Process Mining Camp before: At this year’s camp, we already have deep-dive talks on a municipal government process improvement case and step-by-step instructions for how to get event log data out of any database. We will also cover IT service processes and manufacturing processes.

Today, we are happy to add the healthcare domain and a new system development use case with our next two speakers: Bart van Acker will share how process analysis can be improved with process mining at the Radboudumc hospital, and Edmar Kok will show you how process mining was used during the development and after the launch of a new process system at the Dutch Ministry of Education.

Bart van Acker, Radboudumc

Bart van Acker

Process Analysis in Healthcare With Process Mining

There has been a lot of discussion1 about the challenges that our healthcare systems are facing, because of the aging population and increasing costs. Process improvement (while maintaining or improving quality of care) is therefore very important to keep pace with these developments.

Radboud university medical center is an academic hospital that is quite advanced in its adoption of electronic patient record systems, among other things, but process analysis and improvement remains as big a challenge as in any other hospital. Bart is a process improvement expert working with the medical staff in different areas at Radboudumc, and he encounters these challenges on a daily basis.

At camp, Bart will share the specific difficulties of process analysis in healthcare. He will show the benefits that process mining can bring to the improvement of healthcare processes based on the example of the Intensive care unit and the Head and Neck Care chain at Radboudumc.

Edmar Kok, DUO

Edmar Kok

Using Process Mining In An Event-Driven Environment

Edmar worked for a project team at DUO, the study financing arm of the Dutch Ministry of Education, to help set up a new event-driven process environment. Unlike typical workflow or BPM systems, event-driven architectures are set up as loosely-coupled process steps (which can be either human or automated tasks) that are combined in a flexible way. The new system was introduced with the goal to improve the speed of DUO’s student finance request handling processes and to save 25% of the costs.

At camp, Edmar will walk you through the specific challenges that emerged from analyzing log data from that event-driven environment and the kind of choices that they had to make. He will also discuss the key metrics DUO wanted to monitor from a business side. You will learn how process mining can be used to very quickly uncover technical errors in the pilot phase of a new system, as well as gain transparency in the business KPIs for the new process.

Stay Tuned

Stay tuned for the last update on the speaker line-up of this year’s process mining camp tomorrow! Sign up to receive notifications about camp updates here.


  1. For example, see this interview with Wil van der Aalst by Chuck Webster on the opportunities of process mining for healthcare.  
Learn Best Practices From Your Peers At Process Mining Camp

Register for Process Mining Camp 2015!

The final preparations for Process Mining Camp are in full swing, and the t-shirts of the early birders are on their way! We are super excited about the many people who have already signed up. If you don’t have a ticket yet, you can still get yours on the Process Mining Camp Website.

Today, we are happy to announce our next two speakers: Willy van de Schoot from Atos Managed Services will deliver a deep-dive talk that will help you manage your own process mining analysis. And Joris Keizers, operations manager at the precision metal manufacturer Veco, will get to the bottom of what process mining has to add to the classic Six Sigma methods widely used in process improvement at production companies today.

Willy van de Schoot, Atos Managed Services

Willy van de Schoot

How to manage your process mining analysis

Atos is a digital services company, which — in its Managed Services sector — hosts IT infrastructure and manages processes (like the handling of incidents and changes) for its enterprise customers. As a former process manager in the IT Services area, Willy knows exactly how challenging it is to balance conflicting goals like standardization and accommodating custom requirements. Processes are critical in this space.

Willy is now a process mining analyst and has worked intensely on the analysis of the incident and change management processes over the past six months. As an analyst, you face a set of completely different challenges: Process mining provides endless possibilities, but how do you stay on top of your different analysis views, new questions that emerge, and data issues? And once you need to present your results to an audience unfamiliar with process mining, how do you communicate your findings and keep everyone on board?

At camp, Willy will share some tips that have worked for her to keep track of her own analysis and deliverables. In a hands-on segment, she will also show the different views she has taken, as well as some tricks of how to prepare the data in such a way that it provides optimal flexibility. You can look forward to a deep-dive talk that will help you stay more organized in your own analysis, and that will broaden your view on the different perspectives that you can take in your process mining analysis.

Joris Keizers, Veco

Joris-2015

Leveraging Human Process Knowledge via Process Mining

Most of the processes that are currently analyzed with process mining are from the services area. But production processes can be analyzed as well. In fact, the Six Sigma and Lean Six Sigma movements that are so commonly used as a process improvement methodology today have originally emerged from the improvement of manufacturing processes.

Veco is a precision metal manufacturer. With more than 15 years of experience in supply chain management, Joris is the operations manager and Six Sigma expert at Veco. He has used Minitab to statistically analyze the processes and drive improvements. When he discovered process mining, he found that process mining can leverage the human process knowledge in a powerful way that classical Six Sigma analyses can’t.

At camp, Joris will show a side-by-side comparison based on a concrete example of a Six Sigma and a Process Mining analysis and explain the differences, benefits, and synergies.

Stay Tuned For More

Stay tuned for further updates on the speaker line-up of this year’s process mining camp! Sign up to receive notifications about camp updates here.

Take A Deep Dive At This Year’s Process Mining Camp

Get your ticket for Process Mining Camp 2015 now!

Only 2 weeks to go to Process Mining Camp on 15 June! If you have not secured your ticket yet, you can do so on the Process Mining Camp Website.

Deep-Dive Talks

In the past years, our camp practice talks were mostly between 15 and 25 minutes, which is often enough to get a point across but does not leave much room to go deeper into the topic. The feedback that we received from campers was that sometimes they wanted to see in more detail, and more hands-on, how the speakers achieved what they were talking about.

Therefore, this year we introduce deep-dive talks for the first time. In a deep-dive talk, the speaker gets 45 to 60 minutes to really show things in more detail. We will also have a number of shorter talks focusing on specific process mining aspects. This should give us a great mix with both a wide breadth of topics and the necessary depth.

Today, we are happy to announce the first two speakers. It is a very special pleasure to welcome back two process mining pioneers who were with us at the very first Process Mining Camp in 2012: Léonard Studer from the City of Lausanne and Mieke Jans, back then with Deloitte and today assistant professor at Hasselt University. They will both deliver deep-dive talks that give you practical advice for your own process mining projects.

Léonard Studer, City of Lausanne, Switzerland

Léonard Studer

What it means to study a too lengthy administrative process

Administrative processes are typically based on public laws and regulations. As such, you might think that they must be quite simple and well-structured, especially when compared to customer journey or hospital processes. The truth, though, is that administrative processes can become very complicated as well.1

Léonard and his colleague Ines analyzed the construction permit process at the City of Lausanne, which is regulated by 27 different laws across Swiss federal law, cantonal law, and communal regulations. It takes an average of six months to obtain a construction permit in Lausanne, from the moment the application is filed. The administrative and technical employees already handle a heavy workload, while external clients like architects and construction businesses put pressure on the public works department to speed up the process.

The objective of the study was to identify bottlenecks and inefficiencies in the process, of course without changing or removing any of the legally required steps. Léonard will take us on a journey through the project, with all its challenges, highlights, and findings. One of the problems was that the data contained no proper activity names, and Léonard will show hands-on how he used text mining to pre-process the data.
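To give a flavor of what such pre-processing can look like (this is a generic illustration, not Léonard's actual approach; the keyword rules and example texts are invented), free-text log entries can be mapped to coarse activity labels with simple keyword matching before the data is fed into a process mining tool:

```python
import re

# Invented keyword rules mapping free-text remarks to coarse activity labels.
ACTIVITY_RULES = [
    (re.compile(r"\b(received|submitted)\b", re.I), "Application received"),
    (re.compile(r"\b(review|checked)\b", re.I), "Technical review"),
    (re.compile(r"\b(permit|granted)\b", re.I), "Permit granted"),
]

def label_activity(free_text):
    """Return the first matching coarse label, or 'Other' if none match."""
    for pattern, label in ACTIVITY_RULES:
        if pattern.search(free_text):
            return label
    return "Other"

events = [
    "Dossier received from applicant",
    "Plans checked by technical service",
    "Construction permit granted",
]
print([label_activity(e) for e in events])
# → ['Application received', 'Technical review', 'Permit granted']
```

Real projects would of course need more robust text mining than a hand-written keyword list, but the principle of condensing free text into a manageable set of activity names is the same.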

Mieke Jans, Hasselt University, Belgium

Mieke Jans

Step-By-Step: From Data to Event Log

People often ask us how to extract process mining data from ERP systems like SAP and Oracle. Our typical recommendation is to work with pre-existing templates from one of our technology partners specialized in those systems. But what if you want to extract the data yourself?

Apart from some quite old and dusty Master’s theses, there is very little information available. Everyone needs to start from square one, basically re-inventing the wheel over and over. The challenge in extracting process mining data lies not only in finding the right data among thousands of business tables; there is also a whole range of other questions2 that have a direct impact on the suitability and quality of the data that you get.

Mieke created her first event log from a relational database eight years ago, as part of her PhD research, and later deepened her experience in industry. By now, she has created dozens of event logs from different relational databases, from standard ERP systems to custom-made legacy systems. In this talk, Mieke will walk you through a step-by-step approach for extracting a good event log from any relational database, an approach that has never been published before. Based on illustrations of each step, you will learn about the implications of your decisions, and you will get a unique head start for the next time you need to extract process mining data from a database. You will also get a handout with a checklist to take home, so that you don’t have to take too many notes.
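To make the general idea concrete (a minimal, generic sketch with an invented toy schema; this is not Mieke's method), an event log is typically assembled by turning each timestamped document table into events of one activity type and stacking the results into case / activity / timestamp rows:

```python
import sqlite3

# Toy relational schema (invented): two document tables that each carry a
# timestamp for one process step of a purchase order.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchase_orders (po_id TEXT, created_at TEXT);
CREATE TABLE goods_receipts  (po_id TEXT, received_at TEXT);
INSERT INTO purchase_orders VALUES ('PO-1', '2015-05-01 09:00'),
                                   ('PO-2', '2015-05-02 10:30');
INSERT INTO goods_receipts  VALUES ('PO-1', '2015-05-04 14:00');
""")

# Each SELECT turns one table into events of one activity type; the UNION ALL
# stacks them into a single case_id / activity / timestamp event log.
event_log = conn.execute("""
SELECT po_id AS case_id, 'Create purchase order' AS activity, created_at AS ts
  FROM purchase_orders
UNION ALL
SELECT po_id, 'Receive goods', received_at FROM goods_receipts
ORDER BY case_id, ts
""").fetchall()

for row in event_log:
    print(row)
```

In a real ERP database the hard part is deciding which of the thousands of tables hold the relevant documents, which timestamp to use, and how to handle relationships between them; the query pattern itself stays this simple.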

Stay Tuned For More

We are just getting started, stay tuned for further updates on the speaker line-up of this year’s process mining camp! Sign up to receive notifications about camp updates here.


  1. Have you looked at the building permit processes from the four municipalities of this year’s BPI Challenge? They are insanely complex!  
  2. For example, earlier we have written about the challenge of many-to-many relationships here — but there are many more.  
Meet The Process Mining Family

Process Mining Camp 2015 -- Get your ticket now!

There are fewer than three weeks until we meet at Process Mining Camp on 15 June. If you have not secured your ticket yet, you can still get one on the Process Mining Camp website. Don’t wait too long; they are going fast.

We are working hard to finalize the program, and we will start announcing the first speakers here on this blog very soon. Stay tuned!

The Hallway Track

Today, we want to spend some time on the social part of this year’s camp. During registration, you were asked whether you plan to attend the process mining party in the evening and the breakfast on the morning after camp. Both the party and the breakfast are new to this year’s camp. So, why did we add them, and what can you expect?

Process Mining Camp is the annual meetup of process mining practitioners, where you can, once a year, meet the community of like-minded folks who are just as enthusiastic about process mining as you are. For many of you, introducing process mining in your company is a new avenue, with many questions coming up along the way. The best way to fast-track your progress, and to gain more confidence in your approach, is to exchange experiences with others who are going through the same process. And if you are an experienced practitioner of many years, you don’t normally know many people who are at the same level as you, people who you can still learn from.

Of course the Process Mining Camp program itself is designed to give you, newcomers and experts alike, lots of inspiration and tips to think about and use for your own practice. At the same time, the most valuable insights often come from hallway-track conversations with your peers, where you can share your story and get direct feedback as well. So, the actual chance to talk to other campers face to face is just as important as the program itself.

The thing is that, even with lots of breaks during the main conference program, everyone first needs to get up from their seat, find their way to the coffee corner, and settle in. And once you have found someone interesting to talk to, time is up!

That is why we have thought about how we can create more room for meaningful discussions between campers, besides the conversations that you will have in the breaks of the program.

Process Mining Camp Party

Process Mining Party

After the Process Mining Camp conference program is over, people will have different needs: Some of you will want to have a quick snack and get some rest (or catch up on email) in your hotel room. Others will want to join a small group for dinner in town. We can help you form groups and give tips for where to go.

After dinner, we will meet up again in a nice bar in the city center of Eindhoven. You can expect a relaxed atmosphere, great music, and the chance to get to know each other over a refreshing beer, wine, or soft drink. We will have a special friend over from Berlin as a DJ, and we are really looking forward to a great evening rounding off the camp day together. Even if you live in the area and do not plan to stay overnight, you should definitely come!

Process Mining Camp Breakfast

Grand Cafe Usine

For those of you who stay overnight, there will be a goodbye breakfast in a nice, spacious café (built in a former Philips factory) in the city center on the morning of Tuesday 16 June. Have as much breakfast, coffee, and tea as you want, catch up with that one person you did not get a chance to talk to yet, and get on your way home fresh and awake.

If you are participating in the co-located 2-day process mining training on 16/17 June: We will start a little later on day 1 and leave from the breakfast together.

The Timetable

So, here is a roundup of the timetable for your travel planning. We can’t wait to see you all on 15 June!

Monday 15 June

09:00 Registration and Coffee
10:00 Camp opening
18:00 Camp closing
21:00 Process Mining Party

Tuesday 16 June

09:00 — 12:00 Process Mining Breakfast

Watch Recording of Webinar on Process Mining for Customer Journeys

One really interesting application area for process mining is customer journey analysis. In customer journey mining, process mining is used to understand how customers experience their interactions with a company and how they use its products. For example, you may want to see how customers navigate your website or interact with your web application.
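For instance (a minimal sketch with invented data; real journey mining involves far more data preparation), raw clickstream records can be turned into an event log by treating each visitor as a case and ordering their page visits by timestamp:

```python
from collections import defaultdict

# Invented clickstream records: (visitor_id, timestamp, page).
clicks = [
    ("v1", "2015-06-01 10:02", "/pricing"),
    ("v2", "2015-06-01 10:01", "/home"),
    ("v1", "2015-06-01 10:00", "/home"),
    ("v1", "2015-06-01 10:05", "/signup"),
]

# Group by visitor (the "case") and sort each journey by timestamp.
journeys = defaultdict(list)
for visitor, ts, page in sorted(clicks, key=lambda c: c[1]):
    journeys[visitor].append(page)

for visitor, pages in sorted(journeys.items()):
    print(visitor, " -> ".join(pages))
# → v1 /home -> /pricing -> /signup
#   v2 /home
```

Once the data is in this case / activity / timestamp shape, it can be imported into a process mining tool like any other event log.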

We recently recorded a webinar on that topic, together with our friends at UXsuite. You can watch the recording of the customer journey process mining webinar by clicking on the video above.

If you analyze processes from a customer perspective (often across multiple channels such as phone, web, and in-person appointments), you typically face a lot of diversity and complexity in the process, but also issues of data handling and automation.

Some of the typical customer journey analysis challenges are:

Watch the webinar recording to learn more about these challenges, and to see how they can be addressed through an iteration of data preparation and process analysis steps.

What you will learn:

If you have seen a process mining introduction before, you can jump directly to the customer journey examples and challenges part here.

Are you thinking about analyzing customer journeys for your company, now or in the future? Request the slides from the video (including step-by-step screenshots from the live demo) and information on how to get started right away with UXsuite and Disco at http://uxsuite.com/airlift-integration/. We also offer evaluation packages that make it easy to try out customer journey analysis with UXsuite and Disco for your own process. Get in touch!
