
How To Deal With Incomplete Cases in Process Mining

[This article previously appeared in the Process Mining News – Sign up now to receive regular articles about the practical application of process mining.]

Before you start with your process mining analysis, you need to assess whether your data is suitable for process mining and check your data for data quality problems (see also our Data Quality series here). Afterwards, one of the next steps is to understand how you can differentiate between complete and incomplete cases in your process.

An ‘incomplete case’ is a case where either the start or the end of the process is missing. There can be different reasons why a case is incomplete, such as:

  1. Your data extraction method has retrieved only events in a certain timeframe. For example, let’s say that you have extracted all the process steps that were performed in a particular year. Some cases may have actually started in the previous year (before January). Furthermore, some cases may have started in the year that you are looking at but continued into the next year (after December). In this situation, you will only see the part of these cases that took place in the year that you are analyzing.
  2. Some cases have not finished yet. Even if you have extracted all the data there is, some of the cases may not have finished yet. This means that, if you are extracting your process mining data today, some cases may have started recently and have not yet progressed to the end of the process. They are still “somewhere in the middle”. If you waited a few weeks before extracting the data, these cases would probably be finished, but then there would be new ones that have just recently started! (A rough way to spot such boundary cases is sketched after this list.)
  3. Some cases might never finish. You may have a clear picture of how your process should go. But a customer might not get back to you as expected, a supplier might never send you the data that was needed to sign them up, or a colleague might close a case in an unexpected phase because an error, a duplicate, or another problem was detected. These cases do not end at any of the expected end points, and they will never finish, no matter how long you wait. The same can be true for the start points.
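For reasons 1 and 2, you can get a first rough impression of how many cases are affected by checking which cases touch the boundaries of the extracted timeframe. This is a minimal, hedged sketch rather than part of Disco: the file name, the column names (case_id, timestamp), and the 14-day margin are all assumptions that you would adapt to your own data.

```python
import pandas as pd

# Assumed file and column names; the 14-day margin is an arbitrary tolerance.
log = pd.read_csv("eventlog.csv", parse_dates=["timestamp"])

extraction_start = log["timestamp"].min()
extraction_end = log["timestamp"].max()
margin = pd.Timedelta(days=14)

spans = log.groupby("case_id")["timestamp"].agg(first="min", last="max")

# Reason 1: cases that start right at the beginning of the extracted
# timeframe may be missing their actual start.
maybe_missing_start = spans[spans["first"] <= extraction_start + margin]

# Reasons 1 and 2: cases whose last event falls close to the end of the
# data may simply not have been observed to completion yet.
maybe_missing_end = spans[spans["last"] >= extraction_end - margin]

print(f"{len(maybe_missing_start)} cases may be missing their start")
print(f"{len(maybe_missing_end)} cases may be missing their end")
```

Cases that fall under reason 3 are harder to spot this way, because they can sit anywhere in the timeframe. For those, looking at the last recorded activity of each case (as we will do below) is more helpful.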

Looking for incomplete cases is a standard step that you should always take before you dive into your actual process mining analysis. In this four-part series, we will give you clear guidelines for how to deal with incomplete cases.

Let’s get started!

Why Incomplete Cases Can Be Problematic

At first, it might not be obvious why incomplete cases are a problem at all. This is what the data shows, so my process mining analysis should show what actually happened, right?

Wrong. At least as far as incomplete cases are concerned: If your data has incomplete cases because of Reason No. 1 or Reason No. 2 (see above), then these missing start or end points do not reflect the actual process. They occur because of the way that the data was collected.

Take a look at the customer refund process picture below: The dashed lines leading to the endpoint (the square symbol at the bottom of the process map) indicate which activities happened as the very last step in the process. For example, for 333 cases ‘Order completed’ was the very last step that was recorded – See (1) in Figure 1. This seems to be a plausible end point for the process. However, there were also 20 cases for which the activity ‘Invoice modified’ was the very last step that was observed – See (2) in Figure 1. This does not seem like an actual end point of the process, does it?

Figure 1: Cases ending with Order completed (1) seem to be finished, but cases where Invoice modified was the last step that happened (2) might still be ongoing?

If we look up an example case that ends with ‘Invoice modified’ (see Figure 2), then we can see that the ‘Invoice modified’ step indeed happened just before the end of the data set. It occurred on 20 January 2012 and the data set ends on 23 January 2012. What if we had data until June 2012? Would there have been any steps after ‘Invoice modified’ then?

Figure 2: If an incomplete case stops at a particular point, it could just mean that we have not yet observed the next step.

So, we can see that not all end points in the data are necessarily meaningful end points in the process. Some cases can be incomplete simply because we are missing the end or the beginning of what actually happened, either because of how the data was extracted or because we don’t know yet what is going to happen with cases that are still ongoing. When you look at the process map, or the variants, for a data set that includes incomplete cases, the map and the variants do not show you the actual start and end points in your process but the start and end points in your data.

Another problem with incomplete cases is that their case duration can be misleading. The process mining tool does not know which cases are finished and which are incomplete. Therefore, it always calculates the case duration as the time between the very first and the very last event in the case.
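In a script, that same calculation would look like the following minimal sketch (with the same assumed file and column names as before). The last two lines compute how long a case has already been sitting in the data, which is a better lower bound for cases that are still open:

```python
import pandas as pd

log = pd.read_csv("eventlog.csv", parse_dates=["timestamp"])

spans = log.groupby("case_id")["timestamp"].agg(first="min", last="max")

# This mirrors what the process mining tool reports: the time between the
# very first and the very last event of each case, regardless of whether
# the case is actually finished.
spans["apparent_duration"] = spans["last"] - spans["first"]

# For a case that is still open, the time elapsed since its first event,
# measured against the end of the data set, is a better lower bound for
# the true throughput time.
dataset_end = log["timestamp"].max()
spans["age_in_data"] = dataset_end - spans["first"]
```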

As a result, the case durations of incomplete cases appear shorter in the process mining tool than the actual throughput times of the cases they represent. Let’s take a look at another example case in the process to understand what this means (see Figure 3). Case72 seems to be very fast: There were just two steps in the process so far (‘Order created’ and ‘Missing documents requested’), and it took just 3 minutes.

However, ‘Missing documents requested’ is not the actual end point of this process (we are just in an intermediate state, waiting for the customer to send us some additional information). When we look at where this case sits on the timeline of the data set, we can see that it has been open for more than 1 month. So, the true throughput time of this case (so far) is at least 1 month and 3 minutes!

Figure 3: Incomplete cases can appear much faster than they really are.

If you simply leave incomplete cases in your data set, then calculations like the average or median case duration in the statistics view of your process are skewed by these artificially short durations. So, incomplete cases influence not only the process map and the variants but also your performance measurements.

Therefore, you need to investigate incomplete cases in your data before you start with your actual analysis. You want to understand what kind of incomplete cases you have and how many there are. Then, you want to remove them from your data set before you analyze your process in more detail. You can do all this right in Disco and in the remainder of this series we will show you how to do it.
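The remainder of this series shows how to do this with Disco’s filters. If you would like a quick pre-check in a script first, a hedged sketch could look like this (the ‘Order completed’ end point is taken from the refund example above; use the expected end activities of your own process):

```python
import pandas as pd

log = pd.read_csv("eventlog.csv", parse_dates=["timestamp"])

# Which activity was recorded last in each case? The dashed lines into the
# endpoint symbol in Figure 1 show exactly these counts.
last_activity = (log.sort_values("timestamp")
                    .groupby("case_id")["activity"]
                    .last())
print(last_activity.value_counts())

# Keep only the cases that end at an expected end point.
expected_ends = {"Order completed"}
complete_ids = last_activity[last_activity.isin(expected_ends)].index
complete_log = log[log["case_id"].isin(complete_ids)]
```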

Finally, some data sets may be extracted in such a way that there are no incomplete cases in it. For example, you may have received a data set from your IT department that only contains closed orders. So, any orders that are still open do not show up in your data.

In this situation, you don’t need to remove incomplete cases anymore. However, you should realize that you have no visibility into how representative your data set is with respect to the whole population of orders: Understanding how many cases were left out of your data set is just as important as understanding how many cases remain after removing incomplete ones. Be aware of this limitation and consider requesting the set of open cases from the same period in addition to your current data set, so that you can check them and make sure you get the full picture.

Recap of Process Mining Camp 2017

With more than 220 campers from 24 countries across the world, Process Mining Camp 2017 was filled to the brim. The atmosphere was amazing. It is only once a year that you can meet so many other process mining enthusiasts to talk shop and learn from them about their experiences.

Opening Keynote

Anne Rozinat, co-founder of Fluxicon, opened this year’s camp by celebrating the 5th anniversary of Disco. The recently launched Disco 2.0 introduces TimeWarp, one of the most frequently requested features of all time. With TimeWarp, it is now possible to exclude non-working days (like weekends and holidays) as well as non-working hours from your process mining analysis. Take a look at this video to learn how TimeWarp works and how easy it is to make your performance analyses even more precise.

In these 5 years, we have made a lot of friends in the process mining community. From every conversation, we learn something. And we understand that process mining is not just a tool: It is a discipline that takes practice to master. Therefore, we are happy to collaborate with more than 475 academic partners to educate future process miners. And we are putting a lot of work into sharing our knowledge and helping you all become the best process miners in the world. As a surprise to the campers, we were proud to announce the online pre-release of our new book Process Mining in Practice at processminingbook.com.

Remco Bunder & Jacco Vogelsang – Dutch Railways

Remco Bunder and Jacco Vogelsang from the NS (Dutch Railways) kicked off with the first talk of the day. Their journey with process mining started exactly one year ago. As visitors, they were inspired by what they saw at Process Mining Camp 2016. Back at their desks, they started to experiment with process mining by analyzing all the datasets they could get their hands on. Using process mining, they were able to show that it would save a lot of time and effort to wait a few more days before emptying abandoned station lockers. They also noticed that some of the OV bikes that were reported as stolen were actually not stolen at all. These experiments formed the basis for the inspiration and engagement of their colleagues, which resulted in new initiatives and projects that are being launched right now.

Sebastiaan van Rijsbergen – Nationale Nederlanden

Sebastiaan van Rijsbergen was the second speaker of the day. He recognizes the challenges of introducing something as innovative as process mining within an organization. He was very excited when he started with his first process mining project at Nationale Nederlanden. But once he started to share concrete results, he noticed that politics entered the arena very quickly. He got pushback because his results were not always aligned with the viewpoints of all stakeholders. For example, for one process the operational teams experienced a lot of variation — while IT was managing a Straight Through Process. With process mining, it was ultimately possible to get a deeper understanding of how the process was actually working and to take both perspectives into account. In fact, it turned out that they were both right! And focusing on the facts actually brought some peace into the discussion that was not there before.

Wilco Brouwers & Dave Jansen – CZ

Wilco Brouwers and Dave Jansen, from the health insurance company CZ, shared their process mining experience as IT auditors. They see that digital transformation is slowly impacting their work as auditors. They believe that IT skills will become increasingly important for future IT auditors — not only to be more efficient, but also to be more effective. As the frontrunners within their team they have developed a new approach for auditing their digital processes of the future. Process mining plays an important role in this new auditing approach. With concrete examples, they showed where they see differences compared to the traditional approach in the preparation, fieldwork, reporting, and follow-up steps in their audits.

Gijs Jansen – Essent

Gijs Jansen, business intelligence specialist at energy supplier Essent, was the fourth speaker of the day. A few years ago, he was asked by the business manager to create a snake plot and to calculate the ping-pong factor. Too proud to admit that he had no idea what they were talking about, he started to investigate. He became aware that the existing reporting didn’t answer detailed questions about the processes, for example: Why are we losing so much money in the payment collection process? With process mining, he was able to show that the termination of contracts took too long. By visualizing the problem, he was able to engage the teams to dive into the bottlenecks and understand the actual root causes. He learned that with reporting you can get to a certain level, but the visualizations of process mining in combination with domain knowledge are extremely powerful. In the end, process mining proved to be so much more meaningful than just a snake plot and a ping-pong factor.

Roel Blankers & Wesley Wiertz – VGZ

The fifth speakers of the day, Roel Blankers and Wesley Wiertz, showed how they speed up continuous improvement at healthcare insurer VGZ with process mining. They are able to solve operational problems much quicker by combining Lean tools with process mining. Using process mining, they were able to visualize the flow of the dental care process within weeks. This pointed them directly to the bottlenecks, and it showed them that there were long waiting times when the work was handed over from medical advisors to experts and vice versa. By applying traditional Lean tools, such as 5x Why, they were able to pinpoint the actual root causes. In this way, they were able to reduce the throughput time by 40%. Medical advisors and experts now work much closer together. Especially the ability to track and evaluate this behavioral change makes process mining a very powerful tool for Lean experts to check the effect of their changes.

Mick Langeberg – Veco

Mick Langeberg, supply chain manager at Veco, has experienced that process mining is very useful for Lean Six Sigma practitioners. At Process Mining Camp 2015, Veco had already shown how they were able to reduce the production lead time from 10 weeks to 2 weeks. But they didn’t stand still and continued to find new opportunities. By extending the data to include the customer touchpoints, they were able to visualize the journey of the customer. Looking into the visualization, they discovered a new product development process. In this process, instead of only producing a sample, pieces needed to be designed, produced, and delivered quickly. By shifting priorities, Veco was able to produce customer samples quicker without impacting the regular production lead times. This allows Veco to grow their business while keeping up the delivery performance for their existing customers.

Process Miner of the Year 2017

At the end of the first day, Carmen Lasa Gómez (right on the photo) from Telefónica was announced as the Process Miner of the Year 2017. Together with process owner Aranzazu García Velazquez (left on the photo) they received the prize and presented how they discovered operational drifts in their IT service management processes with process mining. We will share their winning contribution with you in more detail in an upcoming, dedicated article.

Second Day: Workshops

On the second day of camp, 108 process mining enthusiasts joined one of the four workshops. Joris Keizers, Process Miner of the Year 2016, facilitated a workshop on understanding the impact of data quality and how the tools of Six Sigma can help. Mieke Jans, assistant professor at Hasselt University, guided the participants through seven steps to create an event log from raw database tables. Rudi Niks led a discussion of which combination of skills and characteristics makes a process miner successful. Anne Rozinat showed participants how to answer 20 typical process mining questions.

We would like to thank everyone for the wonderful time at camp, and we hope to see you again next year!

Sign up for the camp mailing list to be notified about next year’s camp and to receive the video recordings from this year.

Disco 2.0

Software Update

It is our pleasure to announce the immediate release of Disco 2.0!

There are many changes and improvements in this release, most of which were informed by your suggestions and feedback. But the marquee feature of Disco 2.0 is TimeWarp, which allows you to incorporate business days and working hours into your process mining analysis.

TimeWarp

Being able to specify working days and working hours must be one of the most frequently requested features that we have received for Disco so far. With Disco 2.0, we now make it possible to include working days and working hours into your process mining analysis in the most humane way. We are super excited about TimeWarp, and we can’t wait to hear about what you will do with it!

Disco will automatically download and install this update the next time you run it, if you are connected to the internet. If you are using Disco offline, you can download and run the updated installer packages manually from fluxicon.com/disco.

To make yourself familiar with the TimeWarp functionality in Disco 2.0, you can watch the short video above. Please keep reading if you want the full details of what we think is a great update to the most popular process mining tool in the world.

The Trouble with Time

Support for business hours and holidays in Disco has been one of the most frequent requests we get from our customers. With TimeWarp, we think we have finally come up with the perfect solution to a tricky problem.

Unfortunately, time is a very human and, thus, kind of a messy construct. We have daylight saving time in many parts of the world, and everywhere it is handled in a different manner. There are leap years, and leap seconds, synchronizing our “official” notion of time with its astronomical references. And, to top it off, we have widely differing ideas about which days of the week, and which days of the year, are supposed to be “work days”, and when the office stays closed.

This means that a simple question like “How much time passed between 12 February and 4 November?” can have very different answers, depending on the year and the location in question. And if you would like the answer in business hours, it gets even more complicated. If you need the precise duration in every case, you have to consider every exception and edge case, which can become very computationally expensive and slow at scale.
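To make this concrete, here is a minimal sketch of such a precise-but-naive calculation: walk through every calendar day in the interval and clip it to the office hours. The 9-to-5, Monday-to-Friday schedule and the empty holiday set are illustrative assumptions, and this is of course not how TimeWarp is implemented internally:

```python
from datetime import date, datetime, time, timedelta

WORK_START, WORK_END = time(9, 0), time(17, 0)  # assumed office hours
HOLIDAYS = set()                                # e.g. {date(2012, 4, 9)}

def business_seconds(start: datetime, end: datetime) -> float:
    """Precise but naive: visit every day in the span and clip it to the
    office hours. Runtime grows with the length of the span, which is why
    this approach becomes slow when millions of durations are needed."""
    total = 0.0
    day = start.date()
    while day <= end.date():
        if day.weekday() < 5 and day not in HOLIDAYS:  # Mon-Fri, no holiday
            opens = datetime.combine(day, WORK_START)
            closes = datetime.combine(day, WORK_END)
            lo, hi = max(start, opens), min(end, closes)
            if hi > lo:
                total += (hi - lo).total_seconds()
        day += timedelta(days=1)
    return total

# A span across one weekend: Friday 4pm to Monday 10am counts as 2 hours.
print(business_seconds(datetime(2012, 2, 10, 16), datetime(2012, 2, 13, 10)) / 3600)
```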

In Disco, we calculate a lot of durations for many purposes. They are the basis for the excellent performance analysis capabilities Disco provides, and they power many more features, like our best-of-breed mining algorithm. Since many of our customers use Disco with huge amounts of data, using a very precise but slow method of calculating durations is out of the question. A naive but precise measurement method could have meant that a one-minute analysis turns into half an hour or more.

On the other hand, we really do want absolute precision for duration measurements. If you have only Monday through Friday as working days, simply multiplying every duration by 5/7 would be pretty fast, but it is also quite useless if you want to precisely measure SLAs.

With TimeWarp, we have found a way to square that circle. The duration measurement engine in TimeWarp is precise to the millisecond, while at the same time it is blazingly fast. There is no need for you as a user to trade off precision against performance, because you can truly have it all. This means that you can now perform business-hours-aware performance analyses with Disco on huge data sets, with negligible impact on performance. We think you are going to love TimeWarp, as it keeps up the Disco tradition of providing scientifically accurate results, reliably and at record speed.

The Limitation of Calendar Days

When you look at Service Level Agreements (SLAs) in your organization, then you will see that many of them recognize that there are certain days on which people don’t work.

It would not be fair to consider a customer request that was initiated on Friday and answered on Monday in the same way compared to one that was raised on Monday and answered on Thursday. People recognize that their banks, insurance companies, municipalities, and other organizations have weekends, too. So, the weekends should not “count”.

But process mining evaluates the timestamps in your data set and, naturally, uses these timestamps to calculate all the performance metrics like case durations, waiting times, and other process-specific KPIs in calendar days.

For example, let’s look at the following credit application process. The internal SLA for the operational unit is 3 business days. This means that the time between the ‘Credit check’ activity and the outcome (which can be ‘Approved’ or ‘Rejected’, or ‘Canceled’ if the application was withdrawn by the customer) should not be longer than 3 business days.

We can add a Performance filter to check this SLA in our process mining analysis (see below).

Performance filter in Disco

When we look at the result of the Performance filter, it appears as if 53% of all cases lie outside of our SLA (see below).

The result of the SLA Analysis is given in calendar days

But these 53% are based on measuring the case durations in calendar days, while the SLA that we want to check is 3 business days. This is a big problem, because there are cases that actually meet the SLA in terms of business days but show up in the 53% simply because there was a weekend in between. So, the true number of cases that meet the SLA is unknown.

SLAs are not only internal guidelines. For example, outsourced processes are managed through contracts that include one or more SLAs. There may be financial penalties and the right to terminate the contract if any of the SLA metrics are consistently missed. However, you cannot fully analyze a process with contractual SLAs in business days if all you can measure are calendar days.

The trouble with measuring business days in process mining is that there is no easy way to work around this problem. You can’t change the timestamps, because the timestamps indicate when something truly happened.

You can calculate the business days outside of the process mining tool (typically, this involves programming). But you can do this only for one specific pair of timestamps, fixed in advance, from which to which the time should be measured. However, the power of process mining comes from the ability to take different perspectives, and to leave out activities so that you can flexibly focus on the process steps that you are interested in. You completely lose that flexibility if you pre-calculate working days in your source data set.
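For illustration, this is what such a pre-calculation could look like with NumPy, for one fixed pair of timestamps per case (say, from ‘Credit check’ to the outcome). The dates are made up, and np.busday_count counts whole business days in the half-open interval from start to end:

```python
import numpy as np

starts = np.array(["2012-02-06", "2012-02-10"], dtype="datetime64[D]")  # Mon, Fri
ends = np.array(["2012-02-08", "2012-02-14"], dtype="datetime64[D]")    # Wed, Tue

# Mon-Fri work week by default; the weekend inside the second pair is skipped.
print(np.busday_count(starts, ends))                           # [2 2]

# Holidays can be excluded in the same way:
print(np.busday_count(starts, ends, holidays=["2012-02-07"]))  # [1 2]
```

The moment you want to measure between a different pair of activities, you have to go back to this script — which is exactly the loss of flexibility described above.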

So, what we have done with TimeWarp is to bring the ability to analyze your process based on business days and working hours right into Disco.

Let’s take a look at how this works!

Removing Weekends

To analyze the credit application process from above in business days, we need to remove the weekends.

To do this, you can click on the new TimeWarp symbol in the lower left corner (see below).

Add TimeWarp

You are then brought into the TimeWarp settings screen, where you can enable TimeWarp (see below).

Enable TimeWarp to analyze business days

As soon as you have enabled TimeWarp, you will see the calendar view of a week — from Monday on the left to Sunday on the right. TimeWarp pre-fills the week days with a green working time period from 8am until 6pm and indicates Saturday and Sunday as closed. But you can change the TimeWarp settings to match your own working day requirements.

For example, to analyze the credit application process based on business days, all we want to do for now is to remove the weekends. As for the week days, we want to fully count them. So, we adjust the week day periods that should be counted by TimeWarp to stretch the whole day from midnight to midnight.

To adjust all week day periods at once, you can click and move the Monday timeframe. All the other week days will be adjusted accordingly (see below).

Change the boundaries of all working days at once by pulling on Monday

Now, we want to save this as a new analysis in our project, so that we can compare the outcome to the previous analysis. We click the ‘Copy and apply’ button and give the new analysis a short name that indicates that we are now measuring the SLA based on business days (see below).

Save your TimeWarp data set as a copy to compare with the calendar day analysis

After pressing the ‘Create’ button, we can now see that not 53% but just 41% of the cases are outside the SLA if we remove the weekends from our analysis!

The result of the SLA analysis is now given in business days

This is great, because we now have the true number for the SLA measurement. Furthermore, every case in our analysis result is truly in violation of the SLA, so the information that we provide to the process owner will be more actionable for them in their root cause analysis.

Removing Holidays

In fact, we need to do one more thing if we want to be precise: There are not only weekends but also public holidays on which people don’t work. These holidays should also not be counted in our SLA measurement.

We can easily add a holiday specification to our TimeWarp settings in the following way.

We click on the TimeWarp symbol in the lower left corner again and then press the ‘Bank holidays’ button in the lower right. A list of countries is displayed, and we choose the Netherlands as the country from which we want to add the holiday specifications (see below).

Select the country from which you want to add the holidays

After we have pressed the ‘Select’ button, all holidays in the time period covered by our data set are automatically added to the list of holidays on the right. The data set that we are analyzing covers the credit application process from February 2012 until June 2012. So, we can see that holidays in this period, such as the Easter holidays, have been added automatically (see below).

Holidays that fall into the timeframe of your data set will be added

If your organization has additional days off that should not be counted, or if some of the public holidays in your region are actually working days for your organization, you can also manually add and remove holidays right there. But the pre-populated list is a great start.

After clicking the ‘Apply settings’ button, we can see that, by removing the holidays from the business-day measurements, the number of cases that lie outside the SLA of 3 business days drops to 40%. That’s a big difference from the 53% of the initial calendar-day measurement!

As a result, not only weekends but also holidays are removed from your SLA calculations

Analyzing Working Hours

Sometimes, you not only want to remove weekends and holidays, but you also want to take the actual working hours into account.

For example, in the front office part of the credit application process, customers can submit applications online and through the phone, and the ambition for the bank is to provide a fast initial response to the customer.

If we look at the durations in the process map, then we can see that it takes 29.3 hours on average between the call and the pre-approval (see below). The SLA for this part of the process is 8 working hours. However, like in the example above, the durations calculated by the process mining tool are based on calendar days.

To take the working hours of the front office team into account, we enable TimeWarp for the data set. The front office team in the callcenter works from 7am to 9pm on weekdays and from 8am to 6pm on Saturdays. In the default settings, Saturdays are closed. But you can click on the ‘Closed’ badge at the top to include a weekend day as a working day (see below).

We then set up the timetable to reflect the working hours of the callcenter team in the front office on the different days of the week (see below).

We can see that the average durations in the process map have changed (see below). Instead of 29.3 hours it just takes 14.2 hours on average between the call and the pre-approval. The average times have changed, because times between shifts (for example, between 9pm of a weekday and 7am of the next weekday) are not counted.
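If you wanted to reproduce such a working-hours measurement outside of Disco, you could extend the business_seconds() idea from the sketch above with a per-weekday schedule. Again, this is a hedged sketch: the schedule encodes the callcenter hours from this example, while the activity names (‘Call’, ‘Pre-approval’) and the column names are assumptions about the data set:

```python
from datetime import datetime, time, timedelta
import pandas as pd

# 7am-9pm on weekdays (0-4), 8am-6pm on Saturday (5); Sunday is closed.
SCHEDULE = {d: (time(7, 0), time(21, 0)) for d in range(5)}
SCHEDULE[5] = (time(8, 0), time(18, 0))

def working_seconds(start: datetime, end: datetime) -> float:
    # Same day-by-day clipping idea as business_seconds() above, but with
    # per-weekday opening hours instead of a fixed 9-to-5 week.
    total, day = 0.0, start.date()
    while day <= end.date():
        hours = SCHEDULE.get(day.weekday())
        if hours:
            lo = max(start, datetime.combine(day, hours[0]))
            hi = min(end, datetime.combine(day, hours[1]))
            if hi > lo:
                total += (hi - lo).total_seconds()
        day += timedelta(days=1)
    return total

log = pd.read_csv("eventlog.csv", parse_dates=["timestamp"]).sort_values("timestamp")

durations = []
for case_id, events in log.groupby("case_id"):
    calls = events.loc[events["activity"] == "Call", "timestamp"]
    if calls.empty:
        continue
    start = calls.iloc[0]
    pres = events.loc[events["activity"] == "Pre-approval", "timestamp"]
    follow = pres[pres > start]
    if follow.empty:
        continue
    end = follow.iloc[0]
    wall = (end - start).total_seconds() / 3600
    work = working_seconds(start.to_pydatetime(), end.to_pydatetime()) / 3600
    durations.append((wall, work))

if durations:
    print(f"average wall-clock hours: {sum(w for w, _ in durations) / len(durations):.1f}")
    print(f"average working hours:    {sum(w for _, w in durations) / len(durations):.1f}")
```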

This is now the right basis for our SLA analysis. To check how many cases take more than 8 working hours between the call and the pre-approval, we can simply click on the path in the process map and use the shortcut ‘Filter this path…’ to add a pre-configured filter (see below).

In the Follower filter that was automatically added to our data set we can add an in-process SLA by enabling the ‘Time between events’ option. We set the filter setting to ‘longer than 8 hours’ to indicate that we are interested in all cases that are not meeting our SLA (see below).

Disco will now automatically take our working hour specification into account to filter the data set based on the 8 working hours SLA. After applying the filter, we can see that 46% of the cases do not meet our 8 working hours SLA (see below).

The working hours that we configured for the different weekdays in TimeWarp are essential to perform this SLA analysis on the right basis. If we removed the TimeWarp settings for this data set again, about 10% more cases would be included in our filter result as false positives.

For example, if a call came in at 8:30pm on a Friday and the offer was ready at 7:30am on Saturday, then without TimeWarp this would be counted as 11 hours (above the 8 hour SLA limit). However, with the right TimeWarp settings enabled, it will be correctly counted as just 1 hour!

Full Transparency for Your Analysis Process

In addition to the new TimeWarp functionality, there are also some changes and additions that will be very useful for all process mining analysts but that are particularly exciting for auditors.

As an auditor, you are required to generate an audit trail for your analysis. This means that there must be a way to fully document all the steps that you have taken, so that other people can follow and repeat your analysis. Since Disco 1.9, auditors already have an audit trail with the Audit report export in Disco. But in Disco 2.0 we take this traceability one step further.

In addition to saving your project files with the full Disco workspace, you can now fully document how you arrived at your analysis result, from the source data to the end result.

1. Import configuration

When you import your data set into Disco, you can choose a process perspective. And in many situations, you will actually look at your process from different angles.

To keep track of the perspective that you have chosen during the import, you previously had to manually document which columns were configured as Case ID, Activity, Timestamp, etc. Disco 2.0 now does this for you by adding the import configuration settings to the ‘Notes’ section of your data set (see below).

2. Permanent filters

When you clean your data set of data quality issues, or focus on a part of the process as a new baseline, you use the ‘Apply filters permanently’ option in the ‘Copy and filter’ settings (or in the ‘Copy’ settings of your data set). As a result, all the filters are applied, but the outcome of the filtering step becomes available as a clean, new data set and the percentage is reset to 100%.

However, sometimes it is important to keep track of which filters were previously applied in a permanent way to keep the full visibility of how you arrived at a certain analysis from the source data.

Disco 2.0 now adds the summary of the permanently applied filter settings to the end of your ‘Notes’ section in the data set as well (see below). The notes are also included in the audit trail export, so that you have all the steps from your import settings and all the filter steps documented along with any personal notes that you add during your analysis there.

3. Empty data sets as first-class citizens

Sometimes, the data set result after applying a filter configuration will be empty. And this can be a good thing. For example, if you are checking a Segregation of Duty rule for your process, then it is good if such a violation never occurred!

You could already export the audit report for empty filter results before, but now Disco will keep the data set in your workspace along with all other analyses. This way, you can document all your analyses in one place – even if some of them resulted in an empty data set (see below).

4. Export (or delete) multiple data sets at once

When you wanted to document your analyses outside of Disco, you previously had to export the results for every analysis separately.

With Disco 2.0 you can now select multiple data sets and export, for example, all the PDF process maps, or the audit reports, for all the selected data sets at once (see below). This can also be handy if you want to clean up your workspace and want to delete multiple data sets that you don’t need anymore. They can now be all deleted with one click.

Filter Variants and Cases from the Cases View

Finally, people love the interactivity of Disco and the many short-cuts from the process map and statistics view that make your analysis so fast and productive (see this article on the Disco 1.9 release for an overview of the most important short-cuts).

We frequently heard from you that you would like to have these short-cuts also in the Cases view to quickly filter variants and cases right from there. Disco 2.0 now makes this possible. Simply right-click on the variant, or the case, that you want to filter for and use the short-cut (see below).

Other changes

The Disco 2.0 update also includes a number of other features and bug fixes, which improve the functionality, reliability, and performance of Disco. Please find a list of the most important further changes below.

Finally, we would like to thank all of you for using Disco! Your continued feedback is a major reason why Disco is the best, the fastest, and the most stable process mining tool there is. Please keep sending us your feedback, and help us make Disco even better!

Wil van der Aalst at Process Mining Camp 2016

All tickets for Process Mining Camp on 29 & 30 June are gone! You can get on the waiting list to be notified if a spot becomes available here. If you can’t make it this year but would like to receive the presentations and video recordings afterwards, you can sign up for the camp mailing list here.

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data, Giancarlo Lepore from Zimmer Biomet, Paul Kooij from Zig Websoftware, Carmen Lasa Gómez from Telefónica, Marc Gittler & Patrick Greifzu from DHL Group, Lucy Brand-Wesselink from ALFAM, and Abs Amiri from SPARQ Solutions.

The last speaker at Process Mining Camp 2016 was Prof. Wil van der Aalst from Eindhoven University of Technology. As we have seen in the previous talks, data science, and specifically process mining, can create enormous value. But with great power comes great responsibility. Without proper care, the results of a data analysis could negatively impact citizens, patients, customers, and employees. This often creates resistance towards these kinds of technologies (for example, laws that forbid using data in a certain way).

As data science professionals, it is our responsibility to be aware of these new challenges. For example, systematic discrimination based on data, invasion of privacy, non-transparent life-changing decisions, and inaccurate conclusions may lead to new forms of “pollution”. “Green Data Science” is a new data science area that enables individuals, organizations, and society to reap the benefits of the widespread availability of data while ensuring fairness, confidentiality, accuracy, and transparency.

Do you want to apply these principles and be a responsible process miner? Watch Wil’s talk now!

Process Mining at Sparq — Process Mining Camp 2016

There are fewer than a handful of tickets left for 29 June, the main event of Process Mining Camp (see an overview of the speakers here). So, if you are planning to come, be quick and get your ticket now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data, Giancarlo Lepore from Zimmer Biomet, Paul Kooij from Zig Websoftware, Carmen Lasa Gómez from Telefónica, Marc Gittler & Patrick Greifzu from DHL Group, and Lucy Brand-Wesselink from ALFAM.

The seventh speaker at Process Mining Camp 2016 was Abs Amiri from SPARQ Solutions. They provide Information and Communications Technology services to government-owned suppliers like Energex and Ergon Energy in Queensland, Australia. Due to price pressure, there has been an increased need to cut costs and become more efficient.

As a senior analyst, programmer and data science lead, Abs developed new and innovative ways to help Energex and Ergon Energy improve their operations. His approach started with involving leadership to formulate the right questions. Then, he used his data expertise to wrangle the data into a format that was ready for process mining. He continued to validate the data and involved people with domain knowledge. Eventually, he used process mining to understand how the processes actually worked and identified the factors that were causing variation and bottlenecks. By analysing the end-to-end processes with process mining, he was able to create significant benefits for the call dispatching processes.

Do you want to learn how to bridge the gap between business and IT, so that they work closely together to accomplish a common goal? Watch Abs’ talk now!

———

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at ALFAM — Process Mining Camp 2016

There are just a few tickets left for 29 June, the main event of Process Mining Camp. See who is speaking this year here and get your ticket now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data, Giancarlo Lepore from Zimmer Biomet, Paul Kooij from Zig Websoftware, Carmen Lasa Gómez from Telefónica, and Marc Gittler & Patrick Greifzu from DHL Group.

The sixth speaker at Process Mining Camp 2016 was Lucy Brand-Wesselink from ALFAM, a subsidiary of ABN AMRO specializing in consumer credits. One of the challenges of ALFAM is to become more efficient when processing customer loan applications.

As a process manager in the Business Operating Office, Lucy knows that the automation of the loan application process should make the process more efficient. But in practice, she sees that departments and teams have been struggling to meet the expectations of continued growth.

When reviewing the process with process mining, Lucy became aware that the real process was different than she thought. She expected a straight-through process, but instead was looking at a process with a lot of variation. Only 45% of the cases were processed completely automatically and the manual steps took 120 minutes per case longer than expected. Diving deeper into the data revealed the rework in the process.

Do you want to know which actions ALFAM took based on their insights from process mining? Watch Lucy’s talk now!

———

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at DHL — Process Mining Camp 2016

There are still a few tickets left for 29 June, the main event of Process Mining Camp. See who is speaking this year here and get your ticket now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data, Giancarlo Lepore from Zimmer Biomet, Paul Kooij from Zig Websoftware, and Carmen Lasa Gómez from Telefónica.

The fifth speakers at Process Mining Camp 2016 were Marc Gittler & Patrick Greifzu from DHL Group, Germany. Marc and Patrick are a Senior Audit Manager and an Audit Manager in the Corporate Internal Audit team. Their view is that, due to the increasing amount of data and growing process complexity, a sample-based testing approach is no longer adequate.

They not only analyzed the efficiency of the parcel delivery process based on hundreds of millions of events, but they also used process mining to analyze the quality of their own audit process.

Do you want to learn more about how DHL has reduced their audit time by 25% in comparison to classical data analytics? Watch Marc’s and Patrick’s talk now!

———

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at Telefónica — Process Mining Camp 2016

There are still a few tickets left for 29 June, the main event of Process Mining Camp. See who is speaking this year here and get your ticket now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data, Giancarlo Lepore from Zimmer Biomet, and Paul Kooij from Zig Websoftware.

The fourth speaker at Process Mining Camp 2016 was Carmen Lasa Gómez from Telefónica, Spain. Carmen’s team defines the analytics strategy of the ‘Delivery Operations & Deployment’ area. They also provide analytic capabilities to execute the defined strategy and advice to other units in the company.

She analyzed incidents and work orders (planned disruptions to install new software releases, bug fixes, or new equipment) from different service areas. One of the problems they found was that a high percentage of work orders were performed outside the scheduled window. Thanks to their analysis, the percentage of out-of-window work orders could be decreased from 62% in April 2015 to 5% in April 2016.

Service managers from different areas are now asking Carmen’s group to analyse their services as well. One service manager, Sara Gómez Iglesias, concluded Carmen’s talk by sharing her experience with a recent process mining project. Do you want to know more about what Telefónica has achieved with process mining? Watch Carmen’s talk now!

———

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at Zig Websoftware — Process Mining Camp 2016

Registrations for this year’s Process Mining Camp are going fast; more than 150 tickets are already gone. Make sure to reserve your seat now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data and Giancarlo Lepore from Zimmer Biomet.

The third speaker at Process Mining Camp 2016 was Paul Kooij from Zig Websoftware. Zig Websoftware creates process management software for housing associations. At camp, Paul showed how they could help their customer WoonFriesland to improve the housing allocation process by analyzing the data from Zig’s platform. Every day that a rental property is vacant costs the housing association money. But why does it take so long to find new tenants? For WoonFriesland this was a black box. Paul explains how he used process mining to uncover hidden opportunities to reduce the vacancy time significantly.

Do you want to know how Paul managed to reduce WoonFriesland’s vacancy time by 3,500 days within the first six months? Watch Paul’s talk now!

———

Have you completed a successful process mining project in the past months that you are really proud of? Send it to us and take your chance to receive the Process Miner of the Year award at this year’s camp!

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at Zimmer Biomet — Process Mining Camp 2016

Registration for this year’s Process Mining Camp on 29 June is now open and already more than 100 tickets are gone. Make sure to reserve your seat now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. The first talk was given by Jan Vermeulen from Dimension Data, South Africa. If you have not seen Jan’s talk yet, you can watch it here.

The second speaker at Process Mining Camp 2016 was Giancarlo Lepore from Zimmer Biomet, Switzerland. Zimmer Biomet produces orthopaedic products (for example, hip replacements) and one of the challenges is that each of the products has many variations that require customizations in the production process.

Giancarlo is a business analyst in Zimmer Biomet’s operational intelligence team. He has introduced process mining to analyse the material flow in their production process. In his talk, he explains why it is difficult to analyse the production process with traditional Lean Six Sigma tools, such as spaghetti diagrams and value stream mapping. He compares process mining to these traditional process analysis methods and also shows how they were able to resolve data quality problems in the master data management of their ERP system.

Do you want to know what process mining can do in a high-variation production environment? Watch Giancarlo’s talk now!

———

Have you completed a successful process mining project in the past months that you are really proud of? Send it to us and take your chance to receive the Process Miner of the Year award at this year’s camp!

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.
