You are reading Flux Capacitor, the company weblog of Fluxicon.
Here, we write about process intelligence, development, design, and everything that scratches our itch. Hope you like it!


Process Mining at DHL — Process Mining Camp 2016

There are still a few tickets left for 29 June, the main event of Process Mining Camp. See who is speaking this year here and get your ticket now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data, Giancarlo Lepore from Zimmer Biomet, Paul Kooij from Zig Websoftware, and Carmen Lasa Gómez from Telefónica.

The fifth speakers at Process Mining Camp 2016 were Marc Gittler & Patrick Greifzu from DHL Group, Germany. Marc and Patrick are a Senior Audit Manager and an Audit Manager in the Corporate Internal Audit team. In their view, the increasing amount of data and the growing process complexity mean that a sample-based testing approach is no longer adequate.

They not only analyzed the efficiency of the parcel delivery process based on hundreds of millions of events, but also used process mining to analyze the quality of their own audit process.

Do you want to learn more about how DHL has reduced their audit time by 25% in comparison to classical data analytics? Watch Marc’s and Patrick’s talk now!

———

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at Telefónica — Process Mining Camp 2016

There are still a few tickets left for 29 June, the main event of Process Mining Camp. See who is speaking this year here and get your ticket now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data, Giancarlo Lepore from Zimmer Biomet, and Paul Kooij from Zig Websoftware.

The fourth speaker at Process Mining Camp 2016 was Carmen Lasa Gómez from Telefónica, Spain. Carmen’s team defines the analytics strategy of the ‘Delivery Operations & Deployment’ area. They also provide the analytic capabilities to execute this strategy and advise other units in the company.

She analyzed incidents and work orders (planned disruptions to install new software releases, bug fixes, or new equipment) from different service areas. One of the problems they found was that a high percentage of work orders was performed outside the scheduled window. Thanks to their analysis, the percentage of work orders that were out of window could be decreased from 62% in April 2015 to 5% in April 2016.

Service managers from different areas are now asking Carmen’s group to analyse their services as well. One service manager, Sara Gómez Iglesias, concluded Carmen’s talk by sharing her experience with a recent process mining project. Do you want to know more about what Telefónica has achieved with process mining? Watch Carmen’s talk now!

———

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at Zig Websoftware — Process Mining Camp 2016

Registrations for this year’s Process Mining Camp are going fast: more than 150 tickets are already gone. Make sure to reserve your seat now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. If you have missed them before, check out the videos of Jan Vermeulen from Dimension Data and Giancarlo Lepore from Zimmer Biomet.

The third speaker at Process Mining Camp 2016 was Paul Kooij from Zig Websoftware. Zig Websoftware creates process management software for housing associations. At camp, Paul showed how they could help their customer WoonFriesland to improve the housing allocation process by analyzing the data from Zig’s platform. Every day that a rental property is vacant costs the housing association money. But why does it take so long to find new tenants? For WoonFriesland this was a black box. Paul explains how he used process mining to uncover hidden opportunities to reduce the vacancy time significantly.

Do you want to know how Paul managed to reduce WoonFriesland’s vacancy time by 3,500 days within the first six months? Watch Paul’s talk now!

———

Have you completed a successful process mining project in the past months that you are really proud of? Send it to us and take your chance to receive the Process Miner of the Year award at this year’s camp!

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at Zimmer Biomet — Process Mining Camp 2016

Registration for this year’s Process Mining Camp on 29 June is now open, and more than 100 tickets are already gone. Make sure to reserve your seat now!

To get us all into the proper camp spirit, we have started to release the videos from last year’s camp. The first talk was given by Jan Vermeulen from Dimension Data, South Africa. If you have not seen Jan’s talk yet, you can watch it here.

The second speaker at Process Mining Camp 2016 was Giancarlo Lepore from Zimmer Biomet, Switzerland. Zimmer Biomet produces orthopaedic products (for example, hip replacements) and one of the challenges is that each of the products has many variations that require customizations in the production process.

Giancarlo is a business analyst in Zimmer Biomet’s operational intelligence team. He has introduced process mining to analyse the material flow in their production process. In his talk, he explains why it is difficult to analyse the production process with traditional lean six sigma tools, such as spaghetti diagrams and value stream mapping. He compares process mining to these traditional process analysis methods and also shows how they were able to resolve data quality problems in their master data management in the ERP system.

Do you want to know what process mining can do in a high-variation production environment? Watch Giancarlo’s talk now!

———

Have you completed a successful process mining project in the past months that you are really proud of? Send it to us and take your chance to receive the Process Miner of the Year award at this year’s camp!

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining Camp 2017 — Get Your Ticket Now!

On 29 & 30 June, process mining enthusiasts from all around the world will come together for a unique experience for the sixth time. Last year, more than 210 people from 165 companies and 20 different countries came to camp to listen to inspiring talks, share their ideas and experiences, and make new friends in the global process mining community.

For the first time, this year’s Process Mining Camp will run for two days:

Camp Day (29 June)

The first day will be full of inspiring practice talks from different companies, just as you know it from previous camps. The speakers will share all about their successes, the difficulties they faced, and their best tips and tricks.

Tickets for the camp day include lunch, coffee, dinner, and your camp t-shirt.

Workshop Day (30 June)

On the second day, you can choose between four half-day workshops. Here, smaller groups of participants will get the chance to dive into various process mining topics in depth, guided by an experienced expert.

The workshops take place in the morning. All four workshops will run in parallel (so, you need to choose one). The workshop day will be closed with a lunch buffet and coffee, so that you are ready to start your journey back home.

Sign up now!

Don’t wait too long, because seats are limited, especially for the workshops. To avoid disappointment, reserve your seat right away.

We can’t wait to see you in Eindhoven on 29 June!

———

Have you completed a successful process mining project in the past months that you are really proud of? Send it to us and take your chance to receive the Process Miner of the Year award at this year’s camp!

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Process Mining at Dimension Data — Process Mining Camp 2016

Are you getting ready for this year’s Process Mining Camp on 29 and 30 June? We are currently putting the finishing touches on the program and will open registration later this week. Sign up for the Camp mailing list here to be notified as soon as tickets are available!

To get us all into the proper camp spirit, we will be releasing the videos from last year’s camp over the next few weeks. The first speaker at Process Mining Camp 2016 was Jan Vermeulen. As the Global Process Owner at Dimension Data, Jan is responsible for the standardization of the global IT services processes.

At camp, Jan shared his journey of establishing process mining as a methodology to improve process performance and compliance, to grow their business, and to increase the value in their operations. These three pillars form the foundation of Dimension Data’s business case for process mining.

Jan showed examples from each of the three pillars and shared what he learned along the way. The growth pillar is particularly new and interesting, because Dimension Data was able to compete in an RfP process for a new customer by providing a customized offer after analyzing the customer’s data with process mining.

Do you want to build a process mining business case? Watch Jan’s talk here and be inspired!

———

Even if you can’t attend Process Mining Camp this year, you should sign up for the Camp mailing list to receive the presentations and video recordings afterwards.

Have you completed a successful process mining project in the past months that you are really proud of? Send it to us and take your chance to receive the Process Miner of the Year award at this year’s camp!

Data Quality Problems in Process Mining and What To Do About Them — Part 12: Missing History

This is the twelfth article in our series on data quality problems for process mining. You can find an overview of all articles in the series here.

When you get a data set and assess its suitability for process mining, you start by looking for the three essential elements: case ID, activity name, and timestamp.

For example, when you look for the case ID, you start looking at the candidate columns to see whether there are multiple rows in the data set that refer to the same ID (see image below). If you don’t have multiple rows with the same case ID, then most likely the field that you thought could be your case ID is just an event ID and does not help you to correlate the steps that belong to the same process instance.¹
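If you prefer to sanity-check this in a quick script before importing anything, a minimal pandas sketch could look like the one below. The file name and the candidate column names are just placeholders for your own data.

```python
import pandas as pd

# Quick sanity check for a case ID candidate: a real case ID should appear on
# several rows (one row per event), while a plain event ID appears only once.
# The file name and the candidate column names below are assumptions.
df = pd.read_csv("export.csv")

for candidate in ["Case ID", "Event ID", "Ticket Number"]:
    rows_per_value = df.groupby(candidate).size()
    print(candidate, "- average rows per value:", round(rows_per_value.mean(), 2))
```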

When you continue looking for the other fields, it sometimes seems at first as if you have all the fields that you need. But then you find out that you are actually missing the history information in these fields. Read on to learn about four situations where this can happen.

Missing Activity History

When you look for a field that can serve as your activity name, you may encounter a situation like the one shown in the picture below: the status is the same for each event in the case.

In this situation, you do have a column that tells you something about the process step, or the status, for each case. However, you don’t have the historical information about the status changes that happened over time. Often, such a field will contain the information about the current status (or the last activity that happened) for each case. However, this is not enough for process mining, where you do need the historical information on the activities.
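Before going back to IT, you can quickly confirm this symptom yourself. Here is a minimal pandas sketch of the check; the file and column names are assumptions that you would replace with your own.

```python
import pandas as pd

# If the candidate activity/status column has only one distinct value per
# case, it only reflects the current status and the history is missing.
df = pd.read_csv("export.csv")

distinct_per_case = df.groupby("Case ID")["Status"].nunique()
share_with_history = (distinct_per_case > 1).mean()
print(f"Status changes within {share_with_history:.0%} of the cases")
```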

How to fix:

If the activity name or status column never changes over the course of a case, then you cannot use the column as your activity name. You need to go back to the system administrator and ask them whether you can get the historical information on this field.

You can also look for other columns in your data set to see whether they contain information that does change over time (like an organizational unit, so that you can analyze the transfers of work between different units).

Note that even if you do have history information on your activities in the process, you may still be missing information on the activity repetitions.

Missing Timestamp History

The same can happen with the timestamp fields. At first, it might seem as if you had many different timestamp columns in your data set. But does any of them change over time for the same case? Or are they all the same like in the example below?

How to fix:

If your timestamp field never changes over the course of a case, then it is a data field, not the kind of timestamp field that you need for process mining. If you only have timestamp columns that never change, then you don’t have a timestamp column at all.
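The same kind of quick check works for the timestamp candidates. A minimal pandas sketch (again, the file and column names are placeholders for your own data):

```python
import pandas as pd

# A timestamp that is usable for process mining should take different values
# within a case. Scan all candidate timestamp columns at once.
df = pd.read_csv("export.csv", parse_dates=["Created", "Updated", "Closed"])

for col in ["Created", "Updated", "Closed"]:
    varies = df.groupby("Case ID")[col].nunique().gt(1).mean()
    print(f"{col}: changes within {varies:.0%} of the cases")
```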

If your data is sorted in such a way that the events are in the right order, then you can still import the data set into Disco. Even without a timestamp, you can then still analyze the process flow and the variants (based on the sequence information in the imported data set), but you won’t be able to do a performance analysis.

Missing Resource and Attribute History

A similar situation can occur with other data fields, like a resource field or another data attribute. For example, in the data set below, the resource column does not change over the course of the case.

Instead of the person who performed a particular process step, the ‘Resource’ field above could indicate the employee who started the case, the person who is responsible for it, or the person who last performed a step in the process.

The same can happen with a data field, like the ‘Category’ attribute in the example above, where you might know that the field can change over time but in your data set you only see the last value of it.

How to fix:

If you can’t get the historical information on this field, request a data dictionary from the IT administrator to understand the meaning of the field, so that you can interpret it correctly.

Realize that you cannot perform process flow analyses with this attribute (for example, no social network analysis will be possible based on the resource field in the example above). You can still use these fields in your analysis as a case-level attribute.

Missing History for Derived Attributes

Finally, the missing history information on attributes might be even trickier to detect. For example, take a look at the data set below. We see that the registration of the step ‘Shipment via forwarding company’ in case C360 has been performed by a ‘Service Clerk’ role. However, for case C1254 the same step was performed by a ‘Service Manager’ role, which, if we know the process, might strike us as odd.

If we look deeper into the problem, then we find out that the ‘Role’ information was actually extracted from a separate database and linked to our history data set later on. However, the ‘Role’ information that was linked contains the roles of the employees today.

In 2011, when case C1254 was performed, Elvira Lores was still a ‘Service Clerk’. But by 2013, when case C360 was performed, Elvira had become a ‘Service Manager’. However, we can’t see that Elvira performed the step ‘Shipment via forwarding company’ back then in the role of a ‘Service Clerk’, because we only have her current role information!
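To make the pitfall concrete, here is a tiny pandas sketch with made-up data that mirrors the example above. It shows how joining today’s role table onto historical events silently overwrites the role that applied at the time.

```python
import pandas as pd

# Historical event from 2011, performed by Elvira (made-up rows that mirror
# the example in the text).
events = pd.DataFrame({
    "Case ID": ["C1254"],
    "Activity": ["Shipment via forwarding company"],
    "Year": [2011],
    "Resource": ["Elvira Lores"],
})

# Role table extracted from a separate database -- it only knows today's role.
current_roles = pd.DataFrame({
    "Resource": ["Elvira Lores"],
    "Role": ["Service Manager"],  # her role today, not the role she had in 2011
})

print(events.merge(current_roles, on="Resource", how="left"))
# The 2011 event now shows 'Service Manager', although Elvira was still a
# 'Service Clerk' when she performed this step.
```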

How to fix:

As with the other examples above, there is typically not much that you can do about this in the short term. The most important part is that you are aware of this data limitation, so that you can interpret the results correctly.


  1. One exception is when your data is formatted in such a way that the activities are in columns rather than rows. Take a look at the following article to see what you can do in this situation: http://fluxicon.com/blog/2016/10/data-quality-problems-in-process-mining-and-what-to-do-about-them-part-10-missing-timestamps-for-activity-repetitions/.  
Say Hello to Rudi!

Today is a special day for us. We are very excited to introduce you to a new member of the Fluxicon team: Rudi Niks!

Here at Fluxicon, we have tried to stay as small as we can for as long as possible. We value the efficiency of having a small team, which makes it much easier for us to maintain our obsessive focus on quality and the close, personal contact with our customers. However, since our customer base has been growing so much lately, we started thinking about who would be a good fit to join the team.

We immediately thought of Rudi. In addition to his extensive experience with process mining and process improvement work, Rudi shares our values of honesty and quality. He is every bit as much of a process mining enthusiast as we are, and we are very happy that he agreed to join us! Together we will continue to build the best process mining software for professionals, and to support and grow the process mining community worldwide.

But we will let Rudi introduce himself to you in his own words:

My Journey of Becoming a Process Miner

13 years ago, I was one of the early adopters of process mining. I studied Business Information Systems at the Technical University in Eindhoven and we were introduced to this new technique of discovering processes from event data.

Process mining was still in its infancy, and for many of my fellow master students it was a frustrating experience: First of all, good data sets for process mining were hard to come by. Secondly, the early versions of the academic process mining tool ProM had a particularly steep learning curve. And thirdly, ProM would typically freeze up just when you were about to begin your analysis! Christian, then a Ph.D. student in the process mining group, was our instructor and helped this first group of fledgling process miners on their way.

In 2011, I had an appointment at a major Dutch bank in my role as a management consultant. We got ourselves a coffee and found a quiet spot. Frank introduced himself, and the impact of digitization within the bank soon became the topic of conversation. He talked passionately about how these changes had a major impact on the processes of tomorrow. To survive this transition from a traditional bank to a digital bank, Frank wanted to accelerate the change of these processes with … ‘Process Mining’.

I put down my coffee and said: “You want to generate processes based on IT data? That is far too technical and scientific!” Frank laughed, opened his laptop, and gave me a brief demonstration of Disco. Amazed, I watched him create a process map from his data set in seconds, as if by magic, and saw how easy it was to zoom in on all the variations.

A few days later, we were at the table with our first sponsor, the IT Service Manager. His ambition was to improve the services while lowering the costs. There were regular complaints that resolving incidents took too long. They were already working with continuous improvement methodologies, but it remained unclear what the route of an incident was through the different teams.

We analyzed the data and scheduled appointments with the teams that were responsible for handling different types of incidents. In earlier assignments I had realized that most of the improvement suggestions that came out of the workshops with the various departments were based on gut feeling. As a consequence, there was a lot of resistance to improvement. In contrast, the process maps that we obtained with Disco told the story of what really happened with these incidents. The group delved deeper into the picture, uncovered the root cause for why these incidents were taking so long to resolve, and found other problems in the process that we had not even noticed. After 50 minutes, Frank and I walked out of our first meeting and asked ourselves how many steps, and how much time, we could save in the other 500 types of incidents.

Over the following years, I worked on many different process mining projects at different companies. In these projects, I saw first-hand how process mining empowers both the process improvement teams and the people who are responsible for these processes, and how fast you can move based on the new insights. Instead of the six months of a classical process improvement project, with process mining we typically managed to collect the data, analyze it, and implement the improvements within four weeks.

Changing Professions

When I give masterclasses or share my experiences at conferences today, I still see the surprise in the faces of colleagues and managers once I show them how you can magically discover processes based on data with process mining. I see the enthusiasm and ease by which professionals start analyzing their own processes in Disco. And I am continuously surprised by the new applications that they find that I had not thought of myself.

I began to realize that digitalisation not only changes organizations but also changes us as professionals. In an increasingly digital world, processes produce more data, making them more and more traceable. These digital processes are no longer hidden in the minds of people, but stored in databases. And these digital processes are also changing faster and faster.

I believe that process mining is a game changer for extracting real value from these digital processes. As a proud member of the Fluxicon team, I am looking forward to working with and supporting all of you who are taking on these changes and new opportunities in our profession!

Welcome aboard, Rudi!

Become the Process Miner of the Year 2017!


Last year, we introduced the Process Miner of the Year awards to help you showcase your best work and share it with the process mining community.

This year, we will continue the tradition and the best submission will receive the Process Miner of the Year award at this year’s Process Mining Camp, on 29 June in Eindhoven.

Have you completed a successful process mining project in the past months that you are really proud of? A project that went so well, or produced such amazing results, that you cannot stop telling anyone around you about it? You know, the one that propelled process mining to a whole new level in your organization? We are pretty sure that a lot of you are thinking of your favorite project right now, and that you can’t wait to share it.

What we are looking for

We want to highlight process mining initiatives that are inspiring, captivating, and interesting. Projects that demonstrate the power of process mining, and the transformative impact it can have on the way organizations go about their work and get things done.

There are a lot of ways in which a process mining project can tell an inspiring story. To name just a few:

Of course, maybe your favorite project is inspiring and amazing in ways that can’t be captured by the above examples. That’s perfectly fine! If you are convinced that you have done some great work, don’t hesitate: Write it up, and submit it, and take your chance to be the Process Miner of the Year 2017!

How to enter the contest

You can either send us an existing write-up of your project, or you can write about your project from scratch. It is probably better to start from a white page, since we are not looking for a white paper, but rather an inspiring story, in your own words.

In any case, you should download this Word document, which contains some more information on how to get started. You can use it either as a guide, or as a template for writing down your story.

When you are finished, send your submission to info@fluxicon.com no later than 31 May 2017.

We can’t wait to read about your amazing projects!

How to Identify Rework in Your Process

[This article previously appeared in the Process Mining News — Sign up now to receive regular articles about the practical application of process mining.]

If you are involved with process improvement, then reducing rework is most likely one of your concerns.

Two common causes for rework are:

  1. The task was not done right the first time, so someone has to go and do it again.
  2. Information that would have been necessary to work on a case was missing, so it had to be sent back.

Rework is bad because it adds to the workload (and costs) of the company, because it delays the process completion time for the customer, and because — due to the extra effort — it often impacts the completion times for the following cases as well.

Process mining can help you to identify and pinpoint rework patterns in your process. By letting the process mining tool map out your actual process based on the IT data, you will be able to see where rework occurs, how often, and which process categories are affected by it.

Of course, once you have found where and how often rework occurs, you will still have to go and talk to the people in your process to find out why this is happening. But you will be armed with objective information and visual evidence that will be enormously useful to engage the people who are responsible for the process, and to focus the discussion on facts rather than keep arguing about opinions and gut feeling.

How come people don’t know about the rework already? It is normal that not everything goes according to plan all the time. But because each of the employees handles just a few “exceptional” cases, what seems to be a small extra step here and there adds up to a lot of waste if you look at the complete picture. Sometimes, this effect is called the “hidden factory”. With process mining, you will be able to provide an objective overview of the complete process and make the hidden process patterns visible.

For example, improving a form on the website of a consumer electronics manufacturer reduced the amount of missing information in a refund service process. This reduced both the work on the side of the service provider (who previously had to retrieve the missing information) and the completion times of this critical consumer-facing process.

The solution does not always need to be technical either. For example, in a call center an increase in repeat calls typically indicates a quality problem: People who called earlier needed to call again because their problem was not solved in the first call. Instructing agents to keep call times as short as possible might seem to save money at first. However, in the long run it does more harm than good, because customers are less happy and keep calling back. Shifting the measurement focus to ‘first time right’ can greatly enhance both the customer experience and the efficiency of the call center.

With process mining, you will have the process with all its problems right there, magically and objectively, at your fingertips. This is exactly what makes it possible for your team to focus on the why (and not the what) in the process analysis, which is one of the big benefits of process mining.

In this article, you will learn six tips for how to analyze rework with process mining (download the free demo version of the process mining software Disco here to follow along with the instructions).

Let’s get started!

1) Filter direct loops with the ‘Filter this path…’ shortcut

Rework manifests itself in different kinds of loop patterns in your process. Often, you will directly see the loop in your process map.

For example, take a look at this call center example below (one of the demo data sets you can download from the Disco website). The process map shows cases that are started by inbound calls, and you can see from the self-loop at the ‘Inbound Call’ activity that there are repeat calls for some cases. You can click on the images to see a larger version.

To focus on these repeat calls, you can click on the loop arrow and press the ‘Filter this path…’ button in the overview badge (see picture below).

You will be taken to a pre-configured Follower filter for this path and can simply press the ‘Apply filter’ button in the lower right corner. (Press the ‘Copy and filter’ button instead if you want to save your repeat call analysis for later.)

As a result, you can see the new process map for the repeat calls and you can see that 16% of all cases show this repeat call pattern.
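If you want to double-check this number outside of Disco, for example in a quick script on the exported data, a minimal pandas sketch of the same direct-repetition check could look like this (the file, column, and activity names are assumptions for illustration):

```python
import pandas as pd

# A case has a direct repeat call if two consecutive events are both
# 'Inbound Call'. Sort by case and time first to get the event order right.
log = pd.read_csv("callcenter.csv", parse_dates=["Timestamp"])
log = log.sort_values(["Case ID", "Timestamp"])

def has_direct_repeat(activities, name="Inbound Call"):
    # True if the same activity occurs twice in direct succession
    return ((activities == name) & (activities.shift() == name)).any()

direct = log.groupby("Case ID")["Activity"].apply(has_direct_repeat)
print(f"{direct.mean():.0%} of the cases contain a direct repeat call")
```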

2) Catch global repetitions with the ‘Eventually follows’ option

Now, this gives us the direct repetitions for the ‘Inbound Call’ activity. But what about repeat calls that come in after some other activity happened in the process?

For example, there may have been an initial call from the customer. Then the agent called back (‘Outbound call’ activity) and then, some time later, the customer calls again (another ‘Inbound Call’). In this scenario, there has been a repeat call, but the two calls did not happen directly after each other. So, they are not reflected by that self-loop in the process map that we focused on in scenario 1) above.

What can you do if you still want to count this case that includes the pattern ‘Inbound Call’ -> ‘Outbound Call’ -> ‘Inbound Call’ in your repeat call analysis result?

That’s easy: You can simply go back to your Follower filter (click on the filter symbol in the lower left corner) and change the mode from ‘directly followed’ to ‘eventually followed’ (see below).

Your filter result now includes all cases that at any point in the process had a repetition (or more) of the activity ‘Inbound Call’.
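In a script, the ‘eventually followed’ variant of the repeat-call check boils down to counting how often the activity occurs per case. A minimal pandas sketch, with the same assumed file and column names as before:

```python
import pandas as pd

# 'Eventually follows' for the same activity simply means that the activity
# occurs at least twice in the case, no matter what happens in between.
log = pd.read_csv("callcenter.csv", parse_dates=["Timestamp"])

calls_per_case = (log[log["Activity"] == "Inbound Call"]
                  .groupby("Case ID").size())
repeat_cases = calls_per_case[calls_per_case >= 2]
print(f"{len(repeat_cases) / log['Case ID'].nunique():.0%} of the cases "
      "have a repeat call somewhere in the process")
```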

3) Filter loops of the same type

So far, we have seen how you can filter cases where the same activity occurs multiple times. However, sometimes the rework patterns you want to analyze are more general.

For example, you may have combined multiple fields to unfold your activity name in additional dimensions (see the article Change in Perspective with Process Mining for three alternative view points that you can try for your own process).

The screenshot below shows how the ‘Operation’ and the ‘Agent Position’ columns were both configured as part of the activity name during the import step.

As a result, you can see a more fine-grained view of the call center process: The hand-over points between the first-level support staff (FL) and the backoffice employees (BL) are now explicitly represented in the process map (see below).

This is great, but if you want to filter for repeat calls as before, you now have multiple instances of the type ‘Inbound Call’ in your activity list (‘Inbound Call – FL’ and ‘Inbound Call – BL’). You could select multiple activities in the list, but you can also simply use the more general type attribute that you care about for filtering (see below).

Now, you can define your rework pattern directly on the ‘Operation’ field type ‘Inbound Call’ (see below).

The possibility to choose another attribute for your rework pattern in the Follower filter is also handy if you don’t want to focus on activity repetitions in the first place. Instead, you might be interested in, for example, cases that are reworked by the same person, the same department, or any other attribute dimension that you have included in your data set.
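As a sketch of the same idea in pandas: even when the activity name is composed of several fields, you can run the repetition check on the coarser field alone. The file and column names, and the ‘FL’/‘BL’ values, are assumptions for illustration.

```python
import pandas as pd

log = pd.read_csv("callcenter.csv", parse_dates=["Timestamp"])

# Combined activity name, as configured during the import step
log["Activity"] = log["Operation"] + " - " + log["Agent Position"]

# Repetition check on the underlying 'Operation' field, so that
# 'Inbound Call - FL' and 'Inbound Call - BL' both count as inbound calls
inbound_per_case = (log[log["Operation"] == "Inbound Call"]
                    .groupby("Case ID").size())
print(f"{(inbound_per_case >= 2).sum()} cases have more than one inbound call")
```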

4) Filter for repetitions without knowing where they are

But what do you do if you don’t really know which loops you should be focusing on? Say, you want to filter all cases that have some rework in it, regardless of which activity was involved in the rework.

You can use the Follower filter to do that, too. Take a look at the Sandbox example that comes with Disco to see how:

First, click on the filter symbol in the lower left corner to add a filter.

Then, directly add a Follower filter from the list of filters (see below).

In the Follower filter settings, first select all activities as reference and follower values. This by itself will not yet have any effect, because if you match every activity pattern in the data set then all cases will be retained.

But now comes the trick: Below the reference and follower event list you can also add an additional constraint based on another attribute, which can be asked to have the same or a different value. Often, this is used to find violations of segregation of duties or analyze other compliance rules, but here we are using the ‘the same value’ option to filter repetitions of any kind.

Enable the checkbox Require and configure the settings so that it says the same value of Activity as shown below. Then click ‘Apply filter’.

The result of this filter is the set of all cases that have some repetition somewhere in the process, no matter which activity was repeated. We can see that in total 41% of the cases have some form of rework in this process.
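If you want to reproduce this kind of generic rework check on an exported data set, a minimal pandas sketch could look like this (the file and column names are assumptions):

```python
import pandas as pd

# Generic rework check: a case counts as rework if any activity value occurs
# more than once in it, no matter which one.
log = pd.read_csv("sandbox.csv", parse_dates=["Timestamp"])

def has_any_repetition(activities):
    # True if at least one activity appears twice or more in this case
    return (activities.value_counts() > 1).any()

rework = log.groupby("Case ID")["Activity"].apply(has_any_repetition)
print(f"{rework.mean():.0%} of the cases contain some form of rework")
```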

5) Visualize repetition hotspots with the ‘Max repetitions’ option

To see where the biggest rework occurs, you can switch the process map view to ‘Max repetitions’ as shown below. The numbers in your process map will change to show the maximum number of times an activity was performed for a single case.

In the purchasing example, the activity ‘Amend Request for Quotation Requester’ really stands out as it has been repeated up to 12 times in the same case.
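A quick way to compute the same ‘maximum repetitions per activity’ overview in a script, assuming an exported CSV with case ID, activity, and timestamp columns (names are placeholders):

```python
import pandas as pd

log = pd.read_csv("purchasing.csv", parse_dates=["Timestamp"])

# For each activity, the maximum number of times it was performed
# within a single case.
max_repetitions = (log.groupby(["Case ID", "Activity"]).size()
                   .groupby(level="Activity").max()
                   .sort_values(ascending=False))
print(max_repetitions.head(10))
```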

6) View detailed repetition statistics by focusing on a single activity

We now know that activity ‘Amend Request for Quotation Requester’ has been performed up to 12 times for the same case, but how many cases exactly repeated this activity so often? Just one? How many repetitions are most typical?

If you want to focus in on one specific repeating activity in more detail, you can do the following:

First, click on the activity you want to focus on and press the ‘Filter this activity…’ button in the overview badge (see below).

Then, in the Attribute filter change the mode from ‘Mandatory’ to ‘Keep selected’ as shown below (we only want to keep this one activity right now). Use the ‘Copy and filter’ button to save this rework analysis in your project.

After applying the filter, change from the Map view to the Statistics view and …

… change to the ‘Events per case’ statistics to see how many times this particular activity was performed for how many cases.

For example, now we can see that there was indeed just one case that performed the activity ‘Amend Request for Quotation Requester’ 12 times, and that there were three cases where this activity was performed 6 times (see screenshot below).
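The same distribution can also be computed directly from the exported data. A minimal pandas sketch, with assumed file and column names:

```python
import pandas as pd

log = pd.read_csv("purchasing.csv", parse_dates=["Timestamp"])

# Distribution of repetitions for one specific activity: how many cases
# performed it once, twice, ... up to twelve times.
activity = "Amend Request for Quotation Requester"
per_case = log[log["Activity"] == activity].groupby("Case ID").size()
print(per_case.value_counts().sort_index())  # index = repetitions, value = number of cases
```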

For each of the scenarios above, you can then further analyze the context of your rework pattern by looking at further statistics. For example, you might want to see which process categories (regions, product types, etc.) are most affected. And you can inspect individual cases in the Cases View to get more context information and talk to the people who were involved in these cases to learn what the reason was and how they would improve it.

Which rework patterns can you find in your own process? If you need help, just get in touch and we will help you to get started!
