Conferences are work. If you are a university researcher, you go to academic conferences because you need another paper published. At commercial conferences, the focus is on vendors collecting the email addresses of every potential customer they can find. And these potential customers are wary of getting caught up in a long sales pitch, because they’d rather move on and see the rest of the show.
What makes conferences memorable, and ultimately valuable, is mostly what happens in the corridors during coffee breaks, or in pubs in the evenings. People let their hair down and talk shop, they whip out their laptops to run you through a quick demo of some alpha prototype they are building, and they cook up the next great idea to work on together.
Bruce Silver and Nathaniel Palmer had similar thoughts, and they came up with a plan. Their bpmNEXT conference focuses exclusively on live demos of truly innovative functionality, and on an intimate setting that brings people together to exchange ideas and thoughts informally. I have to admit that I was initially a little skeptical about whether a conference like this could be pulled off.
Right from the start, it became clear that bpmNEXT was all that it promised to be, and then some. The conference was kicked off on Tuesday evening with an inspirational keynote from Paul Harmon where he charted the history of BPM and presented his analysis on whether it had “crossed the chasm” yet. After that, there was an informal gathering and drinks, and almost all attendees dove right into spirited discussions with their peers.
The next two days were filled with a brutal schedule of presentations. Each presenter had 20 minutes for their demo, followed by ten minutes of Q&A. We had the very first presentation slot on Wednesday morning, on process mining with Disco, so we fortunately could enjoy the rest of the presentations in a more relaxed atmosphere.
Image courtesy of Anatoly Belaychuk
Every presenter went above and beyond to show stuff that would be really interesting and new, and live demonstrations of new features and prototypes clearly dominated most presentations. No boring slides filled with bullet points and clip art, but real software from the bleeding edge taking center stage. What a breath of fresh air. And the audience made sure to put the allotted Q&A time slots to good use, with questions frequently sparking lively discussions among audience members and the presenters.
If you are interested in the specific topics presented, I recommend Sandy Kemsley’s excellent-as-usual coverage of all sessions on her blog. There are also many other detailed reviews of bpmNEXT, a selection of which I have listed at the end of this article. And, to top it off, video recordings of each presentation are scheduled to be released on the conference website in the coming weeks.
What I loved most about bpmNEXT was the atmosphere and the people. We met a lot of old friends again, met in person for the first time many people we previously only knew from their work online or from Twitter, and got to know many more interesting and friendly people. It seemed like everybody had secretly conspired to make this conference an inspiring, open, and welcoming place for all. At some conferences, once the official program is over, people split into their respective tribes, or a select in-crowd gathers at the cool kids’ table. No such thing at bpmNEXT. Everyone was open and available, and everybody was interesting.
Image courtesy of Heather Palmer
Right before closing the conference on Thursday afternoon, all attendees could vote for their three favorite presentations for the Best in Show award. And like the cherry on this delicious bpmNEXT cake, the audience picked our presentation on process mining with Disco! I should note that we won this award by a very tight margin, reflecting the overall high quality of all presentations. Second place went to Dominic Greenwood from Whitestein, and third place to Keith Swenson from Fujitsu.
Receiving this award is a great honor for us, especially from an audience that was as close to a “who’s who” of thought leaders in the commercial BPM space as you can probably get. Like our Best Demo award from last year’s BPM Conference, it confirms the enthusiastic reactions we get from our customers, and it is a great motivation to push ahead with our work on Disco and process mining.
After the conference, we took a few days off to make the jet lag worthwhile, driving down the Pacific Coast Highway to Los Angeles. We saw a lot of beautiful nature and visited quite a few sights, but most importantly we relaxed and recharged our proverbial batteries.
In summary, my take on bpmNEXT is a clear thumbs-up. This is not a scientific conference, where you can peek into the far future. It is also not really commercial: you will probably not find a lot of leads, or get a precise vendor roadmap. What it does, though, is fill a vital niche. It is a place where you can have a look in the kitchen, just before things are ready to be served. And, most importantly in my opinion, it is a place where you can meet friends, old and new, and cook up ideas and plans in a relaxed atmosphere. It appears that bpmNEXT 2014 is already a done deal, so if this ever so slightly scratches your itch, you may want to schedule it in.
Update: The video recordings of all bpmNEXT talks are now available online.
A lot of other, more eloquent people have written their take on bpmNEXT before. Here’s a short (and incomplete) list of further articles you may find interesting:
One of the most interesting use cases for process mining is the understanding of legacy systems. In many cases the developers are long gone when changes to these systems must be made and it becomes a huge burden just to maintain these often mission-critical systems.
Steve Kilner just authored two articles on process mining for legacy systems in the IBM Systems Magazine:
Steve is an expert on IBM AS/400 systems and runs vLegaci, a company specializing in software management. I recommend heading over to the IBM Systems Magazine website, where you can read both articles online.
I also asked Steve to answer three questions here on this blog. You can read the interview below in this post.
Anne: Steve, why is so-called greenfield development, where you make a fresh start, often not possible, so that people have to put up with all these old systems that nobody understands anymore?
Steve: Replacing legacy systems is costly, risky and disruptive to organizations. In typical legacy languages such as COBOL, applications may consist of a few million lines of code. A common estimate is that for every million lines of code in business applications there are about 30,000 business rules. How costly, risky and disruptive is it to redevelop tens of thousands of business rules? Whatever intelligence you can recover from your existing code is extremely valuable for either feeding the development of new systems, or identifying required functionality for purchasing off-the-shelf packages.
Anne: What does process mining add compared to traditional approaches such as static code analysis techniques?
Steve: Anyone who has been a programmer working with existing code knows that it is impossible to look at a large program, let alone an entire system, and grasp everything that could happen within it. A subsystem with hundreds of conditional statements contains many millions of possible paths through the code. No one can fully comprehend all those possibilities. By creating or obtaining event logs of executing programs, through program instrumentation if necessary, it is possible to observe the paths that are actually used, along with their frequency. By examining individual cases, it is then possible to correlate data inputs with resulting path variances.
Best practice is surely to combine both static analysis and dynamic analysis, via process mining and other techniques. This provides deeper and more dimensional insight into system behavior.
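To get a feeling for the path explosion Steve describes, here is a quick back-of-the-envelope sketch (my own illustration, not from the interview): with n independent conditionals, the number of potential execution paths grows as 2^n, which is exactly why observing only the paths actually taken at runtime scales where exhaustive static analysis does not.

```python
# Back-of-the-envelope illustration: n independent if/else branches
# allow up to 2**n distinct paths through the code.
def max_paths(num_conditionals: int) -> int:
    """Upper bound on the number of distinct execution paths."""
    return 2 ** num_conditionals

if __name__ == "__main__":
    for n in (10, 30, 100):
        # Already at 30 conditionals, exhaustively enumerating
        # all paths is hopeless: over a billion possibilities.
        print(f"{n:>3} conditionals -> up to {max_paths(n):.3e} paths")
```

An event log, by contrast, only ever contains the handful of path variants that real cases actually follow, however large the theoretical path space is.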
Anne: How difficult is it to extract the data from a legacy system, how long does it take?
Steve: A simple starting point for most systems is to use database transaction logs. Most logs have some sort of session ID that can be the basis for cases. A step further is to extract key data identifiers from the transaction log, for example order number, customer number, etc., and use these as case IDs. This then expands the view of activities across sessions, enabling you to understand how orders, customers, etc. behave. A further step is program instrumentation, where you explicitly insert logging functions into the code in order to capture how programs are executing internally. I have used this recently for a client engaged in a modernization project, where we are logging every call to every subroutine and every screen input. This gives us an excellent view into a huge monolithic piece of legacy code.
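As a rough sketch of the first step Steve mentions, the transformation from raw transaction-log rows to an event log keyed on a business identifier could look something like the following. All column names here (`order_id`, `program`, `timestamp`) are hypothetical placeholders for whatever your system actually records:

```python
import csv

# Minimal sketch, under assumed column names: map database
# transaction-log rows to (case id, activity, timestamp) events
# suitable for a process mining tool that reads CSV.
def transactions_to_event_log(rows, case_field, activity_field, time_field):
    """Select the three event log columns from raw log rows."""
    return [
        {
            "case_id": row[case_field],       # e.g. extracted order number
            "activity": row[activity_field],  # e.g. program or screen name
            "timestamp": row[time_field],
        }
        for row in rows
    ]

def write_event_log(events, path):
    """Write the events to a CSV file with a header row."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["case_id", "activity", "timestamp"]
        )
        writer.writeheader()
        writer.writerows(events)
```

Using the order number instead of the session ID as `case_field` is what produces the cross-session view of order behavior that Steve describes.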
Anne: Thank you, Steve!
After our interview with ProcessGold, we are now organizing two joint webinars next week on Thursday (one in English and one in German).
About the webinar
The webinar will be hosted by Tobias Rother from ProcessGold and myself.
First, Tobias will share ProcessGold’s experience of what the requirements are to make a process mining project successful. Then, I will show you how easy it is to visually discover business processes with our process mining software Disco.
Sign up for the webinar by clicking on one of the following links:
If you want to know more about the success factors of process mining and see a Disco demo, don’t hesitate and sign up now!
In the context of their Process Mining Leader Interview series, ProcessGold spoke with Christian about our strategy, our product’s unique value proposition, and about our vision on process mining. You can read the full interview below in this post.
Christian, can you please introduce yourself?
Sure thing. We are Anne and Christian, and we met while studying software engineering at HPI Potsdam, Germany, where we first encountered process mining in 2002. We both went on to pursue our PhDs in the process mining group of Wil van der Aalst at the TU Eindhoven in the Netherlands. Our passion is to tell the world about process mining and its benefits, and to create well-designed, scientifically accurate, and high performance process mining software. This is why we founded Fluxicon in 2009.
What is your vision on Process Mining?
We think that process mining has the potential to become an indispensable, universal tool for everyone who is responsible for analyzing or managing a process. There are obvious target groups like BPM or Process Improvement professionals, but we are convinced that this is just the tip of the iceberg. There are many more people who may not self-identify with BPM, but who can really benefit from a powerful process mining tool.
Our goal is to make process mining into a ubiquitous tool, much like MS Excel, SPSS, or Adobe Photoshop, that will be used to improve processes in a wide variety of settings. The greatest challenges for that to happen are design and usability of the software, scientific rigor, and the ability to answer the important questions about a process. And we are ready and uniquely positioned to take on these challenges in the next years.
What is your product’s unique value proposition?
What our customers like most about Disco is that it just works. It is scientifically engineered to provide correct and meaningful results. It is fast enough to blaze through even large data sets in no time. We care a lot about quality, so bugs and crashes are very rare, and we fix them in days not months. Most importantly, we design our software from an analyst’s point of view, leveraging our own extensive consulting experience. Disco makes answering your process questions effortless and is truly enjoyable to use.
Last but not least, our product is designed, built, and supported by us. We truly care about our customers and about process mining. At Fluxicon we focus exclusively on process mining, we are probably the most experienced experts on this topic not working at a university right now, and we are pouring all our heart and soul into this company. So, if you are passionate about doing great work, you are in good company with us!
What has your track record in Process Mining been so far?
In our past academic life, we have both established ourselves as experts in the field of process mining. Anne has extended the field beyond discovery to also include compliance and extension techniques. I have pioneered the application of process mining to less-structured processes and have laid the foundations for process mining software as lead architect of the open source ProM software kit.
Since founding Fluxicon, we have helped dozens of customers to understand their processes and improve them, and we have helped hundreds of people get started with process mining. We have put a lot of work into building the community through our blog and Twitter stream, by managing the LinkedIn group on process mining, and by organizing Process Mining Camp 2012, the first community meeting exclusively focused on process mining. Another topic near and dear to our hearts is supporting academic research and education in process mining. This is why we have founded our Academic Initiative, which provides academic institutions with free licenses to our software, and with teaching materials on process mining.
After releasing our first software Nitro, we have followed up in 2012 with the release of our first complete process mining solution, Disco. The community has responded enthusiastically to Disco, and we have more users joining every day, which has been phenomenal. Disco embodies our vision and expertise around process mining, and you should really check it out (it’s a free demo download at www.fluxicon.com/disco/) to see for yourself why our customers are loving it.
What has been your most exciting success in the past three years?
That’s a tricky question to answer. We are always most excited about our most recent work, like when we have just helped a client succeed. But if we think about all that has happened over the past three years, we would pick the Process Mining Camp and Disco.
We really love how our first Process Mining Camp in 2012 turned out. All of our speakers gave excellent talks from their hands-on experience, and they engaged the audience in lively discussions. The greatest thing, however, was having more than 70 process mining professionals in one place, for the first time, and to see them exchanging experiences, tips, and simply getting together as a community.
The most memorable and resounding experience for us was probably the release of Disco, and specifically the community’s reaction to it. We spent a lot of time and effort to make sure that Disco would turn beginners into experts fast, and enable them to solve complicated problems easily. Most of all, we wanted to create a tool that people would actually love to use, and the reaction from our customers has been just awesome. At the BPM 2012 conference, we even won a best tool award. We are really proud of the way Disco has turned out, and we just love the feedback from our customers and Disco users. They’re simply the best, and they are what makes working at Fluxicon so much fun.
What is the single biggest opportunity ahead for your company?
We have already made thousands of people enthusiastic about process mining through our articles, invited talks, and educational activities. But in the grand scheme of things, still very few people know about process mining. Take data mining, for example: everybody knows about it, but then it has also been around for decades. Process mining as a discipline is much younger.
We want to bring process mining to a larger audience, because we are convinced that there are millions of people out there who could benefit tremendously from using it. By teaming up with other people in the field, and by working as a community, we have the opportunity to spread the word about process mining while preventing it from becoming a marketing fad. Eventually, all data scientists, process analysts, and process managers will know about process mining and use it where appropriate.
Another key factor in achieving widespread adoption is that process mining needs to be very easy to do. People are not interested in becoming process mining experts or learning to operate complicated software; they are interested first and foremost in results and insight. With Disco, we have created software that is much easier to use, and at the same time much more powerful, than other solutions. It’s amazing to see how quickly people are getting started with Disco and how little support they need on their way. We are working hard to make Disco even easier to use and to expand its functionality without compromising on usability. We think this is a huge opportunity for our customers, old and new.
What are the key elements of your strategy for the next 2-3 years?
Our strategy has remained relatively stable over the last three years, and we don’t expect it to change anytime soon.
We want to be a great company to work with, for both our customers and our partners. For us, consulting is not about PowerPoint, and customer support is not a cost center but an opportunity to learn. We truly care, and our customers and partners appreciate that.
We want to create great software. Our products are scientifically sound, and we take great pride in quality engineering. Most importantly, we design from the vantage point of the user, which is why usability and aesthetics are not an afterthought for us. Rather, they naturally flow from the user orientation in our software design process.
We want to tell the world about process mining. For us, this is less of a marketing effort and more about evangelism. We are in this for the long haul, so our goal is not to create some hype, but to organically grow and support the community.
In our opinion, we are the only process mining company that can deliver on this strategy. What separates us from most competitors is that for us, process mining is not a tacked-on marketing gimmick, and it is not a nine-to-five job either. At Fluxicon we focus exclusively on process mining, we know what we are doing, and we are loving it. And we intend to keep it that way.
Sometimes people are amused when I pull out my laptop and I am ready to go in about as much time as it took me to say “Do you want to see a demo?”. I really like giving demos to explain process mining. What’s a better way to illustrate a new idea or technology than actually showing it at work?
That’s why we are so excited about the first edition of the bpmNEXT conference, which focuses on next-generation BPM technology and takes place in California in March: Where other conferences often forbid or discourage live demos, bpmNEXT puts them front and center.
Sandy Kemsley describes bpmNEXT as follows:
The concept is to see and discuss BPM innovation, not the same old stuff that we see at so many BPM vendor and analyst conferences every year, and to do it through demonstrations of new technology and ideas as well as presentations. [...] We need a place where people involved in creating the next generation of BPM software can get together and collaborate, even if they’re competitors outside the conference. bpmNEXT has the potential to become that place.
We are thrilled about the opportunity to show process mining and Disco right at the beginning of the first conference day.
But the whole conference agenda is full of intriguing topics, presented by innovators and thought leaders who we can’t wait to meet. The setting of a demo-centered and forward-thinking BPM retreat surely promises lots of inspiring discussions.
If you want to join, you can still get early bird pricing until February 19 (see registration page). Christian and I would love to see you there!
In his recent Software & Systems Modeling article ‘What makes a good process model?’, Wil van der Aalst uses lessons learned from process mining to identify seven problems in process modelling.
The first problem he mentions is Aiming for one model that suits all purposes. We process mining folks know that you can take multiple views on the same process and Wil draws a parallel to maps, which are created on different levels of abstraction and for different purposes.
I recently came across the work of Denis Wood, a geographer and artist who has taken the creation of maps for different purposes to the extreme.
In 1974, he began teaching environmental perception at North Carolina State University in Raleigh, USA, and together with his students he ended up mapping out Raleigh’s neighborhood Boylan Heights (see map above) in many different – and pretty amazing – ways.
When you look at the different maps, all reflecting Boylan Heights but focusing on unusual aspects such as sound, street lights, and Halloween pumpkins, you start to get a sense of that neighborhood. The shape of the underlying geographic map starts looking familiar, and a story puts itself together from the individual pieces.
All maps are by Denis Wood and at the bottom of this post you find further links to articles about Denis and to his book. Enjoy.
Streets and Traffic
The Car Space map shows you the curved streets of the neighborhood, but also all the little alleys and less formal car spaces such as the driveways.
The Traffic across these streets is very unevenly distributed.
And this map shows just the Traffic Signs (the actual signs) at their respective positions, nothing else. As you can see, most of the signs are at places where people just pass through.
The Underground map shows the gas, water, and sewer systems beneath Boylan Heights.
The Lines Overhead map, in turn, has the overhead cables such as for electricity or telephone mapped out.
The Jack-o’-lantern map is a map of pumpkins at Halloween, where just the face of the pumpkin is projected onto the map itself.
The Mentions in the Newsletter map provides another view on the social geography of the neighborhood. You may notice that the areas with the most frequent mentions match up nicely with the pumpkin map.
In the Police calls map you see codes for the different types of calls that were made to 911 over a six-month period.
The size of each number denotes the frequency of that type of call at that location. For example, a 17 is a motor vehicle accident and a 16 is a vehicle blocking the flow of traffic. There are a lot of 16s and 17s at one particular intersection (compare that with the traffic map).
Furthermore, numerous calls reporting disturbances were made from all over Boylan Heights. According to Denis Wood, these are quite high numbers, given the low crime rate, and show a reluctance to knock on the neighbors’ doors to complain.
A part of the Absentee Landlords map visualizes the rent money paid to owners living elsewhere in the city. The length of each line shows the distance the rent traveled, and its thickness the number of properties owned.
The Fences map shows the type of fence at each house in the neighborhood. It’s an open neighborhood, with few front yards fenced. The mansion with the most newsletter mentions seems to have the biggest fence.
The Graffiti map records the location and content of each piece of graffiti on the sidewalk (made in wet concrete).
For the Stars map, they lay down in the middle of the intersection of Boylan Avenue and Dupont Circle and looked up at the sky to map the stars as they spread themselves over the neighborhood. They used a magnetic compass and later improved their plotting using an atlas of star positions.
The Street Light map is a map of the locations of the street lights, where – just like the pumpkins – not an abstract position marker but the actual street light is depicted.
With the Light at Night map they went even further and not just recorded the position of the light sources but measured the actual light levels at night with a light meter. Denis then made a light contour map from it for one block on Cutler Street (see below). Straight lines are house facades, and the small rings are porch lights.
The Mailman map shows the delivery route of the mailman.
The Bus Ballet map shows six bus routes passing through the neighborhood between 3:00 and 3:20 on a weekday afternoon, with the time rising vertically.
The Shotgun, Bungalow, Mission map does not show a 3D view of the houses but is a representation of how close each house is to the traditional style of a white, wooden one-story house on a red brick foundation with a front porch and wooden railing. The taller a house appears on the map, the more typical it is.
Similarly, the Porch Ceiling Colors map shows that in 1982 most porch ceilings were white. When your ceiling wasn’t white, people noticed. To get a sense of what the houses look like today, you can wander around a little bit with Google Street View.
In the Footprints map the neighborhood is shown in the larger context of the city. It’s an ichnographic city plan, which is a ground plan of all the buildings. One can nicely see that Boylan Heights consists of mostly one-family houses and that there is much open space in Raleigh.
The Boylan’s Hill map shows a landform relief representation of the height levels. Depending on which side you approach the neighborhood from, you get a different idea of “how much on a hill” Boylan Heights really sits.
The Views from street level map shows which parts of the rest of the city can be seen from any location in Boylan Heights. Views northward are blocked by nearby buildings and trees, except along the sightline opened by the railroad cut; views southward are obstructed by Dix Hill.
The Study for Viewsheds shows the basis of the map above. To make the map, Wood stood at each intersection and mapped what he could see onto sheets of tracing paper laid over a USGS topographic map.
Barking Dogs were recorded while following the route of the mailman (see above).
The Wind Chimes map visualizes the sounds of the wind in the neighborhood. Denis Wood writes:
When we did the house types survey, we also paid attention to the presence of wind chimes. They were all over—bamboo, glass, shell, metal tubes. Depending on where you stood, the force of the wind, and the time of day, you could hear several chiming, turning the neighborhood into a carillon.
The Radio Waves map shows the radio wave fronts passing through the neighborhood, silently, unfelt, and unnoticed, unless tuned into.
If you want to read more about Denis Wood and his maps, here are a few links to start:
The second edition of Denis Wood’s book Everything Sings: Maps for a Narrative Atlas will be published in May 2013. You can pre-order here (not an affiliate link).
I don’t know how you feel about it, but I have the sense that I know Boylan Heights a little bit by now.
Which map do you like best? Let us know in the comments.
2012 has been a great year for process mining and Fluxicon. We love what we do and it has been a privilege to work with so many smart and visionary people.
We would like to end the year by thanking all of you who have supported us and spread the word about process mining.
First of all, we thank our customers, who have placed their trust in us and have been amazing to work with. The Rabobank, the City of Lausanne, MLP, and Deloitte Belgium are Disco customers who gave us permission to mention them. Other customers have chosen to remain private. They have all been fantastic.
We also thank our partners, who have been relentless champions of process mining. We have not talked a lot about our partners yet, but you will hear much more about them in the future.
A big thanks goes out to our advisors, who have supported us with their wisdom and hands-on support. We are also very thankful to the organizers of conferences and workshops who invited us to talk about process mining. A special thanks goes to Sandy Kemsley, who heroically jumped in to give our BBC presentation, when we couldn’t get there due to the storm.
Finally, we thank all Disco users, our academic partners, the speakers and process mining campers, all blog readers, tweeters, and everyone who offered their feedback, comments, and support to us in such a kind way.
From hundreds of conversations we have gotten a pretty good picture of the kind of person who is interested in process mining. I am not sure whether all process miners are like that or whether it’s just the people that we talked with, but this is what we have seen: Process mining folks are pragmatic doers and movers. They are enthusiastic, curious, and really good at what they do. They care about things that go wrong in their organizations and want to fix them. They see the big picture. They are change agents and team players. They are thorough and rigorous. They appreciate beautiful software and are inspired by what process mining can do for them and their companies.
Everything we did was fun because you were such nice and pleasant company on the road. So, we are looking forward to an even greater process mining year with old and new friends in 2013!
A little more than 200 days ago, on 30 May 2012, we released version 1.0 of Disco. In these 28 weeks, we have released 30 updates to Disco, which means there has been more than one update per week on average! However, amid all that releasing I forgot about an old tradition here at Fluxicon: sharing information about updates with you here on our blog.
Today we are proud to announce the release of Disco 1.3.0. This update fixes a number of bugs and adds new features, but first and foremost it once again greatly improves the performance of Disco. You can download an installer package for Windows or Mac OS X from www.fluxicon.com/disco
Disco also automatically checks for software updates and downloads them in the background. When you are running Disco and it shows you the blue update banner above, you can take a break to install the update, and then start using the latest version right away. Since Disco keeps your complete project data and settings persistent, you can continue right where you left off. If you prefer to keep working with your current version for now, Disco will install the update the next time you start it up.
Keep reading to learn about some of the most important changes in Disco 1.3.0.
When we ask our users what they like most about Disco, the most frequent response is that it’s so easy and fun to accomplish very complex and impressive analysis results with Disco. A very close second is the fact that Disco is blazing fast, even with monumentally big data sets.
In fact, we consider performance and speed an integral part of usability. If you have to wait for your computer between every step, you will begin to hesitate before trying something out, and it will ultimately make you much less productive. This is why it has been our priority for Version 1.3.0 to make Disco, once again, even faster.
Disco now contains even smarter algorithms and data structures, and it is now even more optimized to use all available CPU cores and memory on your system, to ensure that you get the best experience available.
With Disco 1.3.0, it is now more than 1.5x faster to import an event log from a CSV file. You will notice this speedup especially when you are loading a large and complex event log.
Disco is the only commercial process mining software with a custom-designed event log database, our Octane log management layer. This allows us to optimize precisely for the performance profile required for process mining, and to go way beyond what a general-purpose database is capable of.
For example, in Version 1.3.0, Disco now switches seamlessly between four types of log storage containers, which are each optimized for different machine capabilities and log characteristics. And the best part is, you don’t have to know or care about it to get the best performance — it just works.
Filtering your datasets is an essential part of the Disco experience. Since you can quickly create copies of a dataset, and filtering is so easy and fast, you can rapidly drill down into your data and answer your analysis questions.
With Disco 1.3.0, filtering your log is now more than 2.7x faster than with our previous version 1.2.19. Note that we always measure the time from start to finish, from clicking the “filter” button until you see your complete analysis results, because that’s what you really care about when using it.
When you start Disco, everything in your project is exactly how you left it the last time. Even if you export your project and give it to a colleague, they will find it just how you left it. This is an important feature for us, and so we’re happy to tell you that, with Disco 1.3.0, your project will now be restored more than 15x faster.
Sometimes when you’re working with a dataset in Disco, you want to quickly review the filter settings. With large and complex datasets, though, showing the filter dialog could sometimes take longer than a minute. In Disco 1.3.0, we made the filter dialog more than 50x faster, so that it should now always appear instantly.
These are just some examples of the performance boost that Disco 1.3.0 brings. When you start using Disco after this update, you will notice that it has become much faster all across the board, especially for big data aficionados. As a bonus, we have made progress feedback much more accurate all over Disco. So, in the rare cases where you will still have to wait for Disco, you now get a much better idea of how long it will take.
It always makes us happy to see that the user base for Disco is very international. For us it is important that Disco works around the globe, which is first and foremost manifested in Disco’s comprehensive Unicode support for event log data.
In Disco 1.3.0, we have finally fixed the last place where we had a problem with Unicode data: The map view pop-over now correctly displays path relationships between activities with right-to-left language names. This means that data with Hebrew or Arabic activity names now finally gets the same full support in Disco as all other languages.
The next one may be a small feature, but quite a number of people have been asking for it. In the project view in Disco 1.3.0, you can now re-arrange the list of your datasets using drag and drop:
Simply click on a dataset in the list and keep your mouse button down. Move the dataset to your desired location, and then let the mouse button go — presto!
Various other changes
Here is a short list of some smaller changes that are included in Disco 1.3.0:
- Improved launcher for the Windows platform.
- When importing from CSV, you can now set a timestamp pattern even if the sample contains no data.
- Improved in-app help and onboarding experience.
- Various bug and UI fixes.
- Includes updates to various support libraries.
We believe that Disco is the best process mining software available today. It is easy to use, fast, and made by experts who have researched process mining and developed process mining software for more than eight years now. We work hard every day to keep things that way, and we think that this update extends Disco’s lead significantly.
Honestly, though, we don’t care much about how Disco compares to other solutions. What is important to us is that Disco is the perfect tool for you, our customers and users. Your feedback, bug reports, and kind words make developing Disco a real joy. We would like to say a big thank you to everybody who has placed their trust (and their money) in Disco and in us — it has been a blast!
Download Disco 1.3.0 today, and let us know what you think about it in the comments!
Based on the feedback that we get from our Disco users, they really like the variant analysis functionality and see it as one of the most useful tools for their process analysis. For example, an IT process manager at a telecommunications company recently said:
Using variants we can learn which routes are better to deal with our incidents!
So, we thought it’s about time to do a blog post on how you can analyze the variants in your process using Disco.
Watch the screencast above to get a quick tour and read on to get a detailed description of the variant analysis possibilities in Disco. You can also follow the steps yourself using the free demo version of Disco.
Why variants are important
Variants show us how much we tend to underestimate the complexity in our processes: If you ask a process owner how many variants she thinks there are in her process, a typical answer is “10” or “15”. Meanwhile, the actual number of variants is often close to 100. We have seen literally millions of variants for some processes.
Furthermore, variants provide us with a simple, sequential view of the process execution. For example, in the discovered process map above you get a bird’s-eye view of the analyzed purchasing process, and you can see that there is a very dominant loop involving a process step that makes changes to the original purchase requisition (see activity Amend Request for Quotation Requester).
However, we can’t see how a single case moves through the process, or how many cases go through this extra loop once, twice, or even more often. To get a grip on a typical process execution pattern from start to end, we need to look at the variants.
Variants are important because:
- The way we think: People think in scenarios. It’s much easier for us to understand a sequence of steps when we think about typical process execution patterns.
- Mainstream vs. exceptions: The frequency of a variant tells you how common a specific execution pattern is and lets you distinguish mainstream variants from outliers and exceptions.
- Variation in your process: The total number of variants alone tells you how much variation you have in your process. Following standard procedures is crucial to delivering consistent quality and efficient services.
- Cleanup: Variants help you spot data quality problems and let you see whether you still have incomplete cases that need to be filtered out before proceeding with your analysis.
By understanding the variants in your process, you can find out which patterns deliver a good (or bad) performance. You can then actively promote the well-performing variants for a better and more consistent process performance.
What exactly is a process variant?
Process variants are about variation in the process flow:
A process variant is a unique path from the very beginning to the very end of the process.
In other words, a process variant is a specific activity sequence, like a “trace” through the process, from start to end. For our presentations and courses we created a simple illustration of process mining that also helps to understand what a variant is (see below).
You can read this picture as follows:
Process variants are in between the raw data and the process map. In the illustration above we have three different activity sequences for the three cases. But we would see repetitions of these activity sequences if we looked at more cases.
So, a process variant is a particular activity sequence, which can be followed by just one case or by many. Precisely this frequency, that is, the number of cases that follow a particular variant, is very interesting because it shows us what the frequent patterns in our process are.
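To make the definition concrete, here is a minimal Python sketch of how variants can be derived from an event log (an illustration with made-up function names, not how Disco works internally): each case’s events are ordered by timestamp, and cases that produce the same activity sequence share a variant.

```python
from collections import Counter, defaultdict

def compute_variants(events):
    """Derive process variants from an event log.

    `events` is a list of (case_id, activity, timestamp) tuples, where
    the timestamp just needs to sort chronologically. Returns a Counter
    mapping each variant (a tuple of activities from start to end) to
    the number of cases that follow it.
    """
    traces = defaultdict(list)
    for case_id, activity, timestamp in events:
        traces[case_id].append((timestamp, activity))
    variants = Counter()
    for steps in traces.values():
        steps.sort()  # order this case's events chronologically
        variants[tuple(activity for _, activity in steps)] += 1
    return variants

# Tiny example log: cases A1 and A2 follow the same path, A3 stops early.
log = [
    ("A1", "Create Purchase Requisition", 1),
    ("A1", "Analyze Purchase Requisition", 2),
    ("A2", "Create Purchase Requisition", 1),
    ("A2", "Analyze Purchase Requisition", 2),
    ("A3", "Create Purchase Requisition", 1),
]
variants = compute_variants(log)
# Two variants: the two-step sequence (2 cases) and the one-step one (1 case).
```

The variant frequency then falls out for free: it is simply the counter value for each activity sequence.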
Inspecting process variants
Here is how you can analyze your process variants in Disco.
Once you have imported your data, you see a process map that provides you with a bird’s-eye view of your process, as shown above. From there, you can change to the ‘Cases’ tab (see below).
In the ‘Cases’ tab you get a complete list with the history of all the cases in your data set (Complete log in the upper left corner). In addition, you also get a list of variants. In this example, you can also see that there are 98 variants in total for just 608 cases.
The variants are sorted by their frequency. Variant 1 is the most frequent variant: With 88 cases it accounts for almost 15% of all cases in the data set, Variant 2 covers 77 cases (about 13%), etc. In many processes, inspecting the top 5-10 variants helps you understand up to 80% or 90% of the whole process.
In the screenshot above, Variant 3 has been selected and you can see that you get a list of the 63 cases that follow this variant. So, all 63 cases (case 538 is selected at the moment) follow the same process pattern: They start with activity Create Purchase Requisition, perform activity Analyze Purchase Requisition as a second process step, and then they stop. The third most frequent pattern is thus related to stopped purchase requests! Perhaps the purchasing guidelines need to be updated to avoid the waste of processing these additional requests in the first place.
By looking at the variants you can understand how much variation there is and what your most frequent process patterns look like. You can also look at the less frequent variants and find out why they did not follow the standard procedure.
When you change to the ‘Statistics’ tab (see above), you can inspect the average case duration for each variant to see which variants tend to be slow and which are fast. The table also provides an overview of the number of steps that were performed (an indication of effort) and of how many cases are covered by each variant (an indication of impact).
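The same kind of per-variant statistics can be sketched in a few lines of Python (again a hypothetical illustration, not Disco’s implementation): for every variant we collect the durations of its cases and report the case count, the number of steps, and the mean case duration.

```python
from collections import defaultdict
from statistics import mean

def variant_statistics(events):
    """Summarize each variant of an event log.

    `events` is a list of (case_id, activity, timestamp) tuples with
    numeric timestamps (say, hours). A case's duration is the time
    between its first and last event. Returns a dict mapping each
    variant to its case count, step count, and mean case duration.
    """
    traces = defaultdict(list)
    for case_id, activity, timestamp in events:
        traces[case_id].append((timestamp, activity))
    durations = defaultdict(list)  # variant -> durations of its cases
    for steps in traces.values():
        steps.sort()
        variant = tuple(activity for _, activity in steps)
        durations[variant].append(steps[-1][0] - steps[0][0])
    return {
        variant: {
            "cases": len(ds),
            "steps": len(variant),
            "mean duration": mean(ds),
        }
        for variant, ds in durations.items()
    }

# Two cases follow variant ("A", "B"), taking 3 and 5 hours; one stops at "A".
log = [
    ("c1", "A", 0), ("c1", "B", 3),
    ("c2", "A", 0), ("c2", "B", 5),
    ("c3", "A", 0),
]
stats = variant_statistics(log)
```

In this toy log, the two-step variant covers two cases with a mean duration of four hours, which is exactly the kind of slow-versus-fast comparison the ‘Statistics’ tab gives you at a glance.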
You can export this overview (like any table in Disco) by right-clicking it and pressing the Export to CSV… button.
Filtering based on process variants
You can also use the variants to focus your analysis on either the mainstream behavior or, quite the opposite, the exceptional cases, using the Variation filter.
In the screenshot above, we have focused on the five most frequent variants by limiting the filter to variants that contain at least 32 cases. As we can see, this covers just about 5% of our variants but 50% of all cases in the process.
When you apply the filter, you get a filtered view of what the process looks like for just these five variants (see above).
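Conceptually, such a minimum-frequency filter can be sketched as follows (a Python illustration with hypothetical names, not Disco’s actual Variation filter): keep only the events of cases whose variant is followed by at least a given number of cases.

```python
from collections import Counter, defaultdict

def keep_frequent_variants(events, min_cases):
    """Keep the events of all cases whose variant covers >= min_cases cases.

    `events` is a list of (case_id, activity, timestamp) tuples.
    Returns the filtered event list, preserving the original order.
    """
    traces = defaultdict(list)
    for case_id, activity, timestamp in events:
        traces[case_id].append((timestamp, activity))
    variant_of = {}     # case id -> its variant
    counts = Counter()  # variant -> number of cases following it
    for case_id, steps in traces.items():
        variant = tuple(activity for _, activity in sorted(steps))
        variant_of[case_id] = variant
        counts[variant] += 1
    return [e for e in events if counts[variant_of[e[0]]] >= min_cases]

# Cases c1 and c2 share a variant; c3 is a one-off that gets filtered out.
log = [
    ("c1", "A", 0), ("c1", "B", 1),
    ("c2", "A", 0), ("c2", "B", 1),
    ("c3", "A", 0), ("c3", "C", 1),
]
filtered = keep_frequent_variants(log, min_cases=2)
# Only the four events of c1 and c2 remain.
```

Raising the threshold narrows the view to the mainstream; inverting the comparison would instead isolate the exceptional cases.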
We could now further analyze the throughput times, bottlenecks, and data attributes for these mainstream process variants and compare them to the performance of the more exceptional cases to gain a deeper understanding of how we can improve our process.
The essence of the variant analysis is that by understanding the mainstream variants, you can improve and enhance the normal process. By understanding the exceptional variants, you can reduce your variation and deliver a more consistent performance.
Have you used process variant analysis yourself? What have you learned about your process? Let us know in the comments!
We are excited to present a new case study for the application of process mining. The use case is very interesting: Different Procure-to-Pay processes in different countries were analyzed to understand local best practices and align them with the globally advised process.
You can read the case study in this blog post or download the PDF version at the bottom of the article.
AkzoNobel is the largest global paint and coatings company and a major producer of specialty chemicals. Headquartered in Amsterdam, the Netherlands, and with operations in more than 80 countries, their 55,000 people around the world supply industries and consumers worldwide with innovative products and are passionate about developing sustainable answers for their customers. The Process Mining project took place at the AkzoNobel Decorative Paints division. Their portfolio includes well-known brands such as Dulux and Sikkens.
AkzoNobel Decorative Paints (ANDP) has implemented SAP globally, and their Procure-to-Pay processes are standardized in SAP as well. Over the years, countries have adopted their local ways of working within the standardized processes to reflect local best practices.
To improve the efficiency as well as the compliance & control of the organization, a ‘Value Extraction’ program has been initiated.
The reasons for ANDP to seek a detailed understanding of the different local Procure-to-Pay processes were twofold:
- Are there deviations from the globally advised process that are not desirable from a compliance & control standpoint?
- Are there opportunities to learn from the local best practices in order to adopt a corporate best practice that harmonizes the processes where possible (efficiency standpoint)?
Capgemini is a global leader in consulting, technology, outsourcing, and local professional services. As a trusted partner of ANDP, Capgemini set out to help the Procure-to-Pay managers to get a detailed understanding of the Procure-to-Pay processes in 16 different countries.
Their domain expertise and professional excellence were supplemented by the application of a new technology called Process Mining, which made it possible to quickly and objectively get a detailed picture of the local processes at hand.
The advantage of process mining is that visualizations of the actual processes can be automatically generated from existing IT data. As a consequence, even 16 local processes could be analyzed in a very short time frame, without the need to hold local workshops and process mapping sessions in all these countries. Furthermore, the generated process models are accurate and complete (covering all variations) because they are derived directly from the data records in the operational SAP system.
For the process mining analysis, Capgemini used Fluxicon’s process mining software Disco.
The following steps were taken to perform the analysis:
- The logging information was extracted from ANDP’s SAP systems to be able to analyze the actual events that make up the process (such as ‘Create purchase order’ or ‘Approve invoice’) for up to 10,000 purchase order lines per country.
- Using Fluxicon’s process mining software Disco, these events were then analyzed by Capgemini and visualizations of the actual process flows were automatically derived to understand the local processes (see screenshot below). Furthermore, the generated process flows were compared with the expected flow to find exceptions from the norm (compliance).
- These analyses were performed for 16 different countries and the processes and resulting exceptions were investigated to understand the root causes and best practices behind the local deviations.
The analysis revealed what is really happening in the different local Procure-to-Pay processes.
The process manager could inspect and compare the actual process flows to get actionable insights on how to improve the process:
The biggest benefit of process mining is the great insight that you obtain for process improvement.
Elise Carre, Procurement Excellence Manager – Deco EMEA at AkzoNobel
Specifically, the following results were achieved:
- Management obtained insights into exceptions where the ‘First time right’ principle was not realized.
- Peer comparisons between countries helped to identify best practices that can be adopted on the corporate level.
- The direct insights into process improvements enabled the desired ‘value extraction’ from the P2P processes.
- Compliance control was realized to execute on corporate guidelines that must be followed.
A big advantage was also the simplicity and speed with which the processes could be extracted from the data.
I was truly impressed with the flexibility and ease of use of the process mining software from Fluxicon.
Martijn Arkesteijn, Information PMO Manager EMEA at Akzo Nobel Decorative Coatings
You can download the case study as a PDF document by clicking on the document preview on the left.
Are you curious how your own process can be visualized in Disco?
Download a free demo version of the software to test process mining based on your own data. Or contact us to discuss your questions and specific needs.