This is Flux Capacitor, the company weblog of Fluxicon.

How process mining helped to replace a legacy system

Most of us process mining enthusiasts have some idea of how much potential discovery techniques must have in real-life situations. There is something magical about real process flows emerging from log data that was recorded for completely different purposes.
But seeing other people successfully apply these techniques is even more amazing.

David Truffet, now working with YAWL consulting, impressed me with his story about how he and his colleagues had used ProM to replace a legacy system that nobody understood anymore.

But the best part is that he allowed me to share the conversation we had on 12 September here with you in this post. Thank you, David!

Legacy meters

Anne: Can you describe in which context you have used process mining and how it helped you?

David: I’ve used ProM while working for a large local government authority.

They were in the process of replacing a number of in-house legacy systems with a new workflow management system.
One of the problems they faced was that their existing processes and system were not well documented, and the development team who had built it had left a number of years earlier.

Myself, a Technical Writer (Geoff Purchase), and two application developers were tasked with documenting 4 out of 16 processes, and their interactions with the 3 largest back office systems, to about 80% accuracy at a given level of detail.
The two application developers, even though they had a couple of years' experience adding to the system, found it extremely difficult to follow the code, and estimated many weeks of work, with little confidence in the accuracy.

Anne: How were you personally involved, and how did you and your colleagues proceed?

David: I identified 4 different production logs to which the application had written information over the preceding 12 months. These logs included both user interactions and SOAP API calls to other systems, and I wrote a few scripts to combine them and produce simple MXML files. [Comment: MXML is an event log format that can be analyzed with ProM]
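David's actual scripts are not shown, but the general shape of such a conversion can be sketched. The following is a minimal, hypothetical Python example of writing combined log events out as MXML, grouping events into one ProcessInstance (case) per web session id; the field names `session`, `activity`, `timestamp`, and `user` are illustrative assumptions, not the real log layout.

```python
# Hypothetical sketch: turn a list of combined log events into a minimal
# MXML document that ProM can read. One web session id = one case.
from xml.sax.saxutils import escape

def events_to_mxml(events, process_id="legacy-process"):
    """events: list of dicts with keys 'session', 'activity',
    'timestamp' (ISO string), 'user'. Returns an MXML string."""
    # Group events into cases keyed by session id.
    cases = {}
    for e in events:
        cases.setdefault(e["session"], []).append(e)

    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             "<WorkflowLog>",
             '  <Process id="%s">' % escape(process_id)]
    for case_id, case_events in cases.items():
        lines.append('    <ProcessInstance id="%s">' % escape(case_id))
        # Order events within each case chronologically.
        for e in sorted(case_events, key=lambda ev: ev["timestamp"]):
            lines += [
                "      <AuditTrailEntry>",
                "        <WorkflowModelElement>%s</WorkflowModelElement>"
                % escape(e["activity"]),
                "        <EventType>complete</EventType>",
                "        <Timestamp>%s</Timestamp>" % e["timestamp"],
                "        <Originator>%s</Originator>" % escape(e["user"]),
                "      </AuditTrailEntry>",
            ]
        lines.append("    </ProcessInstance>")
    lines += ["  </Process>", "</WorkflowLog>"]
    return "\n".join(lines)
```

With a log like this, each session becomes a case whose audit trail entries ProM's discovery plugins can mine directly.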

By week two, I had about 31 process flows covering all 16 business processes.
We then took 3 of the more complex flows and requested the system testers to run through their test scripts in system test while the technical writer traced their actions through the printed out process flows from production logs.

We then re-ran ProM over the system test logs and confirmed that the expected diagrams were produced.

On the other end, we gave the same process flow diagrams to the application developers, and they spent 3 days doing a code review for a single flow. The developers’ feedback was that the mined process diagrams matched the application code with a surprising degree of accuracy, and that the process flows helped them understand how the application actually worked, whereas before they had simply got lost in the code.

Once this was done, the Technical Writer proceeded to document the rest of the processes and the application developers validated the remaining flows via code reviews.

Anne: How would you summarize the outcome of the project?

The original scope was to document 4 out of 16 processes to about 80% accuracy. Instead, we documented 16 out of 16, with the Technical Writer (Geoff) estimating 98% accuracy at the required level of detail, in about half the time that had been estimated for documenting 4 of the processes.

I found that ProM, together with a good set of logs, made my work significantly easier and faster, and gave the results a higher degree of certainty.

Anne: Wow, this is an impressive story. Thank you so much for sharing. A last question: Now that you work as a YAWL consultant, do you think that mining YAWL logs is still interesting, although the processes in this setting are much better understood?

David: Mining processes from the YAWL engine is in itself not that interesting; as you say, the process (in the static sense) is ‘well’ understood.

But from a business process performance (BAM) and process improvement perspective it can be very important to find where the bottlenecks in the process are, as this can save a business significant amounts of real money over time.

It’s also nice that ProM can export a mined process to a YAWL workflow model, producing a workflow specification that can be run on a workflow engine.

Of course there would be some work in customising the look and feel and system integration but for an organisation looking for a replacement of their legacy systems, it might be quite useful to have a mock-up of their process running on a workflow engine.

I basically see process mining and YAWL solving very different problems but that their solutions are simply different pieces of the same jigsaw puzzle and could be very complementary to each other.

Comments (5)

i don’t believe it

Very interesting post Anne, thank you!!!

@Mark: why don’t you believe it? It sounds possible to me!

I don’t know if David Truffet will be reading this but I have a question: did you know of process mining and/or ProM before you started the project or did you encounter it while searching for a better way to discover the legacy system’s process?

Joos

Hi Joos,

In answer to your question, it was probably more of the latter: “searching for a better way to discover the legacy system’s process”.

The project (over a year ago now) was my first opportunity to test ProM’s capabilities.

Hi Mark,

The application logged each user’s sessions on the website and the pages they visited, as well as the business process that the user was progressing through.

A second log contained every SOAP API call made by the system to external systems, along with the user id, web session id, and time-stamp.

These two logs contained sufficient information for what we were required to document, while another log contained ‘milestone’ events within the process such as ‘PAYMENT (complete)’ again tied back to the users web session id.

Although the logs contained every web page post parameter, and every SOAP request/response pair, we were not required to drill down to that level of detail.
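The correlation step described above can be sketched as follows. This is a hypothetical Python example, not David's actual scripts: it merges a page-view log and a SOAP call log on the shared web session id and orders each session's events chronologically. The tuple layout `(session_id, timestamp, event_name)` is an assumption for illustration.

```python
# Hypothetical sketch: correlate two log sources by web session id,
# producing one chronological event stream per session (i.e. per case).
from datetime import datetime

def merge_logs(page_log, soap_log):
    """page_log, soap_log: iterables of (session_id, iso_timestamp, name).
    Returns {session_id: [(datetime, 'source:name'), ...]} sorted by time."""
    merged = {}
    for source, records in (("page", page_log), ("soap", soap_log)):
        for session_id, ts, name in records:
            merged.setdefault(session_id, []).append(
                (datetime.fromisoformat(ts), "%s:%s" % (source, name)))
    for events in merged.values():
        events.sort(key=lambda ev: ev[0])  # interleave sources by time-stamp
    return merged
```

Once the streams are interleaved per session like this, writing them out as MXML cases is straightforward.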

Scripts were written to combine the logs into basic MXML files. I then analyzed these in ProM via the “DWS mining plugin”.

One of the reasons for choosing this plugin is its ability to split a large complex process into a number of smaller and simpler special cases.

Once we were happy with the process graphs being produced for one process, I simply ran the scripts to extract each of the remaining business processes to its own MXML.

I then showed both the technical writer (Geoff) and the developers how to interpret these and view them in ProM.

This gave them access to a graphical representation of all 16 processes far sooner than otherwise possible.

The effect for me was that it took little or no more effort to analyse all 16 processes than it did to analyse one.

For the technical writer and developers, once they understood how to read the first process diagram, the remaining 15 had a very similar layout and shared many of the same steps, so they were able to progress through the later processes at a faster rate, without depending on each other.

Removing the technical writer’s dependency on the developers first completing their review of each flow allowed him to finish documenting the processes before the code reviews were completed.


Hello Spencer,

Thanks for your feedback, simplest way to contact me is to visit http://www.yawlconsulting.com/contact_us.html or email: (my first name) at yawl consulting dot com

David.
