A recursive paradigm for aligning observed behavior of large structured process models
The alignment of observed and modeled behavior is a crucial problem in process mining, since it opens the door to conformance checking and enhancement of process models. State-of-the-art techniques for computing alignments rely on a full exploration of the combination of the model state space and the observed behavior (an event log), which hampers their applicability to large instances. This paper presents a fresh view of the alignment problem: the computation of alignments is cast as the resolution of Integer Linear Programming (ILP) models, where the user can decide the granularity of the alignment steps. Moreover, a novel recursive strategy is used to split the problem into small pieces, exponentially reducing the complexity of the ILP models to be solved. The contributions of this paper represent a promising alternative for fighting the inherent complexity of computing alignments for large instances.
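The core notion can be illustrated without the paper's ILP encoding: at its simplest, an alignment between an observed trace and one model trace is a cheapest sequence of synchronous, log-only, and model-only moves, computable by edit-distance-style dynamic programming. The sketch below is a minimal stand-in for that idea, not the recursive ILP technique described above.

```python
def align(log_trace, model_trace):
    """Cheapest alignment cost between one log trace and one model trace.

    Synchronous moves (matching events) cost 0; log-only and model-only
    moves cost 1 each. This is a toy stand-in for the ILP formulation.
    """
    n, m = len(log_trace), len(model_trace)
    # cost[i][j] = cheapest alignment of first i log events with first j model steps
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i                    # only log-only moves remain
    for j in range(1, m + 1):
        cost[0][j] = j                    # only model-only moves remain
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = cost[i - 1][j - 1] if log_trace[i - 1] == model_trace[j - 1] else float("inf")
            cost[i][j] = min(sync,                 # synchronous move, cost 0
                             cost[i - 1][j] + 1,   # log-only move
                             cost[i][j - 1] + 1)   # model-only move
    return cost[n][m]

print(align(list("abcd"), list("abd")))  # one log-only move -> 1
```

The full problem is harder because the model is a net with many traces, not a single sequence; the paper's recursive splitting attacks exactly that state-space blow-up.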
A Framework for Online Conformance Checking
Conformance checking – a branch of process mining – focuses on establishing to what extent actual executions of a process are in line with the expected behavior of a reference model. Current conformance checking techniques only allow for a posteriori analysis: the amount of (non-)conformant behavior is quantified after the process instance has completed. In this paper we propose a framework for online conformance checking: not only do we quantify (non-)conformant behavior as the execution is running, we also restrict the computation to constant time complexity per analyzed event, thus enabling the online analysis of a stream of events. The framework is instantiated with ideas from the theory of regions and state similarity. An implementation is available in ProM, and promising results have been obtained.
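The constant-time-per-event idea can be sketched concretely. The toy below (an assumption for illustration, not the paper's region-theory instantiation) precomputes a prefix automaton from the model's traces; replaying a stream is then a single dictionary lookup per event.

```python
def build_automaton(model_traces):
    """Prefix automaton: map (state, activity) -> next state, built offline."""
    delta, states = {}, {(): 0}
    for trace in model_traces:
        prefix = ()
        for act in trace:
            nxt = prefix + (act,)
            states.setdefault(nxt, len(states))
            delta[(states[prefix], act)] = states[nxt]
            prefix = nxt
    return delta

def replay(delta, stream):
    """Count non-conformant events; O(1) work per event."""
    state, deviations = 0, 0
    for act in stream:
        if (state, act) in delta:       # conformant move: one hash lookup
            state = delta[(state, act)]
        else:
            deviations += 1             # deviating event; state is kept
    return deviations

delta = build_automaton([["a", "b", "c"], ["a", "c"]])
print(replay(delta, ["a", "x", "b", "c"]))  # one deviation: 'x'
```

All expensive work happens once, offline, when the automaton is built; the per-event cost on the stream stays constant, which is the property the framework above demands.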
A unified approach for measuring precision and generalization based on anti-alignments
The holy grail in process mining is an algorithm that, given an event log, produces fitting, precise, properly generalizing and simple process models. While there is consensus on the existence of solid metrics for fitness and simplicity, current metrics for precision and generalization have important flaws, which hamper their applicability in a general setting. In this paper, a novel approach to measuring precision and generalization is presented, which relies on the notion of anti-alignments. An anti-alignment describes model traces that deviate highly from the observed behavior. We propose metrics for precision and generalization that resemble leave-one-out cross-validation: individual traces of the log are removed, and the computed anti-alignment assesses the model's capability to precisely describe or generalize the observed behavior. The metrics have been implemented in ProM and tested on several examples.
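A brute-force toy version of the anti-alignment notion (an assumption for illustration; the paper computes these on real process models, not trace lists): among the model's traces, find the one whose minimal distance to any log trace is largest. A precise model admits no such highly deviating trace.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance between two traces (tuples of activities)."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[n][m]

def anti_alignment(model_traces, log):
    # Model trace maximizing the minimal distance to every observed trace.
    return max(model_traces, key=lambda t: min(edit_distance(t, l) for l in log))

model = [("a", "b"), ("a", "c"), ("x", "y", "z")]
log = [("a", "b"), ("a", "c")]
print(anti_alignment(model, log))  # ('x', 'y', 'z'): evidence of over-generalization
```

In the leave-one-out variant described above, one log trace is removed before computing the anti-alignment, so the metric also rewards models that generalize to the held-out behavior rather than only memorizing the log.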
Artifact-Driven Monitoring for Human-Centric Business Processes with Smart Devices: Assessment and Improvement
Monitoring human-centric business processes requires human operators to manually notify a BPMS when activities start or end. Even though smart devices, such as smartphones and tablets, are nowadays adopted to ease the transmission of these notifications, such devices usually play a passive role, acting as a simple mediator between the BPMS and human operators.
In this paper, we adopt the Internet of Things (IoT) paradigm by envisioning artifact-driven process monitoring, where every object interacting with a business process instance can be coupled with a smart device that actively detects when process activities start or end. To support artifact-driven monitoring, we propose an ontology-based approach to assess and improve the monitorability of a process model.
Incorporating Negative Information in Process Discovery
The discovery of a formal process model from event logs describing real process executions is a challenging problem that has been studied from several angles. Most contributions treat the extraction of a model as a semi-supervised problem where only positive information is available. In this paper we present a fresh look at process discovery in which negative information can also be taken into account. This feature may be crucial for deriving process models that are not only simple, fitting and precise, but also good at generalizing the right behavior underlying an event log. The technique is based on numerical abstract domains and Satisfiability Modulo Theories (SMT), and can be combined with any process discovery technique. As an example, we show in detail how to supervise a recent technique that uses numerical abstract domains. Experiments performed with our prototype implementation show the effectiveness of the techniques and their ability to improve the results produced by the selected discovery techniques.
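The role of negative information can be shown with a deliberately tiny toy (an assumption for illustration; the paper works with numerical abstract domains and SMT, not trace sets): here a "model" is just the set of traces it accepts, and negative traces rule out candidates that would otherwise over-generalize.

```python
def supervise(candidates, positive_log, negative_log):
    """Keep only candidate models that accept all positive traces
    and reject all negative ones. Toy stand-in for SMT-based supervision."""
    def fits(model):
        return all(t in model for t in positive_log)
    def excludes_negatives(model):
        return not any(t in model for t in negative_log)
    return [m for m in candidates if fits(m) and excludes_negatives(m)]

loose = {("a", "b"), ("a", "c"), ("b", "a")}   # over-generalizing candidate
tight = {("a", "b"), ("a", "c")}
positives = [("a", "b")]
negatives = [("b", "a")]
print(supervise([loose, tight], positives, negatives))  # keeps only `tight`
```

Without the negative trace ("b", "a"), both candidates fit the log equally well; the negative information is what separates the right generalization from the wrong one.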
Relabelling LTS for Petri Net Synthesis via Solving Separation Problems
Petri net synthesis deals with finding an unlabelled Petri net whose reachability graph is isomorphic to a given, usually finite, labelled transition system (LTS). If a synthesis problem has no solution, we use label splitting: we relabel edges until the LTS becomes synthesisable. We obtain an unlabelled Petri net and a relabelling function, which together form a labelled Petri net with the original, intended behaviour. By carefully selecting the edges to relabel, we hope to keep the alphabets of the LTS and the constructed Petri net as small as possible. Even approximation algorithms that do not yield an optimal relabelling are hard to come by. Using region theory, we develop a polynomial heuristic based on two kinds of separation problems: these either demand distinct Petri net markings for distinct LTS states, or a correspondence between the existence of an edge in the LTS and the activation of a transition under the state's marking. If a separation problem is not solvable, relabelling edges in the LTS becomes necessary. We show efficient ways to choose those edges.
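The mechanics of label splitting itself are simple once the edges to relabel are chosen; the hard part, which the region-based heuristic above addresses, is choosing them. The sketch below takes that choice as input (an assumption) and shows only the relabelling step and the resulting relabelling function.

```python
def split_labels(edges, to_split):
    """Give each chosen edge a fresh copy of its label.

    edges:    list of (source, label, target) triples of the LTS
    to_split: set of edge indices selected for relabelling (the hard part,
              done here by assumption rather than by separation problems)
    Returns the relabelled edge list and the map new label -> original label.
    """
    counters = {}
    relabelling = {}
    new_edges = []
    for i, (s, a, t) in enumerate(edges):
        if i in to_split:
            counters[a] = counters.get(a, 0) + 1
            fresh = f"{a}_{counters[a]}"       # fresh label, e.g. a -> a_1
            relabelling[fresh] = a
            new_edges.append((s, fresh, t))
        else:
            new_edges.append((s, a, t))
    return new_edges, relabelling

edges = [(0, "a", 1), (0, "a", 2), (1, "b", 2)]
new_edges, rel = split_labels(edges, {1})
print(new_edges)  # [(0, 'a', 1), (0, 'a_1', 2), (1, 'b', 2)]
```

Composing the synthesised unlabelled net with `rel` recovers the intended labelled behaviour; each split enlarges the alphabet by one, which is why minimising the number of splits matters.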
Fast incremental conformance analysis for interactive process discovery
Interactive process discovery allows users to specify domain knowledge while discovering process models with the help of event logs. Typically, the coherence of an event log and a process model is computed using conformance analysis. Many state-of-the-art conformance techniques emphasize the correctness of their results and hence can be slow, which is impractical and undesirable in an interactive process discovery setting, especially when the process models are complex. In this paper, we present a framework (and its application) to compute conformance fast enough to guide the user during interactive process discovery. The proposed framework exploits the underlying techniques used for interactive process discovery to incrementally update the conformance results. We trade conformance accuracy for performance; however, the user is also provided with diagnostic information, which can be useful for decision making in an interactive setting. The results show that our approach can be considerably faster than traditional approaches and hence better suited for an interactive setting.