1,026 research outputs found

    Passages in Graphs

    Directed graphs can be partitioned into so-called passages. A passage P is a set of edges such that any two edges sharing the same initial vertex or the same terminal vertex are either both inside P or both outside of P. Passages were first identified in the context of process mining, where they are used to successfully decompose process discovery and conformance checking problems. In this article, we examine the properties of passages. We show that passages are closed under set operators such as union, intersection, and difference. Moreover, any passage is composed of so-called minimal passages. These properties can be exploited when decomposing graph-based analysis and computation problems. Comment: 8 pages
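The defining condition can be turned into an algorithm directly. A minimal sketch (not from the paper): treat "shares an initial vertex" and "shares a terminal vertex" as a relation on edges and take its connected components with a union-find; the components are exactly the minimal passages.

```python
# Minimal passages of a directed graph via union-find over edges.
# Two edges that share a source vertex or share a target vertex must
# end up in the same passage, so minimal passages are the connected
# components of that relation.
from collections import defaultdict

def minimal_passages(edges):
    """edges: list of (u, v) pairs; returns a list of passages (sets of edges)."""
    parent = list(range(len(edges)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    by_source, by_target = defaultdict(list), defaultdict(list)
    for i, (u, v) in enumerate(edges):
        by_source[u].append(i)
        by_target[v].append(i)
    # Edges grouped by a shared endpoint belong to one passage.
    for group in list(by_source.values()) + list(by_target.values()):
        for i in group[1:]:
            union(group[0], i)

    classes = defaultdict(set)
    for i, e in enumerate(edges):
        classes[find(i)].add(e)
    return list(classes.values())
```

Because the classes come from a union-find, the closure properties from the article (any passage is a union of minimal passages) fall out immediately.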

    Heuristic Approaches for Generating Local Process Models through Log Projections

    Local Process Model (LPM) discovery is focused on the mining of a set of process models where each model describes the behavior represented in the event log only partially, i.e., subsets of the possible events are taken into account to create so-called local process models. Often such smaller models provide valuable insights into the behavior of the process, especially when no adequate and comprehensible single overall process model exists that is able to describe the traces of the process from start to end. The practical application of LPM discovery is, however, hindered by computational issues in the case of logs with many activities (problems may already occur when there are more than 17 unique activities). In this paper, we explore three heuristics to discover subsets of activities that lead to useful log projections, with the goal of speeding up LPM discovery considerably while still finding high-quality LPMs. We found that a Markov clustering approach to creating projection sets results in the largest improvement of execution time, with the discovered LPMs still being better than those obtained with randomly generated activity sets of the same size. Another heuristic, based on log entropy, yields a more moderate speedup, but enables the discovery of higher-quality LPMs. The third heuristic, based on relative information gain, shows unstable performance: for some data sets the speedup and LPM quality are higher than with the log-entropy-based method, while for other data sets there is no speedup at all. Comment: paper accepted and to appear in the proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM), special session on Process Mining, part of the Symposium Series on Computational Intelligence (SSCI)
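To make the projection idea concrete, here is an illustrative sketch only (the paper's entropy heuristic is more involved; scoring by the entropy of directly-follows pairs is an assumption of this sketch): project the log onto an activity subset, then score the projection, where lower entropy suggests more structured behaviour.

```python
# Sketch: project an event log onto an activity subset and score the
# projection by the Shannon entropy of its directly-follows pairs.
import math
from collections import Counter

def project(log, activities):
    """Keep only events whose activity is in the chosen subset."""
    return [[a for a in trace if a in activities] for trace in log]

def df_entropy(log):
    """Shannon entropy of the directly-follows distribution of a log."""
    pairs = Counter((t[i], t[i + 1]) for t in log for i in range(len(t) - 1))
    total = sum(pairs.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in pairs.values())

log = [["a", "b", "x", "c"], ["a", "b", "c"], ["a", "x", "b", "c"]]
print(df_entropy(project(log, {"a", "b", "c"})))  # structured subset: lower entropy
print(df_entropy(project(log, {"a", "x", "c"})))  # noisier subset: higher entropy
```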

    Decomposed process discovery and conformance checking

    Decomposed process discovery and decomposed conformance checking are the corresponding variants of the two monolithic fundamental problems in process mining (van der Aalst 2011): automated process discovery, which considers the problem of discovering a process model from an event log (Leemans 2009), and conformance checking, which addresses the problem of analyzing the adequacy of a process model with respect to observed behavior (Munoz-Gama 2009). The term "decomposed" in the two definitions mainly describes the way the two problems are tackled operationally: their computational complexity is faced by splitting the initial problem into smaller problems that can be solved individually and often more efficiently.

    Repairing Alignments of Process Models

    Process mining represents a collection of data-driven techniques that support the analysis, understanding, and improvement of business processes. A core branch of process mining is conformance checking, i.e., assessing to what extent a business process model conforms to observed business process execution data. Alignments are the de facto standard instrument to compute such conformance statistics. However, computing alignments is a combinatorial problem and hence extremely costly. At the same time, many process models share a similar structure and/or a great deal of behavior. For collections of such models, computing alignments from scratch is inefficient, since large parts of the alignments are likely to be the same. This paper presents a technique that exploits process model similarity and repairs existing alignments by updating those parts that do not fit a given process model. The technique effectively reduces the size of the combinatorial alignment problem, and hence decreases computation time significantly. Moreover, the potential loss of optimality is limited and stays within acceptable bounds.
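For readers new to alignments, a minimal sketch of the underlying notion (this is not the repair technique itself): when the model is simplified to a single allowed trace, an optimal alignment reduces to an edit-distance computation, with ">>" marking a move that has no counterpart on the other side. Real alignment algorithms search over the model's full language, which is what makes the problem combinatorial.

```python
# Align an observed trace against one model trace by dynamic programming.
# ">>" marks a log-only or model-only move; synchronous moves cost 0.
def align(trace, model_trace):
    n, m = len(trace), len(model_trace)
    # cost[i][j] = cheapest alignment of trace[:i] with model_trace[:j]
    cost = [[i + j for j in range(m + 1)] for i in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = cost[i-1][j-1] if trace[i-1] == model_trace[j-1] else float("inf")
            cost[i][j] = min(sync, cost[i-1][j] + 1, cost[i][j-1] + 1)
    # Backtrack to recover the moves.
    moves, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and trace[i-1] == model_trace[j-1]
                and cost[i][j] == cost[i-1][j-1]):
            moves.append((trace[i-1], model_trace[j-1])); i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i-1][j] + 1:
            moves.append((trace[i-1], ">>")); i -= 1   # move in log only
        else:
            moves.append((">>", model_trace[j-1])); j -= 1  # move in model only
    return cost[n][m], list(reversed(moves))
```

The repair idea from the paper is then intuitive: if two models share most of their structure, most of these move sequences carry over, and only the non-fitting parts need to be recomputed.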

    Decomposing conformance checking on Petri nets with data

    Process mining techniques relate observed behavior to modeled behavior, e.g., the automatic discovery of a Petri net based on an event log. Process mining is not limited to process discovery and also includes conformance checking. Conformance checking techniques are used for evaluating the quality of discovered process models and to diagnose deviations from some normative model (e.g., to check compliance). Existing conformance checking approaches typically focus on the control flow, thus being unable to diagnose deviations concerning data. This paper proposes a technique to check the conformance of data-aware process models. We use so-called "data Petri nets" to model data variables, guards, and read/write actions. Additional perspectives such as resource allocation and time constraints can be encoded in terms of variables. The data-aware conformance checking problem may be very time-consuming and sometimes even intractable when there are many transitions and data variables. Therefore, we propose a technique to decompose large data-aware conformance checking problems into smaller problems that can be solved more efficiently. We provide a general correctness result showing that decomposition does not influence the outcome of conformance checking. Moreover, two decomposition strategies are presented. The approach is supported through ProM plug-ins, and experimental results show that significant performance improvements are indeed possible.
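A hedged sketch of the data-aware idea (the names and the sequential "model" structure below are illustrative simplifications, not the paper's data Petri net formalism): each model step carries a guard over case variables, and replay checks both the control flow and the guards against the values written by the events.

```python
# Replay a trace of (activity, writes) events against a sequential model
# whose steps carry guards over case variables; report both control-flow
# and data deviations.
def replay(model, trace):
    """model: list of (activity, guard) with guard(state) -> bool.
    trace: list of (activity, writes_dict). Returns a list of deviations."""
    state, deviations = {}, []
    for (m_act, guard), (l_act, writes) in zip(model, trace):
        if m_act != l_act:
            deviations.append(f"control-flow: expected {m_act}, observed {l_act}")
        state.update(writes)              # apply the event's write actions
        if not guard(state):
            deviations.append(f"data: guard of {m_act} violated in state {state}")
    return deviations

model = [
    ("register", lambda s: True),
    ("check",    lambda s: s.get("amount", 0) <= 1000),  # guard on a variable
    ("approve",  lambda s: s.get("ok") is True),
]
trace = [("register", {"amount": 5000}), ("check", {}), ("approve", {"ok": True})]
print(replay(model, trace))  # reports the violated guard on "check"
```

The decomposition result in the paper then says that such checks can be run on fragments of the net without changing the overall verdict.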

    Conformance checking: A state-of-the-art literature review

    Conformance checking is a set of process mining functions that compare process instances with a given process model. It identifies deviations between the process instances' actual behaviour ("as-is") and their modelled behaviour ("to-be"). Especially in the context of analyzing compliance in organizations, it is currently gaining momentum -- e.g. for auditors. Researchers have proposed a variety of conformance checking techniques that are geared towards certain process model notations or specific applications such as process model evaluation. This article reviews a set of conformance checking techniques described in 37 scholarly publications. It classifies the techniques along the dimensions "modelling language", "algorithm type", "quality metric", and "perspective" using a concept matrix, so that the techniques can be better accessed by practitioners and researchers. The matrix highlights the dimensions where extant research concentrates and where blind spots exist. For instance, process miners often use declarative process modelling languages, but their application in conformance checking is rare. Likewise, process mining can investigate process roles or process metrics such as duration, but conformance checking techniques focus narrowly on analyzing the control flow. Future research may construct techniques that support these neglected approaches to conformance checking.

    Recommender System Based on Process Mining

    Automation of repetitive tasks can be achieved with Robotic Process Automation (RPA), using scripts that encode fine-grained interactions with software applications on desktops and the web. Several applications make this automation possible, and such tools allow users to record desktop activity, including metadata, at a very fine granularity: clicking buttons, typing text, selecting text, and changing focus. Automating these processes requires connectors that link them to the appropriate applications. Currently, users choose these connectors manually rather than having them linked to processes automatically. In this thesis, we propose a method for recommending the top-k suitable connectors for each process based on event logs. The method uses process discovery to create process models of the training processes with identified connectors, and then computes conformance checking results between those process models and test event logs (with unknown connectors). We then select the top-k maximum values of the conformance checking results and observe that the suitable connector appears among the top-3 recommended connectors with 80% accuracy. The solution is configurable by changing the parameters and the methods used for process discovery and conformance checking.
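The recommendation loop can be sketched as follows (the thesis uses real process discovery and conformance checking; approximating fitness with directly-follows footprints, as below, is an assumption of this sketch):

```python
# Rank candidate connectors by how well their training behaviour
# explains a test log, using a directly-follows footprint as a crude
# stand-in for alignment-based fitness.
def df_pairs(log):
    return {(t[i], t[i + 1]) for t in log for i in range(len(t) - 1)}

def fitness(model_log, test_log):
    """Fraction of the test log's directly-follows pairs seen in the model's."""
    allowed, observed = df_pairs(model_log), df_pairs(test_log)
    return len(observed & allowed) / len(observed) if observed else 1.0

def recommend(train_logs, test_log, k=3):
    """train_logs: {connector_name: event_log}; returns top-k connector names."""
    scores = {name: fitness(log, test_log) for name, log in train_logs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical connector names and logs, purely for illustration.
train = {
    "mail":  [["open", "read", "reply"], ["open", "read", "archive"]],
    "excel": [["open", "edit", "save"], ["open", "save"]],
}
print(recommend(train, [["open", "read", "reply"]], k=1))
```

Swapping `fitness` for a real conformance metric and `train_logs` for discovered process models recovers the structure of the proposed method.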

    Conformance Checking of Large Process Model: An Approach based on Decomposition

    Conformance checking is the problem of pinpointing deviations between how processes are executed in reality and how processes are expected to be performed according to norms, regulations, and protocols. The executions are recorded in event logs, while the expected behavior is encoded in a process model. The complexity of the problem is exponential in the size of the model, which means the problem does not scale when models become very large. To keep the problem tractable, one can decompose the model into parts for which conformance checking is carried out separately.

    Time-based α+ miner for modelling business processes using temporal pattern

    Business processes are implemented in organizations. When a business process runs, it generates an event log. One type of event log is the double-timestamp event log, which records the start and completion time of each activity executed in the business process and is closely related to temporal patterns. In this paper, seven types of temporal pattern between activities are presented as an extended version of the relations used in the double-timestamp event log. Since the event log is not always executed sequentially, the temporal patterns are used to divide the event log into several small groups, so that both sequential and parallel behavior in the business process can be mined. Both the temporal patterns and the Time-based α+ Miner algorithm are used to mine the process model, determine sequential and parallel relations, and evaluate the process model using a fitness value. This paper focuses on the advantages of the temporal patterns implemented in the Time-based α+ Miner algorithm for mining business processes. The results also clearly show that the proposed method yields better results than the original α+ Miner algorithm.
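The idea of temporal patterns between double-timestamped activities can be sketched with interval relations in the style of Allen's interval algebra (the paper's seven patterns and their exact names may differ; this sketch assumes activity a starts no later than activity b):

```python
# Classify the temporal pattern between two activities from a
# double-timestamp log, where each activity is a (start, complete)
# interval with start < complete.
def temporal_pattern(a, b):
    """Relation of interval a to interval b, assuming a starts no later than b."""
    (s1, c1), (s2, c2) = a, b
    if c1 < s2:
        return "before"        # a completes before b starts
    if c1 == s2:
        return "meets"         # a completes exactly when b starts
    if s1 == s2 and c1 == c2:
        return "equals"
    if s1 == s2:
        return "starts"        # same start, different completion
    if c1 == c2:
        return "finishes"      # different start, same completion
    if s1 < s2 and c1 > c2:
        return "contains"      # a fully encloses b
    return "overlaps"          # partial overlap
```

Patterns such as "before" and "meets" suggest a sequential relation between the activities, while "overlaps", "contains", "starts", "finishes", and "equals" indicate that the two activities ran in parallel, which is how such patterns let a miner separate sequential from parallel behavior.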