10 research outputs found

    Encoding conformance checking artefacts in SAT

    Get PDF
    Conformance checking strongly relies on the computation of artefacts, which enable reasoning on the relation between observed and modeled behavior. This paper shows how important conformance artefacts like alignments, anti-alignments or even multi-alignments, defined over the edit distance, can be computed by encoding the problem as a SAT instance. From a general perspective, the work advocates a unified family of techniques that compute these conformance artefacts in the same way. The prototype implementation of the techniques presented in this paper shows that it can deal with some of the current benchmarks, and has potential for the near future once optimizations similar to those in the literature are incorporated.
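    The artefacts above (alignments, anti-alignments, multi-alignments) are all defined over the edit distance between traces. A minimal sketch of that underlying metric, assuming traces are lists of activity labels (this illustrates the distance only, not the paper's SAT encoding):

    ```python
    def edit_distance(trace_a, trace_b):
        """Levenshtein distance between two traces (lists of activity labels)."""
        m, n = len(trace_a), len(trace_b)
        # dp[j] holds the distance between the processed prefix of trace_a
        # and trace_b[:j]; we keep a single row and update it in place.
        dp = list(range(n + 1))
        for i in range(1, m + 1):
            prev, dp[0] = dp[0], i
            for j in range(1, n + 1):
                cur = dp[j]
                if trace_a[i - 1] == trace_b[j - 1]:
                    dp[j] = prev  # match: no edit needed
                else:
                    # substitution, deletion, or insertion
                    dp[j] = 1 + min(prev, dp[j], dp[j - 1])
                prev = cur
        return dp[n]
    ```

    For example, `edit_distance(["a", "b", "c"], ["a", "x", "c"])` is 1 (one substitution). Encoding this recurrence over unknown model traces as Boolean constraints is what turns artefact computation into a SAT instance.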

    Using event logs for local correction of process models

    Get PDF
    During the life cycle of an Information System (IS), its actual behaviour may drift away from the original system model. However, for supporting the IS it is very important to have an up-to-date model that reflects the current system behaviour. Event logs of process-aware information systems record the execution history of the supported processes as more or less detailed lists of events, and virtually all modern ISs record such logs; this information can be used to correct the model. In this paper, we consider the problem of process model adjustment (correction) using the information from an event log. The input data for this task are the initial process model (a Petri net) and the event log. The result of the correction should be a new process model that reflects the real IS behaviour better than the initial model. The new model could also be built from scratch, for example with one of the known algorithms for automatic synthesis of a process model from an event log. However, this may drastically change the structure of the original model, making the new model difficult to compare with the initial one and hindering its understanding and analysis. It is therefore important to preserve the initial structure of the model as much as possible. In this paper, we propose a method for process model correction based on the “divide and conquer” principle. The initial model is decomposed into several fragments. For each fragment, its conformance to the event log is checked. Fragments that do not match the log are replaced by newly synthesized ones. The new model is then assembled from the fragments via transition fusion. The experiments demonstrate that our correction algorithm gives good results when used to correct local discrepancies. The paper presents the description of the algorithm, a formal justification of its correctness, and the results of experimental testing on artificial examples.
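    The decompose / check / replace / fuse loop described above can be sketched structurally as follows. The helpers here are toy stand-ins (in the paper the fragments are Petri-net fragments, the check is a conformance check against the log, and reassembly is transition fusion); only the control flow is illustrated:

    ```python
    def correct_model(fragments, log, conforms, resynthesize, fuse):
        """Divide-and-conquer repair: keep fragments that fit the log,
        re-synthesize only the ones that do not, then reassemble."""
        repaired = [frag if conforms(frag, log) else resynthesize(frag, log)
                    for frag in fragments]
        return fuse(repaired)

    # Toy stand-ins for illustration only: a fragment "conforms" when all of
    # its activities occur in the log, "re-synthesis" drops unseen activities,
    # and "fusion" concatenates the fragments.
    conforms = lambda frag, log: set(frag) <= set(log)
    resynthesize = lambda frag, log: [a for a in frag if a in log]
    fuse = lambda frags: [a for frag in frags for a in frag]

    corrected = correct_model([["a", "b"], ["c", "x"]], {"a", "b", "c"},
                              conforms, resynthesize, fuse)
    # The conforming fragment ["a", "b"] is kept unchanged, so the
    # original local structure survives the correction.
    ```

    The point of this shape is exactly the property claimed in the abstract: fragments that already match the log pass through untouched, so the correction stays local.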

    A unified approach for measuring precision and generalization based on anti-alignments

    Get PDF
    The holy grail in process mining is an algorithm that, given an event log, produces fitting, precise, properly generalizing and simple process models. While there is consensus on the existence of solid metrics for fitness and simplicity, current metrics for precision and generalization have important flaws, which hamper their applicability in a general setting. In this paper, a novel approach to measuring precision and generalization is presented, which relies on the notion of anti-alignments. An anti-alignment describes model traces that deviate highly from the observed behavior. We propose metrics for precision and generalization that resemble leave-one-out cross-validation: individual traces of the log are removed, and the computed anti-alignment assesses the model’s capability to describe precisely, or generalize, the observed behavior. The metrics have been implemented in ProM and tested on several examples.
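    The core definition can be illustrated by brute force: among an enumerated set of candidate model traces, an anti-alignment is the one that maximizes its minimal distance to the log. This sketch assumes a small finite candidate set and a simple proxy distance (the paper computes anti-alignments over the full model language, with the edit distance):

    ```python
    def anti_alignment(candidate_traces, log, dist):
        """Return the candidate model trace that is farthest from the log,
        where 'far' means maximizing the minimal distance to any log trace."""
        return max(candidate_traces, key=lambda m: min(dist(m, t) for t in log))

    # Simple proxy distance for illustration: positionwise mismatches
    # plus the length difference.
    dist = lambda a, b: abs(len(a) - len(b)) + sum(x != y for x, y in zip(a, b))

    log = [["a"], ["a", "b"]]
    candidates = [["a"], ["a", "b"], ["x", "y", "z"]]
    aa = anti_alignment(candidates, log, dist)
    # ["x", "y", "z"] is the anti-alignment: the other candidates
    # coincide with log traces, so their minimal distance is 0.
    ```

    The leave-one-out metrics then remove one log trace at a time and check how far the resulting anti-alignment strays: a precise model admits no trace far from the remaining log, while a generalizing model should still cover the removed trace.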
