
    Log-based Evaluation of Label Splits for Process Models

    Process mining techniques aim to extract insights into processes from event logs. One of the challenges in process mining is identifying interesting and meaningful event labels that contribute to a better understanding of the process. Our application area is mining data from smart homes for the elderly, where the ultimate goal is to signal deviations from usual behavior and provide timely recommendations in order to extend the period of independent living. Extracting individual process models showing user behavior is an important instrument in achieving this goal. However, the interpretation of sensor data at an appropriate abstraction level is not straightforward. For example, a motion sensor in a bedroom can be triggered by tossing and turning in bed or by getting up. We try to derive the actual activity depending on the context (time, previous events, etc.). In this paper we introduce the notion of label refinements, which links more abstract event descriptions with their more refined counterparts. We present a statistical evaluation method to determine the usefulness of a label refinement for a given event log from a process perspective. Based on data from smart homes, we show how our statistical evaluation method for label refinements can be used in practice. Our method was able to select two label refinements out of a set of candidate label refinements that both had a positive effect on model precision.
    Comment: Paper accepted at the 20th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, to appear in Procedia Computer Science.
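    The core idea of a label refinement can be illustrated with a minimal sketch: an abstract sensor label is split into more specific activity labels based on context such as the time of day. The event structure, time windows, and label names below are hypothetical and stand in for the paper's statistical evaluation, which is not reproduced here.

```python
from datetime import datetime

# Hypothetical illustration of a label refinement: the abstract label
# "bedroom_motion" is split into more specific labels based on the event's
# time context. The rules and labels are assumptions for illustration only.
def refine_label(event):
    if event["label"] != "bedroom_motion":
        return event["label"]
    hour = event["timestamp"].hour
    # Night-time motion is more likely tossing and turning in bed;
    # later motion is more likely getting up.
    if 0 <= hour < 6:
        return "bedroom_motion_in_bed"
    return "bedroom_motion_getting_up"

log = [
    {"label": "bedroom_motion", "timestamp": datetime(2024, 1, 1, 2, 30)},
    {"label": "bedroom_motion", "timestamp": datetime(2024, 1, 1, 7, 15)},
]
print([refine_label(e) for e in log])
# ['bedroom_motion_in_bed', 'bedroom_motion_getting_up']
```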

    Clustering-Based Predictive Process Monitoring

    Business process enactment is generally supported by information systems that record data about process executions, which can be extracted as event logs. Predictive process monitoring is concerned with exploiting such event logs to predict how running (uncompleted) cases will unfold up to their completion. In this paper, we propose a predictive process monitoring framework for estimating the probability that a given predicate will be fulfilled upon completion of a running case. The predicate can be, for example, a temporal logic constraint or a time constraint, or any predicate that can be evaluated over a completed trace. The framework takes into account both the sequence of events observed in the current trace and the data attributes associated with these events. The prediction problem is approached in two phases. First, prefixes of previous traces are clustered according to control-flow information. Second, a classifier is built for each cluster using event data to discriminate between fulfillments and violations. At runtime, a prediction is made on a running case by mapping it to a cluster and applying the corresponding classifier. The framework has been implemented in the ProM toolset and validated on a log pertaining to the treatment of cancer patients in a large hospital.
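    A minimal sketch of the two-phase idea, assuming a frequency-based control-flow encoding and scikit-learn models (the actual framework is implemented in ProM and also exploits event data attributes in the second phase): prefixes are first clustered, a classifier is trained per cluster, and a running case is routed to its cluster's classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Toy alphabet of activities and synthetic labelled prefixes; the labels say
# whether the predicate held once the originating case completed.
ACTIVITIES = ["register", "triage", "exam", "surgery", "discharge"]

def encode(prefix):
    """Simple control-flow encoding: activity frequency vector of a prefix."""
    return np.array([prefix.count(a) for a in ACTIVITIES], dtype=float)

training = [
    (["register", "triage"], 1),
    (["register", "triage", "exam"], 1),
    (["register", "exam"], 0),
    (["register", "exam", "surgery"], 0),
    (["register", "triage", "exam", "surgery"], 1),
    (["register", "exam", "exam"], 0),
]
X = np.vstack([encode(p) for p, _ in training])
y = np.array([label for _, label in training])

# Phase 1: cluster prefixes on control-flow features.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Phase 2: one classifier per cluster (the paper uses event data attributes
# at this stage; the same features are reused here for brevity).
classifiers = {}
for c in set(clusterer.labels_):
    idx = clusterer.labels_ == c
    classifiers[c] = RandomForestClassifier(random_state=0).fit(X[idx], y[idx])

# Runtime: map a running case to its cluster, then apply that classifier.
running = encode(["register", "triage"]).reshape(1, -1)
cluster = clusterer.predict(running)[0]
print("cluster:", cluster, "class probabilities:",
      classifiers[cluster].predict_proba(running)[0])
```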

    Learning Hybrid Process Models From Events: Process Discovery Without Faking Confidence

    Process discovery techniques return process models that are either formal (precisely describing the possible behaviors) or informal (merely a "picture" not allowing for any form of formal reasoning). Formal models are able to classify traces (i.e., sequences of events) as fitting or non-fitting. Most process mining approaches described in the literature produce such models. This is in stark contrast with the over 25 available commercial process mining tools, which only discover informal process models that remain deliberately vague on the precise set of possible traces. There are two main reasons why vendors resort to such models: scalability and simplicity. In this paper, we propose to combine the best of both worlds by discovering hybrid process models that have formal and informal elements. As a proof of concept we present a discovery technique based on hybrid Petri nets. These models allow for formal reasoning, but also reveal information that cannot be captured in mainstream formal models. A novel discovery algorithm returning hybrid Petri nets has been implemented in ProM and has been applied to several real-life event logs. The results clearly demonstrate the advantages of remaining "vague" when there is not enough "evidence" in the data or standard modeling constructs do not "fit". Moreover, the approach is scalable enough to be incorporated in industrial-strength process mining tools.
    Comment: 25 pages, 12 figures.
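    The "enough evidence" intuition can be sketched as follows: directly-follows relations observed frequently enough become formal ("sure") arcs, while weakly supported relations are kept only as informal ("unsure") arcs. The log, threshold, and arc classification below are illustrative assumptions, not the paper's hybrid Petri net discovery algorithm.

```python
from collections import Counter

# Count directly-follows relations in a tiny hypothetical log, then split
# them into "sure" arcs (enough evidence to commit to a formal construct)
# and "unsure" arcs (kept only as informal, vague edges).
log = [
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
    ["a", "b", "c", "d"],
    ["a", "e", "d"],
]

df = Counter()
for trace in log:
    for x, y in zip(trace, trace[1:]):
        df[(x, y)] += 1

SURE_THRESHOLD = 2  # assumed evidence threshold
sure = {rel for rel, n in df.items() if n >= SURE_THRESHOLD}
unsure = {rel for rel, n in df.items() if n < SURE_THRESHOLD}

print("sure arcs:", sorted(sure))
print("unsure arcs:", sorted(unsure))
```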

    Conformance Checking Based on Multi-Perspective Declarative Process Models

    Process mining is a family of techniques that aim at analyzing business process execution data recorded in event logs. Conformance checking is a branch of this discipline embracing approaches for verifying whether the behavior of a process, as recorded in a log, is in line with some expected behavior provided in the form of a process model. The majority of these approaches require the input process model to be procedural (e.g., a Petri net). However, in turbulent environments, characterized by high variability, the process behavior is less stable and predictable. In these environments, procedural process models are less suitable for describing a business process. Declarative specifications, working under an open-world assumption, allow the modeler to express several possible execution paths as a compact set of constraints. Any process execution that does not contradict these constraints is allowed. One of the open challenges in the context of conformance checking with declarative models is the capability of supporting multi-perspective specifications. In this paper, we close this gap by providing a framework for conformance checking based on MP-Declare, a multi-perspective version of the declarative process modeling language Declare. The approach has been implemented in the process mining tool ProM and has been evaluated in three real-life case studies.
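    A minimal sketch of what a multi-perspective constraint check involves, assuming a hypothetical "response" constraint with a data condition (amount) and a time condition (48 hours). The event format below is an assumption for illustration; it is neither MP-Declare syntax nor the paper's ProM implementation.

```python
from datetime import datetime, timedelta

# Check a multi-perspective "response" constraint over one trace:
# every "order" with amount > 100 must eventually be followed by an
# "approve" within 48 hours. Returns indices of violating events.
def check_response(trace):
    violations = []
    for i, e in enumerate(trace):
        if e["activity"] == "order" and e["amount"] > 100:
            fulfilled = any(
                f["activity"] == "approve"
                and f["time"] - e["time"] <= timedelta(hours=48)
                for f in trace[i + 1:]
            )
            if not fulfilled:
                violations.append(i)
    return violations

t0 = datetime(2024, 1, 1, 9, 0)
trace = [
    {"activity": "order", "amount": 250, "time": t0},
    {"activity": "approve", "amount": 0, "time": t0 + timedelta(hours=5)},
    {"activity": "order", "amount": 300, "time": t0 + timedelta(hours=10)},
]
print(check_response(trace))  # [2]: the second large order is never approved
```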

    Visual analysis of sensor logs in smart spaces: Activities vs. situations

    Models of human habits in smart spaces can be expressed using a multitude of representations, whose readability influences the possibility of their being validated by human experts. Our research is focused on developing a visual analysis pipeline (service) that allows human habits to be graphically visualized, starting from the sensor log of a smart space. The basic assumption is to apply techniques borrowed from the area of business process automation and mining to a version of the sensor log preprocessed to translate raw sensor measurements into human actions. The proposed pipeline is employed to automatically extract models to be reused for ambient intelligence. In this paper, we present a user evaluation aimed at demonstrating the effectiveness of the approach by comparing it with a relevant state-of-the-art visual tool, SITUVIS.
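    The preprocessing step the pipeline relies on, translating raw sensor measurements into human-action events, can be sketched with a simple rule table; the sensor names, rules, and action vocabulary below are hypothetical and the paper's pipeline is richer.

```python
# Map (sensor, value) readings to human actions before process mining.
# All names are illustrative assumptions.
RULES = {
    ("kitchen_pir", "ON"): "enter_kitchen",
    ("fridge_contact", "OPEN"): "open_fridge",
    ("stove_power", "ON"): "start_cooking",
    ("stove_power", "OFF"): "stop_cooking",
}

def to_actions(sensor_log):
    actions = []
    for ts, sensor, value in sensor_log:
        action = RULES.get((sensor, value))
        if action is not None:
            actions.append((ts, action))
    return actions

raw = [
    ("08:01", "kitchen_pir", "ON"),
    ("08:02", "fridge_contact", "OPEN"),
    ("08:05", "stove_power", "ON"),
    ("08:25", "stove_power", "OFF"),
]
print(to_actions(raw))
```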

    Specification-Driven Predictive Business Process Monitoring

    Predictive analysis in business process monitoring aims at forecasting future information about a running business process. The prediction is typically made based on a model extracted from historical process execution logs (event logs). In practice, different business domains might require different kinds of predictions. Hence, it is important to have a means of properly specifying the desired prediction tasks and a mechanism to deal with these various prediction tasks. Although there have been many studies in this area, they mostly focus on a specific prediction task. This work introduces a language for specifying the desired prediction tasks that allows us to express various kinds of prediction tasks. It also presents a mechanism for automatically creating the corresponding prediction model based on the given specification. Unlike previous studies, which focus on a particular prediction task, our approach handles various prediction tasks based on the given specification. We also provide an implementation of the approach, which is used to conduct experiments on real-life event logs.
    Comment: This article significantly extends the previous work in https://doi.org/10.1007/978-3-319-91704-7_7, which has a technical report in arXiv:1804.00617. This article and the previous work have a coauthor in common.
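    A minimal sketch of the general mechanism, under simplifying assumptions: the prediction task is given as a predicate over a completed trace, training labels for prefixes are derived automatically from historical traces, and a standard classifier is trained on encoded prefixes. The encoding, predicate format, and scikit-learn model are illustrative and are not the paper's specification language.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

ACTIVITIES = ["submit", "check", "approve", "reject"]

def encode(prefix):
    # Frequency encoding of a prefix (an assumed, simple feature scheme).
    return [prefix.count(a) for a in ACTIVITIES]

# Task specification: "will the case end with an approval?"
task = lambda trace: trace[-1] == "approve"

historical = [
    ["submit", "check", "approve"],
    ["submit", "check", "reject"],
    ["submit", "check", "check", "approve"],
    ["submit", "reject"],
]

# Derive (prefix encoding, label) pairs from every proper prefix: the label
# is the task predicate evaluated on the completed trace.
X, y = [], []
for trace in historical:
    label = int(task(trace))
    for k in range(1, len(trace)):
        X.append(encode(trace[:k]))
        y.append(label)

model = LogisticRegression().fit(np.array(X), np.array(y))
print(model.predict_proba([encode(["submit", "check"])])[0])
```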

    Discovering learning processes using inductive miner: A case study with learning management systems (LMSs)

    Background: In process mining with educational data, different algorithms are used to discover models, most notably the Alpha Miner, the Heuristic Miner and the Evolutionary Tree Miner. In this work we propose applying a new algorithm to educational data, the Inductive Miner. Method: We used interaction data from 101 university students in an undergraduate course run on the Moodle 2.0 platform. After preprocessing, process mining was applied to 21,629 events to discover the models generated by the different algorithms and to compare their fitness, precision, simplicity and generalization measures. Results: In the tests performed on our dataset, the Inductive Miner algorithm obtains the best results, especially for the fitness value, the most relevant criterion for model discovery. Moreover, when the different metrics are weighted, the Inductive Miner still yields the best overall measure. Conclusions: Applying the Inductive Miner to educational data is a new application that, in addition to obtaining better results than the other algorithms on our dataset, provides models that are valid and interpretable in educational terms.
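    A comparable workflow can be sketched with pm4py's simplified API (pm4py 2.x; function names may differ in other releases, and the paper does not prescribe this tooling). The log path is a placeholder, and only fitness and precision are shown; simplicity and generalization would be computed analogously.

```python
import pm4py

# "moodle_events.xes" is a placeholder path for an exported event log.
log = pm4py.read_xes("moodle_events.xes")

# Discover a model with the Inductive Miner.
net, im, fm = pm4py.discover_petri_net_inductive(log)

# Compare quality dimensions via token-based replay.
fitness = pm4py.fitness_token_based_replay(log, net, im, fm)
precision = pm4py.precision_token_based_replay(log, net, im, fm)
print(fitness, precision)
```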

    Explaining Violation Traces with Finite State Natural Language Generation Models

    An essential element of any verification technique is identifying and communicating to the user the system behaviour that leads to a deviation from the expected behaviour. Such behaviours are typically made available as long traces of system actions, which would benefit from a natural language explanation, especially in the context of business-logic-level specifications. In this paper we present a natural language generation model which can be used to explain such traces. A key idea is that the explanation language is a controlled natural language (CNL) that is, formally speaking, a regular language, susceptible to transformations that can be expressed with finite-state machinery. At the same time it admits various forms of abstraction and simplification, which contribute to the naturalness of the explanations communicated to the user.
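    The finite-state flavour of such explanations can be sketched by collapsing repeated actions (an abstraction expressible with a finite-state transducer) and verbalising each action from a template; the templates and trace below are hypothetical and far simpler than the paper's CNL.

```python
import itertools

# Map each action to a phrase template; unknown actions fall back to
# their raw name. All names are illustrative assumptions.
TEMPLATES = {
    "login_failed": "a login attempt failed",
    "transfer": "a transfer was made",
    "limit_exceeded": "the daily limit was exceeded",
}

def explain(trace):
    phrases = []
    # Collapse runs of identical actions, counting the repetitions.
    for action, run in itertools.groupby(trace):
        count = len(list(run))
        phrase = TEMPLATES.get(action, action)
        if count > 1:
            phrase += f" ({count} times)"
        phrases.append(phrase)
    sentence = ", then ".join(phrases)
    return sentence[0].upper() + sentence[1:] + "."

trace = ["login_failed", "login_failed", "transfer", "limit_exceeded"]
print(explain(trace))
# A login attempt failed (2 times), then a transfer was made, then the
# daily limit was exceeded.
```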

    Exploring Interpretability for Predictive Process Analytics

    Modern predictive analytics underpinned by machine learning techniques has become a key enabler to the automation of data-driven decision making. In the context of business process management, predictive analytics has been applied to making predictions about the future state of an ongoing business process instance, for example, when the process instance will complete and what the outcome upon completion will be. Machine learning models can be trained on event log data recording historical process executions to build the underlying predictive models. Multiple techniques have been proposed so far which encode the information available in an event log and construct the input features required to train a predictive model. While accuracy has been a dominant criterion in the choice of techniques, they are often applied as black boxes when building predictive models. In this paper, we derive explanations using interpretable machine learning techniques to compare and contrast the suitability of multiple predictive models of high accuracy. The explanations allow us to gain an understanding of the underlying reasons for a prediction and highlight scenarios where accuracy alone may not be sufficient in assessing the suitability of the techniques used to encode event log data into the features used by a predictive model. Findings from this study motivate the need to incorporate interpretability in predictive process analytics.
    Comment: 15 pages, 7 figures.
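    A generic sketch of comparing two similarly accurate models by their explanations, using scikit-learn's permutation importance as a stand-in for the interpretable machine learning techniques used in the paper; the features and data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for encoded event-log features and an outcome label.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two models of comparable accuracy may still rely on features differently;
# the importance profiles expose those underlying reasons.
for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    acc = model.score(X_te, y_te)
    imp = permutation_importance(model, X_te, y_te, random_state=0)
    print(type(model).__name__, round(acc, 3), np.round(imp.importances_mean, 3))
```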