
    Incremental Predictive Process Monitoring: How to Deal with the Variability of Real Environments

    A characteristic of existing predictive process monitoring techniques is to first construct a predictive model based on past process executions, and then use it to predict the future of new ongoing cases, without the possibility of updating it with new cases when they complete their execution. This can make predictive process monitoring too rigid to deal with the variability of processes working in real environments that continuously evolve and/or exhibit new variant behaviors over time. As a solution to this problem, we propose the use of algorithms that allow the incremental construction of the predictive model. These incremental learning algorithms update the model whenever new cases become available, so that the predictive model evolves over time to fit the current circumstances. The algorithms have been implemented using different case encoding strategies and evaluated on a number of real and synthetic datasets. The results provide first evidence of the potential of incremental learning strategies for predictive process monitoring in real environments, and of the impact of different case encoding strategies in this setting.
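
    The incremental approach lends itself to online learners. Below is a minimal sketch, assuming a scikit-learn-style classifier with partial_fit and a simple frequency encoding of cases; the activity alphabet, encoding, and outcome labels are illustrative, not the paper's exact setup.

```python
# Minimal sketch of incremental predictive process monitoring:
# the model is updated whenever a case completes, rather than trained once.
import numpy as np
from sklearn.linear_model import SGDClassifier

ACTIVITIES = ["register", "check", "decide", "notify"]  # hypothetical alphabet

def encode_case(trace):
    """Frequency encoding: count of each activity in the trace."""
    return np.array([trace.count(a) for a in ACTIVITIES], dtype=float)

model = SGDClassifier(loss="log_loss")   # supports incremental updates
CLASSES = np.array([0, 1])               # 0 = negative, 1 = positive outcome

def on_case_completed(trace, outcome):
    """Update the predictive model as soon as a case finishes."""
    X = encode_case(trace).reshape(1, -1)
    model.partial_fit(X, np.array([outcome]), classes=CLASSES)

def predict_running_case(prefix):
    """Estimate the outcome probability for an ongoing (uncompleted) case."""
    X = encode_case(prefix).reshape(1, -1)
    return model.predict_proba(X)[0, 1]
```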

    Clustering-Based Predictive Process Monitoring

    Business process enactment is generally supported by information systems that record data about process executions, which can be extracted as event logs. Predictive process monitoring is concerned with exploiting such event logs to predict how running (uncompleted) cases will unfold up to their completion. In this paper, we propose a predictive process monitoring framework for estimating the probability that a given predicate will be fulfilled upon completion of a running case. The predicate can be, for example, a temporal logic constraint or a time constraint, or any predicate that can be evaluated over a completed trace. The framework takes into account both the sequence of events observed in the current trace and the data attributes associated with these events. The prediction problem is approached in two phases. First, prefixes of previous traces are clustered according to control-flow information. Second, a classifier is built for each cluster using event data to discriminate between fulfillments and violations. At runtime, a prediction is made on a running case by mapping it to a cluster and applying the corresponding classifier. The framework has been implemented in the ProM toolset and validated on a log pertaining to the treatment of cancer patients in a large hospital.
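
    The two-phase scheme can be sketched as follows, assuming k-means over a control-flow encoding and one random forest per cluster; the encoding, cluster count, and classifier choice are illustrative stand-ins for the framework's configurable components (data_features and labels are assumed to be NumPy arrays).

```python
# Sketch of clustering-based predictive monitoring: cluster prefixes on
# control flow, then train one data-aware classifier per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def control_flow_encoding(prefix, alphabet):
    return np.array([prefix.count(a) for a in alphabet], dtype=float)

def train(prefixes, data_features, labels, alphabet, n_clusters=3):
    # Phase 1: cluster prefixes by control-flow information only.
    X_cf = np.array([control_flow_encoding(p, alphabet) for p in prefixes])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X_cf)
    # Phase 2: per cluster, learn fulfillment vs. violation from event data.
    classifiers = {
        c: RandomForestClassifier().fit(data_features[km.labels_ == c],
                                        labels[km.labels_ == c])
        for c in range(n_clusters)
    }
    return km, classifiers

def predict(km, classifiers, prefix, payload, alphabet):
    # At runtime: map the running case to a cluster, apply its classifier.
    c = km.predict(control_flow_encoding(prefix, alphabet).reshape(1, -1))[0]
    return classifiers[c].predict_proba(payload.reshape(1, -1))[0, 1]
```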

    Explain, Adapt and Retrain: How to improve the accuracy of a PPM classifier through different explanation styles

    Recent papers have introduced a novel approach to explain why a Predictive Process Monitoring (PPM) model for outcome-oriented predictions provides wrong predictions. Moreover, they have shown how to exploit the explanations, obtained using state-of-the-art post-hoc explainers, to identify the most common features that induce a predictor to make mistakes in a semi-automated way, and, in turn, to reduce the impact of those features and increase the accuracy of the predictive model. This work starts from the assumption that frequent control flow patterns in event logs may represent important features that characterize, and therefore explain, a certain prediction. Therefore, in this paper, we (i) employ a novel encoding able to leverage DECLARE constraints in Predictive Process Monitoring and compare the effectiveness of this encoding with state-of-the-art Predictive Process Monitoring encodings, in particular for the task of outcome-oriented predictions; (ii) introduce a completely automated pipeline for the identification of the most common features inducing a predictor to make mistakes; and (iii) show the effectiveness of the proposed pipeline in increasing the accuracy of the predictive model by validating it on different real-life datasets.
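
    A DECLARE-based encoding turns each constraint into a boolean feature of a trace. The sketch below uses two common templates (existence and response) with hypothetical activity names; the paper's actual encoding may differ.

```python
# Illustrative DECLARE-style boolean encoding: one feature per constraint,
# recording whether the trace satisfies it.
def existence(a):
    """existence(a): activity a occurs at least once."""
    return lambda trace: a in trace

def response(a, b):
    """response(a, b): every occurrence of a is eventually followed by b."""
    def check(trace):
        return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)
    return check

CONSTRAINTS = {
    "existence(check)": existence("check"),
    "response(check, notify)": response("check", "notify"),
}

def declare_encode(trace):
    return [int(check(trace)) for check in CONSTRAINTS.values()]

# declare_encode(["register", "check", "notify"]) -> [1, 1]
```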

    Genetic algorithms for hyperparameter optimization in predictive business process monitoring

    Predictive business process monitoring exploits event logs to predict how ongoing (uncompleted) traces will unfold up to their completion. A predictive process monitoring framework collects a range of techniques that allow users to get accurate predictions about the achievement of a goal for a given ongoing trace. These techniques can be combined and their parameters configured in different framework instances. Unfortunately, a unique framework instance that is general enough to outperform others for every dataset, goal or type of prediction is elusive. Thus, the selection and configuration of a framework instance needs to be done for a given dataset. This paper presents a predictive process monitoring framework armed with a hyperparameter optimization method to select a suitable framework instance for a given dataset.
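
    A genetic algorithm for this selection problem can be sketched as follows; the configuration space and fitness function are placeholders, and in practice the fitness would be the validation accuracy of the instantiated framework on the dataset at hand.

```python
# Minimal genetic-algorithm loop over framework hyperparameters.
import random

SPACE = {  # hypothetical configuration space of a framework instance
    "n_clusters": [2, 3, 5, 8],
    "classifier": ["decision_tree", "random_forest"],
    "prefix_length": [5, 10, 15],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind):
    child = dict(ind)
    k = random.choice(list(SPACE))
    child[k] = random.choice(SPACE[k])
    return child

def evolve(fitness, pop_size=20, generations=10):
    """fitness(ind) -> e.g. validation accuracy of that framework instance."""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                 # keep the best half
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```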

    Process Discovery on Deviant Traces and Other Stranger Things

    As the need to understand and formalise business processes into a model has grown over the last years, the process discovery research field has gained more and more importance, developing two different classes of approaches to model representation: procedural and declarative. Orthogonally to this classification, the vast majority of works envisage the discovery task as a one-class supervised learning process guided by the traces that are recorded into an input log. In this work instead, we focus on declarative processes and embrace the less popular view of process discovery as a binary supervised learning task, where the input log reports both examples of the normal system execution and traces representing a “stranger” behaviour according to the domain semantics. We therefore investigate how the valuable information carried by these two sets can be extracted and formalised into a model that is “optimal” according to user-defined goals. Our approach, namely NegDis, is evaluated w.r.t. other relevant works in this field, and shows promising results regarding both the performance and the quality of the obtained solution.
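
    The binary view can be illustrated with a simple filter over candidate declarative constraints: keep those that accept every normal trace and rank them by how many deviant traces they reject. This is only a sketch of the idea, not NegDis's actual optimization machinery.

```python
# Discovery as binary supervised learning over candidate constraints.
def discover(candidates, positives, negatives):
    """candidates: dict name -> predicate (True = trace satisfies constraint)."""
    scores = {}
    for name, check in candidates.items():
        if all(check(t) for t in positives):      # never reject normal runs
            rejected = sum(not check(t) for t in negatives)
            if rejected:                          # discriminates deviants
                scores[name] = rejected
    # Prefer constraints that exclude the most deviant traces.
    return sorted(scores, key=scores.get, reverse=True)
```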

    Outcome-Oriented Prescriptive Process Monitoring Based on Temporal Logic Patterns

    Prescriptive Process Monitoring systems recommend, during the execution of a business process, interventions that, if followed, prevent a negative outcome of the process. Such interventions have to be reliable, that is, they have to guarantee the achievement of the desired outcome or performance, and they have to be flexible, that is, they have to avoid overturning the normal process execution or forcing the execution of a given activity. Most of the existing Prescriptive Process Monitoring solutions, however, while performing well in terms of recommendation reliability, provide the users with very specific (sequences of) activities that have to be executed without caring about the feasibility of these recommendations. To address this issue, we propose a new Outcome-Oriented Prescriptive Process Monitoring system recommending temporal relations between activities that have to be guaranteed during the process execution in order to achieve a desired outcome. This softens the mandatory execution of an activity at a given point in time, thus leaving more freedom to the user in deciding the interventions to put in place. Our approach defines these temporal relations with Linear Temporal Logic over finite traces (LTLf) patterns that are used as features to describe the historical process data recorded in an event log by the information systems supporting the execution of the process. The encoded log is used to train a Machine Learning classifier to learn a mapping between the temporal patterns and the outcome of a process execution. The classifier is then queried at runtime to return as recommendations the most salient temporal patterns to be satisfied to maximize the likelihood of a certain outcome for an input ongoing process execution. The proposed system is assessed using a pool of 22 real-life event logs that have already been used as a benchmark in the Process Mining community.
    Comment: 38 pages, 6 figures, 8 tables
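
    The pipeline can be pictured as: encode each trace as a vector of LTLf pattern satisfactions, train a classifier on the outcome, and at runtime recommend high-importance patterns the running case has not yet satisfied. The pattern checkers and activity names below are simplified stand-ins, not the paper's exact feature set.

```python
# Sketch of LTLf-pattern features feeding an outcome classifier whose most
# salient unsatisfied patterns become runtime recommendations.
from sklearn.ensemble import RandomForestClassifier

def eventually(a):
    """LTLf F(a): activity a occurs somewhere in the trace."""
    return lambda trace: a in trace

def precedence(a, b):
    """b may occur only if a has occurred before it."""
    return lambda trace: all(a in trace[:i]
                             for i, e in enumerate(trace) if e == b)

PATTERNS = {
    "F(pay)": eventually("pay"),
    "precedence(approve, pay)": precedence("approve", "pay"),
}

def encode(trace):
    return [int(check(trace)) for check in PATTERNS.values()]

def recommend(model, prefix, top_k=2):
    """Highest-importance patterns not yet satisfied by the running prefix."""
    ranked = sorted(zip(PATTERNS.items(), model.feature_importances_),
                    key=lambda x: -x[1])
    return [name for (name, check), _ in ranked if not check(prefix)][:top_k]

# Usage (log and outcomes are hypothetical training data):
# model = RandomForestClassifier().fit([encode(t) for t in log], outcomes)
```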

    Life Monza: project description and actions’ updating

    The introduction of Low Emission Zones, urban areas subject to road traffic restrictions in order to ensure compliance with the air pollutant limit values set by the European Directive on ambient air quality (2008/50/EC), is a common and well-established action in the administrative government of cities. The impacts on air quality improvement are widely analysed, whereas the effects and benefits concerning noise have not been addressed in a comprehensive manner. As a consequence, the definition, the criteria for the analysis, and the management methods of a Noise Low Emission Zone are not yet clearly expressed and shared. The LIFE MONZA project (Methodologies fOr Noise low emission Zones introduction And management - LIFE15 ENV/IT/000586) addresses these issues. The first objective of the project, co-funded by the European Commission, is to introduce an easily replicable method for the identification and the management of the Noise Low Emission Zone, an urban area subject to traffic restrictions, whose impacts and benefits regarding noise issues will be analysed and tested in the pilot area of the city of Monza, located in Northern Italy. Background conditions, the structure and objectives of the project, and the progress of its actions will be discussed in this article.

    Seasonal Biotic Processes Vary the Carbon Turnover by Up To One Order of Magnitude in Wetlands

    Soil Organic Carbon (SOC) turnover τ in wetlands and the corresponding governing processes are still poorly represented in numerical models. τ is a proxy for the carbon storage potential in each SOC pool and the C fluxes within the whole ecosystem; however, it has not been comprehensively quantified in wetlands globally. Here, we quantify the turnover time τ of various SOC pools and the governing biotic and abiotic processes in global wetlands using a comprehensively tested process-based biogeochemical model. Globally, we found that τ ranges between 1 and 1,000 years and is controlled by anaerobic (in 78% of the global wetland area) and aerobic (15%) respiration, and by abiotic destabilization from soil minerals (5%). τ in the remaining 2% of wetlands is controlled by denitrification, sulfur reduction, and leaching below the subsoil. τ can vary by up to one order of magnitude in temperate, continental, and polar regions due to seasonal temperature and can shift from being aerobically controlled to anaerobically controlled. Our findings of seasonal variability in SOC turnover suggest that wetlands are susceptible to climate-induced shifts in seasonality, thus requiring better accounting of seasonal fluctuations at geographic scales to estimate C exchanges between land and atmosphere.
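
    For reference, a standard steady-state convention relates a pool's turnover time to its stock and outgoing flux (this is the textbook definition, not necessarily the exact formulation inside the paper's biogeochemical model):

```latex
% Turnover time of a SOC pool at steady state: stock divided by outflux.
% For first-order decay, dC/dt = I - kC, so at steady state \tau = 1/k.
\tau = \frac{C_{\text{pool}}}{F_{\text{out}}} = \frac{1}{k}
```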