
    Enhancing workflow-nets with data for trace completion

    The growing adoption of IT systems for modeling and executing (business) processes or services has pushed scientific investigation towards techniques and tools that support more complex forms of process analysis. Many of these, such as conformance checking, process alignment, mining, and enhancement, rely on complete observation of past (tracked and logged) executions. In many real cases, however, the lack of human or IT support on all the steps of process execution, as well as information hiding and abstraction of model and data, result in incomplete log information of both data and activities. This paper tackles the issue of automatically repairing traces with missing information by notably considering not only activities but also the data manipulated by them. Our technique recasts this problem as a reachability problem and provides an encoding in an action language that allows virtually any state-of-the-art planner to be used to return solutions.
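
    A minimal sketch of the underlying idea, assuming a toy workflow net and greedy in-order matching of observed events (the paper itself encodes the problem in an action language for off-the-shelf planners; the breadth-first reachability search and all activity names below are only illustrative):

```python
# Trace completion recast as reachability: find a run of the net that
# embeds the observed (incomplete) trace as a subsequence.
from collections import deque

# Toy net: transition -> (places consumed, places produced); names hypothetical.
TRANSITIONS = {
    "register": ({"start"}, {"p1"}),
    "check":    ({"p1"}, {"p2"}),
    "decide":   ({"p2"}, {"end"}),
}

def complete_trace(observed, initial=frozenset({"start"}), final=frozenset({"end"})):
    """Breadth-first search over markings; returns a shortest complete run."""
    queue = deque([(initial, 0, [])])  # (marking, #observed events matched, run)
    seen = set()
    while queue:
        marking, matched, run = queue.popleft()
        if marking == final and matched == len(observed):
            return run  # a repaired, complete trace
        if (marking, matched) in seen:
            continue
        seen.add((marking, matched))
        for t, (pre, post) in TRANSITIONS.items():
            if pre <= marking:  # transition enabled in the current marking
                step = matched < len(observed) and observed[matched] == t
                queue.append(((marking - pre) | post, matched + step, run + [t]))
    return None  # the observed trace cannot be embedded in any run

print(complete_trace(["register", "decide"]))  # ['register', 'check', 'decide']
```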

    Data Flow Correctness in Adaptive Workflow Systems

    Enterprises must be able to quickly adapt their business processes to react to changes in their environment. The required business agility is often hindered by the lack of flexibility in contemporary workflow systems. In response to this inflexibility, adaptive workflow systems have emerged, which enable the dynamic adaptation of running workflows. One of the most important challenges in this context is to avoid inconsistencies and errors. So far, approaches providing correctness criteria for dynamic workflow change have mainly focused on control flow correctness (e.g., avoidance of deadlocks). However, little attention has been paid to data flow correctness, even though this is crucial for any practical application of dynamic workflow change. Specifically, missing or inconsistent input data of workflow activities can lead to blocking or breakdown of the underlying workflow system. This paper deals with fundamental challenges related to data flow correctness. We revisit and discuss data flow correctness at different phases of the workflow life cycle (i.e., buildtime and runtime), and show how data flow correctness can be ensured in an efficient way when dynamically changing a workflow.
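
    The following sketch illustrates one such buildtime check in a deliberately simplified form (a linear workflow; the paper's criteria cover richer control flow): every data element an activity reads must be written by an earlier activity, otherwise the activity may block at runtime. All activity and data names are hypothetical.

```python
# Each entry: (activity, data elements read, data elements written).
workflow = [
    ("enter_order",  set(),                {"order"}),
    ("check_credit", {"order"},            {"rating"}),
    ("approve",      {"order", "rating"},  {"decision"}),
]

def missing_inputs(workflow):
    """Return activities whose read data is not supplied by any predecessor."""
    written, problems = set(), []
    for activity, reads, writes in workflow:
        unsupplied = reads - written
        if unsupplied:
            problems.append((activity, unsupplied))
        written |= writes
    return problems

print(missing_inputs(workflow))  # [] -> data flow is correct

# Simulating a dynamic change: deleting 'check_credit' breaks 'approve'.
changed = [workflow[0], workflow[2]]
print(missing_inputs(changed))   # [('approve', {'rating'})]
```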

    Adaptive Process Management in Cyber-Physical Domains

    The increasing application of process-oriented approaches in new, challenging cyber-physical domains beyond business computing (e.g., personalized healthcare, emergency management, factories of the future, home automation) has prompted a reconsideration of the level of flexibility and support required to manage complex processes in such domains. A cyber-physical domain is characterized by the presence of a cyber-physical system coordinating heterogeneous ICT components (PCs, smartphones, sensors, actuators) and involving real-world entities (humans, machines, agents, robots, etc.) that perform complex tasks in the “physical” real world to achieve a common goal. The physical world, however, is not entirely predictable, and processes enacted in cyber-physical domains must be robust to unexpected conditions and adaptable to unanticipated exceptions. This demands a more flexible approach to process design and enactment, recognizing that in real-world environments it is not adequate to assume that all possible recovery activities can be predefined for dealing with the exceptions that may ensue. In this chapter, we tackle this issue and propose a general approach, a concrete framework, and a process management system implementation, called SmartPM, for automatically adapting processes enacted in cyber-physical domains in the case of unanticipated exceptions and exogenous events. The adaptation mechanism provided by SmartPM is based on declarative task specifications, execution monitoring for detecting failures and context changes at run-time, and automated planning techniques to self-repair the running process, without requiring any specific adaptation policy or exception handler to be predefined at design-time.
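
    A hedged sketch of the adaptation loop the abstract describes (this is not SmartPM's actual implementation): compare the state expected after a task with the state sensed in the physical world, and hand any deviation to a planner. The state variables, values, and the stub planner are all hypothetical.

```python
def expected_state(state, task_postconditions):
    """State the process model expects after a task completes."""
    return {**state, **task_postconditions}

def adapt_if_needed(expected, sensed, plan_repair):
    """Trigger automated planning only when the physical world deviates."""
    deviation = {k: v for k, v in sensed.items() if expected.get(k) != v}
    if deviation:
        return plan_repair(deviation)   # planner synthesizes a recovery procedure
    return []                           # no exogenous event; continue as planned

# Hypothetical run: a robot was expected at area3 but was sensed at area1.
expected = expected_state({"battery": "ok"}, {"robot_at": "area3"})
sensed = {"battery": "ok", "robot_at": "area1"}
recovery = adapt_if_needed(expected, sensed,
                           lambda dev: [f"replan: fix {k}" for k in dev])
print(recovery)  # ['replan: fix robot_at']
```

    No adaptation policy is predefined here: the recovery behavior exists only as the planner's output, which is the design choice the chapter argues for.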

    Applying Process Mining Algorithms in the Context of Data Collection Scenarios

    Despite technological progress, paper-based questionnaires are still widely used to collect data in many application domains such as education, healthcare, or psychology. To reduce the enormous amount of work involved in collecting, evaluating, and analyzing this data, a system enabling process-driven data collection was developed. Based on generic tools, a process-driven approach for creating, processing, and analyzing questionnaires was realized, in which a questionnaire is defined in terms of a process model. Due to this characteristic, process mining algorithms may be applied to event logs created during the execution of questionnaires. Moreover, new data that might not have been used in the context of questionnaires before may be collected and analyzed to provide new insights regarding both the participant and the questionnaire. This thesis shows that process mining algorithms can be applied successfully to process-oriented questionnaires. Algorithms from the three forms of process mining, i.e., process discovery, conformance checking, and enhancement, are applied and used for various analyses. The analysis of certain properties of discovered process models leads to new ways of generating information from questionnaires. Different techniques for conformance checking and their applicability in the context of questionnaires are evaluated. Furthermore, new data that cannot be collected from paper-based questionnaires is used to enhance questionnaires and reveal new and meaningful relationships.
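
    As an illustration of the discovery step (not the thesis's actual tooling), the sketch below computes the directly-follows relation from a few hypothetical questionnaire traces, where each trace is one participant's path through the questions; this relation is the raw material of discovery algorithms such as the Alpha miner.

```python
from collections import Counter

# Hypothetical event log: one trace per participant, question names invented.
traces = [
    ["q1", "q2", "q3", "submit"],
    ["q1", "q3", "q2", "submit"],   # participant answered out of order
    ["q1", "q2", "submit"],         # q3 was skipped
]

# Count how often activity a is directly followed by activity b.
dfg = Counter((a, b) for t in traces for a, b in zip(t, t[1:]))
for (a, b), n in sorted(dfg.items()):
    print(f"{a} -> {b}: {n}")
```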

    Obstructions in Security-Aware Business Processes

    This Open Access book explores the dilemma-like stalemate between security and regulatory compliance in business processes on the one hand, and business continuity and governance on the other. The growing number of regulations, e.g., on information security, data protection, or privacy, implemented in increasingly digitized businesses can have an obstructive effect on the automated execution of business processes. Such security-related obstructions can particularly occur when an access-control-based implementation of regulations blocks the execution of business processes. By handling these obstructions, security in business processes is meant to be improved. To this end, the book presents a framework that allows the comprehensive analysis, detection, and handling of obstructions in a security-sensitive way. Methods based on common organizational security policies, process models, and logs are proposed. Petri net-based modeling and related semantic and language-based research, as well as the analysis of event data and machine learning methods, finally lead to the development of algorithms and experiments that can detect and resolve obstructions and are reproducible with the provided software.
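
    A deliberately simplified reading of what an obstruction is, sketched below under assumed role-based access control (this is not one of the book's algorithms, and the tasks, roles, and users are hypothetical): a task is obstructed when the policy leaves no authorized and available user to execute it.

```python
# task -> roles allowed to execute it, and user -> roles held.
policy = {"prepare": {"clerk"}, "approve": {"manager"}}
users = {"alice": {"clerk"}, "bob": {"clerk", "manager"}}
absent = {"bob"}  # e.g., a separation-of-duty or absence constraint rules bob out

def obstructed_tasks(policy, users, absent):
    """Tasks for which no available user holds an authorized role."""
    available = {u: roles for u, roles in users.items() if u not in absent}
    return [t for t, allowed in policy.items()
            if not any(allowed & roles for roles in available.values())]

print(obstructed_tasks(policy, users, absent))  # ['approve'] -> process blocks
```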

    Mining complete, precise and simple process models

    Process discovery algorithms are generally used to discover the underlying process that has been followed to achieve an objective. In general, these algorithms do not take into account any domain knowledge to derive process models, allowing them to be applied in a general manner. However, depending on the selected approach, different kinds of process models can be discovered, as each technique has its strengths and weaknesses, e.g., the expressiveness of the used notation. Hence, it is important to take into account the requirements of the domain when deciding which algorithm to use, as the correct assumptions can lead to richer process models.

    Among the different domains of application of process mining, several fields share an interesting requirement regarding the discovered process models. In security audits, discovered processes have to fulfill strict requirements. This means that the process model should reproduce as much behavior as possible; otherwise some violations may go undetected (replay fitness). On the other hand, in order to avoid false positives, process models should reproduce only the recorded behavior (precision). Finally, process models should be easily readable to better detect deviations (simplicity). Another clear example concerns the educational domain: in order to be of value for both teachers and learners, a discovered learning process should satisfy the aforementioned requirements. That is, to guarantee feasible and correct evaluations, teachers need access to all the activities performed by learners, so the learning process should be able to reproduce as much behavior as possible (replay fitness). Furthermore, the learning process should focus on the recorded behavior seen in the event log (precision), i.e., show only what the students did, and not what they might have done, while being easily interpretable by the teachers (simplicity).

    One of the previous requirements is related to the readability of process models: simplicity. In process mining, one of the identified challenges is the appropriate visualization of process models, i.e., presenting the results of process discovery in such a way that people actually gain insights about the process. Process models that are unnecessarily complex can hide the real behavior of the process rather than provide an intuition of what is really happening in an organization. However, achieving a good level of readability is not always straightforward, for instance, due to the representation used. Among the different approaches aimed at reducing the complexity of a process model, this PhD thesis focuses on two techniques. On the one hand, improving the readability of an already discovered process model through the inclusion of duplicate labels. On the other hand, the hierarchization of a process model, i.e., providing a well-known structure to the process model. The latter technique, however, requires domain knowledge to be taken into account, as different domains may impose different requirements when improving the readability of the process model. In other words, in order to improve the interpretability and understandability of a process model, the hierarchization has to be driven by the domain.

    To sum up, we can identify two main topics of interest in this PhD thesis. On the one hand, we are interested in retrieving process models that reproduce as much of the behavior recorded in the log as possible, without introducing unseen behavior. On the other hand, we try to reduce the complexity of the mined models in order to improve their readability. Hence, the aim of this PhD thesis is to discover process models considering replay fitness, precision, and simplicity, while paying special attention to retrieving highly interpretable process models.
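
    The fitness/precision trade-off can be made concrete with a coarse, language-based illustration (real metrics such as token-based replay fitness and escaping-edges precision are more refined; the model here is given directly as its set of accepted traces, and all labels are hypothetical):

```python
log = [("a", "b", "c"), ("a", "c", "b"), ("a", "b", "c")]
model_language = {("a", "b", "c"), ("a", "c", "b"), ("a", "b", "b", "c")}

# Replay fitness: how much of the recorded behavior the model can reproduce.
fitness = sum(t in model_language for t in log) / len(log)

# Precision: how much of the modeled behavior was actually recorded.
precision = sum(t in set(log) for t in model_language) / len(model_language)

print(f"fitness={fitness:.2f}, precision={precision:.2f}")
# fitness=1.00, precision=0.67 -> the model slightly overgeneralizes
```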

    What's next?: operational support for business process execution

    In the last decade, flexibility has become increasingly important in the area of business process management. Information systems that support the execution of a process are required to work in a dynamic environment that imposes changing demands on process execution. In academia and industry, a variety of paradigms and implementations have been developed to support flexibility. While these approaches address the industry's demands for flexibility, they also confront the user with many choices between different alternatives. As a consequence, methods to support users in selecting the best alternative during execution have become essential. In this thesis we introduce a formal framework for providing support to users based on historical evidence available in the execution log of the process. The thesis focuses on support by means of (1) recommendations, which provide the user with an ordered list of execution alternatives based on estimated utilities, and (2) predictions, which provide the user with general statistics for each execution alternative. Typically, estimations are not an average over all observations; rather, they are based on observations for "similar" situations. The main question is what similarity means in the context of business process execution. We introduce abstractions on execution traces to capture similarity between execution traces in the log. A trace abstraction considers certain trace characteristics rather than the exact trace. Traces that have identical abstraction values are said to be similar. The challenge is to determine those abstractions (characteristics) that are good predictors for the parameter to be estimated in the recommendation or prediction. We analyse the dependency between the values of an abstraction and the mean of the parameter to be estimated by means of regression analysis. With regression we obtain a set of abstractions that explain the parameter to be estimated. Dependencies do not only play a role in providing predictions and recommendations to instances at run-time; they are also essential for simulating the effect of changes in the environment on the processes, both locally and globally. We use stochastic simulation models to simulate the effect of changes in the environment, in particular changed probability distributions caused by recommendations. The novelty of these models is that they include dependencies between abstraction values and simulation parameters, which are estimated from log data. We demonstrate that these models give better approximations of reality than traditional models. A framework for offering operational support has been implemented in the context of the process mining framework ProM.
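
    A minimal sketch of the abstraction idea, with hypothetical activities and numbers (the thesis additionally uses regression analysis to select which abstraction predicts best; here a set abstraction that ignores order and frequency is fixed by hand):

```python
from statistics import mean

# Historical cases: (prefix of executed activities, observed remaining time in h).
history = [
    (["register", "check"], 5.0),
    (["check", "register"], 7.0),            # same set abstraction as above
    (["register", "check", "check"], 2.0),   # same set, different multiset
]

def abstract(prefix):
    """Set abstraction; a multiset or full-sequence abstraction would be finer."""
    return frozenset(prefix)

def predict(prefix, history):
    """Estimate from historical cases that are 'similar' to the running case."""
    similar = [y for p, y in history if abstract(p) == abstract(prefix)]
    return mean(similar) if similar else None

print(predict(["register", "check"], history))  # (5 + 7 + 2) / 3 ≈ 4.67
```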

    A Formal Semantics of Time Patterns for Process-aware Information Systems

    Companies increasingly adopt process-aware information systems (PAISs) to coordinate, monitor, and evolve their business processes. Although the proper handling of temporal constraints (e.g., deadlines and minimum time lags between activities) is crucial in many application domains, existing PAISs vary significantly in their support of the temporal perspective of business processes. Both the formal specification and the operational support of temporal constraints constitute fundamental challenges in this context. In previous work, we introduced time patterns to facilitate the comparison of PAISs with respect to their support of the temporal perspective, and provided empirical evidence for them. To avoid ambiguities and to ease both the use and the implementation of the time patterns, this paper formally defines their semantics. To enable the use of the patterns in a wide range of process modeling languages and their integration with existing PAISs, this semantics is expressed independently of a particular process meta model. Altogether, the presented pattern formalization will foster the integration of the temporal perspective in PAISs.
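
    To make the flavor of such a pattern semantics concrete, here is a hedged sketch of one pattern from the abstract's own example, a minimum time lag between two activities, checked over a timestamped trace and independent of any process meta model (activity names and times are hypothetical, and the paper's formal definitions are more general):

```python
def satisfies_min_lag(trace, source, target, min_lag):
    """trace: list of (activity, timestamp). Each target occurrence must be at
    least min_lag after the most recent preceding source occurrence."""
    last_source = None
    for activity, ts in trace:
        if activity == source:
            last_source = ts
        elif activity == target and last_source is not None:
            if ts - last_source < min_lag:
                return False  # temporal constraint violated
            last_source = None
    return True

# Two order/ship pairs; the second ships only 1 hour after ordering.
trace = [("order", 0), ("ship", 48), ("order", 50), ("ship", 51)]
print(satisfies_min_lag(trace, "order", "ship", min_lag=24))  # False
```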