
    Self-Adaptive Role-Based Access Control for Business Processes

    © 2017 IEEE. We present an approach for dynamically reconfiguring the role-based access control (RBAC) of information systems running business processes, to protect them against insider threats. The new approach uses business process execution traces and stochastic model checking to establish confidence intervals for key measurable attributes of user behaviour, and thus to identify and adaptively demote users who misuse their access permissions maliciously or accidentally. We implemented and evaluated the approach and its policy specification formalism for a real IT support business process, showing their ability to express and apply a broad range of self-adaptive RBAC policies.
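
    A minimal sketch of the general idea (not the paper's policy specification formalism or its model-checking machinery): estimate a confidence interval for a per-user misuse rate from execution traces and demote any user whose interval lies entirely above a policy threshold. The field names, threshold, and choice of Wilson interval are illustrative assumptions.

```python
# Illustrative sketch: flag users whose estimated misuse rate, with 95%
# confidence, exceeds a policy threshold. All names/values are assumptions.
import math
from collections import defaultdict

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (max(0.0, centre - half), min(1.0, centre + half))

def users_to_demote(trace_events, threshold=0.2):
    """trace_events: iterable of (user, violated_policy: bool)."""
    counts = defaultdict(lambda: [0, 0])            # user -> [violations, total]
    for user, violated in trace_events:
        counts[user][1] += 1
        counts[user][0] += int(violated)
    demote = []
    for user, (violations, total) in counts.items():
        low, _ = wilson_interval(violations, total)
        if low > threshold:                         # confidently above the policy threshold
            demote.append(user)
    return demote

if __name__ == "__main__":
    log = [("alice", False)] * 40 + [("bob", True)] * 15 + [("bob", False)] * 10
    print(users_to_demote(log))                     # -> ['bob']
```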

    Conformance checking and performance improvement in scheduled processes: A queueing-network perspective

    Service processes, for example in transportation, telecommunications or the health sector, are the backbone of today's economies. Conceptual models of service processes enable operational analysis that supports, e.g., resource provisioning or delay prediction. In the presence of event logs containing recorded traces of process execution, such operational models can be mined automatically. In this work, we target the analysis of resource-driven, scheduled processes based on event logs. We focus on processes for which there exists a pre-defined assignment of activity instances to resources that execute activities. Specifically, we approach the questions of conformance checking (how to assess the conformance of the schedule and the actual process execution) and performance improvement (how to improve the operational process performance). The first question is addressed based on a queueing network for both the schedule and the actual process execution. Based on these models, we detect operational deviations and then apply statistical inference and similarity measures to validate the scheduling assumptions, thereby identifying root-causes for these deviations. These results are the starting point for our technique to improve the operational performance. It suggests adaptations of the scheduling policy of the service process to decrease the tardiness (non-punctuality) and lower the flow time. We demonstrate the value of our approach based on a real-world dataset comprising clinical pathways of an outpatient clinic that have been recorded by a real-time location system (RTLS). Our results indicate that the presented technique enables localization of operational bottlenecks along with their root-causes, while our improvement technique yields a decrease in median tardiness and flow time by more than 20%.
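
    The performance notions above can be made concrete on raw log data. The sketch below stands in for, and greatly simplifies, the paper's queueing-network models: it compares scheduled and actual timestamps per case and reports flow time and tardiness. Field names and units are assumptions.

```python
# Illustrative performance report from schedule vs. actual execution data.
from statistics import median

def performance_report(cases):
    """cases: list of dicts with 'scheduled_end', 'actual_start', 'actual_end',
    all given in minutes from a common origin (an assumed encoding)."""
    flow_times = [c["actual_end"] - c["actual_start"] for c in cases]
    tardiness = [max(0, c["actual_end"] - c["scheduled_end"]) for c in cases]
    return {
        "median_flow_time": median(flow_times),
        "median_tardiness": median(tardiness),
        "late_fraction": sum(t > 0 for t in tardiness) / len(cases),
    }

if __name__ == "__main__":
    log = [
        {"scheduled_end": 60, "actual_start": 5,  "actual_end": 75},
        {"scheduled_end": 60, "actual_start": 0,  "actual_end": 55},
        {"scheduled_end": 90, "actual_start": 20, "actual_end": 130},
    ]
    print(performance_report(log))
```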

    Monotone Precision and Recall Measures for Comparing Executions and Specifications of Dynamic Systems

    The behavioural comparison of systems is an important concern of software engineering research. For example, the areas of specification discovery and specification mining are concerned with measuring the consistency between a collection of execution traces and a program specification. This problem is also tackled in process mining with the help of measures that describe the quality of a process specification automatically discovered from execution logs. Though various measures have been proposed, it was recently demonstrated that they neither fulfil essential properties, such as monotonicity, nor can they handle infinite behaviour. In this paper, we address this research problem by introducing a new framework for the definition of behavioural quotients. We prove that corresponding quotients guarantee desired properties that existing measures have failed to support. We demonstrate the application of the quotients for capturing precision and recall measures between a collection of recorded executions and a system specification. We use a prototypical implementation of these measures to contrast their monotonic assessment with measures that have been defined in prior research.
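
    As a rough illustration of precision and recall between recorded executions and a specification, the sketch below works on finite trace sets only; the quotient framework described above is designed precisely to handle the infinite behaviour that this simplification ignores.

```python
# Simplified finite-trace precision/recall between observed executions and a
# specification's allowed traces. The set-based definitions are illustrative
# assumptions and do not reproduce the paper's behavioural quotients.
def precision_recall(log_traces, spec_traces):
    """Each trace is a tuple of activity labels."""
    log, spec = set(log_traces), set(spec_traces)
    common = log & spec
    recall = len(common) / len(log) if log else 1.0       # share of observed behaviour allowed by the spec
    precision = len(common) / len(spec) if spec else 1.0  # share of specified behaviour actually observed
    return precision, recall

if __name__ == "__main__":
    observed = {("a", "b", "c"), ("a", "c", "b")}
    specified = {("a", "b", "c"), ("a", "c", "b"), ("a", "b", "b", "c")}
    print(precision_recall(observed, specified))          # roughly (0.67, 1.0)
```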

    Estimating productivity gains in digital automation

    This paper proposes a novel productivity estimation model to evaluate the effects of adopting Artificial Intelligence (AI) components in a production chain. Our model provides evidence to address the AI-related Solow paradox. We provide (i) theoretical and empirical evidence to explain Solow's dichotomy; (ii) a data-driven model to estimate and assess productivity variations; (iii) a methodology underpinned by process mining datasets to determine the business process (BP) and productivity; (iv) a set of computer simulation parameters; and (v) an empirical analysis of labour distribution. These provide data on why we consider the AI Solow paradox a consequence of metric mismeasurement. Comment: 11 pages and 9 figures
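
    A toy illustration (not the paper's estimation model) of the kind of comparison involved: output per unit of labour time derived from event-log-style records, before and after introducing an automated component. All figures and field names are made up.

```python
# Illustrative productivity comparison on made-up per-case records.
def productivity(cases):
    """cases: list of (units_produced, labour_minutes)."""
    total_output = sum(units for units, _ in cases)
    total_labour = sum(minutes for _, minutes in cases)
    return total_output / total_labour

if __name__ == "__main__":
    before = [(10, 120), (12, 130), (9, 110)]   # pre-automation cases (assumed data)
    after = [(11, 90), (12, 95), (10, 85)]      # post-automation cases (assumed data)
    gain = productivity(after) / productivity(before) - 1
    print(f"estimated productivity gain: {gain:.1%}")
```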

    What's next? Operational support for business process execution

    In the last decade flexibility has become increasingly important in the area of business process management. Information systems that support the execution of business processes are required to work in a dynamic environment that imposes changing demands on process execution. In academia and industry a variety of paradigms and implementations have been developed to support flexibility. While these approaches address the industry demand for flexibility, they also confront the user with many choices between different alternatives. As a consequence, methods to support users in selecting the best alternative during execution have become essential. In this thesis we introduce a formal framework for providing support to users based on historical evidence available in the execution log of the process. This thesis focuses on support by means of (1) recommendations that provide the user with an ordered list of execution alternatives based on estimated utilities and (2) predictions that provide the user with general statistics for each execution alternative. Typically, estimations are not an average over all observations, but are based on observations for "similar" situations. The main question is what similarity means in the context of business process execution. We introduce abstractions on execution traces to capture similarity between execution traces in the log. A trace abstraction considers some trace characteristics rather than the exact trace. Traces that have identical abstraction values are said to be similar. The challenge is to determine those abstractions (characteristics) that are good predictors for the parameter to be estimated in the recommendation or prediction. We analyse the dependency between the values of an abstraction and the mean of the parameter to be estimated by means of regression analysis. With regression we obtain a set of abstractions that explain the parameter to be estimated. Dependencies do not only play a role in providing predictions and recommendations to instances at run-time, but are also essential for simulating the effect of changes in the environment on the processes, both locally and globally. We use stochastic simulation models to simulate the effect of changes in the environment, in particular changed probability distributions caused by recommendations. The novelty of these models is that they include dependencies between abstraction values and simulation parameters, which are estimated from log data. We demonstrate that these models give better approximations of reality than traditional models. A framework for offering operational support has been implemented in the context of the process mining framework ProM.
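
    A minimal sketch of the recommendation idea (illustrative only, not the ProM implementation): abstract each historical trace prefix to the set of activities executed so far, then estimate the utility of each possible next activity from traces whose prefixes share that abstraction. The choice of set abstraction and the names used are assumptions.

```python
# Illustrative recommendation based on a set abstraction of trace prefixes.
from collections import defaultdict
from statistics import mean

def set_abstraction(prefix):
    return frozenset(prefix)                        # ignore order and frequency

def build_index(history):
    """history: list of (trace, utility); trace is a sequence of activity labels."""
    index = defaultdict(lambda: defaultdict(list))  # abstraction -> next activity -> utilities
    for trace, utility in history:
        for i, activity in enumerate(trace):
            index[set_abstraction(trace[:i])][activity].append(utility)
    return index

def recommend(index, partial_trace):
    """Return candidate next activities ordered by estimated utility."""
    candidates = index.get(set_abstraction(partial_trace), {})
    estimates = {a: mean(us) for a, us in candidates.items()}
    return sorted(estimates, key=estimates.get, reverse=True)

if __name__ == "__main__":
    history = [(("register", "check", "approve"), 8.0),
               (("register", "check", "reject"), 3.0),
               (("register", "approve"), 9.0)]
    idx = build_index(history)
    print(recommend(idx, ("register", "check")))    # -> ['approve', 'reject']
```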

    Channels of Firm Adjustment: Theory and Empirical Evidence

    We provide a comprehensive analysis of how firms choose between different expansion and contraction forms, unifying existing approaches from the industrial organization and corporate finance literature. Using novel data covering almost the entire universe of UK firms, we document firms' use of internal adjustment, greenfield investment and mergers and acquisitions (M&As). We describe the frequency and aggregate importance of the different channels, and show that their use varies systematically with observable firm characteristics, in particular firm size and the magnitude of adjustment. We also demonstrate that there is positive assortative matching on the UK merger market. Based on these facts, we propose a theoretical framework which accommodates all three adjustment channels in a unified setting, and is able to replicate the adjustment and matching patterns found in the data.

    Modelling economies in transition: an introduction

    This paper considers the implications of structural breaks, such as have occurred in many transition economies, for econometric modelling based on the multivariate cointegration paradigm. It outlines recent developments on the identification of linear cointegrated systems, discusses some practical problems, and presents an extension to non-linear systems. This is followed by a discussion of the impact of structural breaks on the identification and estimation of such systems. Finally, it relates these issues to the other papers in this volume.

    Mining complex structured data: Enhanced methods and applications

    Conventional approaches to analysing complex business data typically rely on process models, which are difficult to construct and use. This thesis addresses this issue by converting semi-structured event logs to a simpler flat representation without any loss of information, which then enables direct applications of classical data mining methods. The thesis also proposes an effective and scalable classification method which can identify distinct characteristics of a business process for further improvements.
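
    A small illustration of the flattening idea: turn a semi-structured event log (a list of events per case) into fixed-length feature rows that off-the-shelf classifiers can consume. The particular features chosen here are illustrative assumptions, not the thesis's encoding.

```python
# Illustrative flattening of a case-based event log into per-case feature rows.
from collections import Counter

def flatten(event_log, activities):
    """event_log: dict case_id -> list of (activity, timestamp).
    Returns one row per case: activity counts plus case duration."""
    rows = []
    for case_id, events in event_log.items():
        counts = Counter(activity for activity, _ in events)
        timestamps = [t for _, t in events]
        row = {f"count_{a}": counts.get(a, 0) for a in activities}
        row["case_id"] = case_id
        row["duration"] = max(timestamps) - min(timestamps) if timestamps else 0
        rows.append(row)
    return rows

if __name__ == "__main__":
    log = {"c1": [("register", 0), ("check", 30), ("approve", 90)],
           "c2": [("register", 0), ("reject", 20)]}
    print(flatten(log, ["register", "check", "approve", "reject"]))
```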