97 research outputs found

    Advancements and Challenges in Object-Centric Process Mining: A Systematic Literature Review

    Recent years have seen the emergence of object-centric process mining techniques. Born as a response to the limitations of traditional process mining in analyzing event data from prevalent information systems like CRM and ERP, these techniques aim to tackle the deficiency, convergence, and divergence issues seen in traditional event logs. Despite their promise, adoption in real-world process mining analyses remains limited. This paper embarks on a comprehensive literature review of object-centric process mining, providing insights into the current status of the discipline and its historical trajectory.
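
    Object-centric techniques operate on event data in which a single event may relate to several objects of different types. As a minimal sketch of the problem the abstract alludes to, using purely hypothetical data: flattening such an event onto one case notion (here, "item") duplicates the event once per related object, which is the convergence issue mentioned above.

```python
# Minimal sketch (hypothetical data) of the convergence problem that
# object-centric event logs aim to avoid: flattening a multi-object event
# onto a single case notion replicates the event once per related object.

# One object-centric event referring to one order and three items.
oc_event = {
    "activity": "confirm order",
    "timestamp": "2023-05-01T10:00:00",
    "objects": {"order": ["o1"], "item": ["i1", "i2", "i3"]},
}

def flatten(events, case_type):
    """Flatten object-centric events onto a chosen case notion."""
    flat = []
    for ev in events:
        for obj in ev["objects"].get(case_type, []):
            flat.append({"case": obj,
                         "activity": ev["activity"],
                         "timestamp": ev["timestamp"]})
    return flat

# Convergence: the single confirmation event appears three times in the
# flattened log (once per item), inflating activity frequencies.
print(flatten([oc_event], case_type="item"))
```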

    Performance-preserving event log sampling for predictive monitoring

    Predictive process monitoring is a subfield of process mining that aims to estimate case or event features for running process instances. Such predictions are of significant interest to the process stakeholders. However, most of the state-of-the-art methods for predictive monitoring require the training of complex machine learning models, which is often inefficient. Moreover, most of these methods require hyper-parameter optimization, which involves several repetitions of the training process and is not feasible in many real-life applications. In this paper, we propose an instance selection procedure that allows sampling training process instances for prediction models. We show that our instance selection procedure allows for a significant increase in training speed for next-activity and remaining-time prediction methods while maintaining reliable levels of prediction accuracy.
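
    As an illustration only (not the authors' exact procedure), instance selection for training can be as simple as sampling a fraction of traces per control-flow variant, so that the reduced training set still covers the variant distribution. The sampling ratio and the variant-based strategy below are assumptions made for the sketch.

```python
# A minimal, hypothetical sketch of instance (trace) selection for predictive
# monitoring: sample training traces per control-flow variant so the reduced
# training set still covers all observed variants.
import random
from collections import defaultdict

def sample_training_traces(log, ratio=0.2, seed=42):
    """log: list of traces, each trace a list of activity labels."""
    random.seed(seed)
    by_variant = defaultdict(list)
    for trace in log:
        by_variant[tuple(trace)].append(trace)
    sampled = []
    for variant, traces in by_variant.items():
        k = max(1, round(len(traces) * ratio))  # keep at least one trace per variant
        sampled.extend(random.sample(traces, k))
    return sampled

log = [["register", "check", "pay"]] * 8 + [["register", "reject"]] * 2
print(len(sample_training_traces(log)))  # roughly 20% of traces, all variants kept
```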

    Studying Software Engineering Patterns for Designing Machine Learning Systems

    Machine-learning (ML) techniques have become popular in recent years. ML techniques rely on mathematics and on software engineering. Researchers and practitioners are studying best practices for designing ML application systems and software to address the software complexity and quality issues of ML techniques. Such design practices are often formalized as architecture patterns and design patterns by encapsulating reusable solutions to commonly occurring problems within given contexts. However, to the best of our knowledge, there has been no work that systematically collects, classifies, and discusses these software-engineering (SE) design patterns for ML techniques. Thus, we set out to collect good/bad SE design patterns for ML techniques to provide developers with a comprehensive and ordered classification of such patterns. We report here preliminary results of a systematic literature review (SLR) of good/bad design patterns for ML.

    Partial-order-based process mining: a survey and outlook

    The field of process mining focuses on distilling knowledge of the (historical) execution of a process based on the operational event data generated and stored during its execution. Most existing process mining techniques assume that the event data describe activity executions as degenerate time intervals, i.e., intervals of the form [t, t], yielding a strict total order on the observed activity instances. However, for various practical use cases, e.g., the logging of activity executions with a nonzero duration and uncertainty about the correctness of the recorded timestamps of the activity executions, assuming a partial order on the observed activity instances is more appropriate. Using partial orders to represent process executions, i.e., based on recorded event data, allows for new classes of process mining algorithms, i.e., algorithms that are aware of parallelism and robust to uncertainty. Yet, interestingly, only a limited number of studies consider using intermediate data abstractions that explicitly assume a partial order over a collection of observed activity instances. Considering recent developments in process mining, e.g., the prevalence of high-quality event data and techniques for event data abstraction, the need for algorithms designed to handle partially ordered event data is expected to grow in the upcoming years. Therefore, this paper presents a survey of process mining techniques that explicitly use partial orders to represent recorded process behavior. We performed a keyword search, followed by a snowball sampling strategy, yielding 68 relevant articles in the field. We observe a recent uptake in works covering partial-order-based process mining, e.g., due to the current trend of process mining based on uncertain event data. Furthermore, we outline promising novel research directions for the use of partial orders in the context of process mining algorithms.
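
    To make the notion concrete, the following sketch derives a partial order over activity instances recorded as time intervals, under a simple assumption: instance a precedes b iff a ends strictly before b starts, and overlapping intervals remain unordered (i.e., concurrent). The data and names are illustrative, not taken from any surveyed technique.

```python
# A small sketch, under simplified assumptions, of deriving a partial order
# over activity instances recorded as intervals [start, end].
from dataclasses import dataclass

@dataclass
class ActivityInstance:
    label: str
    start: float
    end: float

def partial_order(instances):
    """Return the set of (a, b) label pairs with a strictly before b."""
    return {
        (a.label, b.label)
        for a in instances
        for b in instances
        if a is not b and a.end < b.start
    }

case = [
    ActivityInstance("register", 0.0, 1.0),
    ActivityInstance("check credit", 1.5, 3.0),
    ActivityInstance("check stock", 2.0, 2.5),  # overlaps "check credit" -> concurrent
    ActivityInstance("ship", 4.0, 5.0),
]
print(sorted(partial_order(case)))
```

    Note that the two overlapping checks are left unordered, which is exactly the parallelism information that a strict total order over timestamps would discard.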

    Digital 3D Technologies for Humanities Research and Education: An Overview

    Digital 3D modelling and visualization technologies have been widely applied to support research in the humanities since the 1980s. Since technological backgrounds, project opportunities, and methodological considerations for application are widely discussed in the literature, one of the next tasks is to validate these techniques within a wider scientific community and establish them in the culture of academic disciplines. This article resulted from a postdoctoral thesis and is intended to provide a comprehensive overview of the use of digital 3D technologies in the humanities with regard to (1) scenarios, user communities, and epistemic challenges; (2) technologies, UX design, and workflows; and (3) framework conditions such as legislation, infrastructures, and teaching programs. Although the results are of relevance for 3D modelling in all humanities disciplines, the focus of our studies is on the modelling of past architectural and cultural landscape objects via interpretative 3D reconstruction methods.

    Collaborative Business Process Execution on Blockchain: A System (original Estonian title: "Koostööäriprotsesside läbiviimine plokiahelal: süsteem")

    Nowadays, organizations are pressed to collaborate in order to take advantage of their complementary capabilities and to provide best-of-breed products and services to their customers. To do so, organizations need to manage business processes that span beyond their organizational boundaries. Such processes are called collaborative business processes. One of the main roadblocks to implementing collaborative business processes is the lack of trust between the participants. Blockchain provides a decentralized ledger that cannot be tampered with and that supports the execution of programs called smart contracts. These features allow executing collaborative processes between untrusted parties without relying on a central authority. However, implementing collaborative business processes on blockchain can be cumbersome, error-prone, and requires specialized skills. In contrast, established Business Process Management Systems (BPMSs) provide convenient abstractions for the rapid development of process-oriented applications. This thesis addresses the problem of automating the execution of collaborative business processes on top of blockchain technology in a way that takes advantage of the trust-enhancing capabilities of this technology while offering the development convenience of traditional BPMSs. The thesis also addresses the question of how to support scenarios in which new parties may be onboarded at runtime, and in which parties need the flexibility to change the default routing logic of the business process. We explore architectural approaches and modelling concepts, formulating design principles and requirements that are implemented in a novel blockchain-based BPMS named CATERPILLAR. The CATERPILLAR system supports two methods to implement, execute, and monitor blockchain-based processes: compiled and interpreted. It also supports two mechanisms for controlled flexibility, i.e., participants can collectively decide on updating the process during its execution as well as granting and revoking access to parties.
    https://www.ester.ee/record=b536494
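
    The following is an illustrative Python model of the core idea only (not CATERPILLAR's actual on-chain contracts): the collaboration is executed as a shared state machine in which each activity may only be performed by the participant bound to it, mimicking the access control a smart contract would enforce. All names and the routing structure are assumptions made for the sketch.

```python
# Illustrative model of blockchain-based collaborative process execution:
# a shared state machine with per-activity participant bindings.
class CollaborativeProcess:
    def __init__(self, flow, bindings, start):
        self.flow = flow          # activity -> next activities
        self.bindings = bindings  # activity -> participant allowed to execute it
        self.enabled = {start}

    def execute(self, activity, participant):
        if activity not in self.enabled:
            raise ValueError(f"{activity} is not enabled")
        if self.bindings[activity] != participant:
            raise PermissionError(f"{participant} is not bound to {activity}")
        self.enabled.remove(activity)
        self.enabled.update(self.flow.get(activity, []))

process = CollaborativeProcess(
    flow={"submit PO": ["confirm PO"], "confirm PO": ["ship goods"]},
    bindings={"submit PO": "buyer", "confirm PO": "supplier", "ship goods": "carrier"},
    start="submit PO",
)
process.execute("submit PO", "buyer")
process.execute("confirm PO", "supplier")  # raises if attempted by the wrong party
```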

    Prototype Selection using Clustering and Conformance Metrics for Process Discovery

    Automated process discovery algorithms aim to automatically create process models based on event data that is captured during the execution of business processes. These algorithms usually tend to use all of the event data to discover a process model. Using all behavior, including less common behavior, may lead to the discovery of imprecise and/or complex process models that conceal important information about the processes. In this paper, we introduce a new incremental prototype selection algorithm based on the clustering of process instances to address this problem. The method iteratively computes a unique process model from a different set of selected prototypes that are representative of the whole event data and stops when conformance metrics decrease. This method has been implemented using both ProM and RapidProM. We applied the proposed method to several real event datasets with state-of-the-art process discovery algorithms. Results show that using the proposed method improves the general quality of discovered process models.
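
    A schematic re-implementation of the incremental selection loop, with discovery and conformance checking injected as callables (the paper uses ProM/RapidProM; the trivial stand-ins below, and the use of frequency-ordered variants in place of clustering, are assumptions for illustration).

```python
# Hypothetical sketch of incremental prototype selection: add prototypes one at a
# time, rediscover a model, and stop as soon as the conformance score decreases.
from collections import Counter

def incremental_prototype_selection(log, discover, conformance):
    """log: list of traces (tuples of activities). Returns the best discovered model."""
    # Trace variants ordered by frequency serve as a simple stand-in for clustering.
    variants = [v for v, _ in Counter(map(tuple, log)).most_common()]
    prototypes, best_model, best_score = [], None, float("-inf")
    for variant in variants:
        candidate = prototypes + [list(variant)]
        model = discover(candidate)
        score = conformance(log, model)
        if score < best_score:  # stop when conformance decreases
            break
        prototypes, best_model, best_score = candidate, model, score
    return best_model

# Toy stand-ins: the "model" is the set of selected variants, and conformance is
# the fraction of traces in the full log that the model covers.
discover = lambda traces: {tuple(t) for t in traces}
conformance = lambda log, model: sum(tuple(t) in model for t in log) / len(log)

log = [("a", "b", "c")] * 6 + [("a", "c")] * 3 + [("a", "x", "c")]
print(incremental_prototype_selection(log, discover, conformance))
```

    The stopping criterion mirrors the idea stated above: prototypes keep being added only while the conformance of the rediscovered model on the full log does not drop.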