
    The multiple facilitator: Scientists, sages and rascals

    Background. Games are designed to help participants think about, understand, and sharpen their problem statement, as well as the specific objectives to be achieved to escape the problem situation. As participants prepare for the game (briefing), interact in the simulated environment (gameplay), and reflect, alone or jointly, on the gameplay in terms of intended and unintended learning experiences (debriefing), they benefit or suffer from facilitation that may or may not fully cater to their needs. To support participants in exploring and resolving the problem situation so as to achieve the learning goals, we propose that facilitators make use of role shifts during gameplay. Method. To capture role shifts in the gameplay phase, we studied runs of the MicroTech game, a free-form game in which participants play the roles of top-management-team members or division managers in a multiunit organization. Results. We analyzed the role shifts we experienced as facilitators by elaborating on game events and on how we could manage those events differently in future game runs if necessary. We show that facilitators of policy gaming need to embody multiple roles that fit the different phases, while simultaneously shifting roles within phases to keep participants moving and to stimulate them to work towards the learning goals. Conclusion. Gaming/simulation facilitators should explore what multiplicity is required of them to make the game a success. Although this may seem normal practice to well-prepared and professionally trained facilitators, it may be particularly important for novice facilitators.

    On the value of a user survey

    Setting up and carrying out a user survey is always an exciting undertaking. How do you approach such a project, and do the effort and costs outweigh the results? And what do you do with those results afterwards? Janneke van Zelst and Gusta Drenthe evaluate the user survey, conducted as a web survey, that was recently carried out at the university library of Erasmus Universiteit Rotterdam.

    Handling Big(ger) Logs: Connecting ProM 6 to Apache Hadoop

    Within process mining, the main goal is to support the analysis, improvement and understanding of business processes. Numerous process mining techniques have been developed for that purpose. The majority of these techniques use conventional computation models and do not apply novel scalable and distributed techniques. In this paper we present an integrative framework connecting the process mining framework ProM with the distributed computing environment Apache Hadoop. The integration allows for the execution of MapReduce jobs on any Apache Hadoop cluster, enabling practitioners and researchers to explore and develop scalable and distributed process mining approaches. Thus, the new approach enables the application of different process mining techniques to event logs of several hundreds of gigabytes.
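
    The MapReduce model that such a framework targets can be illustrated with a toy example outside Hadoop: counting activity frequencies in an event log with an explicit map step and reduce step. This is a minimal sketch of the programming model only, not the ProM/Hadoop integration itself; the log contents and function names are hypothetical.

```python
from collections import Counter
from itertools import chain

# Hypothetical event log: each trace is a list of activity names.
log = [
    ["register", "check", "decide"],
    ["register", "check", "check", "decide"],
    ["register", "decide"],
]

def map_phase(trace):
    # Map step: emit one (activity, 1) pair per event in the trace.
    return [(activity, 1) for activity in trace]

def reduce_phase(pairs):
    # Reduce step: sum the emitted counts per activity key.
    counts = Counter()
    for activity, n in pairs:
        counts[activity] += n
    return counts

counts = reduce_phase(chain.from_iterable(map_phase(t) for t in log))
print(dict(counts))  # {'register': 3, 'check': 3, 'decide': 3}
```

    In a real Hadoop cluster the map and reduce steps run in parallel over partitions of the log, which is what makes logs of hundreds of gigabytes tractable.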

    Repairing Alignments of Process Models

    Process mining represents a collection of data-driven techniques that support the analysis, understanding and improvement of business processes. A core branch of process mining is conformance checking, i.e., assessing to what extent a business process model conforms to observed business process execution data. Alignments are the de facto standard instrument to compute such conformance statistics. However, computing alignments is a combinatorial problem and hence extremely costly. At the same time, many process models share a similar structure and/or a great deal of behavior. For collections of such models, computing alignments from scratch is inefficient, since large parts of the alignments are likely to be the same. This paper presents a technique that exploits process model similarity and repairs existing alignments by updating those parts that do not fit a given process model. The technique effectively reduces the size of the combinatorial alignment problem and hence decreases computation time significantly. Moreover, the potential loss of optimality is limited and stays within acceptable bounds.
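
    The notion of alignment cost can be sketched in miniature: aligning a trace against a single model run by dynamic programming, where a synchronous move costs 0 and a log-only or model-only move costs 1. This is a deliberate simplification (real alignment computation searches over all runs of a model, and the paper's contribution is repairing existing alignments rather than computing them from scratch); all names and data are illustrative.

```python
def align_cost(trace, model_run):
    # dp[i][j] = minimal alignment cost between trace[:i] and model_run[:j];
    # a synchronous move costs 0, a log-only or model-only move costs 1.
    n, m = len(trace), len(model_run)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i          # only log moves remain
    for j in range(1, m + 1):
        dp[0][j] = j          # only model moves remain
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = dp[i - 1][j - 1] if trace[i - 1] == model_run[j - 1] else float("inf")
            dp[i][j] = min(sync, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[n][m]

# The trace skips "c", so one model-only move is needed: cost 1.
print(align_cost(["a", "b", "d"], ["a", "b", "c", "d"]))  # 1
```

    The table `dp` is exactly the kind of intermediate result that repair techniques can reuse: when two models largely agree, most of the table (and hence most of the alignment) carries over unchanged.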

    Partial-order-based process mining: a survey and outlook

    The field of process mining focuses on distilling knowledge of the (historical) execution of a process based on the operational event data generated and stored during its execution. Most existing process mining techniques assume that the event data describe activity executions as degenerate time intervals, i.e., intervals of the form [t, t], yielding a strict total order on the observed activity instances. However, for various practical use cases, e.g., the logging of activity executions with a nonzero duration and uncertainty on the correctness of the recorded timestamps of the activity executions, assuming a partial order on the observed activity instances is more appropriate. Using partial orders to represent process executions based on recorded event data allows for new classes of process mining algorithms, i.e., algorithms aware of parallelism and robust to uncertainty. Yet, interestingly, only a limited number of studies consider using intermediate data abstractions that explicitly assume a partial order over a collection of observed activity instances. Considering recent developments in process mining, e.g., the prevalence of high-quality event data and techniques for event data abstraction, the need for algorithms designed to handle partially ordered event data is expected to grow in the upcoming years. Therefore, this paper presents a survey of process mining techniques that explicitly use partial orders to represent recorded process behavior. We performed a keyword search, followed by a snowball sampling strategy, yielding 68 relevant articles in the field. We observe a recent uptake in works covering partial-order-based process mining, e.g., due to the current trend of process mining based on uncertain event data. Furthermore, we outline promising novel research directions for the use of partial orders in the context of process mining algorithms.
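
    The step from durations to a partial order can be sketched as follows: treat each activity instance as a time interval, let one instance precede another only if it ends strictly before the other starts, and leave overlapping instances unordered (i.e., concurrent). The instance names and timestamps below are hypothetical.

```python
def precedes(a, b):
    # a and b are (start, end) intervals; a precedes b only if a ends
    # strictly before b starts. Overlapping intervals stay unordered.
    return a[1] < b[0]

# Hypothetical activity instances with nonzero durations.
instances = {
    "register": (0, 2),
    "check": (3, 6),
    "notify": (4, 7),  # overlaps "check", so the two are concurrent
}

order = {(x, y)
         for x in instances for y in instances
         if precedes(instances[x], instances[y])}
print(sorted(order))  # [('register', 'check'), ('register', 'notify')]
```

    Note that the result is a strict partial order (irreflexive and transitive), not a total one: "check" and "notify" are simply incomparable, which is exactly the parallelism information a totally ordered log would discard.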

    Event Log Sampling for Predictive Monitoring

    Predictive process monitoring is a subfield of process mining that aims to estimate case or event features for running process instances. Such predictions are of significant interest to the process stakeholders. However, state-of-the-art methods for predictive monitoring require the training of complex machine learning models, which is often inefficient. This paper proposes an instance selection procedure that allows sampling training process instances for prediction models. We show that our sampling method allows for a significant increase in training speed for next-activity prediction methods while maintaining reliable levels of prediction accuracy.
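
    As a rough illustration of instance selection, the simplest conceivable baseline is uniform random sampling of traces before training. The paper's procedure is more targeted than this; the function below is purely a hypothetical sketch of the general idea of shrinking the training log.

```python
import random

def sample_log(log, fraction, seed=0):
    # Uniformly sample a fraction of the traces (at least one) to
    # shrink the training set; seeded for reproducibility.
    rng = random.Random(seed)
    k = max(1, int(len(log) * fraction))
    return rng.sample(log, k)

# Hypothetical log of 100 single-activity traces; keep 10 %.
full_log = [[f"act_{i}"] for i in range(100)]
subset = sample_log(full_log, 0.1)
print(len(subset))  # 10
```

    Training a next-activity predictor on `subset` instead of `full_log` trades a controlled amount of data for proportionally faster training, which is the trade-off the abstract describes.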