
    Heuristic Approaches for Generating Local Process Models through Log Projections

    Local Process Model (LPM) discovery mines a set of process models, each of which describes the behavior represented in the event log only partially, i.e., only subsets of the possible events are taken into account to create so-called local process models. Such smaller models often provide valuable insight into the behavior of a process, especially when no adequate and comprehensible single overall process model exists that can describe the traces of the process from start to end. The practical application of LPM discovery is, however, hindered by computational issues on logs with many activities (problems may already occur with more than 17 unique activities). In this paper, we explore three heuristics for discovering subsets of activities that lead to useful log projections, with the goal of speeding up LPM discovery considerably while still finding high-quality LPMs. We found that a Markov clustering approach to creating projection sets yields the largest improvement in execution time, with the discovered LPMs still being better than those obtained from randomly generated activity sets of the same size. A second heuristic, based on log entropy, yields a more moderate speedup but enables the discovery of higher-quality LPMs. The third heuristic, based on relative information gain, shows unstable performance: for some data sets the speedup and LPM quality exceed those of the log-entropy-based method, while for other data sets there is no speedup at all. (Paper accepted and to appear in the proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM), special session on Process Mining, part of the Symposium Series on Computational Intelligence (SSCI).)
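The log-entropy heuristic mentioned above can be sketched in a few lines of Python. This is a toy illustration only: the event log, the candidate projection sets, and the exact scoring are made up, and the paper's actual ranking of projection sets may differ.

```python
import math
from collections import Counter

def project(log, activities):
    """Project each trace onto a subset of activities, dropping other events."""
    return [[a for a in trace if a in activities] for trace in log]

def log_entropy(log):
    """Shannon entropy (bits) of the activity frequency distribution of a log."""
    counts = Counter(a for trace in log for a in trace)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy event log: each trace is a sequence of activity labels.
log = [list("abcd"), list("abce"), list("abd")]

# Rank two hypothetical candidate projection sets by projected-log entropy.
candidates = [{"a", "b"}, {"c", "d", "e"}]
scores = {frozenset(s): log_entropy(project(log, s)) for s in candidates}
```

A heuristic along these lines would then run full LPM discovery only on the highest-scoring (most behaviorally diverse) projections instead of on all activity subsets.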

    Evaluation and extracting factual software architecture of distributed system by process mining techniques

    The factual software architectures of distributed systems, i.e., those actually implemented, often do not conform to the planned software architectures (Beck 2010). This is due to the complexity of distributed systems. The problem raises two main challenges: first, how to extract the factual software architecture with suitable techniques, and second, how to compare the planned software architecture with the extracted factual one. This study uses process mining to discover the factual software architecture from code and represents the architecture as a Petri net model, which is then evaluated with linear temporal logic and process mining. In this paper, the applicability of process mining techniques, as implemented in the ProM6.7 framework, is shown for extracting and evaluating factual software architectures. Furthermore, the capabilities of Hierarchical Coloured Petri Nets, as implemented in CPN4.0, are exploited to model and simulate software architectures. The proposed approach has been applied to a case study of a distributed database system to demonstrate its applicability. The final result of the case study indicates that process mining is able to extract factual software architectures and to check their conformance
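The conformance check the abstract refers to can be illustrated with a minimal token-replay sketch over a Petri net. This is a didactic stand-in, not ProM's implementation; the two-transition "architecture" and the traces are hypothetical.

```python
from collections import Counter

class PetriNet:
    """Minimal P/T net: each transition maps to (input places, output places)."""
    def __init__(self, transitions, initial):
        self.transitions = transitions      # name -> (inputs, outputs)
        self.marking = Counter(initial)     # place -> token count

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking[p] >= 1 for p in ins)

    def fire(self, t):
        if not self.enabled(t):
            return False
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] += 1
        return True

def replay(net_factory, trace):
    """Replay a trace on a fresh net; True iff every event was enabled in order."""
    net = net_factory()
    return all(net.fire(t) for t in trace)

# Hypothetical architecture model: a request is sent, then processed.
transitions = {
    "send":    (["start"], ["sent"]),
    "process": (["sent"], ["done"]),
}
make_net = lambda: PetriNet(transitions, {"start": 1})
```

A trace extracted from execution logs conforms to the modelled architecture exactly when it replays without ever firing a disabled transition.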

    Safety verification of asynchronous pushdown systems with shaped stacks

    In this paper, we study the program-point reachability problem of concurrent pushdown systems that communicate via unbounded and unordered message buffers. Our goal is to relax the common restriction that messages can only be retrieved by a pushdown process when its stack is empty. We use the notion of partially commutative context-free grammars to describe a new class of asynchronously communicating pushdown systems with a mild shape constraint on the stacks for which the program-point coverability problem remains decidable. Stacks that fit the shape constraint may reach arbitrary heights; further, a process may execute any communication action (be it process creation, message send, or retrieval) whether or not its stack is empty. This class extends previous computational models studied in the context of asynchronous programs, and enables the safety verification of a large class of message-passing programs
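The relaxation described above (receiving a message while the stack is non-empty) can be pictured with a brute-force state-space search over configurations. This is only a didactic stand-in for the paper's grammar-based decision procedure, and the rule format is invented.

```python
from collections import deque

def reachable(start, rules, target, limit=10000):
    """Explore configurations (control point, stack, buffer) breadth-first.

    A rule ((p, top, recv), (q, push, send)) fires at control point p when
    `top` is on top of the stack (None matches any stack, even an empty one)
    and message `recv` is in the unordered buffer (None means no receive).
    Note the stack need NOT be empty to receive: the relaxation studied here.
    """
    init = (start, (), ())
    seen, queue = {init}, deque([init])
    while queue and limit:
        limit -= 1
        point, stack, buf = queue.popleft()
        if point == target:
            return True
        for (p, top, recv), (q, push, send) in rules:
            if p != point:
                continue
            if top is not None and (not stack or stack[-1] != top):
                continue
            new_stack = (stack if top is None else stack[:-1]) + tuple(push)
            new_buf = list(buf)
            if recv is not None:
                if recv not in new_buf:
                    continue
                new_buf.remove(recv)
            if send is not None:
                new_buf.append(send)
            conf = (q, new_stack, tuple(sorted(new_buf)))
            if conf not in seen:
                seen.add(conf)
                queue.append(conf)
    return False

# Toy system: push a frame and send "ping", then receive "ping" while the
# frame is still on the stack.
rules = [
    (("init", None, None), ("work", ["frame"], "ping")),
    (("work", "frame", "ping"), ("done", [], None)),
]
```

Unlike this bounded exploration, the paper's shape constraint makes the coverability question decidable even though stacks and buffers are unbounded.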

    Modelling Distributed Cognition Systems in PVS

    We report on our efforts to formalise DiCoT, an informal structured approach for analysing complex work systems, such as hospital and day care units, as distributed cognition systems. We focus on DiCoT's information flow model, which describes how information is transformed and propagated in the system. Our contribution is a set of generic models for the specification and verification system PVS. The developed models map directly to the informal descriptions adopted by human-computer interaction experts. The models can be verified against properties of interest in the PVS theorem prover. The same models can also be simulated, helping analysts engage with stakeholders when checking the correctness of the model. We trial our ideas on a case study based on a real-world medical system
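An information flow model of the kind DiCoT uses can be pictured as a directed graph with reachability properties checked over it. The sketch below is illustrative only: the ward model and the property are invented, and the paper works in PVS, not Python.

```python
def flows_to(edges, src, dst):
    """Iterative DFS reachability over a directed information-flow graph."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return False

# Hypothetical ward model: calls arrive at a clerk, who briefs the nurse,
# who informs the doctor (the principal decision-maker).
edges = [("phone", "clerk"), ("clerk", "nurse"), ("nurse", "doctor")]

# Example property: information entering anywhere reaches the decision-maker.
ok = all(flows_to(edges, s, "doctor") for s in ["phone", "clerk", "nurse"])
```

In PVS such a property would be stated as a theorem over the generic flow model and discharged in the theorem prover rather than checked by graph search.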

    Discrete-Event Simulation versus Constrained Graphic Modelling of Construction Processes

    Effective construction project planning and control requires the development of a model of the project’s construction processes.  The Critical Path Method (CPM) is the most popular project modelling method in construction since it is relatively simple to use and reasonably versatile in terms of the range of processes it can represent.  Several other modelling techniques have been developed over the years, each with their own advantages and disadvantages.  Linear scheduling, for example, has been designed to provide highly insightful visual representations of a construction process, but unfortunately is largely incapable of representing non-repetitive construction work.  Discrete-event simulation is generally agreed to be the most versatile of all modelling methods, but it lacks the simplicity in use of CPM and so has not been widely adopted in construction.  A new graphical constraint-based method of modelling construction processes, Foresight, has been developed with the goal of offering the simplicity in use of CPM, the visual insight of linear scheduling, and the versatility of simulation.  Earlier work has demonstrated the modelling versatility of Foresight.  As part of a continuing study, this paper focuses on a comparison of the Foresight approach with discrete-event construction simulation methods, specifically Stroboscope (a derivative of CYCLONE). Foresight is shown to outperform Stroboscope in terms of the simplicity of the resultant models for a series of case studies involving a number of variants of an earthmoving operation and of a sewer tunnelling operation.  A qualitative comparison of the two approaches also highlights the superior visual insight provided by Foresight over conventional simulation, an attribute essential to both the effective verification and optimization of a model
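The discrete-event style of construction simulation the paper compares against can be sketched with a toy loader-and-trucks model driven by an event queue. The durations, the single loader, and the cycle structure are assumptions for illustration; Stroboscope models are far richer.

```python
import heapq

def simulate(trucks, load=5.0, haul=12.0, dump=2.0, ret=9.0, loads_needed=10):
    """Event-driven earthmoving cycle: trucks queue for one loader, then
    haul, dump and return to the queue. Returns the total duration."""
    events = [(0.0, i, "arrive") for i in range(trucks)]  # all start queued
    heapq.heapify(events)
    loader_free = 0.0   # time at which the loader next becomes available
    loads_done = 0
    finish = 0.0
    while events and loads_done < loads_needed:
        t, truck, kind = heapq.heappop(events)
        if kind == "arrive":                  # truck requests the loader
            start = max(t, loader_free)       # wait if the loader is busy
            loader_free = start + load
            loads_done += 1
            finish = max(finish, loader_free)
            # after loading: haul, dump, return, then rejoin the queue
            heapq.heappush(
                events, (loader_free + haul + dump + ret, truck, "arrive"))
    return finish
```

Adding a second truck hides the loader's idle time, which is exactly the kind of resource interaction that CPM cannot express but both simulation and constraint-based models can.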

    Connector algebras for C/E and P/T nets interactions

    A flourishing research thread in the recent literature on component-based systems concerns the algebraic properties of different classes of connectors. In a recent paper, an algebra of stateless connectors was presented that consists of five kinds of basic connectors, namely symmetry, synchronization, mutual exclusion, hiding and inaction, plus their duals, and it was shown how they can be freely composed in series and in parallel to model sophisticated "glues". In this paper we explore the expressiveness of stateful connectors obtained by adding one-place buffers or unbounded buffers to the stateless connectors. The main results are: i) we show how different classes of connectors exactly correspond to suitable classes of Petri nets equipped with compositional interfaces, called nets with boundaries; ii) we show that the difference between strong and weak semantics in stateful connectors is reflected in the semantics of nets with boundaries by moving from the classic step semantics (strong case) to a novel banking semantics (weak case), where a step can be executed by taking some "debit" tokens to be given back during the same step; iii) we show that the corresponding bisimilarities are congruences (w.r.t. composition of connectors in series and in parallel); iv) we show that suitable monoidality laws, like those arising when representing stateful connectors in the tile model, can nicely capture concurrency aspects; and v) as a side result, we provide a basic algebra, with a finite set of symbols, out of which we can compose all P/T nets, fulfilling a long-standing quest
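The contrast between the classic step semantics and the banking semantics of result ii) can be illustrated on a two-transition net. This is a minimal sketch, not the paper's formal definitions.

```python
from collections import Counter

def strong_step(marking, step, transitions):
    """Classic step semantics: every input token of the whole step must be
    available up front, before any outputs are produced."""
    need = Counter()
    for t in step:
        ins, _ = transitions[t]
        need.update(ins)
    return all(marking[p] >= n for p, n in need.items())

def banking_step(marking, step, transitions):
    """Banking semantics: tokens may be taken on 'debit' within the step;
    only the net effect must leave every place non-negative."""
    result = Counter(marking)
    for t in step:
        ins, outs = transitions[t]
        result.subtract(ins)    # may go temporarily negative (debit)
        result.update(outs)
    return all(v >= 0 for v in result.values())

# Two transitions forming a token loop between places p and q.
transitions = {"t1": (["p"], ["q"]), "t2": (["q"], ["p"])}
marking = Counter({"p": 1})     # one token on p, none on q

# t2 needs a token on q: impossible up front, fine on debit, because t1
# repays the debt within the same step.
step = ["t1", "t2"]
```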

    SAFE-FLOW : a systematic approach for safety analysis of clinical workflows

    The increasing use of technology in delivering clinical services brings substantial benefits to the healthcare industry. At the same time, it introduces potential new complications to clinical workflows, generating new risks and hazards with the potential to affect patients’ safety. These workflows are safety critical and can have a damaging impact on all involved parties if they fail. Due to the large number of processes included in the delivery of a clinical service, it can be difficult to determine the individuals or the processes responsible for adverse events. Using methodological approaches and automated tools to analyse the workflow can help in determining the origins of potential adverse events and, consequently, in avoiding preventable errors. The scarcity of studies addressing this problem was a partial motivation for this thesis. The main aim of the research is to demonstrate the potential value of computer-science-based dependability approaches to healthcare and, in particular, the appropriateness and benefits of these approaches for overall clinical workflows. A particular focus is to show that model-based safety analysis techniques can be usefully applied to such areas, and then to evaluate this application. This thesis develops the SAFE-FLOW approach for safety analysis of clinical workflows in order to establish the relevance of such an application. SAFE-FLOW's detailed steps and guidelines for its application are explained. SAFE-FLOW is then applied to a case study and systematically evaluated. The proposed evaluation design provides a generic strategy that can be used to evaluate the adoption of safety analysis methods in healthcare. It is concluded that the safety of clinical workflows can be significantly improved by performing safety analysis on workflow models.
    The evaluation results show that SAFE-FLOW is feasible and has the potential to provide various benefits: it provides a mechanism for the systematic identification of both adverse events and safeguards, which helps identify the causes of possible adverse events before they happen and can assist in the design of workflows that avoid such occurrences. The clear definition of the workflow, including its processes and tasks, provides a valuable opportunity for formulating safety improvement strategies

    Analysis and design development of parallel 3-D mesh refinement algorithms for finite element electromagnetics with tetrahedra

    Optimal partitioning of three-dimensional (3-D) mesh applications necessitates dynamically determining and optimizing for the most time-inhibiting factors, such as load imbalance and communication volume. One challenge is to create an analytical model with which the programmer can focus on optimizing load imbalance or communication volume to reduce execution time. Another challenge is that achieving the best performance for a specific mesh refinement demands careful study and selection of a suitable computation strategy. Very-large-scale finite element method (FEM) applications require sophisticated capabilities for using the underlying parallel computer's resources in the most efficient way. Thus, classifying these requirements in a manner accessible to the programmer is crucial. This thesis contributes a simulation-based approach to the analysis and design of parallel 3-D FEM mesh refinement algorithms that uses Petri Nets (PN) as the modelling and simulation tool. PN models are implemented based on detailed software prototypes and system architectures, which imitate the behaviour of the parallel meshing process. Estimates for performance measures are then derived from discrete-event simulations. The thesis also contributes new communication strategies for parallel mesh refinement that pipeline computation and communication by means of a workload prediction approach and a task breaking point approach. To examine the performance of these new designs, PN models are created for modelling and simulating each of them, and their efficiency is confirmed by the simulation results. Also based on the PN modelling approach, the performance of a Random Polling Dynamic Load Balancing protocol is examined. Finally, the PN models are validated by an MPI benchmarking program running on a real multiprocessor system.
    The advantages of the new pipelined communication designs, as well as the benefits of the PN approach for evaluating and developing high-performance parallel mesh refinement algorithms, are demonstrated
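The benefit of pipelining computation and communication can be sketched with a back-of-the-envelope timing model. The overlap rule and the per-step durations below are assumptions for illustration, not the thesis's PN models.

```python
def sequential_time(tasks):
    """Each refinement step computes, then communicates: no overlap."""
    return sum(comp + comm for comp, comm in tasks)

def pipelined_time(tasks):
    """Communication of step i overlaps with computation of step i+1,
    in the spirit of the workload-prediction / task-breaking-point designs."""
    total = 0.0
    pending_comm = 0.0
    for comp, comm in tasks:
        total += max(comp, pending_comm)  # hide previous comm under comp
        pending_comm = comm
    return total + pending_comm           # the last message cannot be hidden

# Three hypothetical refinement steps: (computation time, communication time).
tasks = [(4.0, 2.0), (4.0, 2.0), (4.0, 2.0)]
```

When each step's computation is at least as long as the previous step's communication, all but the final message is fully hidden, which is the best case for such designs.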

    Adapting Game Mechanics with Micro-Machinations

    In early game development phases, game designers adjust game rules in a rapid, iterative and flexible way. In later phases, when software prototypes are available, play testing provides more detailed feedback about player experience. More often than not, the realized gameplay emerging from the game software differs from the intended gameplay. Unfortunately, adjusting it is hard because designers lack a means for efficiently defining, fine-tuning and balancing game mechanics. The language Machinations provides a graphical notation for expressing the rules of game economies that fits a designer's understanding and vocabulary, but is limited to design itself. Micro-Machinations
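A Machinations-style game economy, pools of resources connected by rate-limited flows, can be sketched as follows. The economy is invented and the order-dependent flow evaluation is a simplification of the actual notation's semantics.

```python
def tick(pools, flows):
    """Advance a Machinations-style resource graph by one step: each flow
    moves up to `rate` resources from its source pool to its target.
    Flows are applied in list order (a simplification)."""
    new = dict(pools)
    for src, dst, rate in flows:
        moved = min(rate, new[src])   # cannot move more than is available
        new[src] -= moved
        new[dst] += moved
    return new

# Hypothetical economy: a mine produces gold that the player spends on units.
pools = {"mine": 10, "gold": 0, "units": 0}
flows = [("mine", "gold", 3), ("gold", "units", 1)]

state = pools
for _ in range(4):
    state = tick(state, flows)
```

Being able to rerun such a model with tweaked rates is exactly the rapid fine-tuning and balancing loop that the abstract says designers lack once the rules live inside game software.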

    Evaluating Resilience of Cyber-Physical-Social Systems

    Nowadays, protecting the network is not the only security concern. In cyber security, websites and servers are becoming more popular targets due to the ease with which they can be accessed compared to communication networks. Another threat in cyber-physical social systems with human interaction is that they can be attacked and manipulated not only by technical hacking through networks, but also by manipulating people and stealing users’ credentials. Therefore, systems should be evaluated beyond cyber security, which means measuring their resilience as evidence that a system works properly under cyber-attacks or incidents. Cyber resilience is thus increasingly discussed and described as the capacity of a system to maintain state awareness for detecting cyber-attacks. All the tasks for making a system resilient should proactively maintain a safe level of operational normalcy through rapid system reconfiguration to detect attacks that would impact system performance. In this work, we broadly study the new paradigm of cyber-physical social systems and give a uniform definition of it. To overcome the complexity of evaluating cyber resilience, especially in these inhomogeneous systems, we propose a framework that applies Attack Tree refinements and Hierarchical Timed Coloured Petri Nets to model intruder and defender behaviours and to evaluate the impact of each action on the behaviour and performance of the system
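The attack-tree side of such a framework can be illustrated with a bottom-up evaluation over AND/OR gates. The scenario, the probabilities, and the independence assumption are illustrative; the thesis combines the trees with Hierarchical Timed Coloured Petri Nets, which this sketch omits.

```python
def evaluate(node):
    """Bottom-up success probability of an attack tree.

    A node is ('leaf', p), ('and', children) where all sub-attacks must
    succeed, or ('or', children), assuming independent sub-attacks."""
    kind, payload = node
    if kind == "leaf":
        return payload
    probs = [evaluate(c) for c in payload]
    if kind == "and":
        result = 1.0
        for p in probs:
            result *= p
        return result
    # 'or': the complement of every child attack failing
    fail = 1.0
    for p in probs:
        fail *= 1.0 - p
    return 1.0 - fail

# Hypothetical scenario: steal credentials by phishing, OR by network
# intrusion AND installing a keylogger.
tree = ("or", [
    ("leaf", 0.3),                            # phishing
    ("and", [("leaf", 0.5), ("leaf", 0.4)]),  # intrusion AND keylogger
])
```

Refining a leaf into a subtree of finer-grained attack steps, as the abstract's Attack Tree refinements suggest, simply replaces a `('leaf', p)` node with an `('and', ...)` or `('or', ...)` subtree before evaluation.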