
    Representing Resources in Petri Net Models: Hardwiring or Soft-coding?

This paper presents an interesting design problem in developing a new tool for discrete-event dynamic systems (DEDS). A new tool known as GPenSIM was developed for modeling and simulation of DEDS; GPenSIM is based on Petri nets. The design issue this paper addresses is whether to represent resources in DEDS hardwired as part of the Petri net structure (the widespread practice) or to soft-code them as common variables in the program code. This paper shows that soft-coding resources yields benefits such as simpler and leaner models.
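
The trade-off is easy to see in miniature. Below is a minimal sketch in Python (GPenSIM itself is a MATLAB toolbox; the class, place, and variable names here are invented for illustration): the hardwired variant threads the robot resource through the net as a place, while the soft-coded variant keeps the net to the job flow and tests a plain variable in a transition guard.

```python
# Minimal sketch of the hardwiring-vs-soft-coding choice (hypothetical
# classes; GPenSIM itself is a MATLAB toolbox with a different API).

class Transition:
    def __init__(self, name, inputs, outputs, guard=None):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.guard = guard or (lambda: True)  # soft-coded resource check

def enabled(marking, t):
    return all(marking[p] >= n for p, n in t.inputs.items()) and t.guard()

def fire(marking, t):
    for p, n in t.inputs.items():
        marking[p] -= n
    for p, n in t.outputs.items():
        marking[p] = marking.get(p, 0) + n

# Hardwired: the robot resource is a place woven into the net structure.
marking_hw = {"job_waiting": 3, "robot": 1, "job_done": 0}
t_hw = Transition("process", {"job_waiting": 1, "robot": 1},
                  {"job_done": 1, "robot": 1})

# Soft-coded: the same resource is a plain variable tested in a guard,
# keeping the net itself to the primary job flow only.
robots_free = 1
marking_sc = {"job_waiting": 3, "job_done": 0}
t_sc = Transition("process", {"job_waiting": 1}, {"job_done": 1},
                  guard=lambda: robots_free > 0)

while enabled(marking_hw, t_hw):
    fire(marking_hw, t_hw)
print(marking_hw)                 # {'job_waiting': 0, 'robot': 1, 'job_done': 3}
print(enabled(marking_sc, t_sc))  # True while a robot is free
```

The soft-coded net has one place and one arc less per resource; with many shared resources, that difference is what keeps the model "skinny".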

    Efficient Detection on Stochastic Faults in PLC Based Automated Assembly Systems With Novel Sensor Deployment and Diagnoser Design

In this dissertation, we proposed novel sensor deployment and diagnoser designs to efficiently detect stochastic faults in PLC-based automated systems. First, a fuzzy quantitative graph-based sensor deployment was used to model the cause-effect relationships between faults and sensors. The analytic hierarchy process (AHP) was used to aggregate the heterogeneous properties between sensors and faults into single edge values in the fuzzy graph, thus quantitatively determining fault detectability. A multiple-objective model was set up to minimize fault unobservability and cost while achieving the required detection performance. Lexicographic mixed integer linear programming and greedy search were used, respectively, to optimize the model and thus assign sensors to faults. Second, a diagnoser based on a real-time fuzzy Petri net (RTFPN) was proposed to detect faults in discrete manufacturing systems. It uses a real-time Petri net to model the manufacturing plant and a fuzzy Petri net to isolate faults, and it can handle uncertainties and incorporate industry knowledge in diagnosing faults. The proposed approach was implemented in Visual Basic and tested and validated on a dual robot arm. Finally, the proposed sensor deployment approach and diagnoser were comprehensively evaluated using design-of-experiment techniques. Two-stage statistical analysis, comprising analysis of variance (ANOVA) and least significant difference (LSD) tests, was conducted to evaluate diagnosis performance in terms of positive detection rate, false alarms, accuracy, and detection delay; the results show that the proposed approaches perform better on these evaluation metrics. The major contributions of this research are: (1) a novel fuzzy quantitative graph-based sensor deployment approach that handles sensor heterogeneity and optimizes multiple objectives via lexicographic integer linear programming and a greedy algorithm, respectively; a case study on a five-tank system showed that system detectability improved from 0.62 with a signed directed graph to 0.70 with the proposed approach, and a second case study on a dual robot arm showed an improvement from 0.61 to 0.65. (2) A novel real-time fuzzy Petri net diagnoser that remedies nonsynchronization and integrates useful but incomplete knowledge for diagnosis; a third case study on a dual robot arm shows that the diagnoser achieves a high detection accuracy of 93% and a maximum detection delay of eight steps. (3) A comprehensive evaluation approach that can serve as a reference for the design, optimization, and evaluation of other diagnosis systems.
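
As an illustration of the deployment step only, the following is a minimal Python sketch of a greedy sensor-to-fault assignment under a cost budget. The detectability values, costs, and budget are invented toy numbers, and the dissertation's AHP aggregation and lexicographic MILP formulation are not reproduced here.

```python
# Sketch of the greedy step only: given aggregated detectability values
# d[sensor][fault] in [0, 1] (all numbers invented for illustration),
# pick sensors under a budget to minimise total fault unobservability.

detectability = {            # fuzzy-graph edge values, sensor -> fault
    "s1": {"f1": 0.9, "f2": 0.2, "f3": 0.0},
    "s2": {"f1": 0.1, "f2": 0.8, "f3": 0.4},
    "s3": {"f1": 0.0, "f2": 0.5, "f3": 0.7},
}
cost = {"s1": 3.0, "s2": 2.0, "s3": 2.5}
budget = 5.0
faults = ["f1", "f2", "f3"]

def unobservability(chosen):
    # A fault is covered by the best sensor watching it; unobservability
    # is what the chosen set still misses, summed over all faults.
    return sum(1.0 - max((detectability[s][f] for s in chosen), default=0.0)
               for f in faults)

chosen, spent = set(), 0.0
while True:
    candidates = [s for s in detectability
                  if s not in chosen and spent + cost[s] <= budget]
    if not candidates:
        break
    # Greedy criterion: best reduction in unobservability per unit cost.
    gain = lambda s: (unobservability(chosen) -
                      unobservability(chosen | {s})) / cost[s]
    best = max(candidates, key=gain)
    if gain(best) <= 0:
        break
    chosen.add(best)
    spent += cost[best]

print(chosen, round(unobservability(chosen), 2))  # {'s2', 's1'} 0.9
```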

Process mining: conformance and extension

Today’s business processes are realized by a complex sequence of tasks that are performed throughout an organization, often involving people from different departments and multiple IT systems. For example, an insurance company has a process to handle insurance claims for their clients, and a hospital has processes to diagnose and treat patients. Because there are many activities performed by different people throughout the organization, there is a lack of transparency about how exactly these processes are executed. However, understanding the process reality (the "as is" process) is the first necessary step to save cost, increase quality, or ensure compliance. The field of process mining aims to assist in creating process transparency by automatically analyzing processes based on existing IT data. Most processes are supported by IT systems nowadays. For example, Enterprise Resource Planning (ERP) systems such as SAP log all transaction information, and Customer Relationship Management (CRM) systems are used to keep track of all interactions with customers. Process mining techniques use these low-level log data (so-called event logs) to automatically generate process maps that visualize the process reality from different perspectives. For example, it is possible to automatically create process models that describe the causal dependencies between activities in the process. So far, process mining research has mostly focused on the discovery aspect (i.e., the extraction of models from event logs). This dissertation broadens the field of process mining to include the aspects of conformance and extension. Conformance aims at the detection of deviations from documented procedures by comparing the real process (as recorded in the event log) with an existing model that describes the assumed or intended process. Conformance is relevant for two reasons: 1. Most organizations document their processes in some form. For example, process models are created manually to understand and improve the process, comply with regulations, or for certification purposes. In the presence of existing models, it is often more important to point out the deviations from these existing models than to discover completely new models. Discrepancies emerge because business processes change, or because the models did not accurately reflect the real process in the first place (due to the manual and subjective creation of these models). If the existing models do not correspond to the actual processes, then they have little value. 2. Automatically discovered process models typically do not completely "fit" the event logs from which they were created. These discrepancies are due to noise and/or limitations of the discovery techniques used. Furthermore, in the context of complex and diverse process environments, the discovered models often need to be simplified to obtain useful insights. Therefore, it is crucial to be able to check how well a discovered process model actually represents the real process. Conformance techniques can be used to quantify the representativeness of a mined model before drawing further conclusions. They thus constitute an important quality measurement for the effective use of process discovery techniques in a practical setting. Once one is confident in the quality of an existing or discovered model, extension aims at the enrichment of these models by the integration of additional characteristics such as time, cost, or resource utilization.
By extracting additional information from an event log and projecting it onto an existing model, bottlenecks can be highlighted and correlations with other process perspectives can be identified. Such an integrated view on the process is needed to understand root causes for potential problems and actually make process improvements. Furthermore, extension techniques can be used to create integrated simulation models from event logs that resemble the real process more closely than manually created simulation models. In Part II of this thesis, we provide a comprehensive framework for the conformance checking of process models. First, we identify the evaluation dimensions fitness, precision/generalization, and structure as the relevant conformance dimensions. We develop several Petri net-based approaches to measure conformance in these dimensions and describe five case studies in which we successfully applied these conformance checking techniques to real and artificial examples. Furthermore, we provide a detailed literature review of related conformance measurement approaches (Chapter 4). Then, we study existing model evaluation approaches from the field of data mining. We develop three data mining-inspired evaluation approaches for discovered process models, one based on Cross Validation (CV), one based on the Minimal Description Length (MDL) principle, and one using methods based on Hidden Markov Models (HMMs). We conclude that process model evaluation faces similar yet different challenges compared to traditional data mining. Additional challenges emerge from the sequential nature of the data and the higher-level process models, which include concurrent dynamic behavior (Chapter 5). Finally, we point out current shortcomings and identify general challenges for conformance checking techniques. These challenges relate to the applicability of the conformance metric, the metric quality, and the bridging of different process modeling languages. We develop a flexible, language-independent conformance checking approach that provides a starting point to effectively address these challenges (Chapter 6). In Part III, we develop a concrete extension approach, provide a general model for process extensions, and apply our approach for the creation of simulation models. First, we develop a Petri net-based decision mining approach that aims at the discovery of decision rules at process choice points based on data attributes in the event log. While we leverage classification techniques from the data mining domain to actually infer the rules, we identify the challenges that relate to the initial formulation of the learning problem from a process perspective. We develop a simple approach to partially overcome these challenges, and we apply it in a case study (Chapter 7). Then, we develop a general model for process extensions to create integrated models including the process, data, time, and resource perspectives. We develop a concrete representation based on Coloured Petri nets (CPNs) to implement and deploy this model for simulation purposes (Chapter 8). Finally, we evaluate the quality of automatically discovered simulation models in two case studies and extend our approach to allow for operational decision making by incorporating the current process state as a non-empty starting point in the simulation (Chapter 9). Chapter 10 concludes this thesis with a detailed summary of the contributions and a list of limitations and future challenges.
The work presented in this dissertation is supported and accompanied by concrete implementations, which have been integrated into the ProM and ProMimport frameworks. Appendix A provides a comprehensive overview of the functionality of the developed software. The results presented in this dissertation have been published in more than twenty peer-reviewed scientific publications, including several high-quality journals.
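
The fitness dimension can be illustrated with the token-replay idea: missing tokens (created to force a transition to fire) and remaining tokens (left behind after the trace) both lower the score. The sketch below is a simplified Python rendition on a toy sequential net, not the thesis's ProM implementation.

```python
# Sketch of token-replay fitness: replay each trace on a net,
# force-firing transitions by creating the tokens they miss, then
# penalise missing and remaining tokens. The net is given as
# {transition: (input_places, output_places)}; a toy sequential net.

net = {
    "register": (["start"], ["p1"]),
    "check":    (["p1"],    ["p2"]),
    "pay":      (["p2"],    ["end"]),
}

def replay_fitness(traces):
    produced = consumed = missing = remaining = 0
    for trace in traces:
        marking = {"start": 1}          # initial marking
        produced += 1
        for act in trace:
            ins, outs = net[act]
            for p in ins:
                if marking.get(p, 0) > 0:
                    marking[p] -= 1
                else:
                    missing += 1        # token had to be created
                consumed += 1
            for p in outs:
                marking[p] = marking.get(p, 0) + 1
                produced += 1
        if marking.get("end", 0) > 0:   # consume the final token
            marking["end"] -= 1
        else:
            missing += 1
        consumed += 1
        remaining += sum(marking.values())
    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)

print(replay_fitness([["register", "check", "pay"]]))  # 1.0: trace fits
print(replay_fitness([["register", "pay"]]))           # ~0.67: skipped step
```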

Modeling humanoid swarm robots with Petri nets

Master's thesis in Computer Science. Robots have become a hot topic in today's electronic world, and there are many definitions of them. One definition in the Oxford dictionary states that "a robot is a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer". This paper deals with a special kind of robot known as the humanoid robot. These robots are replicas of human beings, with a head, torso, arms and legs. A model of a human is presented in this paper as a discrete event system, adapted from "Modeling and simulating motions of human bodies" [1]. This model consists of sixteen interrelated limbs defined in 3D space, so most limbs/joints are able to move at three different angles (α, β and γ). Full details regarding the Range of Motion (ROM) of a rigid body in forward kinematics are illustrated. Human motions are categorized into two types: stochastic and deterministic. Deterministic motions are demonstrated using the gait cycles of walking and running for a normal adult. The main focus of this paper is to model and simulate a humanoid robot represented as a Discrete Event System (DES) in a Petri net using GPenSIM, and later to expand the group of robots to a swarm setting. GPenSIM, the General Purpose Petri Net Simulator [2], is a toolbox for MATLAB for modeling and simulating discrete-event systems with Petri nets. Each joint angle is treated as a separate Petri net model, independent of the others, with movement limits defined by the ROM of a normal human body. The instructions for the motion of the joints during simulation are fed through a file to the instructor, and the movements of the joints are represented by the variation of tokens displayed at the end of the simulation in a graphical figure. Further, the same model structure is used in a swarm of robots: instead of feeding instructions to individual robots, a central instructor is created. This instructor acts as a master to the robots, which act as slaves with some predetermined commands embedded inside them. With a central command system, proper synchronization is achieved among a group of robots working as a swarm; a routine group dance or a simple group sport can be accomplished with calculated instructions to this swarm of robots.
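
A minimal Python sketch of this joint-per-net idea (hypothetical class and joint names; the actual models are GPenSIM/MATLAB): the token count of a place stands for the joint angle in one-degree steps, firings are blocked outside the ROM, and a central instructor replays one command stream across the whole swarm.

```python
# Sketch of one joint axis as its own small net: tokens encode the
# joint angle in 1-degree steps, and firing is blocked outside the
# joint's range of motion (ROM). Names and ROM values are illustrative.

class JointAxis:
    def __init__(self, name, rom_min, rom_max, angle=0):
        self.name = name
        self.rom = (rom_min, rom_max)
        self.tokens = angle                     # tokens == angle (deg)

    def fire(self, transition):
        step = {"flex": +1, "extend": -1}[transition]
        new = self.tokens + step
        if self.rom[0] <= new <= self.rom[1]:   # ROM guard
            self.tokens = new
            return True
        return False                            # transition not enabled

# Knee flexion/extension for a normal adult, roughly 0-135 degrees.
# A central "instructor" feeds the same command stream to every robot
# in the swarm, so their joint nets stay synchronised.
commands = ["flex"] * 60 + ["extend"] * 30
swarm = [JointAxis("knee_alpha", 0, 135) for _ in range(4)]
for cmd in commands:
    for robot_knee in swarm:
        robot_knee.fire(cmd)
print([k.tokens for k in swarm])                # [30, 30, 30, 30]
```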

    Tools and Algorithms for the Construction and Analysis of Systems

This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-COMP Tool Competition Papers.

    A review of cyber security risk assessment methods for SCADA systems

This paper reviews the state of the art in cyber security risk assessment of Supervisory Control and Data Acquisition (SCADA) systems. We select and examine in detail twenty-four risk assessment methods developed for, or applied in the context of, SCADA systems. We describe the essence of the methods and then analyse them in terms of aim; application domain; the stages of risk management addressed; key risk management concepts covered; impact measurement; sources of probabilistic data; and evaluation and tool support. Based on the analysis, we suggest an intuitive scheme for the categorisation of cyber security risk assessment methods for SCADA systems. We also outline five research challenges facing the domain and point out the approaches that might be taken.

    Modelling bacterial regulatory networks with Petri nets

To exploit the vast data obtained from high-throughput molecular biology, a variety of modelling and analysis techniques must be fully utilised. In this thesis, Petri nets are investigated within the context of computational systems biology, with a specific focus on facilitating the creation and analysis of models of biological pathways. The analysis of qualitative models of genetic networks using safe Petri net techniques was investigated, with particular reference to model checking. To exploit existing model repositories, a mapping was presented for the automatic translation of models encoded in the Systems Biology Markup Language (SBML) into the Petri net framework; the mapping is demonstrated via the conversion and invariant analysis of two published models of the glycolysis pathway. Dynamic stochastic simulations of biological systems suffer from two problems: computational cost and a lack of kinetic parameters. A new stochastic Petri net simulation tool, NASTY, was developed, which addresses the prohibitive real-time computational costs of simulations by using distributed job scheduling. To manage and maximise the usefulness of simulation results, a new data standard, TSML, was presented. The computational power of NASTY provided the basis for the development of a genetic algorithm for the automatic parameterisation of stochastic models; this parameter estimation technique was evaluated on a published model of the general stress response of E. coli, and an attempt to enhance the parameter estimation process using sensitivity analysis was then investigated. To explore the scope and limits of the Petri net techniques presented, a realistic case study investigated how the Pho and σB regulons interact to mitigate phosphate stress in Bacillus subtilis. This study used a combination of qualitative and quantitative Petri net techniques and was able to confirm an existing experimental hypothesis.
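
The core of such a stochastic Petri net simulation is Gillespie's direct method; the following Python sketch shows that step on an invented two-transition net with toy rate constants, leaving out NASTY's distributed job scheduling and parameter estimation.

```python
# Sketch of the core stochastic-simulation step for a stochastic Petri
# net (Gillespie's direct method); the net and rates are invented toy
# values, not NASTY's distributed implementation.
import math
import random

# Transition: (input places, output places, rate constant)
net = {
    "bind":   ({"A": 1, "B": 1}, {"AB": 1},        0.01),
    "unbind": ({"AB": 1},        {"A": 1, "B": 1}, 0.1),
}
marking = {"A": 100, "B": 80, "AB": 0}

def propensity(ins, rate):
    a = rate
    for p, n in ins.items():
        a *= math.comb(marking[p], n)   # mass-action combinatorics
    return a

t = 0.0
while t < 10.0:
    props = {name: propensity(ins, rate)
             for name, (ins, outs, rate) in net.items()}
    total = sum(props.values())
    if total == 0:
        break                           # dead marking, nothing can fire
    t += random.expovariate(total)      # exponential time to next firing
    r = random.uniform(0, total)        # pick which transition fires
    for name, a in props.items():
        r -= a
        if r <= 0:
            ins, outs, _ = net[name]
            for p, n in ins.items():
                marking[p] -= n
            for p, n in outs.items():
                marking[p] = marking.get(p, 0) + n
            break

print(round(t, 2), marking)
```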
