
    Production Scheduling

    Generally speaking, scheduling is the procedure of efficiently mapping a set of tasks or jobs (the studied objects) to a set of target resources. More specifically, as part of a larger planning and scheduling process, production scheduling is essential for the proper functioning of a manufacturing enterprise. This book presents ten chapters divided into five sections. Section 1 discusses rescheduling strategies, policies, and methods for production scheduling. Section 2 presents two chapters about flow shop scheduling. Section 3 describes heuristic and metaheuristic methods for treating the scheduling problem efficiently. In addition, two test cases are presented in Section 4: the first uses simulation, while the second shows a real implementation of a production scheduling system. Finally, Section 5 presents some modeling strategies for building production scheduling systems. This book will be of interest to those working in the decision-making branches of production, in various operational research areas, and in computational methods design. People from diverse backgrounds, ranging from academia and research to industry, can take advantage of this volume.
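
    As a concrete illustration of the two-machine flow shop problems treated in Section 2, the sketch below implements Johnson's rule, the classic sequencing procedure that minimises makespan on two machines. The jobs and processing times are invented for illustration, not taken from the book.

```python
# A minimal sketch of Johnson's rule for the two-machine flow shop.
# Job names and processing times below are invented for illustration.

def johnsons_rule(jobs):
    """jobs: dict name -> (time_on_machine_1, time_on_machine_2).
    Returns a job sequence minimising makespan on two machines."""
    front = sorted((j for j, (a, b) in jobs.items() if a <= b),
                   key=lambda j: jobs[j][0])               # schedule early
    back = sorted((j for j, (a, b) in jobs.items() if a > b),
                  key=lambda j: jobs[j][1], reverse=True)  # schedule late
    return front + back

def makespan(sequence, jobs):
    """Completion time of the last job on machine 2."""
    m1_free = m2_free = 0
    for j in sequence:
        a, b = jobs[j]
        m1_free += a                         # machine 1 runs jobs back to back
        m2_free = max(m2_free, m1_free) + b  # machine 2 waits for machine 1
    return m2_free

jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2), "J4": (6, 6), "J5": (7, 5)}
seq = johnsons_rule(jobs)
print(seq, makespan(seq, jobs))   # ['J3', 'J1', 'J4', 'J5', 'J2'] 24
```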

    Formal Modelling for Multi-Robot Systems Under Uncertainty

    Purpose of Review: To effectively synthesise and analyse multi-robot behaviour, we require formal task-level models which accurately capture multi-robot execution. In this paper, we review modelling formalisms for multi-robot systems under uncertainty, and discuss how they can be used for planning, reinforcement learning, model checking, and simulation. Recent Findings: Recent work has investigated models which more accurately capture multi-robot execution by considering different forms of uncertainty, such as temporal uncertainty and partial observability, and by modelling the effects of robot interactions on action execution. Other strands of work have presented approaches for reducing the size of multi-robot models to admit more efficient solution methods. This can be achieved by decoupling the robots under independence assumptions, or by reasoning over higher-level macro-actions. Summary: Existing multi-robot models demonstrate a trade-off between accurately capturing robot dependencies and uncertainty, and being small enough to tractably solve real-world problems. Therefore, future research should exploit realistic assumptions over multi-robot behaviour to develop smaller models which retain accurate representations of uncertainty and robot interactions, and exploit the structure of multi-robot problems, such as factored state spaces, to develop scalable solution methods. Comment: 23 pages, 0 figures, 2 tables. Current Robotics Reports (2023). This version of the article has been accepted for publication, after peer review (when applicable), but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: https://dx.doi.org/10.1007/s43154-023-00104-
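
    To make the baseline formalism concrete, the sketch below runs value iteration on a toy single-robot MDP; the surveyed multi-robot models (multi-agent MDPs, POMDPs, and their variants) extend this basic setup with interaction and observation structure. States, actions, and rewards are invented for illustration.

```python
# A minimal value-iteration sketch for a toy Markov decision process (MDP).
# transitions[state][action] -> list of (probability, next_state, reward);
# all names and numbers below are invented for illustration.

transitions = {
    "charged": {"work":   [(0.8, "charged", 5.0), (0.2, "low", 5.0)],
                "charge": [(1.0, "charged", 0.0)]},
    "low":     {"work":   [(0.5, "low", 5.0), (0.5, "dead", -100.0)],
                "charge": [(1.0, "charged", -1.0)]},
    "dead":    {"wait":   [(1.0, "dead", 0.0)]},
}

def value_iteration(transitions, gamma=0.95, eps=1e-6):
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(sum(p * (r + gamma * values[s2])
                           for p, s2, r in outcomes)
                       for outcomes in actions.values())
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:          # Bellman backups have converged
            return values

print(value_iteration(transitions))
```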

    A petri-net based methodology for modeling, simulation, and control of flexible manufacturing systems

    Global competition has made it necessary for manufacturers to introduce such advanced technologies as flexible and agile manufacturing, intelligent automation, and computer-integrated manufacturing. However, the extent to which these technologies are applied varies from industry to industry and has met various degrees of success. One critical barrier to the successful implementation of advanced manufacturing systems is the ever-increasing complexity of their modeling, analysis, simulation, and control. The purpose of this work is to introduce a set of Petri net-based tools and methods to address a variety of problems associated with the design and implementation of flexible manufacturing systems (FMSs). More specifically, this work proposes Petri nets as an integrated tool for the modeling, simulation, and control of FMSs. The contributions of this work are multifold. First, it demonstrates a new application of PNs for simulation by evaluating the performance of pull and push paradigms in manufacturing systems. Second, it introduces a class of PNs, Augmented-timed Petri nets (ATPNs), in order to increase the power of PNs to simulate and control flexible systems with breakdowns. Third, it proposes a new class of PNs called Real-time Petri nets (RTPNs) for discrete event control of FMSs. A detailed comparison between RTPNs and traditional discrete event methods such as ladder logic diagrams is presented to answer the basic question 'Why is a PN a better tool than a ladder logic diagram?' and to justify the PN method. Also, a conversion procedure that automatically generates PN models from a given class of logic control specifications is presented. Finally, a methodology that uses PNs for the development of object-oriented control software is proposed. The present work extends the PN state of the art in two ways. First, it offers a wide scope for engineers and managers who are responsible for the design and implementation of modern manufacturing systems to evaluate Petri nets for applications in their work. Second, it further develops Petri net-based methods for discrete event control of manufacturing systems.
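
    As a minimal illustration of the enabling/firing semantics that the proposed ATPN and RTPN classes build on, the sketch below interprets a small untimed place/transition net modelling a load/unload cell. The net and token counts are invented, not taken from the dissertation.

```python
# A minimal place/transition Petri net interpreter. Each transition consumes
# tokens from its input places and produces tokens in its output places.
# The load/unload net below is invented for illustration.

transitions = {
    "load":   {"in": {"raw": 1, "machine_idle": 1}, "out": {"machine_busy": 1}},
    "unload": {"in": {"machine_busy": 1}, "out": {"done": 1, "machine_idle": 1}},
}

def enabled(marking, t):
    """A transition is enabled iff every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transitions[t]["in"].items())

def fire(marking, t):
    """Return the new marking after firing transition t."""
    assert enabled(marking, t), f"{t} is not enabled"
    m = dict(marking)
    for p, n in transitions[t]["in"].items():
        m[p] -= n
    for p, n in transitions[t]["out"].items():
        m[p] = m.get(p, 0) + n
    return m

marking = {"raw": 3, "machine_idle": 1}
for t in ["load", "unload", "load", "unload"]:
    marking = fire(marking, t)
print(marking)   # two parts done, one raw part left, machine idle again
```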

    Tools and Algorithms for the Construction and Analysis of Systems

    This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The total of 42 full and 8 short tool demo papers presented in these volumes was carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems.

    Process mining: conformance and extension

    Today’s business processes are realized by a complex sequence of tasks that are performed throughout an organization, often involving people from different departments and multiple IT systems. For example, an insurance company has a process to handle insurance claims for their clients, and a hospital has processes to diagnose and treat patients. Because there are many activities performed by different people throughout the organization, there is a lack of transparency about how exactly these processes are executed. However, understanding the process reality (the "as is" process) is the first necessary step to save cost, increase quality, or ensure compliance. The field of process mining aims to assist in creating process transparency by automatically analyzing processes based on existing IT data. Most processes are supported by IT systems nowadays. For example, Enterprise Resource Planning (ERP) systems such as SAP log all transaction information, and Customer Relationship Management (CRM) systems are used to keep track of all interactions with customers. Process mining techniques use these low-level log data (so-called event logs) to automatically generate process maps that visualize the process reality from different perspectives. For example, it is possible to automatically create process models that describe the causal dependencies between activities in the process. So far, process mining research has mostly focused on the discovery aspect (i.e., the extraction of models from event logs). This dissertation broadens the field of process mining to include the aspect of conformance and extension. Conformance aims at the detection of deviations from documented procedures by comparing the real process (as recorded in the event log) with an existing model that describes the assumed or intended process. Conformance is relevant for two reasons: 1. Most organizations document their processes in some form. For example, process models are created manually to understand and improve the process, comply with regulations, or for certification purposes. In the presence of existing models, it is often more important to point out the deviations from these existing models than to discover completely new models. Discrepancies emerge because business processes change, or because the models did not accurately reflect the real process in the first place (due to the manual and subjective creation of these models). If the existing models do not correspond to the actual processes, then they have little value. 2. Automatically discovered process models typically do not completely "fit" the event logs from which they were created. These discrepancies are due to noise and/or limitations of the used discovery techniques. Furthermore, in the context of complex and diverse process environments the discovered models often need to be simplified to obtain useful insights. Therefore, it is crucial to be able to check how much a discovered process model actually represents the real process. Conformance techniques can be used to quantify the representativeness of a mined model before drawing further conclusions. They thus constitute an important quality measurement to effectively use process discovery techniques in a practical setting. Once one is confident in the quality of an existing or discovered model, extension aims at the enrichment of these models by the integration of additional characteristics such as time, cost, or resource utilization. 
    By extracting additional information from an event log and projecting it onto an existing model, bottlenecks can be highlighted and correlations with other process perspectives can be identified. Such an integrated view on the process is needed to understand root causes for potential problems and actually make process improvements. Furthermore, extension techniques can be used to create integrated simulation models from event logs that resemble the real process more closely than manually created simulation models. In Part II of this thesis, we provide a comprehensive framework for the conformance checking of process models. First, we identify the evaluation dimensions fitness, decision/generalization, and structure as the relevant conformance dimensions. We develop several Petri net-based approaches to measure conformance in these dimensions and describe five case studies in which we successfully applied these conformance checking techniques to real and artificial examples. Furthermore, we provide a detailed literature review of related conformance measurement approaches (Chapter 4). Then, we study existing model evaluation approaches from the field of data mining. We develop three data mining-inspired evaluation approaches for discovered process models, one based on Cross Validation (CV), one based on the Minimal Description Length (MDL) principle, and one using methods based on Hidden Markov Models (HMMs). We conclude that process model evaluation faces similar yet different challenges compared to traditional data mining. Additional challenges emerge from the sequential nature of the data and the higher-level process models, which include concurrent dynamic behavior (Chapter 5). Finally, we point out current shortcomings and identify general challenges for conformance checking techniques. These challenges relate to the applicability of the conformance metric, the metric quality, and the bridging of different process modeling languages. We develop a flexible, language-independent conformance checking approach that provides a starting point to effectively address these challenges (Chapter 6). In Part III, we develop a concrete extension approach, provide a general model for process extensions, and apply our approach to the creation of simulation models. First, we develop a Petri net-based decision mining approach that aims at the discovery of decision rules at process choice points based on data attributes in the event log. While we leverage classification techniques from the data mining domain to actually infer the rules, we identify the challenges that relate to the initial formulation of the learning problem from a process perspective. We develop a simple approach to partially overcome these challenges, and we apply it in a case study (Chapter 7). Then, we develop a general model for process extensions to create integrated models including the process, data, time, and resource perspectives. We develop a concrete representation based on Coloured Petri nets (CPNs) to implement and deploy this model for simulation purposes (Chapter 8). Finally, we evaluate the quality of automatically discovered simulation models in two case studies and extend our approach to allow for operational decision making by incorporating the current process state as a non-empty starting point in the simulation (Chapter 9). Chapter 10 concludes this thesis with a detailed summary of the contributions and a list of limitations and future challenges.
    The work presented in this dissertation is supported and accompanied by concrete implementations, which have been integrated into the ProM and ProMimport frameworks. Appendix A provides a comprehensive overview of the functionality of the developed software. The results presented in this dissertation have been published in more than twenty peer-reviewed scientific publications, including several high-quality journals.
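
    As a deliberately simplified stand-in for the Petri net-based fitness measurement developed in Part II, the sketch below replays a toy event log against a directly-follows model and reports the fraction of replayable events; the dissertation's token-based replay additionally accounts for missing and remaining tokens. The log and model are invented for illustration.

```python
# A simplified conformance check: replay each trace of a toy event log
# against a set of allowed directly-follows pairs and count the events
# the model can explain. Log and model are invented for illustration.

model = {("start", "register"), ("register", "check"), ("check", "decide"),
         ("check", "check"),              # rework loop is allowed
         ("decide", "pay"), ("decide", "reject"),
         ("pay", "end"), ("reject", "end")}

log = [
    ["register", "check", "decide", "pay"],               # fits the model
    ["register", "check", "check", "decide", "reject"],   # fits, with rework
    ["register", "decide", "pay"],                        # skips the check
]

def fitness(log, model):
    """Fraction of replayed steps (including start/end) allowed by the model."""
    ok = total = 0
    for trace in log:
        steps = list(zip(["start"] + trace, trace + ["end"]))
        total += len(steps)
        ok += sum((a, b) in model for a, b in steps)
    return ok / total

print(f"fitness = {fitness(log, model):.2f}")   # 14 of 15 steps conform
```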

    HIERARCHICAL HYBRID-MODEL BASED DESIGN, VERIFICATION, SIMULATION, AND SYNTHESIS OF MISSION CONTROL FOR AUTONOMOUS UNDERWATER VEHICLES

    The objective of the modeling, verification, and synthesis of hierarchical hybrid mission control for underwater vehicles is to (i) propose a hierarchical architecture for the mission control of an autonomous system, (ii) develop extended hybrid state machine models for the mission control, (iii) use these models to verify logical correctness, (iv) check the feasibility of simulation software to model the mission executed by an autonomous underwater vehicle (AUV), (v) perform synthesis of high-level mission coordinators for coordinating lower-level mission controllers in accordance with the given mission, and (vi) suggest further design changes for improvement. The dissertation describes a hierarchical architecture in which mission-level controllers based on hybrid systems theory have been, and are being, developed using a hybrid systems design tool that allows graphical design, iterative redesign, and code generation for rapid deployment onto the target platform. The goal is to support current and future autonomous underwater vehicle (AUV) programs to meet evolving requirements and capabilities. While the tool facilitates rapid redesign and deployment, it is crucial to include safety and performance verification in each step of the (re)design process. To this end, the modeling of the hierarchical hybrid mission controller is formalized to facilitate the use of available tools and newly developed methods for formal verification of safety and performance specifications. A hierarchical hybrid architecture for the mission control of autonomous systems, with application to AUVs, is proposed, and a theoretical framework for the models that make up the architecture is outlined. An underwater vehicle, like any other autonomous system, is a hybrid system: the dynamics of the vehicle as well as its vehicle-level control are continuous, whereas the mission-level control is discrete, making the overall system a hybrid one, i.e., one possessing both continuous and discrete states. The hybrid state machine models of the mission controller modules are derived from their implementation in Teja, a software tool for representing hybrid systems with support for automatic code generation. The verification of their logical correctness properties has been done using UPPAAL, a software tool for the verification of timed automata, a special kind of hybrid system. A Teja-to-Uppaal converter, called dem2xml, has been created at the Applied Research Lab that converts a hybrid (timed) autonomous system description in Teja into an Uppaal system description. Verification work involved developing abstract models for the lower-level vehicle controllers with which the mission controller modules interact, and follows a hierarchical approach: assuming the correctness of level-zero (vehicle) controllers, we establish the correctness of the level-one mission controller modules, then the correctness of the level-two modules, and so on. The goal of verification is to show that any valid interpretation of a mission formalized in our research guarantees the safe and correct execution of actions. Simulation of the sequence of actions executed for each of the operations gives a better view of the combined working of the mission coordinators and the low-level controllers, so we next looked into the feasibility of simulating the operations executed during a mission. A Perl program has been developed to convert the UPPAAL files in .xml format to OpenGL graphic files.
    The graphic files simulate the steps involved in the execution of a sequence of operations executed by an AUV. The highest-level coordinators send mission orders to be executed by the lower-level controllers, so a more generalized design of the highest-level controllers would help to incorporate the execution of a variety of missions for a vast field of applications. Initially, we consider manually synthesized mission coordinator modules; later, we design automated synthesis of coordinators. This method synthesizes mission coordinators which coordinate the lower-level controllers for the execution of the missions ordered, and it can be used for any autonomous system.
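
    The sketch below illustrates, in miniature, the kind of timed state machine that UPPAAL verifies: locations with clock invariants and guarded, clock-resetting edges. The two-location transit/survey controller is invented for illustration; the actual models are the Teja-generated mission controller modules described above.

```python
# A toy timed state machine: locations with clock invariants and guarded,
# clock-resetting edges. The transit/survey mission controller below is
# invented for illustration.

# edges[location] -> list of (guard on clock x, reset clock?, next location)
edges = {
    "transit": [("x >= 5", True, "survey")],    # reach waypoint after >= 5 time units
    "survey":  [("x >= 10", True, "transit")],  # finish survey leg after >= 10
}
invariants = {"transit": 8, "survey": 12}       # must leave before x exceeds this

def step(location, x, dwell):
    """Advance clock x by `dwell`, then take the first enabled edge, if any."""
    x += dwell
    assert x <= invariants[location], f"invariant of {location} violated"
    for guard, reset, nxt in edges[location]:
        bound = float(guard.split(">=")[1])
        if x >= bound:                          # guard satisfied: fire the edge
            return nxt, (0.0 if reset else x)
    return location, x                          # no edge enabled yet

loc, x = "transit", 0.0
for dwell in [3.0, 3.0, 11.0, 6.0]:
    loc, x = step(loc, x, dwell)
    print(loc, x)
```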

    A Compositional Approach to Embedded System Design

    An important observable trend in embedded system design is the growing system complexity. Besides the sheer increase in functionality, the growing complexity has another dimension: the resulting heterogeneity with respect to the different functions and components of an embedded system. This means that functions from different application domains are tightly coupled in a single embedded system. It is established industry practice that specialized specification languages and design environments are used in each application domain. The resulting heterogeneity of the specification is increased even further by reused components (legacy code, IP). Since there is little hope that a single suitable language will replace this heterogeneous set of languages, multi-language design is becoming increasingly important for complex embedded systems. The key problems in the context of multi-language design are the safe integration of the differently specified subsystems and the optimized implementation of the whole system. Both require the reliable validation of the system function as well as of the non-functional system properties. Current cosimulation-based approaches are well suited for functional validation and debugging, but they are less powerful for the validation of non-functional system properties. In this dissertation, a novel compositional approach to embedded system design is presented which augments existing cosimulation-based design flows with formal analysis capabilities regarding non-functional system properties such as timing or power consumption. Starting from a truly multi-language specification, the system is transformed into an abstract internal design representation which serves as the basis for system-wide analysis and optimization.
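
    As a toy example of the compositional analysis of non-functional properties argued for above, the sketch below combines per-component worst-case execution times (WCETs) over an abstract task graph to bound end-to-end latency, a guarantee that cosimulation runs alone cannot provide. Components and numbers are invented for illustration.

```python
# Compositional worst-case latency: per-component WCETs are combined over
# an abstract task graph (longest path to any sink). All components and
# numbers are invented for illustration.

import functools

wcet = {"sensor": 2.0, "filter": 5.0, "control": 4.0, "actuator": 1.0,
        "logger": 3.0}
successors = {"sensor": ["filter", "logger"], "filter": ["control"],
              "control": ["actuator"], "logger": [], "actuator": []}

@functools.cache
def worst_case_latency(component):
    """Longest WCET path from `component` to any sink of the task graph."""
    return wcet[component] + max(
        (worst_case_latency(s) for s in successors[component]), default=0.0)

print(worst_case_latency("sensor"))   # 2 + 5 + 4 + 1 = 12.0
```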

    Coping with the State Explosion Problem in Formal Methods: Advanced Abstraction Techniques and Big Data Approaches.

    Formal verification of dynamic, concurrent, and real-time systems has been the focus of several decades of software engineering research. Formal verification requires high-performance data processing software for extracting knowledge from the unprecedented amount of data containing all reachable states and all transitions that systems can make among those states: for instance, the extraction of specific reachable states, traces, and more. One of the most challenging tasks in this context is the development of tools able to cope with the complexity of analysing real-world models. Many methods have been proposed to alleviate this problem. For instance, advanced state space techniques aim at reducing the data that needs to be constructed in order to verify certain properties. Other directions are the efficient implementation of such analysis techniques, and studying ways to parallelize the algorithms in order to exploit multi-core and distributed architectures. Since cloud-based computing resources have become easily accessible, there is an opportunity for verification techniques and tools to undergo a deep technological transition to exploit the newly available architectures. This has created an increasing interest in parallelizing and distributing verification techniques. Cloud computing is an emerging and evolving paradigm whose challenges and opportunities allow for new research directions and applications. There is evidence that this trend will continue; in fact, several companies are putting remarkable effort into delivering services able to offer hundreds, or even thousands, of commodity computers to customers, thus enabling users to run massively parallel jobs. This revolution has already started in different scientific fields, achieving remarkable breakthroughs through new kinds of experiments that would have been impossible only a few years ago. However, despite many years of work in the area of multi-core and distributed model checking, few works introduce algorithms that can scale effortlessly to the use of thousands of loosely connected computers in a network, so existing technology does not yet allow us to take full advantage of the vast compute power of a "cloud" environment. Moreover, although model checking tools are so-called "push-button" tools, managing the high-performance computing environment required by distributed scientific applications is far from push-button, especially when one wants to exploit general-purpose cloud computing facilities. This thesis focuses on two complementary approaches to deal with the state explosion problem in formal verification. On the one hand, we try to decrease the exploration space by studying advanced state space methods for real-time systems modeled with Time Basic Petri nets; in particular, we addressed and solved several different open problems for this modeling formalism. On the other hand, we try to increase the available computational power by introducing approaches, techniques, and software tools that allow us to leverage the "big data" trend. In particular, we provide frameworks and software tools that can easily be specialized to deal with the construction and verification of very large state spaces of different kinds of formalisms by exploiting big data approaches and cloud computing infrastructures.
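
    The loop behind the state explosion problem is easy to state: the sketch below performs explicit-state breadth-first reachability on a toy system of n independent toggles, whose reachable state space already grows as 2^n. It is this frontier expansion that the abstraction techniques shrink and that the big data and cloud approaches distribute.

```python
# Explicit-state reachability by breadth-first search. The toy system of
# n independent boolean toggles is invented: it already yields 2**n states.

from collections import deque

def successors(state):
    """Each of the n toggle bits can be flipped independently."""
    return [state[:i] + (not state[i],) + state[i + 1:]
            for i in range(len(state))]

def explore(initial):
    """Breadth-first construction of the reachable state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:                  # on real models this loop is the
        state = frontier.popleft()   # target of parallelization/distribution
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

n = 12
print(len(explore((False,) * n)))   # 4096 == 2**12: exponential in n
```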