
    Discrete and hybrid methods for the diagnosis of distributed systems

    Many important activities of modern society rely on the proper functioning of complex systems such as electricity networks, telecommunication networks, manufacturing plants and aircraft. The supervision of such systems must include strong diagnosis capability to effectively detect the occurrence of faults and ensure that appropriate corrective measures can be taken to recover from the faults or prevent total failure. This thesis addresses issues in the diagnosis of large complex systems. Such systems are usually distributed in nature, i.e. they consist of many interconnected components, each with its own local behaviour. These components interact to produce an emergent global behaviour that is complex. As these systems increase in complexity and size, their diagnosis becomes increasingly challenging. In the first part of this thesis, a method is proposed for the diagnosis of distributed systems that avoids a monolithic global computation. The method, based on converting the graph of the system into a junction tree, takes the topology of the system into account when choosing how to merge local diagnoses on the components while still obtaining a globally consistent result. The method is shown to work well for systems with tree or near-tree structures, and is further extended to handle systems with high clustering by selectively ignoring some connections in a way that still allows an accurate diagnosis to be obtained. A hybrid system approach is explored in the second part of the thesis, where continuous dynamics information on the system is also retained to help better isolate or identify faults. A hybrid system framework is presented that models both continuous dynamics and discrete evolution in dynamical systems, based on detecting changes in the fundamental governing dynamics of the system rather than on residual estimation. This makes it possible to handle systems that might not be well characterised and where parameter drift is present. The discrete aspect of the hybrid system model is used to derive diagnosability conditions, using indicator functions, for the detection and isolation of multiple, arbitrary sequential or simultaneous events in hybrid dynamical networks. Issues with diagnosis in the presence of measurement uncertainty due to sensor or actuator noise are also addressed, since faults may generate symptoms of the same order of magnitude as this noise. The use of statistical techniques within a hybrid system framework is proposed to detect these elusive fault symptoms and to translate this information into probabilities for the actual operational mode and for transitions between modes, making it possible to apply probabilistic analysis to the system and handle the underlying uncertainty.
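
    To make the first part's idea concrete, here is a minimal sketch (with hypothetical component names, variables and candidate diagnoses, not the thesis implementation) of how local diagnoses can be merged along a tree of components so that shared variables stay consistent, without ever building one monolithic global model; the thesis obtains such a tree from the system graph via a junction tree.

```python
# Minimal sketch of tree-structured merging of local diagnoses.
# Component names, variables, and candidate sets are hypothetical placeholders.

from itertools import product

# Each component reports candidate local diagnoses as dicts mapping
# variable names (shared ports or fault modes) to values.
local_diagnoses = {
    "C1": [{"x": 0, "f1": "ok"}, {"x": 1, "f1": "faulty"}],
    "C2": [{"x": 0, "y": 1, "f2": "ok"}, {"x": 1, "y": 0, "f2": "ok"}],
    "C3": [{"y": 1, "f3": "ok"}, {"y": 0, "f3": "faulty"}],
}

# Tree over components (parent -> children); merging follows the edges,
# so consistency is only ever checked on variables two neighbours share.
tree = {"C1": ["C2"], "C2": ["C3"], "C3": []}

def consistent(d1, d2):
    """Two partial diagnoses agree on every variable they share."""
    return all(d1[v] == d2[v] for v in d1.keys() & d2.keys())

def merge(node):
    """Combine a component's candidates with the merged candidates of its children."""
    candidates = local_diagnoses[node]
    for child in tree[node]:
        child_candidates = merge(child)
        candidates = [
            {**a, **b}
            for a, b in product(candidates, child_candidates)
            if consistent(a, b)
        ]
    return candidates

# Globally consistent diagnoses, obtained without a monolithic global computation.
for diagnosis in merge("C1"):
    print(diagnosis)
```

    Because consistency is only checked between neighbouring components on the variables they share, the cost grows with the size of the local interfaces rather than with the whole system, which is what makes the tree or near-tree case tractable.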

    Efficient Detection on Stochastic Faults in PLC Based Automated Assembly Systems With Novel Sensor Deployment and Diagnoser Design

    In this dissertation, we propose solutions for novel sensor deployment and diagnoser design to efficiently detect stochastic faults in PLC-based automated systems. First, a fuzzy quantitative graph-based sensor deployment approach is used to model the cause-effect relationships between faults and sensors. The analytic hierarchy process (AHP) is used to aggregate the heterogeneous properties of sensors and faults into single edge values in the fuzzy graph, thus quantitatively determining fault detectability. A multi-objective model is set up to minimize fault unobservability and cost while achieving the required detectability performance. Lexicographical mixed integer linear programming and greedy search are used, respectively, to optimize the model and thereby assign sensors to faults. Second, a diagnoser based on a real-time fuzzy Petri net (RTFPN) is proposed to detect faults in discrete manufacturing systems. It uses the real-time PN to model the manufacturing plant and the fuzzy PN to isolate faults, and it can handle uncertainties and incorporate industry knowledge to diagnose faults. The proposed approach was implemented in Visual Basic and tested and validated on a dual robot arm. Finally, the proposed sensor deployment approach and diagnoser were comprehensively evaluated using design-of-experiment techniques. A two-stage statistical analysis comprising analysis of variance (ANOVA) and least significant difference (LSD) tests was conducted to evaluate diagnosis performance in terms of positive detection rate, false alarms, accuracy and detection delay; the results show that the proposed approaches perform better on these evaluation metrics. The major contributions of this research are the following: (1) a novel fuzzy quantitative graph-based sensor deployment approach that handles sensor heterogeneity and optimizes multiple objectives using lexicographical integer linear programming and a greedy algorithm, respectively. A case study on a five-tank system showed that system detectability improved from 0.62 with the signed directed graph approach to 0.70 with the proposed approach; a second case study on a dual robot arm showed a similar improvement, from 0.61 to 0.65. (2) A novel real-time fuzzy Petri net diagnoser that remedies nonsynchronization and integrates useful but incomplete knowledge for diagnosis purposes. A third case study on a dual robot arm shows that the diagnoser achieves a high detection accuracy of 93% and a maximum detection delay of eight steps. (3) A comprehensive evaluation approach that can serve as a reference for the design, optimization and evaluation of other diagnosis systems.
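
    As an illustration of the deployment side, the sketch below shows a greedy selection of sensors that trades detectability against cost under a budget. The detectability values and costs here are invented; in the dissertation the edge values come from AHP aggregation over the fuzzy graph, and an exact lexicographical MILP formulation is considered alongside the greedy heuristic.

```python
# Minimal sketch of greedy sensor selection under a cost budget.
# detectability[sensor][fault] in [0, 1] and the costs are hypothetical numbers.

detectability = {
    "s1": {"f1": 0.8, "f2": 0.1},
    "s2": {"f1": 0.3, "f2": 0.7},
    "s3": {"f1": 0.5, "f2": 0.5},
}
cost = {"s1": 3.0, "s2": 2.0, "s3": 4.0}
budget = 6.0
faults = ["f1", "f2"]

def coverage(selected):
    """System detectability: average of the best detectability achieved per fault."""
    if not selected:
        return 0.0
    return sum(max(detectability[s][f] for s in selected) for f in faults) / len(faults)

selected, spent = set(), 0.0
while True:
    # Pick the affordable sensor with the largest coverage gain per unit cost.
    best, best_gain = None, 0.0
    for s in set(detectability) - selected:
        if spent + cost[s] > budget:
            continue
        gain = (coverage(selected | {s}) - coverage(selected)) / cost[s]
        if gain > best_gain:
            best, best_gain = s, gain
    if best is None:
        break
    selected.add(best)
    spent += cost[best]

print(selected, round(coverage(selected), 2))
```

    The greedy pass is only a heuristic, which is why the exact lexicographical integer programming formulation is also used in the dissertation when guarantees on the deployment are needed.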

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, it presents topics related to control and robot design, and it introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. The book also covers navigation and vision algorithms, automatic handwritten-text comprehension and speech recognition systems that will be included in the next generation of productive systems developed by man.

    Conceptual Models for Assessment & Assurance of Dependability, Security and Privacy in the Eternal CONNECTed World

    This is the first deliverable of WP5, which covers Conceptual Models for Assessment & Assurance of Dependability, Security and Privacy in the Eternal CONNECTed World. As described in the project DOW, in this document we cover the following topics:
    • Metrics definition
    • Identification of limitations of current V&V approaches and exploration of extensions/refinements/new developments
    • Identification of security, privacy and trust models
    The focus of WP5 is on dependability concerning the peculiar aspects of the project, i.e., the threats deriving from the on-the-fly synthesis of CONNECTors. We explore appropriate means for assessing/guaranteeing that the CONNECTed System yields acceptable levels of non-functional properties, such as reliability (e.g., the CONNECTor will ensure continued communication without interruption), security and privacy (e.g., the transactions do not disclose confidential data), and trust (e.g., Networked Systems are put in communication only with parties they trust). After defining a conceptual framework for metrics definition, we present the approaches to dependability in CONNECT, which cover: i) Model-based V&V, ii) Security enforcement and iii) Trust management. The approaches are centered around monitoring, to allow for on-line analysis. Monitoring is performed alongside the functionalities of the CONNECTed System and is used to detect conditions that are deemed relevant by its clients (i.e., the other CONNECT Enablers). A unified lifecycle encompassing dependability analysis, security enforcement and trust management is outlined, spanning discovery time, synthesis time and execution time.

    SEGMENTATION, RECOGNITION, AND ALIGNMENT OF COLLABORATIVE GROUP MOTION

    Modeling and recognition of human motion in videos has broad applications in behavioral biometrics, content-based visual data analysis, security and surveillance, as well as the design of interactive environments. Significant progress has been made in the past two decades by way of new models, methods, and implementations. In this dissertation, we focus our attention on a relatively less investigated sub-area called collaborative group motion analysis. Collaborative group motions are those that typically involve multiple objects, wherein the motion patterns of individual objects may vary significantly in both space and time, but the collective motion pattern of the ensemble allows characterization in terms of geometry and statistics. The motions or activities of an individual object therefore constitute local information, and a framework that synthesizes all local information into a holistic view and explicitly characterizes interactions among objects requires large-scale global reasoning and is of significant complexity. In this dissertation, we first review relevant previous contributions on human motion/activity modeling and recognition, and then propose several approaches to answer a sequence of traditional vision questions: 1) which motion elements are relevant to a group motion pattern of interest (Segmentation); 2) what the underlying motion pattern is (Recognition); and 3) how similar two motion ensembles are and how one can be 'optimally' transformed to match the other (Alignment). Our primary practical scenario is American football plays, where the corresponding problems are 1) who the offensive players are; 2) what offensive strategy they are using; and 3) whether two plays use the same strategy and how the spatio-temporal misalignment between them, due to internal or external factors, can be removed. The proposed approaches discard the traditional modeling paradigm and instead explore concise descriptors, hierarchies, stochastic mechanisms, or compact generative models to achieve both effectiveness and efficiency. In particular, the intrinsic geometry of the spaces of the involved features/descriptors/quantities is exploited and statistical tools are established on these nonlinear manifolds. These initial attempts have identified new challenging problems in complex motion analysis, as well as in more general tasks in video dynamics. The insights gained from nonlinear geometric modeling and analysis in this dissertation may hopefully be useful toward a broader class of computer vision applications.
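
    As a generic illustration of the alignment question only (not the dissertation's manifold-based method), the sketch below uses ordinary Procrustes analysis to recover the rotation and translation that best map one hypothetical configuration of player positions onto another.

```python
# Generic illustration of spatial alignment: find the rotation R and translation t
# that best map one configuration of player positions onto another
# (ordinary Procrustes analysis on hypothetical 2D positions).

import numpy as np

def procrustes_align(A, B):
    """Return R, t minimizing ||(A @ R + t) - B||_F for row-vector point sets A, B."""
    A_mean, B_mean = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - A_mean, B - B_mean
    U, _, Vt = np.linalg.svd(A0.T @ B0)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # correct an improper (reflecting) solution
        U[:, -1] *= -1
        R = U @ Vt
    t = B_mean - A_mean @ R
    return R, t

# Hypothetical positions of the same formation observed in two plays:
# the second is a rotated and shifted copy of the first.
play_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]])
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
play_b = play_a @ rot.T + np.array([5.0, -3.0])

R, t = procrustes_align(play_a, play_b)
residual = np.linalg.norm(play_a @ R + t - play_b)
print("alignment residual:", round(float(residual), 6))   # ~0 for a rigid motion
```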

    Discrete Event Systems: Models and Applications; Proceedings of an IIASA Conference, Sopron, Hungary, August 3-7, 1987

    Work in discrete event systems has just begun. There is a great deal of activity now, and much enthusiasm. There is considerable diversity, reflecting differences in the intellectual formation of workers in the field and in the applications that guide their effort. This diversity is manifested in a proliferation of DEM formalisms. Some of the formalisms are essentially different, while some of the "new" formalisms are reinventions of existing formalisms presented in new terms. These "duplications" reveal both the new domains of intended application and the difficulty of keeping up with work that is published in journals on computer science, communications, signal processing, automatic control, and mathematical systems theory, to name the main disciplines with active research programs in discrete event systems. The first eight papers deal with models at the logical level, the next four at the temporal level, and the last six at the stochastic level. Of these eighteen papers, three focus on manufacturing, four on communication networks, and one on digital signal processing; the remaining ten address methodological issues ranging from simulation to the computational complexity of some synthesis problems. The authors have made good efforts to make their contributions self-contained and to provide a representative bibliography. The volume should therefore be both accessible and useful to those who are just getting interested in discrete event systems.

    Interactive generation and learning of semantic-driven robot behaviors

    The generation of adaptive and reflexive behavior is a challenging task in artificial intelligence and robotics. In this thesis, we develop a framework for knowledge representation, acquisition, and behavior generation that explicitly incorporates semantics, adaptive reasoning and knowledge revision. Using our model, semantic information can be exploited by traditional planning and decision-making frameworks to generate empirically effective and adaptive robot behaviors, as well as to enable complex but natural human-robot interactions. In our work, we introduce a model of semantic mapping, connect it with the notion of affordances, and use those concepts to develop semantic-driven algorithms for knowledge acquisition, update, learning and robot behavior generation. In particular, we apply such models within existing planning and decision-making frameworks to achieve semantic-driven and adaptive robot behaviors in generic environments. On the one hand, this work generalizes existing semantic mapping models and extends them to include the notion of affordances; on the other hand, it integrates semantic information within well-defined long-term planning and situated-action frameworks to effectively generate adaptive robot behaviors. We validate our approach by evaluating it on a number of problems and robot tasks. In particular, we consider service robots deployed in interactive and social domains, such as offices and domestic environments. To this end, we also develop prototype applications that are useful for evaluation purposes.
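
    A toy sketch of the kind of structure involved (the object labels, poses and affordances below are hypothetical placeholders, not the thesis's representation): a semantic map whose entries carry affordances, queried so that a planner keeps only the actions an object actually affords.

```python
# Toy sketch: semantic map entries annotated with affordances, and a
# planner-style filter over them. All labels, poses and actions are invented.

from dataclasses import dataclass, field

@dataclass
class SemanticEntry:
    label: str                          # semantic class, e.g. "cup"
    pose: tuple                         # (x, y, theta) in the map frame
    affordances: set = field(default_factory=set)

semantic_map = [
    SemanticEntry("cup", (1.2, 0.4, 0.0), {"grasp", "fill"}),
    SemanticEntry("door", (3.0, 1.0, 1.57), {"open", "close"}),
    SemanticEntry("table", (2.0, 0.0, 0.0), {"place_on"}),
]

def applicable_targets(action, world=semantic_map):
    """Return the map entries on which the desired action can actually be executed."""
    return [e for e in world if action in e.affordances]

# A task planner asking: where can the robot place an object?
for entry in applicable_targets("place_on"):
    print(entry.label, "at", entry.pose)
```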

    Process mining: conformance and extension

    Today’s business processes are realized by a complex sequence of tasks that are performed throughout an organization, often involving people from different departments and multiple IT systems. For example, an insurance company has a process to handle insurance claims for their clients, and a hospital has processes to diagnose and treat patients. Because there are many activities performed by different people throughout the organization, there is a lack of transparency about how exactly these processes are executed. However, understanding the process reality (the "as is" process) is the first necessary step to save cost, increase quality, or ensure compliance.

    The field of process mining aims to assist in creating process transparency by automatically analyzing processes based on existing IT data. Most processes are supported by IT systems nowadays. For example, Enterprise Resource Planning (ERP) systems such as SAP log all transaction information, and Customer Relationship Management (CRM) systems are used to keep track of all interactions with customers. Process mining techniques use these low-level log data (so-called event logs) to automatically generate process maps that visualize the process reality from different perspectives. For example, it is possible to automatically create process models that describe the causal dependencies between activities in the process. So far, process mining research has mostly focused on the discovery aspect (i.e., the extraction of models from event logs). This dissertation broadens the field of process mining to include the aspects of conformance and extension.

    Conformance aims at the detection of deviations from documented procedures by comparing the real process (as recorded in the event log) with an existing model that describes the assumed or intended process. Conformance is relevant for two reasons:
    1. Most organizations document their processes in some form. For example, process models are created manually to understand and improve the process, comply with regulations, or for certification purposes. In the presence of existing models, it is often more important to point out the deviations from these existing models than to discover completely new models. Discrepancies emerge because business processes change, or because the models did not accurately reflect the real process in the first place (due to the manual and subjective creation of these models). If the existing models do not correspond to the actual processes, then they have little value.
    2. Automatically discovered process models typically do not completely "fit" the event logs from which they were created. These discrepancies are due to noise and/or limitations of the used discovery techniques. Furthermore, in the context of complex and diverse process environments the discovered models often need to be simplified to obtain useful insights. Therefore, it is crucial to be able to check how much a discovered process model actually represents the real process.

    Conformance techniques can be used to quantify the representativeness of a mined model before drawing further conclusions. They thus constitute an important quality measurement for the effective use of process discovery techniques in a practical setting. Once one is confident in the quality of an existing or discovered model, extension aims at the enrichment of these models by integrating additional characteristics such as time, cost, or resource utilization. By extracting additional information from an event log and projecting it onto an existing model, bottlenecks can be highlighted and correlations with other process perspectives can be identified. Such an integrated view on the process is needed to understand root causes for potential problems and actually make process improvements. Furthermore, extension techniques can be used to create integrated simulation models from event logs that resemble the real process more closely than manually created simulation models.

    In Part II of this thesis, we provide a comprehensive framework for the conformance checking of process models. First, we identify fitness, precision/generalization, and structure as the relevant conformance dimensions. We develop several Petri net-based approaches to measure conformance in these dimensions and describe five case studies in which we successfully applied these conformance checking techniques to real and artificial examples. Furthermore, we provide a detailed literature review of related conformance measurement approaches (Chapter 4). Then, we study existing model evaluation approaches from the field of data mining. We develop three data mining-inspired evaluation approaches for discovered process models: one based on Cross Validation (CV), one based on the Minimal Description Length (MDL) principle, and one using methods based on Hidden Markov Models (HMMs). We conclude that process model evaluation faces similar yet different challenges compared to traditional data mining; additional challenges emerge from the sequential nature of the data and the higher-level process models, which include concurrent dynamic behavior (Chapter 5). Finally, we point out current shortcomings and identify general challenges for conformance checking techniques. These challenges relate to the applicability of the conformance metric, the metric quality, and the bridging of different process modeling languages. We develop a flexible, language-independent conformance checking approach that provides a starting point to effectively address these challenges (Chapter 6).

    In Part III, we develop a concrete extension approach, provide a general model for process extensions, and apply our approach to the creation of simulation models. First, we develop a Petri net-based decision mining approach that aims at the discovery of decision rules at process choice points based on data attributes in the event log. While we leverage classification techniques from the data mining domain to actually infer the rules, we identify the challenges that relate to the initial formulation of the learning problem from a process perspective. We develop a simple approach to partially overcome these challenges, and we apply it in a case study (Chapter 7). Then, we develop a general model for process extensions to create integrated models including the process, data, time, and resource perspectives. We develop a concrete representation based on Coloured Petri nets (CPNs) to implement and deploy this model for simulation purposes (Chapter 8). Finally, we evaluate the quality of automatically discovered simulation models in two case studies and extend our approach to allow for operational decision making by incorporating the current process state as a non-empty starting point in the simulation (Chapter 9). Chapter 10 concludes this thesis with a detailed summary of the contributions and a list of limitations and future challenges.
    The work presented in this dissertation is supported and accompanied by concrete implementations, which have been integrated into the ProM and ProMimport frameworks. Appendix A provides a comprehensive overview of the functionality of the developed software. The results presented in this dissertation have been published in more than twenty peer-reviewed scientific publications, including several high-quality journals.
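
    To illustrate the fitness dimension of conformance checking, the following toy sketch replays a trace on a small Petri net and penalizes missing and remaining tokens, in the spirit of token-based replay. The net and the traces are invented; the thesis's actual measures are implemented as plug-ins in ProM over full Petri-net semantics.

```python
# Toy token-replay sketch of the "fitness" conformance dimension:
# replay a trace on a small sequential Petri net and penalize tokens that
# had to be created artificially (missing) or were left behind (remaining).

# Transition -> (input places, output places) of a hypothetical claim-handling net.
net = {
    "register": ({"start"}, {"p1"}),
    "check":    ({"p1"},    {"p2"}),
    "decide":   ({"p2"},    {"end"}),
}

def replay_fitness(trace, net, initial_place="start", final_place="end"):
    marking = {initial_place: 1}
    produced, consumed, missing = 1, 0, 0      # the initial token counts as produced

    def consume(place):
        nonlocal consumed, missing
        consumed += 1
        if marking.get(place, 0) > 0:
            marking[place] -= 1
        else:
            missing += 1                       # token was not there: a deviation

    for event in trace:
        inputs, outputs = net[event]
        for p in inputs:                       # fire the transition: consume inputs,
            consume(p)
        for p in outputs:                      # then produce outputs
            marking[p] = marking.get(p, 0) + 1
            produced += 1

    consume(final_place)                       # a proper case must end in the final place
    remaining = sum(marking.values())          # tokens left behind anywhere else

    # Fitness in the spirit of token-based replay: average of the two penalties.
    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)

print(replay_fitness(["register", "check", "decide"], net))   # 1.0, trace fits the model
print(replay_fitness(["register", "decide"], net))            # < 1.0, "check" was skipped
```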