
    Methods and Systems for Fault Diagnosis in Nuclear Power Plants

    This research deals with fault diagnosis in nuclear power plants (NPPs), based on a framework that integrates contributions from fault scope identification, optimal sensor placement, sensor validation, equipment condition monitoring, and diagnostic reasoning based on pattern analysis. The research focuses in particular on applications where the data collected from the existing SCADA (supervisory control and data acquisition) system are not sufficient for the fault diagnosis system. Specifically, the following methods and systems are developed. A sensor placement model is developed to guide the optimal placement of sensors in NPPs. The model includes 1) a method to extract a quantitative fault-sensor incidence matrix for a system; 2) a fault diagnosability criterion based on the degree of singularities of the incidence matrix; and 3) procedures to place additional sensors to meet the diagnosability criterion. The usefulness of the proposed method is demonstrated on a nuclear power plant process control test facility (NPCTF). Experimental results show that three pairs of undiagnosable faults can be effectively distinguished with three additional sensors selected by the proposed model.

    A wireless sensor network (WSN) is designed and a prototype is implemented on the NPCTF. A WSN is an effective tool for collecting data for fault diagnosis, especially for systems where additional measurements are needed. The WSN performs distributed data processing and information fusion for fault diagnosis. Experimental results on the NPCTF show that the WSN system can be used to diagnose all six fault scenarios considered for the system. A fault diagnosis method based on semi-supervised pattern classification is developed which requires significantly less training data than existing fault diagnosis models typically do. It is a promising tool for applications in NPPs, where it is usually difficult to obtain training data under fault conditions for a conventional fault diagnosis model. The proposed method has successfully diagnosed nine types of faults physically simulated on the NPCTF.

    For equipment condition monitoring, a modified S-transform (MST) algorithm is developed by using shaping functions, particularly sigmoid functions, to modify the window width of the standard S-transform. The MST can achieve superior time-frequency resolution for applications that involve non-stationary multi-modal signals, where classical methods may fail. The effectiveness of the proposed algorithm is demonstrated using a vibration test system as well as an application to detect a collapsed pipe support in the NPCTF. The experimental results show that, by observing changes in the time-frequency characteristics of vibration signals, one can effectively detect faults occurring in components of an industrial system. To ensure that a fault diagnosis system does not suffer from erroneous data, a fault detection and isolation (FDI) method based on kernel principal component analysis (KPCA) is extended for sensor validation, where sensor faults are detected and isolated from the reconstruction errors of a KPCA model. The method is validated using measurement data from a physical NPP.

    The NPCTF is designed and constructed in this research for experimental validation of fault diagnosis methods and systems. Faults can be physically simulated on the NPCTF. In addition, the NPCTF is designed to support systems based on different instrumentation and control technologies such as WSNs and distributed control systems. The NPCTF has been successfully utilized to validate the algorithms and the WSN system developed in this research. In a real-world application, it is seldom the case that a single fault diagnostic scheme can meet all the requirements of a fault diagnosis system in a nuclear power plant. In fact, the value and performance of the diagnosis system can potentially be enhanced if some of the methods developed in this thesis are integrated into a suite of diagnostic tools. In such an integrated system, WSN nodes can be used to collect additional data deemed necessary by the sensor placement model. These data can be integrated with those from existing SCADA systems for more comprehensive fault diagnosis. An online performance monitoring system tracks the condition of the equipment and provides key information for condition-based maintenance. When a fault is detected, the measured data are acquired and analyzed by pattern classification models to identify the nature of the fault. By analyzing the symptoms of the fault, its root causes can eventually be identified.
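    The abstract does not give implementation details for the KPCA-based sensor validation step. As a minimal sketch of the general idea (not the thesis's implementation), the following Python fragment flags sensor samples whose reconstruction error under a kernel PCA model of fault-free data exceeds a threshold; the data files, kernel settings, and the 99th-percentile threshold are illustrative assumptions.

        # Illustrative KPCA-based sensor fault detection via reconstruction error.
        # Assumptions: fault-free training samples and new measurements are loaded from
        # hypothetical files; RBF kernel parameters and the threshold are placeholders.
        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.preprocessing import StandardScaler

        scaler = StandardScaler()
        X_train = scaler.fit_transform(np.load("fault_free_data.npy"))   # hypothetical file
        X_test = scaler.transform(np.load("new_measurements.npy"))       # hypothetical file

        # Fit KPCA with the inverse transform enabled so reconstructions can be computed.
        kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1, fit_inverse_transform=True)
        kpca.fit(X_train)

        def reconstruction_error(X):
            """Squared reconstruction error of each sample in the original sensor space."""
            X_hat = kpca.inverse_transform(kpca.transform(X))
            return np.sum((X - X_hat) ** 2, axis=1)

        # Detection threshold learned from fault-free data (placeholder: 99th percentile).
        threshold = np.percentile(reconstruction_error(X_train), 99)

        faulty = reconstruction_error(X_test) > threshold
        print(f"{int(faulty.sum())} of {len(X_test)} samples flagged as possible sensor faults")

    Isolation of the offending channel could then proceed, for example, by comparing per-sensor contributions to the reconstruction error, which is one common way to localize a faulty sensor.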

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault-isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault-isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The developed schemes are demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, Motion Estimation (ME) engine, Finite Impulse Response (FIR) Filter, Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within a difference of 4.02 dB to 6.67 dB from the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
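    The FaDReS/PURE mechanics are specific to FPGA dynamic partial reconfiguration, but the bounded fault-isolation idea can be caricatured in software. In the sketch below (an illustration only, not the dissertation's implementation), each suspect region is relocated in turn onto a spare slack resource and a health metric is re-evaluated; the callbacks for relocation, restoration, and health evaluation are hypothetical stand-ins for the reconfiguration and metric-monitoring steps described above.

        # Illustrative sketch: isolate a faulty reconfigurable region by relocating one
        # suspect region at a time onto a spare ("slack") resource and checking whether a
        # health metric recovers. The number of reconfigurations is bounded by the number
        # of regions, mirroring the bounded-isolation guarantee described above.
        from typing import Callable, List, Optional

        def isolate_faulty_region(regions: List[str],
                                  relocate_to_spare: Callable[[str], None],
                                  restore: Callable[[str], None],
                                  health_ok: Callable[[], bool]) -> Optional[str]:
            """Return the region whose relocation restores the health metric, if any."""
            for region in regions:             # at most len(regions) reconfigurations
                relocate_to_spare(region)      # hypothetical partial-reconfiguration step
                if health_ok():                # health metric computed from runtime inputs
                    return region              # relocating this region restored correct behavior
                restore(region)                # put the region back and try the next one
            return None                        # no single-region relocation restored health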

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Supervisory Control and Analysis of Partially-observed Discrete Event Systems

    Nowadays, a variety of real-world systems can be modeled as discrete event systems (DES). In practical scenarios, due to factors such as limited sensing technology, sensor failures, unstable networks, and even the intrusion of malicious agents, it may occur that some events are unobservable, multiple events are indistinguishable in observations, and observations of some events are nondeterministic. Motivated by these practical scenarios, increasing attention in the DES community has been paid to partially-observed DES, which in this thesis refer broadly to those DES with partial and/or unreliable observations. In this thesis, we focus on two topics in partially-observed DES, namely supervisory control and analysis.

    The first topic includes two research directions in terms of system models. One is the supervisory control of DES with both unobservable and uncontrollable events, focusing on the forbidden state problem; the other is the supervisory control of DES vulnerable to sensor-reading disguising attacks (SD-attacks), which can also be interpreted as DES with nondeterministic observations, addressing both the forbidden state problem and the liveness-enforcing problem. Petri nets (PN) are used as the reference formalism in this topic. First, we study the forbidden state problem in the framework of PN with both unobservable and uncontrollable transitions, assuming that unobservable transitions are uncontrollable. For ordinary PN subject to an admissible Generalized Mutual Exclusion Constraint (GMEC), an optimal on-line control policy with polynomial complexity is proposed, provided that a particular subnet, called the observation subnet, satisfies certain structural conditions. We then discuss how to obtain an optimal on-line control policy for PN subject to an arbitrary GMEC. Next, we consider the forbidden state problem in PN vulnerable to SD-attacks. Assuming a control specification given as a GMEC, we propose three methods to derive on-line control policies. The first two lead to an optimal policy but are computationally inefficient for large-size systems, while the third computes a policy with timely response even for large-size systems, at the expense of optimality. Finally, we investigate the liveness-enforcing problem, still assuming that the system is vulnerable to SD-attacks. In this problem, the plant is modelled as a bounded PN, which allows us to compute a supervisor off-line, starting from the construction of the reachability graph of the PN. Then, based on repeatedly computing a more restrictive liveness-enforcing supervisor under no attack and constructing a basic supervisor, an off-line method is proposed that synthesizes a liveness-enforcing supervisor tolerant to an SD-attack.

    The second topic concerns the verification of properties related to system security. Two properties are considered: fault-predictability and event-based opacity. The former is a property from the literature, characterizing the situation in which the occurrence of any fault in a system is predictable; the latter is a property newly proposed in this thesis, describing the requirement that secret events of a system cannot be revealed to an external observer within their critical horizons. In the case of fault-predictability, DES are modeled by labeled PN. A necessary and sufficient condition for fault-predictability is derived by characterizing the structure of the Predictor Graph. Furthermore, two rules are proposed to reduce the size of a PN, which allow us to analyze the fault-predictability of the original net by verifying that of the reduced net. When studying event-based opacity, we use deterministic finite-state automata as the reference formalism. Considering different scenarios, we propose four notions, namely K-observation event-opacity, infinite-observation event-opacity, event-opacity, and combinational event-opacity. Moreover, verifiers are proposed to analyze these properties.
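    The abstract does not include the control policies themselves. As a minimal, purely illustrative Python sketch of GMEC enforcement (not one of the thesis's policies), a constraint w · m <= k can be imposed on-line by disabling any controllable, enabled transition whose firing would leave the legal marking set, here assuming for brevity that the current marking is fully known and no attacks are present.

        # Minimal sketch of on-line enforcement of a GMEC  w · m <= k  on a place/transition
        # net, assuming (for brevity only) full marking knowledge and no sensor attacks.
        # Pre and Post are |P| x |T| incidence matrices; 'controllable' flags each transition.
        import numpy as np

        def permitted_transitions(m, Pre, Post, w, k, controllable):
            """Return indices of transitions the supervisor leaves enabled at marking m."""
            permitted = []
            for t in range(Pre.shape[1]):
                if not np.all(m >= Pre[:, t]):        # t is not enabled by the net itself
                    continue
                m_next = m - Pre[:, t] + Post[:, t]   # marking reached by firing t
                if w @ m_next <= k:                   # firing t keeps the GMEC satisfied
                    permitted.append(t)
                elif not controllable[t]:             # an uncontrollable t cannot be disabled;
                    permitted.append(t)               # a real policy must act before this point
            return permitted

    Under partial observation or SD-attacks the marking is not known exactly, which is precisely what makes the policies developed in the thesis nontrivial.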

    Towards a Smart World: Hazard Levels for Monitoring of Autonomous Vehicles’ Swarms

    This work explores the creation of quantifiable indices to monitor the safe operation and movement of families of autonomous vehicles (AVs) in restricted highway-like environments. Specifically, this work explores the creation of ad-hoc rules for monitoring the lateral and longitudinal movement of multiple AVs based on behavior that mimics swarm and flock movement (or particle swarm motion). This exploratory work is sponsored by the Emerging Leader Seed grant program of the Mineta Transportation Institute and aims to investigate the feasibility of adapting particle swarm motion to the control of families of autonomous vehicles. Specifically, it explores how particle swarm approaches can be augmented by setting safety thresholds and fail-safe mechanisms to avoid collisions in off-nominal situations. This concept leverages the integration of the notion of hazard and danger levels (i.e., measures of the “closeness” to a given accident scenario, typically used in robotics) with the concept of safety distance and separation/collision avoidance for ground vehicles. A draft implementation of four hazard-level functions indicates that safety thresholds can be set up to autonomously trigger lateral and longitudinal motion control based on three main rules, relying respectively on speed, heading, and braking distance, to steer the vehicle, maintain separation, and avoid collisions in families of autonomous vehicles. The concepts presented here can be used to set up a high-level framework for developing artificial intelligence algorithms that can serve as a back-up to standard machine learning approaches for the control and steering of autonomous vehicles. Although there are no constraints on the concept’s implementation, it is expected that this work will be most relevant for highly automated Level 4 and Level 5 vehicles that are capable of communicating with each other and operate in the presence of a ground control center monitoring the swarm’s operations.
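    The hazard-level formulas are not spelled out in this abstract. As a hedged illustration of a braking-distance rule only, the sketch below compares the current longitudinal gap with a stopping distance computed from speed and an assumed maximum deceleration; the deceleration value, safety margin, and the shape of the hazard function are placeholders, not the report's parameters.

        # Illustrative braking-distance hazard level (placeholder formula): the hazard
        # approaches 1 as the available gap shrinks toward the distance needed to stop.
        def braking_hazard(speed_mps: float, gap_m: float,
                           max_decel_mps2: float = 6.0, margin_m: float = 5.0) -> float:
            """Return a hazard level in [0, 1] for the following vehicle."""
            stop_dist = speed_mps ** 2 / (2.0 * max_decel_mps2) + margin_m  # distance to stop
            if gap_m <= stop_dist:
                return 1.0                       # cannot stop within the available gap
            return min(1.0, stop_dist / gap_m)   # decays as the gap grows beyond stop_dist

        # Example: at 30 m/s with a 60 m gap the hazard saturates at 1.0, which a monitor
        # could use to trigger longitudinal control (braking) before separation is lost.
        print(braking_hazard(30.0, 60.0))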

    An Efficient Approach for Diagnosability Analysis and Diagnosis of DES Modeled by Labeled Petri Nets: Untimed and Timed Contexts

    This PhD thesis deals with fault diagnosis of discrete event systems using Petri net models. On-the-fly and incremental techniques are developed to mitigate the state explosion problem encountered while analyzing diagnosability. In the untimed context, an algebraic representation for labeled Petri nets (LPNs) is developed to characterize system behavior. The diagnosability of LPN models is tackled by analyzing a series of K-diagnosability problems. Two models, called respectively the FM-graph and the FM-set tree, are developed and built on the fly to record the information necessary for diagnosability analysis. Finally, a diagnoser is derived from the FM-set tree for online diagnosis. In the timed context, time-interval splitting techniques are developed in order to make it possible to generate a state representation of labeled time Petri net (LTPN) models, to which techniques from the untimed context can be applied to analyze diagnosability. On this basis, necessary and sufficient conditions for the diagnosability of LTPN models are determined. Moreover, we provide a solution for the minimum delay ∆ that ensures diagnosability. From a practical point of view, diagnosability analysis is performed by building on the fly a structure that we call the ASG, which holds fault information about the LTPN states. Generally, using on-the-fly analysis and incremental techniques makes it possible to build and investigate only a part of the state space, even when the system is diagnosable. Simulation results obtained on selected benchmarks show the efficiency, in terms of time and memory, compared with traditional approaches based on state enumeration.
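    As context for the state-explosion problem targeted above, the traditional baseline is to enumerate the full reachability graph of a bounded Petri net before analyzing diagnosability; the thesis's on-the-fly FM-graph/FM-set tree and ASG constructions avoid building this graph in full. The following Python sketch of that baseline enumeration is illustrative only, with Pre/Post incidence matrices and an initial marking assumed as inputs.

        # Baseline exhaustive state enumeration (the approach the on-the-fly techniques
        # above aim to avoid): breadth-first construction of the reachability graph of a
        # bounded Petri net. Pre and Post are |P| x |T| matrices; m0 is the initial marking.
        from collections import deque
        import numpy as np

        def reachability_graph(m0, Pre, Post):
            """Return (markings, edges), edges given as (src_index, transition, dst_index)."""
            m0 = np.asarray(m0)
            markings = [tuple(m0)]
            index = {tuple(m0): 0}
            edges = []
            queue = deque([m0])
            while queue:
                m = queue.popleft()
                src = index[tuple(m)]
                for t in range(Pre.shape[1]):
                    if np.all(m >= Pre[:, t]):                 # transition t is enabled at m
                        m_next = m - Pre[:, t] + Post[:, t]
                        key = tuple(m_next)
                        if key not in index:                   # new marking discovered
                            index[key] = len(markings)
                            markings.append(key)
                            queue.append(m_next)
                        edges.append((src, t, index[key]))
            return markings, edges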