    FAST: a fault detection and identification software tool

    The aim of this work is to improve the reliability and safety of complex critical control systems by contributing to the systematic application of fault diagnosis. To ease the adoption of fault detection and isolation (FDI) tools in industry, a systematic approach is required that allows process engineers to analyze a system from this perspective. It should then be possible to determine whether the system provides the fault diagnosis and redundancy required by the criticality of the process, and to evaluate what-if scenarios by slightly modifying the process (e.g., adding sensors or changing their placement) and assessing the impact on the fault diagnosis and redundancy possibilities. Hence, this work proposes an approach for analyzing a process from the FDI perspective and provides the FAST tool, which covers everything from the analysis and design phase to the final implementation of an FDI supervisor on a real process. To capture the process information, a simple XML-based format has been defined that provides the information needed to systematically perform the structural analysis of the process. Any process can be analyzed; the only restriction is that models of the process components must be available in the FAST tool. Processes are described in FAST in terms of process variables, components and relations, and the tool performs the structural analysis of the process, obtaining: (i) the structural matrix, (ii) the perfect matching, (iii) the analytical redundancy relations (if any) and (iv) the fault signature matrix. To aid in the analysis, FAST can operate stand-alone in simulation mode, allowing the process engineer to evaluate faults and their detectability and to change the process components and topology to improve the diagnosis and redundancy capabilities. Alternatively, FAST can operate on-line, connected to the process plant through an OPC interface. The OPC interface makes it possible to connect to almost any process that features a SCADA system for supervisory control. When running in on-line mode, the process is monitored by a software agent known as the Supervision Agent. FAST can also implement distributed FDI using its multi-agent architecture: the tool is able to partition complex industrial processes into subsystems, identify which process variables need to be shared by each subsystem, and instantiate a Supervision Agent for each of the partitioned subsystems. Once instantiated, the Supervision Agents start diagnosing their local components and handling requests for the variable values that FAST has identified as shared with other agents, supporting the distributed FDI process.
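    As a concrete illustration of the structural-analysis step, the sketch below pairs model equations with unknown process variables via bipartite matching; any equation left unmatched gives rise to an analytical redundancy relation usable as a fault-detection residual. This is a minimal, hypothetical example: the equations, variable names and matching routine are illustrative and do not reflect FAST's XML format or API.

```python
# Minimal structural-analysis sketch (hypothetical process model, not FAST's).
# Each equation lists the process variables it involves; together they form
# the equation-by-variable structural (incidence) matrix.
equations = {
    "e1": {"flow_in", "level"},   # tank mass balance
    "e2": {"flow_out", "level"},  # outlet valve model
    "e3": {"level"},              # level sensor reading
    "e4": {"flow_out"},           # flow sensor reading
}
unknowns = {"flow_in", "flow_out", "level"}

def perfect_matching(equations, unknowns):
    """Kuhn's augmenting-path bipartite matching: variable -> equation."""
    match = {}

    def try_assign(eq, visited):
        for var in equations[eq] & unknowns:
            if var in visited:
                continue
            visited.add(var)
            # Take the variable if free, or re-route the equation holding it.
            if var not in match or try_assign(match[var], visited):
                match[var] = eq
                return True
        return False

    for eq in equations:
        try_assign(eq, set())
    return match

matching = perfect_matching(equations, unknowns)
redundant = set(equations) - set(matching.values())
print(matching)   # e.g. {'flow_in': 'e1', 'level': 'e3', 'flow_out': 'e2'}
print(redundant)  # the unmatched equation yields an analytical redundancy relation
```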

    International White Book on DER Protection: Review and Testing Procedures

    This white book provides an insight into the issues surrounding the impact of increasing levels of DER on generator and network protection and the resulting necessary improvements in protection testing practices. Particular focus is placed on ever-increasing inverter-interfaced DER installations and the challenges of utility network integration. This white book should also serve as a starting point for specifying DER protection testing requirements and procedures. A comprehensive review of international DER protection practices, standards and recommendations is presented. This is accompanied by the identification of the main performance challenges related to these protection schemes under varied network operational conditions and the nature of DER generator and interface technologies. Emphasis is placed on the importance of dynamic testing that can only be delivered through laboratory-based platforms such as real-time simulators, integrated substation automation infrastructure and flexible, inverter-equipped testing microgrids. To this end, the combination of flexible network operation and new DER technologies underlines the importance of utilising the laboratory testing facilities available within the DERlab Network of Excellence. This not only informs the shaping of new protection testing and network integration practices by end users but also enables the process of de-risking new DER protection technologies. In order to support the issues discussed in the white book, a comparative case study between UK and German DER protection and scheme testing practices is presented. This also highlights the level of complexity associated with standardisation and approval mechanisms adopted by different countries.

    An Approach for the Assessment of System Upset Resilience

    This report describes an approach for the assessment of upset resilience that is applicable to systems in general, including safety-critical, real-time systems. For this work, resilience is defined as the ability to preserve and restore service availability and integrity under stated conditions of configuration, functional inputs and environment. To enable a quantitative approach, we define novel system service degradation metrics and propose a new mathematical definition of resilience. These behavioral-level metrics are based on the fundamental service classification criteria of correctness, detectability, symmetry and persistence. The approach consists of a Monte-Carlo-based stimulus injection experiment on a physical implementation or an error-propagation model of a system, generating a system response set that can be characterized in terms of dimensional error metrics and integrated to form an overall measure of resilience. We expect this approach to be helpful in gaining insight into the error containment and repair capabilities of systems for a wide range of conditions.
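    As a rough sketch of the proposed assessment, the loop below performs Monte-Carlo stimulus injection and integrates per-trial service scores into a single resilience figure in [0, 1]. The scoring and the run_system() stand-in are invented for illustration; the report's actual metrics cover correctness, detectability, symmetry and persistence and use a richer integration.

```python
# Hedged Monte-Carlo stimulus-injection sketch; everything here is a stand-in.
import random

# Illustrative score per observed service class: correct service counts fully,
# detected (contained) errors partially, undetected errors not at all.
SCORE = {"correct": 1.0, "detected_error": 0.5, "undetected_error": 0.0}

def run_system(stimulus: float) -> str:
    # Stand-in for the system under test or its error-propagation model.
    r = random.random()
    if r > stimulus:
        return "correct"
    return "detected_error" if r > stimulus / 2 else "undetected_error"

def estimate_resilience(trials: int = 10_000) -> float:
    # Inject random stimuli, classify each response, and integrate the
    # per-trial scores into one overall measure.
    total = sum(SCORE[run_system(random.random())] for _ in range(trials))
    return total / trials

print(f"estimated resilience: {estimate_resilience():.3f}")
```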

    Condition Monitoring of Large-Scale Facilities

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers

    An Integrated Approach to Performance Monitoring and Fault Diagnosis of Nuclear Power Systems

    In this dissertation, an integrated framework of process performance monitoring and fault diagnosis was developed for nuclear power systems using robust data-driven, model-based methods, comprising thermal hydraulic simulation, data-driven modeling, identification of model uncertainty, and robust residual generator design for fault detection and isolation. In applications to nuclear power systems, on the one hand, historical data are often not able to characterize the relationships among process variables because operating setpoints may change and thermal fluid components such as steam generators and heat exchangers may experience degradation. On the other hand, first-principle models always have uncertainty and are often too complicated in terms of model structure to design residual generators for fault diagnosis. Therefore, a realistic fault diagnosis method needs to combine the strength of first-principle models in modeling a wide range of anticipated operation conditions and the strength of data-driven modeling in feature extraction. In the developed robust data-driven, model-based approach, the changes in operation conditions are simulated using the first-principle models and the model uncertainty is extracted from plant operation data, such that the fault effects on process variables can be decoupled from model uncertainty and normal operation changes. It was found that the developed robust fault diagnosis method was able to eliminate false alarms due to model uncertainty and deal with changes in operating conditions throughout the lifetime of nuclear power systems. Multiple methods of robust data-driven, model-based fault diagnosis were developed in this dissertation. A complete procedure based on causal graph theory and the data reconciliation method was developed to investigate the causal relationships and the quantitative sensitivities among variables, so that sensor placement could be optimized for fault diagnosis in the design phase. A reconstruction-based principal component analysis (PCA) approach was applied to deal with both simple and complex faults in steady-state diagnosis, in the context of operation scheduling and maintenance management. A robust PCA model-based method was developed to distinguish fault effects from model uncertainties. In order to improve the sensitivity of fault detection, a hybrid PCA model-based approach was developed to incorporate system knowledge into data-driven modeling. Subspace identification was proposed to extract state-space models from thermal hydraulic simulations, and a robust dynamic residual generator design algorithm was developed for fault diagnosis, for the purposes of fault-tolerant control and extension to reactor startup and load-following operation conditions. The developed robust dynamic residual generator design algorithm is unique in that explicit identification of model uncertainty is not necessary. Finally, the developed methods were demonstrated on the IRIS Helical Coil Steam Generator (HCSG) system. A simulation model was first developed for this system. It was revealed through steady-state simulation that the primary coolant temperature profile could be used to indicate the water inventory inside the HCSG tubes. The performance monitoring and fault diagnosis module was then developed to monitor sensor faults, flow distribution abnormality, and heat performance degradation for both steady-state and dynamic operation conditions.
This dissertation bridges the gap between the theoretical research on computational intelligence and the engineering design in performance monitoring and fault diagnosis for nuclear power systems. The new algorithms have the potential of being integrated into Generation III and Generation IV nuclear reactor I&C designs after they are tested on current nuclear power plants or Generation IV prototype reactors.
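    To make the reconstruction-based PCA idea concrete, the sketch below fits a PCA model to normal-operation data and flags a drifted sensor through the squared prediction error (Q statistic). This is a generic textbook scheme rather than the dissertation's exact algorithm; the data, retained-component count and threshold are invented for illustration.

```python
# Generic PCA-based residual generation for sensor fault detection (illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Normal-operation training data: three correlated "process variables".
t = rng.normal(size=(500, 1))
X_train = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(500, 3))

# Fit PCA on mean-centered data; retain the dominant component.
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
P = Vt[:1].T  # loadings of the retained principal component(s)

def spe(x):
    """Squared prediction error (Q statistic): residual outside the PCA subspace."""
    r = (x - mu) - P @ (P.T @ (x - mu))
    return float(r @ r)

# Empirical 99% threshold from the training residuals (an invented choice).
threshold = np.quantile([spe(x) for x in X_train], 0.99)

x_ok = np.array([1.0, 2.0, -1.0])     # consistent with the learned correlation
x_fault = np.array([1.0, 2.0, -0.2])  # third sensor has drifted
print(spe(x_ok) < threshold, spe(x_fault) > threshold)  # True True
```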

    QuFI: a Quantum Fault Injector to Measure the Reliability of Qubits and Quantum Circuits

    Quantum computing is a new technology that is expected to revolutionize the computation paradigm in the next few years. Qubits exploit quantum physical properties to increase the parallelism and speed of computation. Unfortunately, besides being intrinsically noisy, qubits have also been shown to be highly susceptible to external sources of faults, such as ionizing radiation. The latest discoveries highlight a much higher radiation sensitivity of qubits than of traditional transistors and identify a fault model much more complex than bit-flip. We propose a framework to identify the sensitivity of quantum circuits to radiation-induced faults and the probability for a fault in a qubit to propagate to the output. Based on the latest studies and radiation experiments performed on real quantum machines, we model transient faults in a qubit as a phase shift with a parametrized magnitude. Additionally, our framework can inject multiple qubit faults, tuning the phase shift magnitude based on the proximity of the qubit to the particle strike location. As we show in the paper, the proposed fault injector is highly flexible, and it can be used on both quantum circuit simulators and real quantum machines. We report the findings from more than 285M injections on the Qiskit simulator and 53K injections on real IBM machines. We consider three quantum algorithms and identify the faults and qubits that are more likely to impact the output. We also consider the dependence of fault propagation on the circuit scale, showing that the reliability profile of some quantum algorithms is scale-dependent, with increased impact from radiation-induced faults as the number of qubits increases. Finally, we also consider multi-qubit faults, showing that they are much more critical than single faults. The fault injector and the data presented in this paper are available in a public repository to allow further analysis.
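    The core of this fault model, a transient fault represented as a phase shift of parametrized magnitude, can be sketched in a few lines of Qiskit. The snippet below is illustrative only and does not use QuFI's actual interface (which is available in the paper's public repository); the one-qubit H-H test circuit and the injection point are arbitrary choices that make the fault visible at the output.

```python
# Illustrative phase-shift fault injection on a quantum circuit (not QuFI's API).
import math
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def build_circuit(fault_angle=None):
    """One-qubit H-H identity circuit; the optional transient fault is
    modeled as a phase shift of parametrized magnitude between the gates."""
    qc = QuantumCircuit(1)
    qc.h(0)
    if fault_angle is not None:
        qc.p(fault_angle, 0)  # injected phase-shift fault (angle in radians)
    qc.h(0)
    return qc

golden = Statevector(build_circuit()).probabilities()
faulty = Statevector(build_circuit(math.pi / 2)).probabilities()
print(golden)  # ~[1, 0]: the fault-free circuit always returns |0>
print(faulty)  # ~[0.5, 0.5]: the phase fault propagates to the output
```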

    Nankai Trough fault slip behavior analyzed in-situ and in shear experiments

    The Nankai Trough subduction zone hosts various modes of fault slip from slow to megathrust earthquakes. Slow earthquakes release energy slowly over days to years and can only be recorded geodetically or by borehole observatories. It is not well understood how they connect to regular earthquakes. In contrast, megathrust earthquakes are rapid events that often generate destructive tsunamis, documented for several centuries in the Nankai Trough. Successful earthquake mitigation strategies can only be developed with a better understanding of fault slip behavior and deformation processes within the seismogenic zone and the overlying accretionary prism

    Automated neural network-based instrument validation system

    In a complex control process, instrument calibration is performed periodically to keep the instruments within the calibration range, which assures proper control and minimizes down time. Instruments are usually calibrated under out-of-service conditions using manual calibration methods, which may cause incorrect calibration or equipment damage. Continuous in-service calibration monitoring of sensors and instruments will reduce unnecessary instrument calibrations, give operators more confidence in instrument measurements, increase plant efficiency or product quality, and minimize the possibility of equipment damage during unnecessary manual calibrations. In this dissertation, an artificial neural network (ANN)-based instrument calibration verification system is designed to achieve the on-line monitoring and verification goal for scheduling maintenance. Since an ANN is a data-driven model, it can learn the relationships among signals without prior knowledge of the physical model or process, which is usually difficult to establish for complex non-linear systems. Furthermore, ANNs provide a noise-reduced estimate of the signal measurement. More importantly, since a neural network learns the relationships among signals, it can give an unfaulted estimate of a faulty signal based on information provided by other unfaulted signals; that is, it can provide a correct estimate of a faulty signal. This ANN-based instrument verification system is capable of detecting small degradations or drifts occurring in instrumentation and of precluding false control actions or system damage caused by instrument degradation. In this dissertation, an automated scheme of neural network construction is developed. Previously, neural network structure design required extensive knowledge of neural networks; an automated design methodology was developed so that a network structure can be created without expert interaction. The validation system was designed to monitor process sensors plant-wide. Due to the large number of sensors to be monitored and the limited computational capability of an artificial neural network model, a variable grouping process was developed to divide the sensor variables into small correlated groups that the neural networks can handle. A modification of a statistical method, called the Beta method, as well as a principal component analysis (PCA)-based method, was developed for estimating the number of neural network hidden nodes. Another development in this dissertation is the sensor fault detection method. The commonly used Sequential Probability Ratio Test (SPRT) continuously measures the likelihood ratio to statistically determine whether there is any significant calibration change. This method requires normally distributed signals for correct operation; in practice, the signals deviate from the normal distribution, causing problems for the SPRT. A modified SPRT (MSPRT) was developed to suppress the possible intermittent alarms initiated by spurious spikes in network prediction errors. These methods were applied to data from the Tennessee Valley Authority (TVA) fossil power plant Unit 9 for testing. The results show that the average detectable drift level is about 2.5% for instruments in the boiler system and about 1% in the turbine system of Unit 9. Approximately 74% of the process instruments can be monitored using the methodologies developed in this dissertation.
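    For reference, the sketch below implements the standard SPRT drift detector on Gaussian residuals that the dissertation's MSPRT modifies. The hypothesized mean shift, error rates and reset-on-accept behavior are generic textbook choices, not the dissertation's parameters.

```python
# Standard SPRT for a mean shift in Gaussian residuals (textbook form).
import numpy as np

def sprt(residuals, mu0=0.0, mu1=0.5, sigma=1.0, alpha=0.01, beta=0.01):
    """Return the sample index at which a drift (H1) alarm is raised,
    or None if monitoring ends without an alarm."""
    upper = np.log((1 - beta) / alpha)  # accept H1: raise drift alarm
    lower = np.log(beta / (1 - alpha))  # accept H0: instrument healthy
    llr = 0.0
    for i, r in enumerate(residuals):
        # Log-likelihood ratio increment for one Gaussian sample.
        llr += (mu1 - mu0) * (r - (mu0 + mu1) / 2) / sigma**2
        if llr >= upper:
            return i
        if llr <= lower:
            llr = 0.0  # accept H0, reset, and keep monitoring
    return None

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 200)
drifted = np.concatenate([healthy, rng.normal(0.8, 1.0, 200)])  # drift starts at 200
print(sprt(healthy), sprt(drifted))  # typically: None, then an index past 200
```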

    Inferring the Evolution of a Large Earthquake from Its Acoustic Impacts on the Ionosphere

    We investigate the possibility of constraining the evolution of the 2016 M7.8 Kaikoura earthquake based on Global Positioning System signal-derived ionospheric total electron content (TEC) perturbations, which represent plasma responses to infrasonic acoustic waves (IAWs) generated by surface motion. This earthquake exhibited unusual complexity, and some first-order aspects of its evolution remain unclear; for example, how and when the Papatea fault (PF) and the corresponding large surface deformation occurred. For various earthquake models, a seismic wave propagation code is used to simulate time-dependent surface deformations, which then excite IAWs in a 3D compressible nonlinear atmospheric model coupled with a 2D nonlinear multispecies ionospheric plasma dynamic model. Our preferred finite-fault model reproduces well the amplitudes, shapes, and times of appearance of the detected TEC perturbations. Additionally, incorporating the PF results in the closest agreement between simulated and observed near-zenith vertical TEC perturbations, whereas its absence produces a significant discrepancy. This supports the hypothesis that the PF ruptured during the Kaikoura earthquake. Furthermore, the IAWs and resulting ionospheric plasma disturbances contain additional information on the PF rupture progression, including the timing of initiation and the propagation direction, indicating new opportunities to further constrain the PF rupture with low-elevation-angle “slant” TEC data. The results confirm the ability of TEC measurements to constrain the evolution of large crustal earthquakes, providing new insight beyond traditional seismic and geodetic data sets.
    • 

    corecore