10 research outputs found

    Activities in intelligent control and supervision at the Advanced Control Systems (SAC) group of the Universitat Politècnica de Catalunya (UPC)

    Get PDF
    This paper summarises the research activities under way in the "Advanced Control Systems" (SAC) group of the Universitat Politècnica de Catalunya (UPC) at the Terrassa campus, with special emphasis on intelligent control and supervision. To illustrate this activity, three real applications of considerable social and economic interest are presented.

    Kernel-based fault diagnosis of inertial sensors using analytical redundancy

    Get PDF
    Kernel methods are able to exploit high-dimensional spaces for representational advantage, while only operating implicitly in such spaces, thus incurring none of the computational cost of doing so. They appear to have the potential to advance the state of the art in control and signal processing applications and are increasingly seeing adoption across these domains. Applications of kernel methods to fault detection and isolation (FDI) have been reported, but few in aerospace research, though they offer a promising way to perform or enhance fault detection. It is mostly in process monitoring, in the chemical processing industry for example, that these techniques have found broader application. This research work explores the use of kernel-based solutions in model-based fault diagnosis for aerospace systems. Specifically, it investigates the application of these techniques to the detection and isolation of IMU/INS sensor faults, a canonical open problem in the aerospace field. Kernel PCA, a kernelised non-linear extension of the well-known principal component analysis (PCA) algorithm, is implemented to tackle IMU fault monitoring. An isolation scheme is extrapolated based on the strong duality known to exist between probably the most widely practised method of FDI in the aerospace domain, the parity space technique, and linear principal component analysis. The algorithm, termed partial kernel PCA, benefits from the isolation properties of the parity space method as well as the non-linear approximation ability of kernel PCA. Further, a number of unscented non-linear filters for FDI are implemented, equipped with data-driven transition models based on Gaussian processes, a non-parametric Bayesian kernel method. A distributed estimation architecture is proposed which, besides fault diagnosis, can contemporaneously perform sensor fusion. It also allows for decoupling faulty sensors from the navigation solution.
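The kernel PCA monitoring scheme described above can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the thesis's implementation: it fits an RBF kernel PCA model on healthy data and flags a test sample whose squared prediction error (SPE) in feature space far exceeds anything seen in training. The data, kernel width, and function names are all hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_kpca(X, n_comp=2, sigma=1.0):
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J          # double-centred kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_comp]       # keep the leading components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return {"X": X, "K": K, "alphas": alphas, "sigma": sigma}

def spe(x, m):
    # SPE of x in feature space: centred norm of phi(x) minus the part
    # captured by the retained kernel principal components.
    X, K, alphas, sigma = m["X"], m["K"], m["alphas"], m["sigma"]
    k = rbf_kernel(x[None, :], X, sigma)[0]
    kc = k - k.mean() - K.mean(0) + K.mean()    # centred test kernel row
    kxx = 1.0 - 2.0 * k.mean() + K.mean()       # centred k(x, x) for an RBF kernel
    z = kc @ alphas                             # scores on retained components
    return float(kxx - z @ z)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(100, 2))    # healthy sensor residuals (synthetic)
model = fit_kpca(normal, n_comp=2, sigma=1.0)
train_spe = np.array([spe(x, model) for x in normal])
fault_spe = spe(np.array([5.0, 5.0]), model)    # grossly faulty sample
```

In practice the alarm threshold would be set from the training SPE distribution (e.g. a high percentile) rather than its maximum.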

    Design of an adaptive early fault detection method to reduce false alarms due to process aging and statistical uncertainty

    Get PDF
    A new adaptive PCA-based fault detection technique is proposed for restructuring statistical models through a weighted recursive update that incorporates newly available normal-operation process data when a high false-alarm rate gives evidence of natural process aging. The performance of the proposed technique is compared with that of other recursive PCA-based techniques from the literature, considering both false-alarm reduction and computational complexity. A second technique is developed that reduces false alarms caused by slight process variations (those that do not change the statistical structure of the process) and by the natural uncertainty of statistical methods such as PCA. This technique consists of a dynamic detection threshold generated by a filtering algorithm that applies a moving window to the detection statistics T^2 and Q. Main contributions: development of a new recursive PCA-based fault detection technique for processes that exhibit aging; reduction of the computational complexity required for the recursive update of fault detection models; and development of a new PCA-based fault detection technique with a dynamic detection threshold.
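The moving-window dynamic threshold described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the thesis's algorithm: the threshold on a detection statistic (such as T^2 or Q) is recomputed at each step from the mean and standard deviation of a trailing window, so that slow drift from process aging raises the threshold instead of raising false alarms, while an abrupt fault still crosses it. The window length and multiplier k are hypothetical choices.

```python
import numpy as np

def dynamic_threshold(stat, window=50, k=4.0):
    # Moving-window threshold for a detection statistic (e.g. T^2 or Q):
    # thr[t] = mean + k*std over the previous `window` samples (current sample
    # excluded), so slow drift raises the threshold rather than raising alarms.
    thr = np.full(len(stat), np.inf)     # burn-in: no alarms until the window fills
    for t in range(window, len(stat)):
        w = stat[t - window:t]
        thr[t] = w.mean() + k * w.std()
    return thr

rng = np.random.default_rng(0)
t = np.arange(400)
stat = 0.01 * t + rng.normal(0.0, 0.3, size=400)   # statistic drifting with process age
stat[200:] += 5.0                                   # abrupt fault at sample 200
alarms = stat > dynamic_threshold(stat)
```

A fixed threshold set from the initial data would eventually alarm on the drift alone; the adaptive threshold follows the drift and fires only at the abrupt fault.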

    Development of a Data Driven Multiple Observer and Causal Graph Approach for Fault Diagnosis of Nuclear Power Plant Sensors and Field Devices

    Get PDF
    A data-driven multiple observer and causal graph approach to fault detection and isolation is developed for nuclear power plant sensors and actuators. It can be integrated into the advanced instrumentation and control system for the next generation of nuclear power plants. The developed approach is based on the analytical redundancy principle of fault diagnosis: analytical models are built to generate residuals between measured values and expected values. Any significant residuals are used for fault detection, and the residual patterns are analyzed for fault isolation. Advanced data-driven modeling methods such as Principal Component Analysis and the Adaptive Network Fuzzy Inference System are used to achieve on-line accurate and consistent models. In contrast with most current data-driven modeling, it is emphasized that the best choice of model structure should be obtained from a physical study of the system. The multiple observer approach realizes strong fault isolation through the design of appropriate residual structures. Even if one of the residuals is corrupted, the approach is able to indicate an unknown fault instead of a misleading fault. Multiple observers are designed by making full use of the redundant relationships implied in a process when predicting one variable. The data-driven causal graph is developed as a generic approach to fault diagnosis for nuclear power plants where limited fault information is available. It has the potential to combine the reasoning capability of qualitative diagnostic methods and the strength of quantitative diagnostic methods in fault resolution. A data-driven causal graph consists of individual nodes representing plant variables connected with adaptive quantitative models. With the causal graph, fault detection is fulfilled by monitoring the residual of each model. Fault isolation is achieved by testing the possible assumptions involved in each model.
    Conservatism is implied in the approach, since a faulty sensor or a faulty actuator signal is isolated only when its reconstruction can fully explain all the abnormal behavior of the system. The developed approaches have been applied to the nuclear steam generator system of a pressurized water reactor, and a simulation code has been developed to show their performance. The results show that both single and dual sensor faults and actuator faults can be detected and isolated correctly, independent of fault magnitudes and initial power level, during the early fault transient.
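The structured-residual idea behind the multiple observer approach can be illustrated with a toy analytical-redundancy example. The measurement model, threshold, and sensor names below are invented for illustration, not taken from the dissertation: each residual is designed to be insensitive to one sensor, so the pattern of fired residuals isolates the faulty sensor, and a pattern matching no known signature is reported as an unknown fault rather than a misleading one.

```python
import numpy as np

THRESH = 0.1

# Three redundant sensors of one process variable x (hypothetical model):
#   y1 = x, y2 = 2x, y3 = x + 1
def residuals(y1, y2, y3):
    r1 = y2 - 2 * y1       # insensitive to y3
    r2 = y3 - y1 - 1       # insensitive to y2
    r3 = 2 * y3 - y2 - 2   # insensitive to y1
    return np.array([r1, r2, r3])

# Each signature lists which residuals a fault on that sensor excites
SIGNATURES = {
    (1, 1, 0): "y1 fault",
    (1, 0, 1): "y2 fault",
    (0, 1, 1): "y3 fault",
    (0, 0, 0): "no fault",
}

def isolate(y1, y2, y3):
    pattern = tuple((np.abs(residuals(y1, y2, y3)) > THRESH).astype(int))
    # An unmatched pattern (e.g. a corrupted residual) yields "unknown fault"
    # instead of a wrong isolation decision.
    return SIGNATURES.get(pattern, "unknown fault")
```

With x = 3 the healthy readings are (3, 6, 4); biasing y2 by +1 fires r1 and r3 but not r2, which matches the y2 column of the signature table.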

    Causal digraph reasoning for fault diagnosis in paper making applications

    Get PDF
    Fault detection and diagnosis systems are required by the process industries because of tightening global competition and the increasing complexity of processes, which makes it difficult for operators to perform diagnosis tasks. Academic research in the field of fault diagnosis has expanded rapidly to meet this demand, and successful applications with economic benefits have been reported extensively. As a fault diagnosis method, the causal directed graph (digraph) method has proved to have considerable advantages in applications with complex processes. The causal digraph method has gone through three development phases: signed digraph, fuzzy digraph and dynamic digraph. Being versatile with cause-effect models, the causal digraph method is able to combine the benefits of both qualitative and quantitative diagnosis methods. However, the latest development, the dynamic causal digraph method, still has certain drawbacks in the case of a process fault; in specific situations, the detection and diagnosis results are not always reliable or sufficient to satisfy industrial requirements. The aim of this thesis has been to enhance the traditional dynamic causal digraph method by developing a new fault detection approach and a new inference mechanism. The new detection approach produces better detection results and a more complete fault propagation path by taking into account the cancellation phenomenon among different fault effects. The new inference mechanism is designed to identify the possible faulty process components for this type of process fault. The proposed enhanced method has been tested on a generic paper machine simulator and on the three-layered board machine simulator of Stora Enso Oyj. In the tests, a number of fault scenarios, including sensor faults and process faults, were diagnosed using the proposed method. Finally, a comparison between the proposed method and the traditional method verified the improvements.
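The causal digraph reasoning can be sketched with a small fragment. The node names and edges below encode a hypothetical cause-effect graph for a paper machine, invented for illustration and not taken from the thesis: detection marks nodes whose model residuals are abnormal, and isolation traces the abnormality upstream, reporting as root causes the abnormal nodes with no abnormal parents.

```python
# Hypothetical causal digraph: edges point from cause to effect.
GRAPH = {
    "steam_flow": ["basis_weight"],
    "stock_flow": ["basis_weight", "moisture"],
    "drying_power": ["moisture"],
}

def parents(node):
    # Nodes with an edge into `node`
    return [u for u, vs in GRAPH.items() if node in vs]

def root_causes(abnormal):
    # A root cause is an abnormal node none of whose parents is abnormal,
    # i.e. the abnormality cannot be explained by upstream propagation.
    return {n for n in abnormal if not any(p in abnormal for p in parents(n))}
```

For example, if stock_flow, basis_weight and moisture are all flagged abnormal, the two downstream deviations are explained by propagation and only stock_flow is reported as the root cause.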

    An Integrated Approach to Performance Monitoring and Fault Diagnosis of Nuclear Power Systems

    Get PDF
    In this dissertation an integrated framework of process performance monitoring and fault diagnosis was developed for nuclear power systems using robust data driven model based methods, which comprises thermal hydraulic simulation, data driven modeling, identification of model uncertainty, and robust residual generator design for fault detection and isolation. In the applications to nuclear power systems, on the one hand, historical data are often not able to characterize the relationships among process variables because operating setpoints may change and thermal fluid components such as steam generators and heat exchangers may experience degradation. On the other hand, first-principle models always have uncertainty and are often too complicated in terms of model structure to design residual generators for fault diagnosis. Therefore, a realistic fault diagnosis method needs to combine the strength of first principle models in modeling a wide range of anticipated operation conditions and the strength of data driven modeling in feature extraction. In the developed robust data driven model-based approach, the changes in operation conditions are simulated using the first principle models and the model uncertainty is extracted from plant operation data such that the fault effects on process variables can be decoupled from model uncertainty and normal operation changes. It was found that the developed robust fault diagnosis method was able to eliminate false alarms due to model uncertainty and deal with changes in operating conditions throughout the lifetime of nuclear power systems. Multiple methods of robust data driven model based fault diagnosis were developed in this dissertation. A complete procedure based on causal graph theory and data reconciliation method was developed to investigate the causal relationships and the quantitative sensitivities among variables so that sensor placement could be optimized for fault diagnosis in the design phase. 
    Reconstruction-based Principal Component Analysis (PCA) was applied to deal with both simple faults and complex faults for steady state diagnosis in the context of operation scheduling and maintenance management. A robust PCA model-based method was developed to distinguish the differences between fault effects and model uncertainties. In order to improve the sensitivity of fault detection, a hybrid PCA model-based approach was developed to incorporate system knowledge into data-driven modeling. Subspace identification was proposed to extract state space models from thermal hydraulic simulations, and a robust dynamic residual generator design algorithm was developed for fault diagnosis for the purpose of fault tolerant control and extension to reactor startup and load following operation conditions. The developed robust dynamic residual generator design algorithm is unique in that explicit identification of model uncertainty is not necessary. Finally, the developed new methods were demonstrated on the IRIS Helical Coil Steam Generator (HCSG) system. A simulation model was first developed for this system. It was revealed through steady state simulation that the primary coolant temperature profile could be used to indicate the water inventory inside the HCSG tubes. The performance monitoring and fault diagnosis module was then developed to monitor sensor faults, flow distribution abnormality, and heat performance degradation for both steady state and dynamic operation conditions. This dissertation bridges the gap between theoretical research on computational intelligence and engineering design in performance monitoring and fault diagnosis for nuclear power systems. The new algorithms have the potential of being integrated into Generation III and Generation IV nuclear reactor I&C designs after they are tested on current nuclear power plants or Generation IV prototype reactors.
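The reconstruction-based PCA isolation mentioned above can be sketched as follows. This is an illustrative version with synthetic data, not the dissertation's code: each sensor is reconstructed in turn along its fault direction with the optimal fault magnitude, and the sensor whose reconstruction best restores the squared prediction error (SPE) to normal is the isolated candidate.

```python
import numpy as np

def fit_pca(X, n_comp):
    mean = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_comp].T                  # mean and loading matrix P

def spe(x, mean, P):
    r = (x - mean) - P @ (P.T @ (x - mean))     # residual outside the PCA subspace
    return float(r @ r)

def isolate_by_reconstruction(x, mean, P):
    # Reconstruct each sensor along its unit fault direction xi_i; the optimal
    # fault magnitude f minimises the SPE of the corrected sample x - f*xi_i.
    m = len(x)
    C = np.eye(m) - P @ P.T                     # projector onto the residual subspace
    scores = []
    for i in range(m):
        xi = np.zeros(m); xi[i] = 1.0
        f = (xi @ C @ (x - mean)) / (xi @ C @ xi)
        scores.append(spe(x - f * xi, mean, P))
    return int(np.argmin(scores)), scores

rng = np.random.default_rng(1)
t = rng.normal(size=(200, 1))                   # one latent process variable
X = t @ np.array([[1.0, 2.0, -1.0]]) + rng.normal(0.0, 0.05, size=(200, 3))
mean, P = fit_pca(X, n_comp=1)
x_fault = X[0].copy(); x_fault[1] += 5.0        # bias fault injected on sensor 1
idx, scores = isolate_by_reconstruction(x_fault, mean, P)
```

Only the reconstruction along the truly faulty sensor drives the SPE back to the noise level; reconstructions along the other sensors leave most of the fault signature behind.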

    An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System

    Get PDF
    To date the majority of the research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of the work. While previous research progresses each of the respective fields, the end result is a variable grab bag of techniques that address each problem independently. Also, the new field of prognostics is lacking in the sense that few methods have been proposed that produce estimates of the remaining useful life (RUL) of a device or can be realistically applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS) which is adapted for monitoring, diagnosis, and prognosis and then proposing the path classification and estimation (PACE) model that can be used to predict the RUL of a device that does or does not have a well defined failure threshold. To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and sequential probability ratio test (SPRT) detector produced comparable detection rates to a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and SPRT detector, specifically 80% vs. 85% for the NFIS and AAKR monitor respectively. It was also found that the NFIS monitor produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers that were trained to diagnose different fault classes. The NFIS diagnoser was shown to significantly outperform the kNN diagnoser, with overall accuracies of 96% vs. 89% respectively. Finally, the PACE implementing the NFIS was used to predict the RUL for different failure modes. 
    The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2-11.4 hours with 95% confidence intervals (CI) from 0.67-32.02 hours, which are significantly better than the population-based prognoser estimates, with errors of ~45 hours and 95% CIs of ~162 hours.
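The SPRT detector used in the monitoring system can be sketched generically. This is a textbook Wald SPRT for a mean shift in Gaussian residuals, with illustrative parameters rather than the dissertation's settings: the log-likelihood ratio accumulates sample by sample until it crosses the fault boundary A, or is reset whenever it crosses the normal boundary B.

```python
import numpy as np

def sprt(residuals, m0=0.0, m1=1.0, var=1.0, alpha=0.01, beta=0.1):
    # Wald's sequential probability ratio test for a shift in the residual mean
    # from m0 (healthy) to m1 (faulty), Gaussian residuals with known variance.
    A = np.log((1.0 - beta) / alpha)    # upper boundary: decide "fault"
    B = np.log(beta / (1.0 - alpha))    # lower boundary: decide "normal", reset
    llr = 0.0
    for t, r in enumerate(residuals):
        llr += (m1 - m0) * (r - (m0 + m1) / 2.0) / var   # LLR increment
        if llr >= A:
            return "fault", t
        if llr <= B:
            llr = 0.0                   # accept H0 for now and keep monitoring
    return "normal", len(residuals) - 1
```

Residuals hovering around the healthy mean repeatedly reset the test, while a sustained shift drives the ratio to the fault boundary within a few samples.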

    Inverse modelling requirements for a nuclear materials safeguards tool

    Get PDF
    The work presented in this thesis has been carried out in support of the specification of a solution monitoring system to assist United Nations inspectors performing nuclear materials safeguards inspections, primarily pertaining to the chemical separation areas of nuclear reprocessing facilities. The system is designed to provide assurances over hours and days; other methods are more appropriate for the provision of assurances over weeks. The impetus for this system derives from the fact that conventional material accountancy methods are unable to satisfy the protracted loss detection goal specified by the International Atomic Energy Agency when applied to large commercial reprocessing plants. Based on the concept of model-based reasoning, the system estimates the distribution of plutonium throughout the plant via simulation, and then attempts to justify any discrepancies between the estimated distribution and the observed distribution. Because the simulation's structure is fixed, the process of justification involves hypothesising additional forcing functions and parameter changes which result in the simulation predicting what is observed. The simulation inputs are largely in the form of flow rates and concentrations, which are obtained via indirect measurement. Plant operators discourage invasive measurement systems on the grounds of the expense of maintenance and plant containment. For this reason the direct measurement of material flow rates is not possible. However, the volume and density of liquor in process tanks are measured, so it is possible to obtain the flow rates indirectly by analysing these measurements, a process known as inverse modelling. Concentration measurements are obtained from the laboratory analysis of samples.
    Inverse modelling is not confined to flow rate estimation, because one of the aims of the system is itself one of inverse modelling: to hypothesise a set of forcing functions and boundary conditions which, when input into the simulation, predict the observed distribution. Thus inverse modelling is required at two levels: locally for flow rate estimation, and more globally for distribution estimation over the entire plant. Inverse modelling is problematic because inverse solutions have a propensity to be non-unique and unstable. Furthermore, since the solutions are obtained by analysing the measurements, they are adversely affected by the presence of noise and/or biases. This thesis describes some of the tools that have been developed as part of this system. A number are based on common statistical process control algorithms such as the Shewhart control chart and the V-mask; others involve more novel algorithms such as simulated annealing. Different tools are used over different time-scales: the short-term and the medium-term. Over the short-term, disagreements between the simulation and observations are analysed to generate forcing function hypotheses by using banks of observers to generate a list of the possible causes. The most likely hypothesis is chosen on the basis of user-specified subjective probabilities. These probabilities reflect the view that some events are more likely to be acceptable to the operator than others. The problem over the medium-term is more difficult. The inverse modelling process is imperfect, so the model diverges from the real plant over time, with the net effect that quantities of material are predicted to be in the wrong place. This imperfection can stem from both the simulation and the plant data. The possible causes are biases that may exist on the plant and inaccuracies in the estimation of flow rates that affect the simulation. A method is proposed for identifying and estimating the gross multiplicative biases.
    If no bias is found, an event is created describing the redistribution necessary to achieve parity. A method is proposed to correct flow rates, the net effect of which is a redistribution that would minimise the divergence. If a large redistribution is necessary to achieve parity, then an incident may have occurred on the plant. The emphasis in the design of the algorithms is on the development of a practical system, one that could easily be adapted for use on a real plant. A number of different activities were needed to convert the conceptual design into a practical additional safeguards system. A considerable amount of work has been spent designing virtually identical algorithms and testing them on real data. This activity is not central to the work described in this thesis, and has been relegated to Appendix 1. However, it is evidence of the credibility of the algorithms and their ability to work in a real situation, and cannot be stressed too much. (Abstract shortened by ProQuest.)
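The short-term statistical process control tools mentioned above can be illustrated with generic implementations of a Shewhart chart and a tabular CUSUM (the tabular form is the algebraic equivalent of the V-mask). These are textbook versions with hypothetical parameters, not the thesis's tuned algorithms; the example shows why both are used: the Shewhart chart misses a small persistent shift that the CUSUM accumulates into an alarm.

```python
import numpy as np

def shewhart_alarms(x, mean, sigma, k=3.0):
    # Shewhart control chart: alarm when a sample leaves the mean +/- k*sigma band
    return np.abs(np.asarray(x, dtype=float) - mean) > k * sigma

def cusum_alarms(x, mean, sigma, k=0.5, h=5.0):
    # Tabular CUSUM (algebraically equivalent to the V-mask): accumulates small,
    # persistent deviations that a Shewhart chart would never flag.
    hi = lo = 0.0
    alarms = []
    for t, v in enumerate(np.asarray(x, dtype=float)):
        z = (v - mean) / sigma
        hi = max(0.0, hi + z - k)       # one-sided sum for upward shifts
        lo = max(0.0, lo - z - k)       # one-sided sum for downward shifts
        if hi > h or lo > h:
            alarms.append(t)
            hi = lo = 0.0               # restart after signalling
    return alarms

# A 1-sigma persistent bias appears at sample 50: invisible to Shewhart,
# accumulated into an alarm by the CUSUM.
flow = np.concatenate([np.zeros(50), np.full(50, 1.0)])
shew = shewhart_alarms(flow, mean=0.0, sigma=1.0)
cus = cusum_alarms(flow, mean=0.0, sigma=1.0)
```

With these parameters the CUSUM sums rise by 0.5 per biased sample, so the first alarm fires eleven samples after the shift begins.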

    NERI PROJECT 99-119. TASK 2. DATA-DRIVEN PREDICTION OF PROCESS VARIABLES. FINAL REPORT

    Full text link