
    AI and OR in management of operations: history and trends

    The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems of increasing complexity and scale. This paper begins by setting the context for the survey through a historical perspective of OR and AI. An extensive survey of applications of AI techniques for operations management, covering a total of over 1200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source; hence it may not cover all the relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized by the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.

    Self-tuning routine alarm analysis of vibration signals in steam turbine generators

    This paper presents a self-tuning framework for knowledge-based diagnosis of routine alarms in steam turbine generators. The techniques provide a novel basis for initialising and updating the time series feature extraction parameters used in automated decision support for vibration events due to operational transients. The data-driven nature of the algorithms allows machine-specific characteristics of individual turbines to be learned and reasoned about. The paper provides a case study illustrating the routine alarm paradigm and the applicability of systems using such techniques.
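
    The self-tuning idea can be illustrated with a minimal sketch, assuming a simple scheme (not the paper's actual algorithm) in which alarm limits for a vibration feature are initialised from healthy operating history and then smoothed toward new estimates; the function name and parameters here are hypothetical:

```python
import numpy as np

def update_alarm_limits(features, n_sigma=3.0, alpha=0.1, prior=None):
    """Self-tune alarm limits from routine (healthy) feature history.

    features: 1-D array of a vibration feature (e.g. band RMS) gathered
    during routine transients; n_sigma: alarm band width; alpha:
    forgetting factor blending new estimates with prior limits.
    Illustrative parameterisation, not the paper's exact method.
    """
    mu, sigma = float(np.mean(features)), float(np.std(features))
    lo, hi = mu - n_sigma * sigma, mu + n_sigma * sigma
    if prior is not None:  # exponential smoothing toward the new limits
        lo = (1 - alpha) * prior[0] + alpha * lo
        hi = (1 - alpha) * prior[1] + alpha * hi
    return lo, hi

rng = np.random.default_rng(0)
healthy = rng.normal(5.0, 0.2, 500)   # routine vibration feature samples
limits = update_alarm_limits(healthy)
print(limits[0] < 5.0 < limits[1])    # healthy mean sits inside the band
print(6.5 > limits[1])                # a 6.5 reading would raise an alarm
```

    Re-running the update on each machine's own data is what lets machine-specific characteristics be learned rather than hard-coded.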

    An Integrated Approach to Performance Monitoring and Fault Diagnosis of Nuclear Power Systems

    In this dissertation an integrated framework of process performance monitoring and fault diagnosis was developed for nuclear power systems using robust data driven model based methods, which comprises thermal hydraulic simulation, data driven modeling, identification of model uncertainty, and robust residual generator design for fault detection and isolation. In the applications to nuclear power systems, on the one hand, historical data are often not able to characterize the relationships among process variables because operating setpoints may change and thermal fluid components such as steam generators and heat exchangers may experience degradation. On the other hand, first-principle models always have uncertainty and are often too complicated in terms of model structure to design residual generators for fault diagnosis. Therefore, a realistic fault diagnosis method needs to combine the strength of first principle models in modeling a wide range of anticipated operation conditions and the strength of data driven modeling in feature extraction. In the developed robust data driven model-based approach, the changes in operation conditions are simulated using the first principle models and the model uncertainty is extracted from plant operation data such that the fault effects on process variables can be decoupled from model uncertainty and normal operation changes. It was found that the developed robust fault diagnosis method was able to eliminate false alarms due to model uncertainty and deal with changes in operating conditions throughout the lifetime of nuclear power systems. Multiple methods of robust data driven model based fault diagnosis were developed in this dissertation. A complete procedure based on causal graph theory and data reconciliation method was developed to investigate the causal relationships and the quantitative sensitivities among variables so that sensor placement could be optimized for fault diagnosis in the design phase. 
Reconstruction-based Principal Component Analysis (PCA) was applied to deal with both simple and complex faults in steady-state diagnosis, in the context of operation scheduling and maintenance management. A robust PCA model-based method was developed to distinguish between fault effects and model uncertainties. To improve the sensitivity of fault detection, a hybrid PCA model-based approach was developed to incorporate system knowledge into data-driven modeling. Subspace identification was proposed to extract state space models from thermal hydraulic simulations, and a robust dynamic residual generator design algorithm was developed for fault diagnosis, aimed at fault tolerant control and extension to reactor startup and load following operation conditions. The developed robust dynamic residual generator design algorithm is unique in that explicit identification of model uncertainty is not necessary. Finally, the developed methods were demonstrated on the IRIS Helical Coil Steam Generator (HCSG) system. A simulation model was first developed for this system. It was revealed through steady state simulation that the primary coolant temperature profile could be used to indicate the water inventory inside the HCSG tubes. The performance monitoring and fault diagnosis module was then developed to monitor sensor faults, flow distribution abnormality, and heat performance degradation under both steady state and dynamic operation conditions. This dissertation bridges the gap between theoretical research on computational intelligence and engineering design in performance monitoring and fault diagnosis for nuclear power systems. The new algorithms have the potential of being integrated into Generation III and Generation IV nuclear reactor I&C designs after they are tested on current nuclear power plants or Generation IV prototype reactors.
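
    As a rough illustration of the PCA-based residual generation described above, the sketch below fits a PCA model to normal operating data and flags a sample whose squared prediction error (SPE, the residual left after projecting onto the retained principal subspace) exceeds an empirical limit. The toy plant, retained rank, and threshold are assumptions for illustration, not the dissertation's models:

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))                 # 2 underlying states
W = rng.normal(size=(2, 5))                        # toy mixing to 5 sensors
X = latent @ W + 0.01 * rng.normal(size=(500, 5))  # normal operating data

mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:2].T                                       # retained loadings (rank 2)

def spe(x):
    """Squared prediction error: residual after PCA reconstruction."""
    r = (x - mu) - (x - mu) @ P @ P.T
    return float(r @ r)

limit = np.percentile([spe(x) for x in X], 99)     # empirical 99% SPE limit
normal = rng.normal(size=2) @ W                    # consistent new sample
faulty = normal + np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # bias on sensor 3
print(spe(normal) < limit, spe(faulty) > limit)    # residual-based detection
```

    A sensor bias leaves the principal subspace and inflates the residual, while samples consistent with normal correlations stay below the limit.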

    Review of recent research towards power cable life cycle management

    Power cables are integral to modern urban power transmission and distribution systems. For power cable asset managers worldwide, a major challenge is how to effectively manage an expensive and vast network of cables, many of which are approaching, or have passed, their design life. This study provides an in-depth review of recent research and development in cable failure analysis, condition monitoring and diagnosis, life assessment methods, fault location, and optimisation of maintenance and replacement strategies. These topics are essential to cable life cycle management (LCM), which aims to maximise the operational value of cable assets and is now being implemented in many power utility companies. The review expands on material presented at the 2015 JiCable conference and incorporates other recent publications. It concludes that the full potential of cable condition monitoring and of condition and life assessment has not yet been fully realised. It is proposed that a combination of physics-based life modelling and statistical approaches, giving consideration to practical condition monitoring results and the insulation's response to in-service stress factors and short term stresses, such as water ingress, mechanical damage and imperfections left from manufacturing and installation processes, will be key to improved LCM of the vast amount of cable assets around the world.

    Neural methods in process monitoring, visualization and early fault detection

    This technical report is based on five of our recent articles: "Self-organizing map based visualization techniques and their assessment", "Combining neural methods and knowledge-based methods in accident management", "Abnormal process state detection by cluster center point monitoring in BWR nuclear power plant", "Generated control limits as a basis of operator-friendly process monitoring", and "Modelling power output at nuclear power plant by neural networks". Neural methods are applied in process monitoring, visualization and early fault detection. We introduce decision support schemes based on the Self-Organizing Map (SOM) combined with other methods. Visualizations based on various data-analysis methods are developed in a large Finnish research project with many universities and industrial partners participating. In our subproject the industrial partner providing data for our practical examples is Teollisuuden Voima Oy's Olkiluoto nuclear power plant. Measuring the value of information is one challenging issue. In the long run our research has moved from accident management towards failure management. One interesting case example introduced is detecting pressure drift of a boiling water reactor by multivariate methods, including innovative visualizations. We also present two different neural network approaches for industrial process signal forecasting. Preprocessing suitable input signals and delay analysis are important phases in modelling. An optimized number of delayed input signals and hidden-layer neurons is found to enable prediction of an idle power process signal. Algorithms for input selection and for finding the optimal model for one-step-ahead prediction are developed. We introduce a method to detect abnormal process states based on monitoring cluster center points over time. Typical statistical features are extracted, mapped to an n-dimensional space, and clustered online at every step.
The process signals in a constant time window are classified into two clusters by the K-means method. In addition to monitoring features of the process signals, signal trends and alarm lists, a tool is obtained that helps in early detection of the pre-stage of a process fault. We also introduce data-generated control limits, where an alarm balance feature clarifies the monitoring. This helps in early and accurate fault detection.
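
    A minimal sketch of the cluster-centre-point idea, under illustrative assumptions (window size, features, and the injected drift are invented here, not taken from the report): extract simple statistical features per signal in a sliding window, split them into two clusters with K-means, and track the separation of the two centres over time.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans2(X, iters=20):
    """Tiny 2-cluster K-means (Lloyd's algorithm), numpy only."""
    c = X[[0, len(X) // 2]].astype(float)          # crude initialisation
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - c[None], axis=2)
        lab = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = X[lab == k].mean(axis=0)
    return c

def centre_gap(signals):
    """Feature = (mean, std) per signal in the window; monitor the
    distance between the two K-means cluster centres."""
    F = np.column_stack([signals.mean(axis=1), signals.std(axis=1)])
    c = kmeans2(F)
    return float(np.linalg.norm(c[0] - c[1]))

normal = rng.normal(0.0, 1.0, size=(8, 200))       # 8 signals, one window
drifted = normal.copy()
drifted[0] += 5.0                                  # one signal drifts away
print(centre_gap(drifted) > centre_gap(normal))    # centres pull apart
```

    In normal operation the two centres sit close together; a drifting signal pulls one centre away, and that growing gap is the early-warning quantity to monitor.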

    Data-Driven Machine Learning for Fault Detection and Diagnosis in Nuclear Power Plants: A Review

    Data-driven machine learning (DDML) methods for fault detection and diagnosis (FDD) in nuclear power plants (NPPs) have attracted emerging interest in recent years. However, research comprehensively reviewing the state-of-the-art progress of DDML for FDD in NPPs is still lacking. In this review, the classifications, principles, and characteristics of DDML are first introduced, including the supervised and unsupervised learning types. Then, the latest applications of DDML for FDD, covering the reactor system, reactor components, and reactor condition monitoring, are illustrated, showing how they can better predict NPP behavior. Lastly, the future development of DDML for FDD in NPPs is discussed.

    A Review of Prognostics and Health Management Applications in Nuclear Power Plants

    The US operating fleet of light water reactors (LWRs) is currently undergoing life extensions from the original 40-year license to 60 years of operation. In the US, 74 reactors have been approved for the first round of license extensions, and 19 additional applications are currently under review. Safe and economic operation of these plants beyond 60 years is now being considered in anticipation of a second round of license extensions to 80 years of operation. Greater situational awareness of key systems, structures, and components (SSCs) can provide the technical basis for extending the life of SSCs beyond the original design life, and supports improvements in both safety and economics through optimized maintenance planning and power uprates. These issues are not specific to the aging LWRs; future reactors (including Generation III+ LWRs, advanced reactors, small modular reactors, and fast reactors) can benefit from the same situational awareness. In fact, many SMR and advanced reactor designs have increased operating cycles (typically four years, up to forty years), which reduce the opportunities for inspection and maintenance at frequent, scheduled outages. Understanding the current condition of key equipment and the expected evolution of degradation during the next operating cycle allows for targeted inspection and maintenance activities. This article reviews the state of the art and the state of practice of prognostics and health management (PHM) for nuclear power systems. Key research needs and technical gaps that must be addressed in order to fully realize the benefits of PHM in nuclear facilities are highlighted.

    Development of a Data Driven Multiple Observer and Causal Graph Approach for Fault Diagnosis of Nuclear Power Plant Sensors and Field Devices

    A data-driven multiple observer and causal graph approach to fault detection and isolation is developed for nuclear power plant sensors and actuators. It can be integrated into the advanced instrumentation and control systems of next generation nuclear power plants. The developed approach is based on the analytical redundancy principle of fault diagnosis: analytical models are built to generate residuals between measured and expected values; significant residuals are used for fault detection, and the residual patterns are analyzed for fault isolation. Advanced data-driven modeling methods such as Principal Component Analysis and the Adaptive Network Fuzzy Inference System are used to achieve accurate and consistent on-line models. In contrast with most current data-driven modeling, it is emphasized that the best choice of model structure should be obtained from a physical study of the system. The multiple observer approach realizes strong fault isolation through the design of appropriate residual structures. Even if one of the residuals is corrupted, the approach is able to indicate an unknown fault instead of a misleading fault. Multiple observers are designed by making full use of the redundant relationships implied in a process when predicting one variable. The data-driven causal graph is developed as a generic approach to fault diagnosis for nuclear power plants where limited fault information is available. It has the potential of combining the reasoning capability of qualitative diagnostic methods with the fault resolution of quantitative diagnostic methods. A data-driven causal graph consists of individual nodes representing plant variables, connected by adaptive quantitative models. With the causal graph, fault detection is fulfilled by monitoring the residual of each model, and fault isolation is achieved by testing the possible assumptions involved in each model.
Conservatism is implied in the approach, since a faulty sensor or actuator signal is isolated only when its reconstruction can fully explain all the abnormal behavior of the system. The developed approaches have been applied to the nuclear steam generator system of a pressurized water reactor, and a simulation code has been developed to demonstrate their performance. The results show that both single and dual sensor faults and actuator faults can be detected and isolated correctly, independent of fault magnitude and initial power level, during the early fault transient.
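
    The structured-residual isolation logic can be sketched on a toy plant (the three-sensor relations, thresholds, and signature table below are invented for illustration, not the dissertation's models). Each residual omits one sensor, so every single fault fires a unique pattern, and a pattern matching no known signature is reported as an unknown fault rather than a misleading one:

```python
THRESH = 0.5
# Expected fire pattern (r0, r1, r2) for each single sensor fault.
SIGNATURES = {"s0": (1, 0, 1), "s1": (1, 1, 0), "s2": (0, 1, 1)}

def isolate(s0, s1, s2):
    """Toy plant satisfies s1 = 2*s0 and s2 = 3*s0 when healthy.
    Each residual is built from only two of the three sensors, so it is
    insensitive to a fault in the omitted one (structured residuals)."""
    r = (s1 - 2 * s0,        # uses s0, s1 -> blind to s2
         s2 - 1.5 * s1,      # uses s1, s2 -> blind to s0
         s2 - 3 * s0)        # uses s0, s2 -> blind to s1
    fired = tuple(int(abs(v) > THRESH) for v in r)
    if not any(fired):
        return "no fault"
    for sensor, sig in SIGNATURES.items():
        if fired == sig:
            return sensor
    return "unknown fault"   # corrupted pattern: refuse to mislead

print(isolate(1.0, 2.0, 3.0))   # consistent readings
print(isolate(1.0, 3.0, 3.0))   # bias on s1 fires r0 and r1 only
print(isolate(2.0, 2.0, 9.0))   # inconsistent pattern -> unknown fault
```

    The conservatism noted above corresponds to the final branch: isolation is claimed only when the fired pattern fully matches one fault's signature.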

    Inferential Modeling and Independent Component Analysis for Redundant Sensor Validation

    The calibration of redundant safety critical sensors in nuclear power plants is a manual task that consumes valuable time and resources. Automated, data-driven techniques to monitor the calibration of redundant sensors have been developed over the last two decades, but have not been fully implemented. Parity space methods such as the Instrumentation and Calibration Monitoring Program (ICMP) method developed by the Electric Power Research Institute, along with other empirical inferential modeling techniques, have been developed but have not become viable options. Existing solutions to the redundant sensor validation problem have several major flaws that restrict their application. Parity space methods, such as ICMP, are not robust under low redundancy conditions, and their operation becomes invalid when there are only two redundant sensors. Empirical inferential modeling is only valid when the intrinsic correlations between predictor and response variables remain static during the model training and testing phases; it also commonly produces high variance results and is not the optimal solution to the problem. This dissertation develops and implements independent component analysis (ICA) for redundant sensor validation. The ICA algorithm produces parameter estimates with sufficiently low residual variance when compared to simple averaging, ICMP, and principal component regression (PCR) techniques. For stationary signals, it can detect and isolate sensor drifts for as few as two redundant sensors. It is fast and can be embedded into a real-time system, as demonstrated on a water level control system. Additionally, ICA has been merged with inferential modeling techniques such as PCR to reduce prediction error and spillover effects from data anomalies. ICA is easy to use, with only the window size needing specification. The effectiveness and robustness of the ICA technique are shown through the use of actual nuclear power plant data.
A bootstrap technique is used to estimate the prediction uncertainties and validate the method's usefulness. Bootstrap uncertainty estimates incorporate uncertainties from both the data and the model; thus the uncertainty estimation is robust and varies from data set to data set. The ICA-based system is proven to be accurate and robust; however, classical ICA algorithms commonly fail when distributions are multi-modal, which most likely occurs during highly non-stationary transients. This research also developed a unity check technique that indicates such failures and applies other, more robust techniques during transients. For linearly trending signals, a rotation transform is found useful, while standard averaging techniques are used during general transients.
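
    The core ICA idea for two redundant sensors can be sketched as follows, under toy assumptions (the signal shapes, drift ramp, and fixed-point iteration count are invented for illustration; this is a generic FastICA-style separation, not the dissertation's exact algorithm). Both channels share one process signal while one channel slowly drifts; after whitening, a tanh-based fixed-point iteration recovers two components, and the drift concentrates in one of them:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
process = np.sin(np.linspace(0, 40, n)) + 0.05 * rng.normal(size=n)
drift = np.linspace(0.0, 2.0, n)             # slow calibration drift
X = np.vstack([process, process + drift])    # two redundant channels
X = X - X.mean(axis=1, keepdims=True)

# Whiten the 2-channel record.
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Deflationary fixed-point ICA (tanh nonlinearity), two units.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = Z @ g / n - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from earlier units
        w = w_new / np.linalg.norm(w_new)
    W[i] = w
S = W @ Z                                    # recovered components

# The drift should load mostly onto one component: check its
# correlation with a centred ramp.
ramp = drift - drift.mean()
corr = [abs(np.corrcoef(s, ramp)[0, 1]) for s in S]
print(max(corr) > 0.8)                       # drift isolated in a component
```

    Monitoring the drift-like component over a sliding window is what allows a drifting sensor to be flagged even with only two redundant channels, where simple parity checks cannot tell which sensor moved.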