
    Hydroelectric power plant management relying on neural networks and expert system integration

    The use of Neural Networks (NN) is a novel approach that can support decision making when integrated into a more general system, in particular with expert systems. In this paper, an architecture for the management of hydroelectric power plants is introduced. It relies on monitoring a large number of signals representing the technical parameters of the real plant. The general architecture is composed of an Expert System and two NN modules: Acoustic Prediction (NNAP) and Predictive Maintenance (NNPM). The NNAP is based on Kohonen Learning Vector Quantization (LVQ) networks and distinguishes the sounds emitted by the electricity-generating machine groups. The NNPM uses an ART-MAP to identify different situations from the plant state variables, in order to prevent future malfunctions. In addition, a special process to generate a complete training set has been designed for the ART-MAP module. This process was developed to deal with the absence of data about abnormal plant situations, and is based on neural nets trained with the backpropagation algorithm.
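    The LVQ scheme the abstract mentions can be sketched as follows. This is a minimal LVQ1 illustration, not the paper's NNAP module: the two-dimensional synthetic "acoustic feature" clusters, the prototype positions, and the learning-rate value are all invented for the example.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1 rule: pull the winning prototype toward a sample of the same
    class, push it away from a sample of a different class."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            d = np.linalg.norm(P - x, axis=1)
            w = np.argmin(d)                 # winning (nearest) prototype
            if proto_labels[w] == label:
                P[w] += lr * (x - P[w])      # attract
            else:
                P[w] -= lr * (x - P[w])      # repel
    return P

def lvq_predict(X, prototypes, proto_labels):
    """Classify each sample by the label of its nearest prototype."""
    d = np.linalg.norm(prototypes[None, :, :] - X[:, None, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Two synthetic "machine sound" classes in a 2-D feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
protos = np.array([[0.5, 0.5], [1.5, 1.5]])
proto_labels = np.array([0, 1])

P = lvq1_train(X, y, protos, proto_labels)
acc = (lvq_predict(X, P, proto_labels) == y).mean()
```

    After training, the prototypes sit near the class centroids and nearest-prototype classification separates the two clusters.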

    Consistency Index-Based Sensor Fault Detection System for Nuclear Power Plant Emergency Situations Using an LSTM Network

    A nuclear power plant (NPP) consists of an enormous number of components with complex interconnections. Various techniques to detect sensor errors have been developed to monitor the state of the sensors during normal NPP operation, but not for emergency situations. In an emergency situation with a reactor trip, all the plant parameters undergo drastic changes following the sudden decrease in core reactivity. In this paper, a machine learning model adopting a consistency index is suggested for sensor error detection during NPP emergency situations. The proposed consistency index quantifies the soundness of the sensors based on their measurement accuracy. The application of consistency index labeling makes it possible to detect a sensor error immediately and to specify the particular sensor where the error occurred. From a compact nuclear simulator, selected plant parameters were extracted during typical emergency situations, and artificial sensor errors were injected into the raw data. The trained system successfully generated output that gave both sensor error states and error-free states.
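    The idea of scoring each redundant channel by its agreement with the others, and of injecting artificial faults into clean simulator data, can be illustrated with a toy stand-in. The median-based index below is not the paper's learned (LSTM-derived) consistency index; the decaying parameter, the drift fault, and the four-channel setup are all assumptions made for the sketch.

```python
import numpy as np

def consistency_index(readings):
    """Toy consistency index: score each redundant sensor channel by its
    deviation from the per-timestep median (1.0 = fully consistent,
    0.0 = grossly inconsistent). Illustration only."""
    med = np.median(readings, axis=0)
    dev = np.abs(readings - med)
    scale = np.maximum(np.std(readings, axis=0), 1e-6)
    return np.clip(1.0 - dev / (3.0 * scale), 0.0, 1.0)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
true = 100.0 - 50.0 * t                            # parameter falling after a trip
readings = true + rng.normal(0.0, 0.5, (4, 100))   # four redundant channels
readings[2, 50:] += np.linspace(0.0, 20.0, 50)     # injected drift fault, channel 2

ci = consistency_index(readings)
# Channel with the lowest mean index over the faulty window is flagged.
suspect = int(np.argmin(ci[:, 50:].mean(axis=1)))
```

    The injected drift drives channel 2's index toward zero while the healthy channels stay near one, so the faulty sensor is both detected and localized.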

    On the role of pre and post-processing in environmental data mining

    The quality of discovered knowledge depends heavily on data quality. Unfortunately, real data tend to contain noise, uncertainty, errors, redundancies, or even irrelevant information. The more complex the reality to be analyzed, the higher the risk of getting low-quality data. Knowledge Discovery from Databases (KDD) offers a global framework to prepare data in the right form to perform correct analyses. On the other hand, the quality of decisions taken upon KDD results depends not only on the quality of the results themselves, but also on the capacity of the system to communicate those results in an understandable form. Environmental systems are particularly complex, and environmental users particularly require clarity in their results. In this paper, some details about how this can be achieved are provided, and the role of pre- and post-processing in the whole process of Knowledge Discovery in environmental systems is discussed.
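    The data-quality issues the abstract lists (noise, errors, redundancies, irrelevant information) correspond to concrete pre-processing steps. The sketch below is a deliberately minimal pipeline, not the paper's methodology: the imputation, outlier, and column-dropping rules are common defaults chosen for illustration.

```python
import numpy as np

def clean(X):
    """Minimal pre-processing sketch: impute missing values, clip outliers,
    and drop near-constant (irrelevant) columns. Real environmental KDD
    pipelines are far richer; this only illustrates the stage."""
    X = X.astype(float).copy()
    # 1. Impute missing values (NaN) with the column median.
    med = np.nanmedian(X, axis=0)
    nan_mask = np.isnan(X)
    X[nan_mask] = np.take(med, np.nonzero(nan_mask)[1])
    # 2. Clip outliers to median +/- 3 * MAD, per column.
    col_med = np.median(X, axis=0)
    mad = np.median(np.abs(X - col_med), axis=0)
    X = np.clip(X, col_med - 3.0 * mad, col_med + 3.0 * mad)
    # 3. Drop near-constant columns, which carry no information.
    keep = X.std(axis=0) > 1e-9
    return X[:, keep]

raw = np.array([[1.0, 5.0,   7.0],
                [2.0, np.nan, 7.0],
                [1.5, 5.2,   7.0],
                [1.2, 500.0, 7.0]])   # a NaN, a gross outlier, a constant column
X = clean(raw)
```

    On this toy matrix the NaN is imputed, the 500.0 reading is clipped back toward the column median, and the constant third column is dropped.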

    Anomaly Detection using Autoencoders in High Performance Computing Systems

    Anomaly detection in supercomputers is a very difficult problem due to the large scale of the systems and the high number of components. The current state of the art for automated anomaly detection employs Machine Learning methods or statistical regression models in a supervised fashion, meaning that the detection tool is trained to distinguish among a fixed set of behaviour classes (healthy and unhealthy states). We propose a novel approach for anomaly detection in High Performance Computing systems based on a Machine (Deep) Learning technique, namely a type of neural network called an autoencoder. The key idea is to train a set of autoencoders to learn the normal (healthy) behaviour of the supercomputer nodes and, after training, use them to identify abnormal conditions. This differs from previous approaches, which were based on learning the abnormal condition, for which much smaller datasets exist (since anomalies are very hard to identify to begin with). We test our approach on a real supercomputer equipped with a fine-grained, scalable monitoring infrastructure that can provide large amounts of data to characterize the system behaviour. The results are extremely promising: after the training phase to learn the normal system behaviour, our method is capable of detecting anomalies that have never been seen before with very good accuracy (values ranging between 88% and 96%).
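    The key idea — train an autoencoder only on healthy behaviour, then flag inputs it reconstructs poorly — can be shown end to end with a tiny linear autoencoder. This is not the paper's architecture: the three correlated "node metrics", the 3→1→3 bottleneck, and the 99th-percentile threshold are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Healthy" node metrics: three channels driven by one hidden load factor.
load = rng.uniform(0.0, 1.0, (500, 1))
normal = np.hstack([load, 2.0 * load, -load]) + rng.normal(0.0, 0.02, (500, 3))

# Tiny linear autoencoder (3 -> 1 -> 3) trained by gradient descent
# to reconstruct healthy behaviour only.
W1 = rng.normal(0.0, 0.1, (3, 1))
W2 = rng.normal(0.0, 0.1, (1, 3))
lr = 0.02
for _ in range(3000):
    h = normal @ W1                      # encode
    err = h @ W2 - normal                # reconstruction error (decode - input)
    gW2 = h.T @ err / len(normal)        # gradient w.r.t. decoder weights
    gW1 = normal.T @ (err @ W2.T) / len(normal)
    W1 -= lr * gW1
    W2 -= lr * gW2

def recon_error(x):
    """Per-sample reconstruction error norm."""
    return np.linalg.norm((x @ W1) @ W2 - x, axis=1)

# Threshold taken from the error distribution on healthy data.
thresh = np.percentile(recon_error(normal), 99)

anomaly = np.array([[0.5, -1.0, 0.5]])   # breaks the learned correlation
healthy = np.array([[0.3, 0.6, -0.3]])   # follows the learned correlation
```

    The anomalous sample violates the correlation structure the autoencoder learned, so its reconstruction error exceeds the threshold even though it was never seen in training, which is exactly the property the abstract exploits.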

    Application of Artificial Intelligence in Detection and Mitigation of Human Factor Errors in Nuclear Power Plants: A Review

    Human factors and ergonomics have played an essential role in increasing the safety and performance of operators in the nuclear energy industry. In this critical review, we examine how artificial intelligence (AI) technologies can be leveraged to mitigate human errors, thereby improving the safety and performance of operators in nuclear power plants (NPPs). First, we discuss the various causes of human errors in NPPs. Next, we examine the ways in which AI has been introduced to and incorporated into different types of operator support systems to mitigate these human errors. We specifically examine (1) operator support systems, including decision support systems, (2) sensor fault detection systems, (3) operation validation systems, (4) operator monitoring systems, (5) autonomous control systems, (6) predictive maintenance systems, (7) automated text analysis systems, and (8) safety assessment systems. Finally, we outline some shortcomings of existing AI technologies and discuss the challenges still ahead for their further adoption and implementation, pointing to future research directions.

    A Process to Implement an Artificial Neural Network and Association Rules Techniques to Improve Asset Performance and Energy Efficiency

    In this paper, we address the problem of asset performance monitoring, with the intention of both detecting any potential reliability problem and predicting any loss of energy consumption efficiency. This is an important concern for many industries and utilities with very intensive capitalization in very long-lasting assets. To overcome this problem, in this paper we propose an approach to combine an Artificial Neural Network (ANN) with Data Mining (DM) tools, specifically with Association Rule (AR) Mining. The combination of these two techniques can now be done using software which can handle large volumes of data (big data), but the process still needs to ensure that the required amount of data will be available during the assets' life cycle and that its quality is acceptable. The combination of these two techniques in the proposed sequence differs from previous works found in the literature, giving researchers new options to face the problem. Practical implementation of the proposed approach may lead to novel predictive maintenance models (emerging predictive analytics) that may detect with unprecedented precision any asset's lack of performance and help manage assets' O&M accordingly. The approach is illustrated using specific examples where asset performance monitoring is rather complex under normal operational conditions.
    Ministerio de Economía y Competitividad DPI2015-70842-
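    The Association Rule side of the proposed combination boils down to finding itemsets whose co-occurrence clears support and confidence thresholds. The miner below is a deliberately tiny illustration (single-item antecedents and consequents only), and the condition-monitoring event names in the example are invented, not taken from the paper.

```python
from itertools import combinations

def association_rules(transactions, min_support=0.4, min_confidence=0.8):
    """Tiny AR miner: for every ordered item pair (a -> b), keep the rule if
    support({a, b}) and confidence = support({a, b}) / support({a}) clear
    the thresholds. Illustration of the DM stage only."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            s = support({ante, cons})
            if s >= min_support and support({ante}) > 0:
                conf = s / support({ante})
                if conf >= min_confidence:
                    rules.append((ante, cons, round(s, 2), round(conf, 2)))
    return rules

# Hypothetical discretized condition-monitoring events per inspection.
logs = [
    {"high_vibration", "high_temp", "low_efficiency"},
    {"high_vibration", "low_efficiency"},
    {"high_temp"},
    {"high_vibration", "high_temp", "low_efficiency"},
    {"normal"},
]
rules = association_rules(logs)
```

    On this toy log, "high_vibration" and "low_efficiency" always co-occur (support 0.6, confidence 1.0 in both directions), while "high_temp" pairs fall below the confidence threshold; rules like these are what would then be cross-checked against the ANN's performance predictions.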

    Advancements In Crowd-Monitoring System: A Comprehensive Analysis of Systematic Approaches and Automation Algorithms: State-of-The-Art

    Growing apprehensions surrounding public safety have captured the attention of numerous governments and security agencies across the globe. These entities are increasingly acknowledging the imperative need for reliable and secure crowd-monitoring systems to address these concerns. Effectively managing human gatherings necessitates proactive measures to prevent unforeseen events or complications, ensuring a safe and well-coordinated environment. The scarcity of research focusing on crowd-monitoring systems and their security implications has given rise to a burgeoning area of investigation, exploring potential approaches to safeguard human congregations effectively. Crowd-monitoring systems depend on a bifurcated approach, encompassing vision-based and non-vision-based technologies, and an in-depth analysis of these two methodologies is conducted in this research. The efficacy of these approaches is contingent upon the specific environment and temporal context in which they are deployed, as they each offer distinct advantages. This paper endeavors to present an in-depth analysis of the recent incorporation of artificial intelligence (AI) algorithms and models into automated systems, emphasizing their contemporary applications and effectiveness in various contexts.