
    Methods and Systems for Fault Diagnosis in Nuclear Power Plants

    This research deals with fault diagnosis in nuclear power plants (NPPs), based on a framework that integrates contributions from fault scope identification, optimal sensor placement, sensor validation, equipment condition monitoring, and diagnostic reasoning based on pattern analysis. The research focuses in particular on applications where data collected from the existing SCADA (supervisory control and data acquisition) system are not sufficient for the fault diagnosis system. Specifically, the following methods and systems are developed.

    A sensor placement model is developed to guide optimal placement of sensors in NPPs. The model includes 1) a method to extract a quantitative fault-sensor incidence matrix for a system; 2) a fault diagnosability criterion based on the degree of singularity of the incidence matrix; and 3) procedures to place additional sensors to meet the diagnosability criterion. The usefulness of the proposed method is demonstrated on a nuclear power plant process control test facility (NPCTF). Experimental results show that three pairs of previously undiagnosable faults can be effectively distinguished with three additional sensors selected by the proposed model.

    A wireless sensor network (WSN) is designed and a prototype is implemented on the NPCTF. A WSN is an effective tool for collecting data for fault diagnosis, especially for systems where additional measurements are needed. The WSN performs distributed data processing and information fusion for fault diagnosis. Experimental results on the NPCTF show that the WSN system can be used to diagnose all six fault scenarios considered for the system.

    A fault diagnosis method based on semi-supervised pattern classification is developed which requires significantly less training data than existing fault diagnosis models typically require. It is a promising tool for applications in NPPs, where it is usually difficult to obtain training data under fault conditions for a conventional fault diagnosis model. The proposed method has successfully diagnosed nine types of faults physically simulated on the NPCTF.

    For equipment condition monitoring, a modified S-transform (MST) algorithm is developed by using shaping functions, particularly sigmoid functions, to modify the window width of the standard S-transform. The MST can achieve superior time-frequency resolution for applications that involve non-stationary multi-modal signals, where classical methods may fail. The effectiveness of the proposed algorithm is demonstrated using a vibration test system as well as in detecting a collapsed pipe support in the NPCTF. The experimental results show that, by observing changes in the time-frequency characteristics of vibration signals, one can effectively detect faults occurring in components of an industrial system.

    To ensure that a fault diagnosis system does not suffer from erroneous data, a fault detection and isolation (FDI) method based on kernel principal component analysis (KPCA) is extended for sensor validation, where sensor faults are detected and isolated from the reconstruction errors of a KPCA model. The method is validated using measurement data from a physical NPP.

    The NPCTF is designed and constructed in this research for experimental validation of fault diagnosis methods and systems. Faults can be physically simulated on the NPCTF. In addition, the NPCTF is designed to support systems based on different instrumentation and control technologies, such as WSNs and distributed control systems. The NPCTF has been successfully utilized to validate the algorithms and WSN system developed in this research.

    In a real-world application, it is seldom the case that a single fault diagnostic scheme can meet all the requirements of a fault diagnostic system in a nuclear power plant. In fact, the value and performance of the diagnosis system can potentially be enhanced if some of the methods developed in this thesis are integrated into a suite of diagnostic tools. In such an integrated system, WSN nodes can be used to collect additional data deemed necessary by sensor placement models. These data can be integrated with those from existing SCADA systems for more comprehensive fault diagnosis. An online performance monitoring system monitors the condition of the equipment and provides key information for condition-based maintenance. When a fault is detected, the measured data are acquired and analyzed by pattern classification models to identify the nature of the fault. By analyzing the symptoms of the fault, its root causes can eventually be identified.
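
    As an illustration of the MST idea described above, the following minimal NumPy sketch modulates the Gaussian window width of the frequency-domain S-transform with a sigmoid shaping function. The abstract does not give the thesis's exact shaping function, so the parameters `k` and `f0` below are hypothetical placeholders.

```python
import numpy as np

def modified_s_transform(x, k=0.05, f0=None):
    """Frequency-domain S-transform whose Gaussian window width is scaled
    by a sigmoid shaping function of frequency. The steepness k and the
    inflection bin f0 are illustrative placeholders, not the thesis's values."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                 # symmetric frequency shifts
    if f0 is None:
        f0 = N // 4
    S = np.zeros((N // 2, N), dtype=complex)
    S[0] = x.mean()                           # zero-frequency voice is the signal mean
    for n in range(1, N // 2):
        lam = 1.0 + 1.0 / (1.0 + np.exp(-k * (n - f0)))    # sigmoid width multiplier
        gauss = np.exp(-2.0 * (np.pi * m * lam / n) ** 2)  # Gaussian voice window
        S[n] = np.fft.ifft(np.roll(X, -n) * gauss)
    return S                                  # rows: frequency bins; columns: time
```

    Frequencies where the sigmoid saturates get a smoothly widened analysis window, which is the mechanism the abstract credits for the improved time-frequency resolution on non-stationary multi-modal signals.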

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, this book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Through this book, we also find navigation and vision algorithms, automatic handwriting comprehension and speech recognition systems that will be included in the next generation of production systems developed by man.

    Secure steganography, compression and diagnoses of electrocardiograms in wireless body sensor networks

    The usage of e-health applications is increasing in the modern era. Remote cardiac patient monitoring is an important example of these e-health applications. Diagnosing cardiac disease in time is of crucial importance for saving patients' lives. More than 3.5 million Australians suffer from long-term cardiac diseases, so, ideally, a continuous cardiac monitoring system should be provided for this large number of patients. However, health-care providers lack the technology required to achieve this objective. Cloud services can be utilized to fill the technology gap, but three main problems prevent health-care providers from using them: privacy, performance, and accuracy of diagnosis. This thesis addresses these three problems. To provide strong privacy protection, two steganography techniques are proposed. Both techniques achieve promising results in terms of security and distortion measurement: the differences between the original and watermarked ECG signals were less than 1%. Accordingly, the resultant ECG signal can still be used for diagnostic purposes, and only authorized persons who have the required security information can extract the hidden secret data from the ECG signal. To solve the performance problem of storing huge amounts of ECG data in the cloud, two types of compression techniques are introduced: a fractal-based lossy compression technique and a Gaussian-based lossless compression technique. This thesis shows that fractal models can be used efficiently for lossy ECG compression; moreover, the proposed fractal technique is multi-processing ready and thus suitable for implementation inside a cloud, making use of its multi-processing capability. A high compression ratio is achieved with low distortion. The Gaussian lossless compression technique is proposed to provide a high compression ratio. Moreover, because the compressed files are stored in the cloud, cloud services should be able to diagnose compressed ECG files without a decompression stage, to avoid additional processing overhead. Accordingly, the proposed Gaussian compression provides the ability to diagnose the resultant compressed file directly. To make use of this homomorphic feature of the proposed Gaussian compression algorithm, this thesis introduces a new diagnosis technique that can detect life-threatening cardiac diseases such as ventricular tachycardia and ventricular fibrillation. The proposed technique is applied directly to the compressed ECG files without going through a decompression stage, and achieves accuracy near 100% for detecting ventricular arrhythmia and 96% for detecting left bundle branch block. Finally, we believe that this thesis takes the first steps towards encouraging health-care providers to use cloud services; however, this journey is still long.
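
    The abstract does not detail the two steganography techniques, so the following is only a generic illustration of the underlying idea: hiding secret bits in an integer-quantized ECG signal at positions determined by a shared key, with distortion well under 1%. The function names and the key-seeded permutation are assumptions for the sketch, not the thesis's algorithms.

```python
import numpy as np

def embed_bits(ecg, bits, key=42):
    """Hide watermark bits in the least significant bits of an
    integer-quantized ECG signal. A key-seeded permutation chooses the
    carrier samples, so only holders of the key can locate the payload."""
    stego = np.asarray(ecg).astype(np.int64)
    idx = np.random.default_rng(key).permutation(len(stego))[:len(bits)]
    stego[idx] = (stego[idx] & ~1) | np.asarray(bits, dtype=np.int64)
    return stego

def extract_bits(stego, n_bits, key=42):
    """Recover the hidden bits using the same key."""
    idx = np.random.default_rng(key).permutation(len(stego))[:n_bits]
    return (stego[idx] & 1).astype(np.uint8)

# Each carrier sample changes by at most one quantization step, so on a
# typical 11-bit ECG the waveform distortion stays far below 1%.
```

    Without the key, an attacker cannot tell which samples carry payload; with it, extraction is a direct index-and-mask operation.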

    Mass Production Processes

    It is always hard to set up manufacturing systems to produce large quantities of standardized parts. Controlling these mass production lines requires deep knowledge, extensive experience, and the appropriate tools. The use of modern methods and techniques to produce large quantities of products within productive manufacturing processes yields improvements in manufacturing cost and product quality. To serve these purposes, this book reflects on advanced manufacturing systems for different alloys in production, together with the related components and automation technologies. Additionally, it focuses on mass production processes designed according to Industry 4.0, considering different kinds of quality and improvement work in mass production systems for highly productive and sustainable manufacturing. This book may be of interest to researchers, industrial employees, or any other partners who work for better-quality manufacturing at any stage of the mass production processes.

    Medical image compression for high-performance archives

    Information systems and medicine are two widespread topics that have become interwoven so that medical care can become more efficient. This relation has bred PACS and the international standard DICOM, directed at the organization of digital medical information. The concept of image compression is applied to most images throughout the web, yet the compression formats used for medical imaging have become outdated. The new formats developed in the past few years are candidates for replacing the old ones in such contexts, possibly enhancing the process. Before they are adopted, an evaluation should be carried out that validates their admissibility. This dissertation reviews the state of the art of medical imaging information systems, namely PACS systems and the DICOM standard. Furthermore, some topics of image compression are covered, such as the metrics for evaluating the algorithms' performance, concluding with a survey of three modern formats: JPEG XL, AVIF, and WebP. Two software projects were developed: the first carries out an analysis of the formats based on the metrics, using DICOM datasets and producing results that can be used for creating recommendations on the formats' use; the second is an application that encodes and decodes medical images with the formats covered in this dissertation. This proof of concept works as a medical imaging archive for the storage, distribution, and visualization of compressed data.
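
    A minimal sketch of the kind of evaluation loop the first project performs, assuming pydicom for reading DICOM pixel data and Pillow for encoding (Pillow handles WebP natively; AVIF and JPEG XL would need extra plugins such as pillow-avif-plugin). The quality setting and the 8-bit normalization below are illustrative choices, not the dissertation's protocol.

```python
import io
import numpy as np
import pydicom                    # assumed available for reading DICOM pixel data
from PIL import Image             # Pillow encodes WebP natively; AVIF/JPEG XL need plugins

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def roundtrip_webp(dicom_path, quality=90):
    """Encode a single-frame grayscale DICOM image as lossy WebP and
    return (compressed size in bytes, PSNR against the 8-bit original)."""
    pixels = pydicom.dcmread(dicom_path).pixel_array.astype(np.float64)
    span = max(float(pixels.max() - pixels.min()), 1.0)   # guard against constant images
    img8 = ((pixels - pixels.min()) / span * 255.0).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(img8).convert("RGB").save(buf, format="WEBP", quality=quality)
    size = buf.getbuffer().nbytes
    buf.seek(0)
    decoded = np.asarray(Image.open(buf).convert("L"))
    return size, psnr(img8, decoded)
```

    Running this over a DICOM dataset at several quality levels yields rate-distortion points per format, from which usage recommendations can be derived; adding SSIM or other metrics follows the same pattern.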

    Dynamic Modeling, Sensor Placement Design, and Fault Diagnosis of Nuclear Desalination Systems

    Fault diagnosis of sensors, devices, and equipment is an important topic in the nuclear industry for effective and continuous operation of nuclear power plants. All fault diagnostic approaches depend critically on the sensors that measure important process variables. Whenever a process encounters a fault, the effect of the fault propagates to some or all of the process variables. The ability of the sensor network to detect and isolate failure modes and anomalous conditions is crucial for the effectiveness of a fault detection and isolation (FDI) system. However, the emphasis of most fault diagnostic approaches found in the literature is primarily on procedures for performing FDI using a given set of sensors; little attention has been given to the actual sensor allocation needed to achieve efficient FDI performance. This dissertation presents a graph-based approach for optimizing sensor placement so as to ensure the observability of faults, as well as fault resolution, to the maximum possible extent. This would potentially facilitate an automated sensor allocation procedure. Principal component analysis (PCA), a multivariate data-driven technique, is used to capture the relationships in the data and to fit a hyper-plane to the data. The fault directions for different fault scenarios are obtained from the prediction errors, and fault isolation is then accomplished using new projections onto these fault directions. The effectiveness of an optimal sensor set versus a reduced set for fault detection and isolation is demonstrated using this technique. Among the variety of desalination technologies, multi-stage flash (MSF) processes contribute substantially to the world's desalination capacity. In this dissertation, both steady-state and dynamic simulation models of an MSF desalination plant are developed. The dynamic MSF model is coupled with a previously developed International Reactor Innovative and Secure (IRIS) model in the SIMULINK environment. The developed sensor placement design and fault diagnostic methods are illustrated with application to the coupled nuclear desalination system. The results demonstrate the effectiveness of the newly developed integrated approach to performance monitoring and fault diagnosis with optimized sensor placement for large industrial systems.
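
    The PCA-based isolation scheme described above can be sketched in a few lines of NumPy: fit a hyper-plane to normal operating data, take the prediction errors as residuals, learn one unit fault direction per simulated fault scenario, and attribute a new fault to the direction its residual aligns with best. The class and method names here are illustrative, not the dissertation's code.

```python
import numpy as np

class PCAFaultIsolator:
    """Minimal sketch: fit a hyper-plane to normal data, take prediction
    errors as residuals, learn one unit fault direction per scenario,
    and isolate a new fault by projecting its residual onto them."""

    def __init__(self, n_components):
        self.n_components = n_components

    def fit(self, X_normal):
        self.mean_ = X_normal.mean(axis=0)
        self.std_ = X_normal.std(axis=0) + 1e-12
        Z = (X_normal - self.mean_) / self.std_
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        P = Vt[:self.n_components].T                  # loadings spanning the hyper-plane
        self.residual_proj_ = np.eye(Z.shape[1]) - P @ P.T
        return self

    def residuals(self, X):
        Z = (X - self.mean_) / self.std_
        return Z @ self.residual_proj_                # prediction-error part of the data

    def fault_direction(self, X_fault):
        r = self.residuals(X_fault).mean(axis=0)
        return r / np.linalg.norm(r)                  # unit fault direction

    def isolate(self, x, directions):
        r = self.residuals(x[None, :])[0]
        return int(np.argmax([abs(r @ d) for d in directions]))
```

    Dropping sensors shrinks the residual space and makes distinct fault directions collapse onto one another, which is exactly the effect the optimal-versus-reduced sensor set comparison measures.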

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Logging Statements Analysis and Automation in Software Systems with Data Mining and Machine Learning Techniques

    Log files are widely used to record runtime information of software systems, such as the timestamp of an event, the name or ID of the component that generated the log, and parts of the state of a task execution. The rich information in logs enables system developers (and operators) to monitor the runtime behavior of their systems and to track down system problems in development and production settings. With the ever-increasing scale and complexity of modern computing systems, the volume of logs is rapidly growing; for example, eBay reported in 2018 that its servers generate logs at a rate on the order of several petabytes per day [17]. Therefore, the traditional way of analyzing logs, which relies largely on manual inspection (e.g., searching for error/warning keywords or grep), has become an inefficient, labor-intensive, and error-prone task. The growth of logs has spurred the emergence of automated tools and approaches for log mining and analysis. In parallel, the embedding of logging statements in source code is a manual and error-prone task, and developers might forget to add a logging statement in the software's source code. To address this logging challenge, many efforts have aimed to automate logging statements in the source code, and many tools have been proposed to perform large-scale log file analysis using machine learning and data mining techniques. However, the current logging process is still mostly manual, and thus proper placement and content of logging statements remain challenging. To overcome these challenges, methods that aim to automate log placement and content prediction, i.e., 'where and what to log', are of high interest, as are approaches that can automatically mine and extract insight from large-scale logs. Thus, in this research, we focus on predicting log statements, and for this purpose we perform an experimental study on open-source Java projects. We introduce a log-aware code-clone detection method to predict the location and description of logging statements, and we incorporate natural language processing (NLP) and deep learning methods to further enhance the performance of log statement description prediction. We also introduce deep-learning-based approaches for automated analysis of software logs. In particular, we analyze execution logs and extract natural language characteristics of logs to enable the application of natural language models to automated log file analysis. We then propose automated tools for analyzing log files and measuring the information gain from logs for different log analysis tasks such as anomaly detection. Finally, we continue our NLP-enabled approach by leveraging state-of-the-art language models, i.e., Transformers, to perform automated log parsing.
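
    As context for the log parsing step, the sketch below shows the usual first stage of automated log analysis: masking variable fields in raw lines to recover the constant templates, which can then be fed to NLP or Transformer models. This is a generic illustration, not the thesis's exact pipeline; the mask patterns are illustrative.

```python
import re
from collections import Counter

# Mask variable fields (IPs, hex values, numbers) to expose the constant
# template of each log line. Order matters: mask IPs before bare numbers.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"0x[0-9a-fA-F]+"), "<HEX>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def to_template(line: str) -> str:
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line.strip()

logs = [
    "Connection from 10.0.0.7 port 514",
    "Connection from 10.0.0.9 port 8080",
    "Task 42 finished in 17 ms",
]
templates = Counter(to_template(line) for line in logs)
for tpl, count in templates.most_common():
    print(count, tpl)
# -> 2 Connection from <IP> port <NUM>
#    1 Task <NUM> finished in <NUM> ms
```

    Template frequencies and their deviations over time are one simple signal for downstream tasks such as anomaly detection, where learned parsers like the Transformer-based approach replace the hand-written masks.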