5,279 research outputs found

    Nonlinear data driven techniques for process monitoring

    The goal of this research is to develop process monitoring technology capable of taking advantage of the large stores of data accumulating in modern chemical plants. There is demand for new techniques for monitoring processes with non-linear topology and behavior, and this research presents a topology-preserving method for process monitoring using Self-Organizing Maps (SOM). The novel architecture presented adapts SOM to a full spectrum of process monitoring tasks including fault detection, fault identification, fault diagnosis, and soft sensing. The key innovation of the new technique is its use of multiple SOMs (MSOM) in the data modeling process, as well as a Gaussian Mixture Model (GMM) to model the probability density function of each class of data. For comparison, a linear process monitoring technique based on Principal Component Analysis (PCA) is also used to demonstrate the improvements SOM offers. Data for the computational experiments was generated using a simulation of the Tennessee Eastman process (TEP) created in Simulink by Ricker (1996). Previous studies focused on step changes from normal operation, but this work adds operating regimes with time-dependent dynamics not previously considered with a SOM. Results show that MSOM improves upon both linear PCA and the standard single-map SOM technique for fault diagnosis, and also shows a superior ability to isolate which variables in the data are responsible for the faulty condition. With respect to soft sensing, SOM and MSOM modeled the compositions equally well, showing that no information was lost in dividing the map representation of process data. Future research will attempt to validate the technique on a real chemical process.
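
    The MSOM idea lends itself to a compact illustration: train one small map per operating class and diagnose a new sample by the map that reconstructs it with the lowest quantization error. The sketch below is a minimal, self-contained version of that scheme in plain NumPy; the map size, training constants, and synthetic two-class data are illustrative assumptions, not the thesis implementation.

import numpy as np

def train_som(X, rows=6, cols=6, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small rectangular SOM with plain NumPy (illustrative settings)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(rows * cols, X.shape[1]))        # codebook vectors
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best matching unit
        lr = lr0 * np.exp(-t / iters)                     # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)               # shrinking neighbourhood
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)        # grid distance to the BMU
        h = np.exp(-d2 / (2 * sigma ** 2))                # neighbourhood kernel
        W += lr * h[:, None] * (x - W)
    return W

def quantization_error(W, X):
    """Mean distance from each sample to its nearest codebook vector."""
    d = np.sqrt(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).mean()

# One map per class (the MSOM idea): normal operation plus each fault regime.
rng = np.random.default_rng(1)
classes = {
    "normal":  rng.normal(0.0, 1.0, size=(300, 8)),
    "fault_1": rng.normal(2.0, 1.0, size=(300, 8)),
}
maps = {name: train_som(X) for name, X in classes.items()}

# Diagnose a new sample: assign it to the class whose map represents it best.
x_new = rng.normal(2.0, 1.0, size=(1, 8))
scores = {name: quantization_error(W, x_new) for name, W in maps.items()}
print("diagnosed as:", min(scores, key=scores.get))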

    Design of a Multi-Agent System for Process Monitoring and Supervision

    New process monitoring and control strategies are being developed every day, together with process automation strategies, to satisfy the needs of diverse industries. New automation systems are being developed with greater capabilities for addressing safety and reliability issues. Fault detection and diagnosis, and process monitoring and supervision, are some of the new and promising growth areas in process control. With the development of powerful computer systems, the extensive process data collected from all over the plant can be stored, manipulated, and put to use in an efficient manner. This development has allowed data-driven process monitoring approaches to emerge alongside model-based approaches, in which a quantitative model must be available as a priori knowledge. Therefore, the objective of this research is to lay out the basis for designing and implementing a multi-agent system for process monitoring and supervision. The agent-based programming approach adopted in our research provides a number of advantages, such as flexibility, adaptability, and ease of use. In its current state, the designed multi-agent system architecture has three functionalities ready for use in process monitoring and supervision. It allows: a) easy manipulation and preprocessing of plant data, both for training and online application; b) detection of process faults; and c) diagnosis of the source of the fault. In addition, a number of alternative data-driven techniques were implemented to perform the monitoring and supervision tasks: Principal Component Analysis (PCA), Fisher Discriminant Analysis (FDA), and Self-Organizing Maps (SOM). The system designed in this research project is generic in the sense that it can be used for multiple applications. The process monitoring system was successfully tested on the Tennessee Eastman Process application, and fault detection and diagnosis rates were compared amongst PCA, FDA, and SOM for different faults using the proposed framework.
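
    As a rough illustration of the kind of data-driven detection an agent in such a framework might run, the sketch below computes the standard PCA monitoring statistics (Hotelling's T^2 and the squared prediction error, SPE) with scikit-learn; the synthetic data, number of retained components, and 99th-percentile control limits are assumptions for illustration rather than the paper's configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 12))                                # normal-operation data (12 variables)
X_test = rng.normal(size=(50, 12)) + np.array([0.0] * 11 + [4.0])   # step fault on one variable

scaler = StandardScaler().fit(X_train)
Z_train = scaler.transform(X_train)
pca = PCA(n_components=5).fit(Z_train)

def t2_spe(pca, Z):
    """Hotelling's T^2 and squared prediction error (SPE / Q) for each sample."""
    T = pca.transform(Z)                                  # scores in the retained subspace
    t2 = (T ** 2 / pca.explained_variance_).sum(axis=1)
    residual = Z - pca.inverse_transform(T)               # part not captured by the model
    spe = (residual ** 2).sum(axis=1)
    return t2, spe

t2_train, spe_train = t2_spe(pca, Z_train)
t2_lim = np.percentile(t2_train, 99)                      # empirical 99% control limits
spe_lim = np.percentile(spe_train, 99)

t2, spe = t2_spe(pca, scaler.transform(X_test))
flagged = (t2 > t2_lim) | (spe > spe_lim)
print(f"flagged {int(flagged.sum())} of {len(X_test)} test samples as faulty")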

    Damage identification in structural health monitoring: a brief review from its implementation to the use of data-driven applications

    The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors; the data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance of the approach depends on the accuracy of the model and the information used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they afford the ability to analyze data acquired from sensors and to provide a real-time solution for decision making; however, these approaches require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health-monitoring applications. This review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health-monitoring system. This review also includes information on the types of sensors used, as well as on the development of data-driven algorithms for damage identification.
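
    A minimal example of the data-driven pipeline discussed in such reviews, covering the first two levels (detection and a crude localization), is sketched below using a Mahalanobis-distance novelty index learned from healthy-state features; the sensor count, threshold, and synthetic data are assumptions made for illustration, not an implementation from the reviewed literature.

import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(size=(400, 6))                      # features from 6 sensors, healthy state
mu = healthy.mean(axis=0)
std = healthy.std(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def damage_index(x):
    """Squared Mahalanobis distance of a feature vector from the healthy baseline."""
    d = x - mu
    return float(d @ cov_inv @ d)

threshold = np.percentile([damage_index(x) for x in healthy], 99)

test = rng.normal(size=6)
test[2] += 5.0                                           # simulated damage near sensor index 2
if damage_index(test) > threshold:                       # level 1: detection
    sensor = int(np.argmax(np.abs((test - mu) / std)))   # level 2: crude localization
    print(f"damage detected; largest deviation at sensor index {sensor}")
else:
    print("no damage detected")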

    A Framework for Learning System for Complex Industrial Processes

    Due to intense price-based global competition, rising operating costs, rapidly changing economic conditions, and stringent environmental regulations, modern process and energy industries are confronting unprecedented challenges to maintain profitability. Therefore, improving product quality and process efficiency while reducing production cost and plant downtime are matters of utmost importance. These objectives are partly conflicting, and to satisfy them, optimal operation and control of the plant components are essential. Use of optimization not only improves the control and monitoring of assets, but also offers better coordination among different assets. Thus, it can lead to extensive savings in energy and resource consumption and, consequently, a reduction in operational costs, by providing better control, diagnostics, and decision support. This is one of the main driving forces behind developing new methods, tools, and frameworks. In this chapter, a generic learning system architecture is presented that can be retrofitted to the existing automation platforms of different industrial plants. The architecture offers flexibility and modularity, so that relevant functionalities can be selected for a specific plant on an as-needed basis. Various functionalities such as soft sensors, output prediction, model adaptation, control optimization, anomaly detection, diagnostics, and decision support are discussed in detail.
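
    The modularity described above can be pictured as pluggable functionality components behind a common interface, selected per plant and run on top of the existing automation platform. The sketch below is one possible way to express that idea; the class names, method signatures, and placeholder rules are assumptions for illustration, not the chapter's actual interfaces.

from abc import ABC, abstractmethod

class Functionality(ABC):
    """One learning-system capability (soft sensor, anomaly detector, ...)."""
    @abstractmethod
    def update(self, sample: dict) -> dict:
        """Consume one plant sample and return this module's outputs."""

class SoftSensor(Functionality):
    def update(self, sample):
        # Placeholder inference: estimate an unmeasured quality variable.
        return {"quality_estimate": 0.8 * sample["temperature"] + 0.1 * sample["flow"]}

class AnomalyDetector(Functionality):
    def update(self, sample):
        # Placeholder rule standing in for a learned anomaly model.
        return {"anomaly": sample["pressure"] > 5.0}

class LearningSystem:
    """Retrofit layer: the selected modules run on top of the existing automation platform."""
    def __init__(self, modules):
        self.modules = modules

    def process(self, sample):
        results = {}
        for module in self.modules:
            results.update(module.update(sample))
        return results

# Select only the functionalities a specific plant needs, on an as-needed basis.
system = LearningSystem([SoftSensor(), AnomalyDetector()])
print(system.process({"temperature": 350.0, "flow": 1.2, "pressure": 4.7}))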

    A collaborative, multi-agent based methodology for abnormal events management

    Ph.D. thesis (Doctor of Philosophy)

    Process Analysis for Material Flow Systems

    This paper describes a generic approach for the analysis of the internal behavior of logistic systems based on event logs. The approach is demonstrated with event data from the simulation model of an automated material handling system (MHS) in a manufacturing company. The purpose of the analysis is the prospective identification of design and operation problems and their causes. As a result, the simulation model developer obtains condensed and ranked information on events. These events describe the internal system behavior, with anomalies pointing at either possible problems or capacity reserves.
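
    A minimal sketch of the condense-and-rank step is given below, assuming an event log with event type, station, and duration columns; the column names, sample records, and the ranking rule (total time spent in blocking events) are illustrative assumptions rather than the paper's implementation.

import pandas as pd

log = pd.DataFrame([
    {"event": "transport", "station": "conveyor_1", "duration_s": 12.0},
    {"event": "blocked",   "station": "conveyor_1", "duration_s": 48.0},
    {"event": "transport", "station": "lift_2",     "duration_s": 9.0},
    {"event": "blocked",   "station": "lift_2",     "duration_s": 5.0},
    {"event": "blocked",   "station": "conveyor_1", "duration_s": 62.0},
])

# Condense the raw log into per-station, per-event statistics.
summary = (log.groupby(["station", "event"])["duration_s"]
              .agg(count="count", total="sum", mean="mean")
              .reset_index())

# Rank by total time spent in non-productive ("blocked") events: the top entries
# point at possible design problems or hidden capacity reserves.
ranked = summary[summary["event"] == "blocked"].sort_values("total", ascending=False)
print(ranked)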

    Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these routes, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) measurements, and 3D scanner systems. A total of 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and floor detection errors. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using lightweight wearable sensors and without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
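
    The accuracy score described above can be computed as sketched below: the third quartile of a combined error that adds a fixed penalty for each wrongly detected floor. The 15 m per-floor penalty follows the EvAAL convention and, like the sample coordinates, is an assumption here rather than a value taken from the abstract.

import numpy as np

def accuracy_score(est_xy, true_xy, est_floor, true_floor, floor_penalty_m=15.0):
    """Third quartile of horizontal error plus a fixed penalty per floor of error."""
    horiz = np.linalg.norm(np.asarray(est_xy) - np.asarray(true_xy), axis=1)
    floor_err = np.abs(np.asarray(est_floor) - np.asarray(true_floor))
    combined = horiz + floor_penalty_m * floor_err
    return float(np.percentile(combined, 75))

est_xy  = [(1.0, 2.0), (10.0, 4.0), (3.0, 3.0), (8.0, 1.0)]
true_xy = [(1.5, 2.0), (9.0, 4.5),  (3.0, 2.0), (7.0, 1.0)]
est_floor, true_floor = [0, 1, 0, 2], [0, 1, 0, 1]

print(f"accuracy score: {accuracy_score(est_xy, true_xy, est_floor, true_floor):.2f} m")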

    Machine learning and its applications in reliability analysis systems

    In this thesis, we explore some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML in improving the performance of RAs, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. Within symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode and effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs are discussed. Based on the results of our survey, we suggest that currently the best design for RAs is to embed a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By embedding a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
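
    As a toy illustration of the 'learning element' idea, the sketch below shows a rule base for fault location that acquires new symptom-to-fault rules from observed examples and flags inconsistent ones; all names and rules are hypothetical and are not taken from the thesis.

class LearningElement:
    """Toy knowledge-acquisition component with a simple inconsistency check."""

    def __init__(self):
        self.rules = {}                                  # symptom -> suspected fault

    def acquire(self, symptom, fault):
        """Add a rule learned from an observed example, rejecting contradictions."""
        if symptom in self.rules and self.rules[symptom] != fault:
            print(f"inconsistency: '{symptom}' already maps to '{self.rules[symptom]}'")
        else:
            self.rules[symptom] = fault

    def locate(self, symptom):
        """Real-time fault location lookup (returns 'unknown' if no rule applies)."""
        return self.rules.get(symptom, "unknown")

le = LearningElement()
le.acquire("high vibration", "bearing wear")
le.acquire("high vibration", "shaft misalignment")       # triggers the inconsistency check
print(le.locate("high vibration"))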