
    AI-enabled modeling and monitoring of data-rich advanced manufacturing systems

    The infrastructure of cyber-physical systems (CPS) is based on a meta-concept of cybermanufacturing systems (CMS) that synchronizes the Industrial Internet of Things (IIoT), cloud computing, Industrial Control Systems (ICSs), and Big Data analytics in manufacturing operations. Artificial Intelligence (AI) can be incorporated to make intelligent decisions in the day-to-day operations of CMS. Cyberattack surfaces in AI-based cybermanufacturing operations pose significant challenges, including unauthorized modification of systems, loss of historical data, destructive malware, and software malfunction. However, a cybersecurity framework can be implemented to prevent unauthorized access, theft, damage, or other harmful attacks on electronic equipment, networks, and sensitive data. The framework's five main steps, divided into procedures and countermeasure efforts, are to identify, protect, detect, respond, and recover. Given the major challenges in AI-enabled cybermanufacturing systems, three research objectives are proposed in this dissertation, each incorporating the cybersecurity framework. The first research objective addresses in-situ authentication of the additive manufacturing (AM) process using high-volume video streaming data. A side-channel monitoring approach based on an in-situ optical imaging system is established, and a tensor-based layer-wise texture descriptor is constructed to describe the observed printing path. Multilinear principal component analysis (MPCA) is then leveraged to reduce the dimensionality of the tensor-based texture descriptor, yielding low-dimensional features for detecting attack-induced alterations. The second research objective addresses high-volume data-stream problems in multi-channel sensor fusion for diverse bearing fault diagnosis. It proposes a new multi-channel sensor fusion method that integrates acoustic and vibration signals with different sampling rates and limited training data. The frequency-domain tensor is decomposed by MPCA, and the resulting low-dimensional process features feed a neural network classifier for diverse bearing fault diagnosis. Building on the second method, the third research objective targets recovery of multi-channel sensing signals when a substantial amount of data is missing due to sensor malfunction or transmission issues. This study leverages a fully Bayesian CANDECOMP/PARAFAC (FBCP) factorization method that captures the multi-linear interactions (channels × signals) among latent factors of the sensor signals and imputes missing entries based on the observed signals
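    The layer-wise feature extraction shared by the first two objectives rests on projecting each sample tensor onto per-mode principal subspaces. Below is a minimal sketch of that idea; the tensor sizes, ranks, and single-pass SVD per unfolding are illustrative assumptions rather than the dissertation's exact MPCA procedure, which typically iterates the per-mode projections to convergence.

```python
import numpy as np

def mpca_project(tensors, ranks):
    """Project a batch of 2-mode sample tensors onto per-mode principal
    subspaces and return flattened low-dimensional features.

    tensors: array of shape (n_samples, I1, I2); ranks: (r1, r2).
    """
    n, i1, i2 = tensors.shape
    # Mode-1 unfolding: mode-1 fibres of all samples side by side
    m1 = tensors.transpose(1, 0, 2).reshape(i1, -1)
    # Mode-2 unfolding
    m2 = tensors.transpose(2, 0, 1).reshape(i2, -1)
    # Leading left singular vectors give the per-mode projection bases
    u1 = np.linalg.svd(m1, full_matrices=False)[0][:, :ranks[0]]
    u2 = np.linalg.svd(m2, full_matrices=False)[0][:, :ranks[1]]
    # Multilinear projection of every sample onto both bases
    core = np.einsum('nij,ia,jb->nab', tensors, u1, u2)
    return core.reshape(n, -1)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 16, 12))       # toy batch of texture descriptors
feats = mpca_project(X, ranks=(4, 3))
print(feats.shape)                      # (50, 12)
```

The resulting low-dimensional features could then feed a downstream alteration detector or fault classifier, as in the first two objectives.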

    Advanced Fault Diagnosis and Health Monitoring Techniques for Complex Engineering Systems

    Over the last few decades, the field of fault diagnostics and structural health management has experienced rapid development. The reliability, availability, and safety of engineering systems can be significantly improved by implementing multifaceted strategies of in situ diagnostics and prognostics. With the development of intelligent algorithms, smart sensors, and advanced data collection and modeling techniques, this challenging research area has been receiving ever-increasing attention in both fundamental research and engineering applications, strongly supported by extensive applications ranging from the aerospace, automotive, transport, manufacturing, and processing industries to the defense and infrastructure sectors

    Monitoring, fault detection and estimation in processes using multivariate statistical

    Multivariate statistical techniques are among the most widely used approaches in data-driven monitoring and fault detection schemes for industrial processes. Concretely, principal component analysis (PCA) has been applied to many complex systems with good results. Nevertheless, PCA-based fault detection and isolation approaches present some problems in the monitoring of processes with different operating modes and in identifying the fault root in the fault isolation phase. PCA uses historical databases to build empirical models that describe the system's trend, extracting useful information from the historical data based on the relationships between the measured variables. When a fault appears, it can change the captured covariance structure, and this situation can be detected using different control charts. Another widely used multivariate statistical technique is partial least squares regression (PLS). PLS has also been applied as a data-driven fault detection and isolation method, and this type of method has also been used for estimation in soft sensor design; PLS is a regression method based on principal components. The main goal of this Thesis concerns monitoring, fault detection and isolation, and estimation in processes based on multivariate statistical techniques such as principal component analysis and partial least squares. The main contributions of this work can be arranged in the three following topics:
    - The first topic is related to the monitoring of continuous processes. When a process operates in several operating modes, the classical PCA approach is not the most suitable method. In this work, an approach for monitoring the whole behaviour of a process, taking into account the different operating modes and transient states, is presented. The monitoring of transient states and start-ups is studied in detail. Continuous processes which do not operate in a strict steady state are monitored in a similar way to the transient states.
    - The second topic is related to the combination of model-based structural model decomposition techniques and principal component analysis. Concretely, the possible conflicts (PCs) approach is applied. PCs compute subsystems within a system model as minimal subsets of equations with an analytical redundancy property to detect and isolate faults. The residuals obtained with this method can be useful to perform a complete fault isolation procedure, and they are monitored using a PCA model in order to simplify and improve the fault detection task.
    - The third topic addresses the estimation task in soft sensor design. The soft sensors of a real process are studied and improved using neural networks and multivariate statistical techniques. A dry substance (DS) content sensor based on indirect measurements is replaced by a neural network-based sensor, which takes more process variables into account and obtains more robust and accurate estimations. This sensor can be further improved with a PCA layer at the network input to reduce the number of network inputs. A PLS-based sensor is also designed, which likewise improves on the sensor based on indirect measurements.
    Finally, the different approaches developed in this work have been applied to several process plants: a two-communicated-tanks system, the evaporation section of a sugar factory, and a reverse osmosis desalination plant.
    Departamento de Ingenieria de Sistemas y Automátic
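    The PCA monitoring scheme described above (building an empirical model from historical data and flagging changes in the captured covariance structure with control charts) can be illustrated with a minimal sketch. The simulated process, the component count, and the empirical 99th-percentile control limits are assumptions for illustration; analytical T² and SPE (Q) limits are more usual in practice.

```python
import numpy as np

def fit_pca_monitor(X_train, n_comp):
    """Fit a PCA model plus empirical T^2 / SPE control limits (99th pct)."""
    mu = X_train.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_comp].T                              # loading matrix
    lam = (s[:n_comp] ** 2) / (len(X_train) - 1)   # component variances

    def stats(X):
        Xc = X - mu
        T = Xc @ P                                 # scores
        t2 = np.sum(T ** 2 / lam, axis=1)          # Hotelling's T^2
        resid = Xc - T @ P.T
        spe = np.sum(resid ** 2, axis=1)           # squared prediction error
        return t2, spe

    t2_tr, spe_tr = stats(X_train)
    return stats, (np.percentile(t2_tr, 99), np.percentile(spe_tr, 99))

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))                 # 2 underlying process drivers
X = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(500, 6))
stats, (t2_lim, spe_lim) = fit_pca_monitor(X, n_comp=2)

# A sensor bias fault breaks the learned covariance structure and
# inflates the residual (SPE) statistic beyond its control limit.
faulty = X[:5].copy()
faulty[:, 0] += 3.0
t2, spe = stats(faulty)
print(bool(spe.mean() > spe_lim))
```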

    Validating a neural network-based online adaptive system

    Neural networks are popular models used for online adaptation to accommodate system faults and recover from environmental changes in real-time automation and control applications. However, this adaptivity limits the applicability of conventional verification and validation (V&V) techniques to such systems. We investigated the V&V of neural network-based online adaptive systems and developed a novel validation approach consisting of two important methods: (1) an independent novelty detector at the system input layer detects failure conditions and tracks abnormal events/data that may cause unstable learning behavior; (2) at the system output layer, a validity check on the network predictions validates its accommodation performance. Our research focuses on the Intelligent Flight Control System (IFCS) for the NASA F-15 aircraft as an example of an online adaptive control application. We utilized Support Vector Data Description (SVDD), a one-class classifier, to examine the data entering the adaptive component and detect potential failures. We developed a decompose-and-combine strategy that drastically reduces its computational cost, from O(n^3) down to O(n^(3/2) log n), making the novelty detector feasible in real time. We define a confidence measure, the validity index, to validate the predictions of the Dynamic Cell Structure (DCS) network in IFCS. Statistical information is collected during adaptation, and the validity index is computed to reflect the trustworthiness associated with each neural network output; its computation in DCS is straightforward and efficient. Through experimentation with IFCS, we demonstrate that: (1) the SVDD tool detects system failures accurately and provides validation inferences in real time; (2) the validity index effectively indicates poor fitting in regions characterized by sparse data and/or inadequate learning. The developed methods can be integrated with available online monitoring tools and further generalized into a complete validation framework for neural network-based online adaptive systems
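    The input-layer novelty detection idea can be illustrated with a deliberately simplified stand-in: instead of SVDD's minimum enclosing sphere in kernel space, the sketch below uses a plain hypersphere around the data centroid with a radius covering 95% of the training data. The training distribution and quantile are assumptions for illustration, not the dissertation's SVDD formulation.

```python
import numpy as np

class SphereNovelty:
    """Minimal hypersphere novelty detector: a simplified stand-in for SVDD.

    Real SVDD solves for the smallest sphere (in kernel feature space)
    enclosing the target class; here we just take the data centroid and a
    radius set to the 95th percentile of training distances.
    """
    def fit(self, X, quantile=95.0):
        self.center_ = X.mean(axis=0)
        d = np.linalg.norm(X - self.center_, axis=1)
        self.radius_ = np.percentile(d, quantile)
        return self

    def is_novel(self, X):
        # Points outside the sphere are flagged as abnormal inputs
        return np.linalg.norm(X - self.center_, axis=1) > self.radius_

rng = np.random.default_rng(2)
train = rng.normal(0.0, 1.0, size=(1000, 3))   # toy "nominal" sensor data
det = SphereNovelty().fit(train)
far_points = np.full((4, 3), 6.0)              # clearly abnormal inputs
print(det.is_novel(far_points))
```

Inputs flagged as novel would be withheld from (or down-weighted in) the adaptive component, mirroring the role of the detector at the system input layer.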

    Fault Diagnosis Of Sensor And Actuator Faults In Multi-Zone Hvac Systems

    Globally, the buildings sector accounts for 30% of energy consumption and more than 55% of electricity demand. Specifically, the Heating, Ventilation, and Air Conditioning (HVAC) system is the most extensively operated component, responsible alone for 40% of final building energy usage. HVAC systems are used to provide healthy and comfortable indoor conditions; their main objective is to maintain the thermal comfort of occupants with minimum energy usage. HVAC systems include a considerable number of sensors, controlled actuators, and other components, which are at risk of malfunction or failure, resulting in reduced efficiency, potential interference with the execution of supervision schemes, and equipment deterioration. Hence, Fault Diagnosis (FD) of HVAC systems is essential to improve their reliability, efficiency, and performance, and to enable preventive maintenance. In this thesis work, two neural network-based methods are proposed for diagnosing sensor and actuator faults in a 3-zone HVAC system. For sensor faults, an online semi-supervised sensor data validation and fault diagnosis method using an Auto-Associative Neural Network (AANN) is developed. The method implements Nonlinear Principal Component Analysis (NPCA) with a Back-Propagation Neural Network (BPNN) and demonstrates notable capability in sensor fault and inaccuracy correction, measurement noise reduction, missing sensor data replacement, and both single and multiple sensor fault diagnosis. In addition, a novel online supervised multi-model approach for actuator fault diagnosis using Convolutional Neural Networks (CNNs) is developed for single actuator faults. It is based on a data transformation in which the 1-dimensional data are configured into a 2-dimensional representation without the use of advanced signal processing techniques.
The CNN-based actuator fault diagnosis approach demonstrates improved performance capability compared with the commonly used Machine Learning-based algorithms (i.e., Support Vector Machine and standard Neural Networks). The presented schemes are compared with other commonly used HVAC fault diagnosis methods for benchmarking and they are proven to be superior, effective, accurate, and reliable. The proposed approaches can be applied to large-scale buildings with additional zones
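    The 1-dimensional to 2-dimensional transformation feeding the CNN can be illustrated by folding consecutive signal segments into the rows of an image-like array. The segment length, the min-max scaling, and the toy signal below are assumptions for illustration; the thesis does not specify its exact configuration here.

```python
import numpy as np

def to_image(signal, rows):
    """Fold a 1-D sensor stream into a 2-D array by stacking consecutive
    segments, one per row, so a CNN can exploit local structure in both
    directions. The segment count (`rows`) is an assumed design choice."""
    cols = len(signal) // rows
    img = np.asarray(signal[:rows * cols], dtype=float).reshape(rows, cols)
    # Per-image min-max scaling to [0, 1], a common step before a CNN
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return img

x = np.sin(np.linspace(0, 20 * np.pi, 1024))   # toy actuator signal
img = to_image(x, rows=32)
print(img.shape)                               # (32, 32)
```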

    Earthquake Early Warning System (EEWs) for the New Madrid Seismic Zone

    Part 1: Research in the last decade on Earthquake Early Warning Systems (EEWSs) has undergone rapid development in terms of theoretical and methodological advances in real-time data analysis, improved telemetry, and computer technology, and such systems are becoming a useful tool for practical real-time seismic hazard mitigation. The main focus of this study is a feasibility study of an EEWS for the New Madrid Seismic Zone (NMSZ) from the standpoint of source location; magnitude determination is addressed in a separate paper. The NMSZ covers a wide area with several heavily populated cities, vital infrastructure, and facilities located within a radius of less than 70 km from the epicenters of the 1811-1812 earthquakes. One of the challenges associated with the NMSZ is that while low to moderate levels of seismic activity are common, larger earthquakes are rare (i.e., there are no instrumentally recorded data for earthquakes with magnitudes greater than M5.5 in the NMSZ). We also recognize that it may not be realistic to provide early warning for all possible sources, as is done on the U.S. west coast, and we therefore focus on a specific source zone. We examine the stations within the NMSZ in order to answer the question "What changes should be applied to the NMSZ network to make it suitable for earthquake early warning (EEW)?" We also explore needed changes to the Advanced National Seismic System (ANSS) Earthquake Monitoring System Real Time (AQMS RT) data acquisition system to make it useful for EEW. Our results show that EEW is feasible, though several technical challenges remain in incorporating its use with the present network.
    Part 2: The increasing vulnerability of metropolitan areas within stable continental regions (SCR), such as Memphis, TN and St. Louis, MO near the New Madrid Seismic Zone (NMSZ), to earthquakes, and the very low probability level at which short-term earthquake forecasting is possible, make an earthquake early warning system (EEWS) a viable alternative for effective real-time risk reduction in these cities. In this study, we explore practical approaches to earthquake early warning (EEW) and test the adaptability and potential of the real-time monitoring system in the NMSZ. We determine empirical relations based on amplitude and frequency magnitude proxies from the initial four seconds of the P-waveform records available from the Cooperative New Madrid Seismic Network (CNMSN) database for magnitudes M > 2.1. The amplitude-based proxies include low-pass filtered peak displacement (Pd), peak velocity (Pv), and the integral of the velocity squared (IV2), whereas the frequency-based proxies include the predominant period (τp), characteristic period (τc), and log average period (τlog). Very few studies have considered areas with lower-magnitude events. With an active EEW system in the NMSZ, damage resulting from a catastrophic event, as witnessed in 1811-1812, may be mitigated in real-time
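    The amplitude-based proxies have simple definitions that can be sketched directly from a P-wave velocity window. The sampling rate, the toy velocity trace, and the omission of the low-pass filtering step are assumptions for illustration.

```python
import numpy as np

def amplitude_proxies(velocity, dt):
    """Compute the three amplitude-based proxies from a P-wave velocity
    window: peak displacement Pd, peak velocity Pv, and the integral of
    the velocity squared IV2. Displacement is obtained here by crude
    numeric integration; in practice the records are low-pass filtered
    first (omitted in this sketch)."""
    disp = np.cumsum(velocity) * dt          # integrate velocity -> displacement
    pd_ = np.max(np.abs(disp))               # peak displacement Pd
    pv = np.max(np.abs(velocity))            # peak velocity Pv
    iv2 = np.sum(velocity ** 2) * dt         # integral of velocity squared IV2
    return pd_, pv, iv2

fs = 100.0                                   # 100 Hz sampling (assumed)
t = np.arange(0, 4.0, 1.0 / fs)              # first 4 s of the P wave
vel = 0.01 * np.sin(2 * np.pi * 2.0 * t)     # toy 2 Hz velocity trace, m/s
pd_, pv, iv2 = amplitude_proxies(vel, 1.0 / fs)
print(round(pv, 4))                          # 0.01
```

Empirical magnitude relations are then fit by regressing event magnitude on the logarithms of such proxies.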

    Optimization for software release and crash

    Software testing is a process to detect faults in the completeness and quality of developed computer software. Testing is a key process for assuring quality by identifying defects in software, and possibly fixing them, before it is delivered to end-users. A major decision during software testing is whether to continue testing and eventually release the software, or to stop the test and 'crash' it. Such a decision needs to optimally balance the tradeoff between the cost of development and the reliability of the software. In this paper, a new optimal strategy is developed based on a conditional non-homogeneous Poisson process (Conditional-NHPP) on a continuous time horizon to determine the optimal time to release or crash the software
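    The cost-reliability tradeoff behind the release decision can be illustrated with the classical Goel-Okumoto NHPP mean-value function, a simpler stand-in for the Conditional-NHPP strategy developed in the paper; the fault count, detection rate, and cost constants below are assumed for illustration.

```python
import numpy as np

# Goel-Okumoto NHPP: expected faults found by time t is m(t) = a(1 - e^{-bt}),
# with a = expected total faults and b = per-fault detection rate (assumed).
a, b = 120.0, 0.15
c_fix_test, c_fix_field, c_test_per_day = 1.0, 8.0, 0.5   # assumed costs

def mean_faults(t):
    return a * (1.0 - np.exp(-b * t))

def total_cost(t):
    found = mean_faults(t)
    return (c_fix_test * found               # fixing faults during testing
            + c_fix_field * (a - found)      # dearer fixes after release
            + c_test_per_day * t)            # cost of continued testing

# Grid search for the release time minimizing expected total cost
T = np.linspace(0.0, 100.0, 10001)
t_star = T[np.argmin(total_cost(T))]
print(round(t_star, 1))
```

Testing longer than t_star costs more than the field failures it prevents; stopping earlier ships too many latent faults.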

    Machining centre performance monitoring with calibrated artefact probing

    Maintaining high levels of geometric accuracy in five-axis machining centres is of critical importance to many industries and applications. Numerous methods for error identification have been developed in both academia and industry; one commonly applied technique is artefact probing, which can reveal inherent system errors at minimal cost and does not require high skill levels to perform. The primary focus of popular commercial solutions is on confirming machine capability to produce accurate workpieces, with the potential for short-term trend analysis and fault diagnosis through interpretation of the results by an experienced user. This paper considers expanding the artefact probing method into a performance monitoring system, giving both the onsite maintenance engineer and the visiting specialist engineer more accessible information and more effective means of forming insight. A technique for constructing a data-driven tolerance threshold is introduced, describing the normal operating condition and helping protect against unwarranted settings induced by human error. A multifunctional graphical element is developed to present the data trends with tolerance-threshold integration to maintain relevant performance context, together with an automated event detector to highlight areas of interest or concern. The methods were developed on a simulated demonstration dataset, then applied without modification to three case studies on data acquired from currently operating industrial machining centres. The data-driven tolerance threshold and event detector methods were shown to be effective at their respective tasks, and the merits of the multifunctional graphical display are presented and discussed
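    A data-driven tolerance threshold of the kind described can be sketched as a band derived from a baseline (normal-condition) probing run, with the event detector flagging excursions from that band; the k = 3 band width and the simulated probing data are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def tolerance_threshold(baseline, k=3.0):
    """Data-driven tolerance band from a baseline (normal-condition) run:
    mean +/- k standard deviations. k = 3 is an assumed choice."""
    mu, sigma = np.mean(baseline), np.std(baseline)
    return mu - k * sigma, mu + k * sigma

def detect_events(series, lo, hi):
    """Return indices where a probing metric leaves the tolerance band."""
    s = np.asarray(series)
    return np.flatnonzero((s < lo) | (s > hi))

rng = np.random.default_rng(3)
baseline = 0.010 + 0.001 * rng.normal(size=200)   # mm, simulated probe error
lo, hi = tolerance_threshold(baseline)

trend = 0.010 + 0.001 * rng.normal(size=100)      # ongoing monitoring data
trend[70:] += 0.008                               # simulated drift event
events = detect_events(trend, lo, hi)
print(bool(70 in events))
```

Because the band is fitted to measured baseline data rather than typed in by hand, it cannot be silently set too loose or too tight by human error.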