
    Data-driven prognostic method based on Bayesian approaches for direct remaining useful life prediction.

    Reliability of prognostics and health management (PHM) systems relies upon an accurate understanding of critical components' degradation process to predict the remaining useful life (RUL). Traditionally, the degradation process is represented in the form of data or expert models. Such models require extensive experimentation and verification that are not always feasible. Another approach, known as data-driven, builds up knowledge about the system degradation over time from component sensor data. Data-driven models, however, require that sufficient historical data have been collected. In this paper, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method finds variables that contain information about the degradation behavior using an unsupervised variable selection method. Different health indicators (HIs) are constructed from the selected variables, representing the degradation as a function of time, and saved in an offline database as reference models. In the online phase, the method finds the offline health indicator most similar to the online health indicator using a k-nearest neighbors (k-NN) classifier and uses it as a RUL predictor. The method finally estimates the degradation state using a discrete Bayesian filter. The method is verified using battery and turbofan engine degradation simulation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL for both applications.
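
    The online phase described above lends itself to a short illustration. The following is a minimal sketch, not the paper's implementation: it matches an online health indicator against an offline HI library with k-NN to read off an RUL estimate, then performs one update step of a discrete Bayesian filter over degradation states. All data, state definitions, and parameters are hypothetical.

```python
import numpy as np

def knn_rul(online_hi, offline_his, k=3):
    """Estimate RUL by matching the online health indicator (HI) against an
    offline library of run-to-failure HI curves (illustrative sketch only)."""
    t = len(online_hi)
    dists, ruls = [], []
    for hi in offline_his:
        if len(hi) <= t:
            continue  # reference curve failed before the current time; skip it
        dists.append(np.linalg.norm(np.asarray(hi[:t]) - np.asarray(online_hi)))
        ruls.append(len(hi) - t)  # remaining samples of the reference curve
    nearest = np.argsort(dists)[:k]
    return float(np.mean(np.asarray(ruls)[nearest]))  # mean RUL of k nearest

def discrete_bayes_update(belief, transition, likelihood):
    """One step of a discrete Bayesian filter over degradation states:
    predict with the transition matrix, then correct with the likelihood."""
    predicted = transition.T @ belief
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Toy usage with synthetic HI curves (hypothetical data).
rng = np.random.default_rng(0)
library = [np.linspace(1.0, 0.0, n) + 0.02 * rng.standard_normal(n)
           for n in (120, 150, 180)]
online = np.linspace(1.0, 0.6, 60) + 0.02 * rng.standard_normal(60)
print("Estimated RUL (samples):", knn_rul(online, library, k=2))

belief = np.array([0.7, 0.2, 0.1])            # healthy / degraded / near-failure
T = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])                # degradation only moves forward
likelihood = np.array([0.2, 0.7, 0.1])         # sensor evidence for each state
print("Posterior state belief:", discrete_bayes_update(belief, T, likelihood))
```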

    Failure Diagnosis and Prognosis of Safety Critical Systems: Applications in Aerospace Industries

    Many safety-critical systems such as aircraft, spacecraft, and large power plants are required to operate in a reliable and efficient working condition without any performance degradation. As a result, fault diagnosis and prognosis (FDP) is a research topic of great interest for these systems. FDP systems attempt to use historical and current data of a system, collected from various measurements, to detect faults, diagnose the types of possible failures, and predict and manage failures in advance. This thesis deals with FDP of safety-critical systems. For this purpose, two critical systems, a multifunctional spoiler (MFS) and a hydro-control valve system, are considered, and some challenging FDP issues are investigated. This research work consists of three general directions: monitoring, failure diagnosis, and prognosis. The proposed FDP methods are based on data-driven and model-based approaches. The main aim of the data-driven methods is to utilize measurement data from the system and forecast the remaining useful life (RUL) of the faulty components accurately and efficiently. In this regard, two different methods are developed. A modular FDP method based on a divide-and-conquer strategy is presented for the MFS system. The modular structure contains three components: 1) a fault diagnosis unit, 2) a failure parameter estimation unit, and 3) an RUL unit. The fault diagnosis unit identifies types of faults based on an integration of a neural network (NN) method and the discrete wavelet transform (DWT) technique. The failure parameter estimation unit observes the failure parameter via a distributed neural network. Afterward, the RUL of the system is predicted by an adaptive Bayesian method. In another work, an innovative data-driven FDP method is developed for hydro-control valve systems. The idea is to use redundancy in multi-sensor data and enhance the performance of the FDP system. Therefore, a combination of a feature selection method and a support vector machine (SVM) is applied to select proper sensors for monitoring of the hydro-valve system and to isolate types of fault. Then, an adaptive neuro-fuzzy inference system (ANFIS) is used to estimate the failure path. Similarly, an online Bayesian algorithm is implemented for forecasting the RUL. Model-based methods employ a high-fidelity physics-based model of a system for the prognosis task. In this thesis, a novel model-based approach based on an integrated extended Kalman filter (EKF) and Bayesian method is introduced for the MFS system. To monitor the MFS system, a residual estimation method using the EKF is performed to capture the progress of the failure. Later, a transformation is utilized to obtain a new measure to estimate the degradation path (DP). Moreover, the recursive Bayesian algorithm is invoked to predict the RUL. Finally, a relative accuracy (RA) measure is utilized to assess the performance of the proposed methods.
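
    As a rough illustration of the recursive Bayesian RUL step mentioned above, the sketch below (an assumption-laden toy, not the thesis implementation) maintains a grid posterior over an unknown degradation rate, updates it with noisy degradation-path measurements, and extrapolates to an assumed failure threshold to obtain an RUL estimate.

```python
import numpy as np

# Grid of candidate degradation rates and a uniform prior over them.
rates = np.linspace(0.001, 0.05, 200)
prior = np.ones_like(rates) / len(rates)
sigma = 0.05        # assumed measurement noise standard deviation
threshold = 1.0     # assumed failure threshold on the degradation path

def bayes_update(posterior, t, y):
    """Recursive Bayesian update of the rate posterior after observing
    degradation level y at time t, assuming a linear path d(t) = rate * t."""
    likelihood = np.exp(-0.5 * ((y - rates * t) / sigma) ** 2)
    posterior = posterior * likelihood
    return posterior / posterior.sum()

def rul_estimate(posterior, t):
    """Expected remaining time until the path crosses the failure threshold."""
    time_to_failure = threshold / rates - t
    return float(np.sum(posterior * np.clip(time_to_failure, 0.0, None)))

# Simulated measurements from a true rate of 0.02 (hypothetical data).
rng = np.random.default_rng(1)
posterior = prior
for t in range(1, 31):
    y = 0.02 * t + rng.normal(0.0, sigma)
    posterior = bayes_update(posterior, t, y)
print("RUL estimate at t = 30:", rul_estimate(posterior, 30.0))  # ~20 expected
```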

    Performance modelling with adaptive hidden Markov models and discriminatory processor sharing queues

    In modern computer systems, workload varies at different times and locations. It is important to model the performance of such systems via workload models that are both representative and efficient. For example, model-generated workloads represent realistic system behaviour, especially during peak times, when it is crucial to predict and address performance bottlenecks. In this thesis, we model performance, namely throughput and delay, using adaptive models and discrete queues. Hidden Markov models (HMMs) parsimoniously capture the correlation and burstiness of workloads with spatiotemporal characteristics. By adapting the batch training of standard HMMs to incremental learning, online HMMs act as benchmarks on workloads obtained from live systems (i.e. storage systems and financial markets) and reduce the time complexity of the Baum-Welch algorithm. Similarly, by extending HMM capabilities to train on multiple traces simultaneously, workloads of different types are modelled in parallel by a multi-input HMM. Typically, the HMM-generated traces verify the throughput and burstiness of the real data. Applications of adaptive HMMs include predicting user behaviour in social networks and performance-energy measurements in smartphone applications. Equally important is measuring system delay through response times. For example, workloads such as Internet traffic arriving at routers are affected by queueing delays. To meet quality-of-service needs, queueing delays must be minimised and, hence, it is important to model and predict such queueing delays in an efficient and cost-effective manner. Therefore, we propose a class of discrete, processor-sharing queues for approximating queueing delay as response time distributions, which represent service level agreements at specific spatiotemporal levels. We adapt discrete queues to model job arrivals with distributions given by a Markov-modulated Poisson process (MMPP) and served under discriminatory processor-sharing scheduling. Further, we propose a dynamic strategy of service allocation to minimise delays in UDP traffic flows whilst maximising a utility function.
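
    To make the queueing ingredients above concrete, here is a minimal sketch, under assumed parameters rather than the thesis's models: it generates a bursty arrival trace from a two-phase MMPP and splits service capacity among jobs under discriminatory processor sharing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-phase MMPP: a "quiet" and a "bursty" phase, each with its own Poisson
# arrival rate, switched by a Markov chain (all values are assumptions).
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])         # per-slot phase transition probabilities
rates = np.array([1.0, 10.0])        # mean arrivals per slot in each phase

def mmpp_trace(n_slots):
    phase, arrivals = 0, []
    for _ in range(n_slots):
        arrivals.append(rng.poisson(rates[phase]))
        phase = rng.choice(len(rates), p=P[phase])
    return np.array(arrivals)

trace = mmpp_trace(1000)
# An index of dispersion well above 1 reflects the burstiness that the
# HMM/MMPP workload models are meant to capture.
print("mean:", trace.mean(), "index of dispersion:", trace.var() / trace.mean())

def dps_rates(job_classes, weights, capacity=1.0):
    """Instantaneous service rate of each job in the system under
    discriminatory processor sharing: capacity split by class weights."""
    total = sum(weights[c] for c in job_classes)
    return [capacity * weights[c] / total for c in job_classes]

# Three jobs in the system: two of class 0 (weight 1) and one of class 1 (weight 2).
print("DPS service rates:", dps_rates([0, 0, 1], weights=[1.0, 2.0]))
```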

    An Application of Gaussian Processes for Analysis in Chemical Engineering

    Industry 4.0 is transforming the chemical engineering industry. With it, machine learning (ML) is exploding, and a large variety of complex algorithms are being developed. One particularly popular ML algorithm is the Gaussian Process (GP), which is a fully probabilistic, non-parametric, Bayesian model. As a black-box function, the GP encapsulates an engineering system in a cheaper framework known as a surrogate model. GP surrogate models can be confidently used to investigate chemical engineering scenarios. The research conducted in this thesis explores the application of GPs to case studies in chemical engineering. In many chemical engineering scenarios, it is critical to understand how input uncertainty impacts an important output. A sensitivity analysis does this by characterising the input-output relationship of a system. ML encapsulates a large system in a cheaper framework, enabling a Global Sensitivity Analysis (GSA) to be conducted. The GSA considers the model behaviour over the entire range of inputs and outputs. The Sobol' indices are recognised as the benchmark GSA method. However, the variance-based decomposition method imposes a significant computational burden to achieve a satisfactory precision level. Thus, one exciting application of GPs is to reduce the number of model evaluations required and efficiently calculate the Sobol' indices for large GSA studies. The first three case studies used GPs to perform GSAs in chemical engineering. The first examined the effects of thermal runaway (TR) abuse on lithium-ion batteries. To calculate time-dependent Sobol' indices, this study created an accurate surrogate model by training individual GPs at each time step. This work used GPs to help develop a complex chemical engineering simulation model. The second GSA calibrated a high-shear wet granulation model using experimental data. This work developed a methodology, linking two GSA studies, to substantially reduce the experimental effort required for model-driven design and scale-up of model processes. This enabled the creation of a targeted experimental design that reduced the experimental effort by 42%. The third case study developed a novel reduced order model (ROM) for predicting gaseous uptake of metal-organic framework (MOF) structures using GPs. Based on previous GSA research, the Active Subspaces were located using the Sobol' indices of each pore property for the MOF structures. The novel ROM was shown to be a viable tool for identifying the top-performing MOF structures, showing its potential to be a universal MOF exploration model. The final two case studies applied GPs as a tool in novel techniques that combined ML algorithms. First, GPs are seldom used for mid-term electricity price forecasting because of their inaccuracy when extrapolating data. This research aimed to improve GP prediction accuracy by developing a GP-based clustering hybridisation method. The proposed hybridisation method outperformed other GP-based forecasting techniques, as demonstrated by the Diebold-Mariano hypothesis test. In the final case study, ML models were used to develop an effective maintenance strategy. The work compares ML algorithms for predictive maintenance and maintenance time estimation on a cyber-physical process plant to find the best for the maintenance workflow. The best algorithms for this case study were the Quadratic Discriminant Analysis model and the GP. The overall plant maintenance costs were found to be reduced by combining predictive maintenance with maintenance time estimation into a workflow. This research could help improve maintenance tasks in Industry 4.0. This thesis focused on using GPs to enhance collaborative efforts and demonstrate the enormous impact that ML can have in both research and industry. By proposing several novel ideas and applications, it is shown that GPs can be an efficient and effective tool for the analysis of chemical engineering systems.
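
    The GP-surrogate GSA workflow described above can be sketched in a few lines. The snippet below is an illustrative toy, not the thesis code: it trains a GP on a handful of runs of a stand-in "expensive" model, then uses the cheap surrogate to estimate first-order Sobol' indices via a pick-freeze Monte Carlo scheme; the test function, kernel, and sample sizes are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(x):
    # Stand-in for a costly chemical engineering simulator (hypothetical).
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2

# Train the GP surrogate on a small number of "expensive" runs.
rng = np.random.default_rng(3)
X_train = rng.uniform(-np.pi, np.pi, size=(60, 2))
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, expensive_model(X_train))

def sobol_first_order(surrogate, d, n=4096):
    """Monte Carlo estimate of first-order Sobol' indices using a
    pick-freeze scheme evaluated on the cheap surrogate."""
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = surrogate(A), surrogate(B)
    var_y = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs except input i
        indices.append(np.mean(fB * (surrogate(ABi) - fA)) / var_y)
    return indices

print("First-order Sobol' indices:", sobol_first_order(lambda X: gp.predict(X), d=2))
```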