Statistical Degradation Models for Electronics
With the increasing presence of electronics in modern systems and everyday products, the reliability of those systems depends inextricably on the reliability of their electronics. We develop reliability models for failure-time prediction under small failure-time samples with information on individual degradation history. The model extends the work of Whitmore et al. (1998) to incorporate two new data structures common to reliability testing. Reliability models traditionally use lifetime information to evaluate the reliability of a device or system. To analyze small failure-time samples within dynamic environments where failure mechanisms are unknown, models are needed that make use of auxiliary reliability information. In this thesis we present models suitable for reliability data in which the degradation variables are latent and can be tracked by related observable variables we call markers.
We provide an engineering justification for our model and develop parametric and predictive inference equations for a data structure that includes terminal observations of the degradation variable and longitudinal marker measurements. We compare maximum likelihood estimation and prediction results with those obtained by Whitmore et al. (1998) and show improved inference under small sample sizes. We introduce modeling of variable failure thresholds within the framework of bivariate degradation models and discuss ways of incorporating covariates.
In the second part of the thesis we investigate anomaly detection through a Bayesian support vector machine and discuss its place in degradation modeling. We compute posterior class probabilities for time-indexed covariate observations, which we use as measures of degradation. Lastly, we present a multistate model for a recurrent event process and failure times. We compute the expected time to failure using counting process theory and investigate the effect of the event process on the expected failure-time estimates.
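The degradation framework described above builds on first-passage times of a Wiener process with a correlated observable marker. As a minimal numerical sketch (not the thesis's actual model or estimation procedure), the following simulates a bivariate Wiener process in which a latent degradation path and a marker share correlated increments, and compares the simulated mean failure time with the inverse-Gaussian mean implied by first-passage theory; all parameter values (drifts, correlation `rho`, threshold `a`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# illustrative parameters (assumptions, not values from the thesis)
mu_w, mu_m, sigma = 1.0, 0.8, 0.5   # drifts of degradation/marker, volatility
rho, a, dt = 0.7, 10.0, 0.01        # increment correlation, failure threshold, step
n_paths, t_max = 500, 50.0

# correlated increments via the Cholesky factor of the per-step covariance
cov = sigma**2 * dt * np.array([[1.0, rho], [rho, 1.0]])
chol = np.linalg.cholesky(cov)

n_steps = int(t_max / dt)
z = rng.standard_normal((n_paths, n_steps, 2)) @ chol.T  # correlated noise
w_paths = np.cumsum(mu_w * dt + z[:, :, 0], axis=1)      # latent degradation
m_paths = np.cumsum(mu_m * dt + z[:, :, 1], axis=1)      # observable marker

# failure time = first crossing of the threshold a by the latent path
crossed = w_paths >= a
first = np.argmax(crossed, axis=1)
hit_times = np.where(crossed.any(axis=1), (first + 1) * dt, t_max)

# for a Wiener process with drift mu, the first-passage time to level a
# is inverse-Gaussian with mean a / mu
print("simulated mean failure time:", np.mean(hit_times))
print("inverse-Gaussian mean a/mu:", a / mu_w)
```

The marker path `m_paths` tracks the latent degradation through the shared correlated increments, which is the sense in which an observable marker carries information about an unobservable degradation variable.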
Self-tuning routine alarm analysis of vibration signals in steam turbine generators
This paper presents a self-tuning framework for knowledge-based diagnosis of routine alarms in steam turbine generators. The techniques provide a novel basis for initialising and updating the time-series feature extraction parameters used in automated decision support for vibration events due to operational transients. The data-driven nature of the algorithms allows machine-specific characteristics of individual turbines to be learned and reasoned about. The paper provides a case study illustrating the routine alarm paradigm and the applicability of systems using such techniques.
New hybrid ensemble method for anomaly detection in data science
Anomaly detection is a significant research area in data science. Anomaly detection is used to find unusual points or uncommon events in data streams. It is gaining popularity not only in the business world but also in a variety of other fields, such as cyber security, fraud detection for financial systems, and healthcare. Detecting anomalies can reveal new knowledge in the data. This study aims to build an effective model to protect data from these anomalies. We propose a new hybrid ensemble machine learning method that combines the predictions of two methodologies, isolation forest with k-means and random forest, using majority voting. Several available datasets, including KDD Cup-99, Credit Card, Wisconsin Prognosis Breast Cancer (WPBC), Forest Cover, and Pima, were used to evaluate the proposed method. The experimental results show that our proposed model achieves the best performance in terms of receiver operating characteristic performance, accuracy, precision, and recall, and is more efficient in detecting anomalies than other approaches. The highest accuracy rate achieved is 99.9%, compared to 97% without the voting method.
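The abstract above combines unsupervised detectors with a random forest through majority voting. The following is a minimal sketch of one way such a voting ensemble could be wired together with scikit-learn; the synthetic data, the k-means distance threshold, and the use of pseudo-labels to train the random forest are all assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# synthetic data: a dense normal cluster plus scattered anomalies
normal = rng.normal(0, 1, size=(300, 2))
anomalies = rng.uniform(-6, 6, size=(15, 2))
X = np.vstack([normal, anomalies])
y = np.r_[np.zeros(300, int), np.ones(15, int)]  # 1 = anomaly (evaluation only)

# vote 1: isolation forest flags points isolated by short average path lengths
iso = IsolationForest(random_state=0).fit(X)
v1 = (iso.predict(X) == -1).astype(int)

# vote 2: k-means -- points far from their nearest centroid (top 5% of distances)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
v2 = (dist > np.quantile(dist, 0.95)).astype(int)

# vote 3: random forest trained on pseudo-labels from the unsupervised votes
pseudo = ((v1 + v2) >= 1).astype(int)
rf = RandomForestClassifier(random_state=0).fit(X, pseudo)
v3 = rf.predict(X)

# final label: majority vote of the three detectors
final = ((v1 + v2 + v3) >= 2).astype(int)
print("points flagged:", final.sum())
```

The design intuition behind voting is that the detectors fail in different ways, so requiring agreement suppresses the false alarms any single detector would raise alone.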
Prognostics and health monitoring of high power LED
Prognostics is seen as a key component of health usage monitoring systems, where prognostics algorithms can both detect anomalies in the behaviour/performance of a micro-device/system and predict its remaining useful life under monitored operational and environmental conditions. Light Emitting Diodes (LEDs) are optoelectronic micro-devices that are now replacing traditional incandescent and fluorescent lighting, as they have many advantages including higher reliability, greater energy efficiency, longer lifetime, and faster switching speed. For some LED applications there is a requirement to monitor the health of LED lighting systems and predict when failure is likely to occur. This is very important in safety-critical and emergency applications. This paper provides both experimental and theoretical results that demonstrate the use of prognostics and health monitoring techniques for high power LEDs subjected to harsh operating conditions.
Prognostic and health management for engineering systems: a review of the data-driven approach and algorithms
Prognostics and health management (PHM) has become an important component of many engineering systems and products, where algorithms are used to detect anomalies, diagnose faults, and predict remaining useful lifetime (RUL). PHM can provide many advantages to users and maintainers. Although the primary goals are to ensure safety, assess the state of health, and estimate the RUL of components and systems, there are also financial benefits such as operational and maintenance cost reductions and extended lifetime. This study reviews the current status of the algorithms and methods that underpin existing PHM approaches. The focus is on providing a structured and comprehensive classification of the existing state-of-the-art PHM approaches, data-driven approaches, and algorithms.
Prognostics and Health Management of Industrial Equipment
ISBN13: 9781466620957
Prognostics and health management (PHM) is a field of research and application which aims at making use of past, present, and future information on the environmental, operational, and usage conditions of equipment in order to detect its degradation, diagnose its faults, and predict and proactively manage its failures. The present paper reviews the state of knowledge on methods for PHM, placing these in context with the different information and data which may be available for performing the task, and identifying the current challenges and open issues which must be addressed to achieve reliable deployment in practice. The focus is predominantly on the prognostic part of PHM, which addresses the prediction of equipment failure occurrence and the associated residual useful life (RUL).
Sensor Systems for Prognostics and Health Management
Prognostics and health management (PHM) is an enabling discipline consisting of technologies and methods to assess the reliability of a product in its actual life cycle conditions, determine the advent of failure, and mitigate system risk. Sensor systems are needed for PHM to monitor environmental, operational, and performance-related characteristics. The gathered data can be analyzed to assess product health and predict remaining life. In this paper, the considerations for sensor system selection for PHM applications, including the parameters to be measured, the performance needs, the electrical and physical attributes, reliability, and cost of the sensor system, are discussed. The state-of-the-art sensor systems for PHM and the emerging trends in sensor system technologies for PHM are presented.
Deep Quantile Regression for Unsupervised Anomaly Detection in Time-Series
Time-series anomaly detection receives increasing research interest given the growing number of data-rich application domains. Recent additions to anomaly detection methods in the research literature include deep neural networks (DNNs, e.g., RNN, CNN, and autoencoder). The nature and performance of these algorithms in sequence analysis enable them to learn hierarchical discriminative features and the temporal structure of time series. However, their performance is affected by the usual assumption of a Gaussian distribution on the prediction error, which is either ranked or thresholded to label data instances as anomalous or not. An exact parametric distribution is often not directly relevant in many applications, though, and this can produce faulty decisions from false anomaly predictions due to high variation in data interpretation. Outputs are expected to carry a level of confidence; implementations therefore need a prediction interval (PI) that quantifies the uncertainty associated with the DNN point forecasts, which helps in making better-informed decisions and mitigates false anomaly alerts. An effort has been made to reduce false anomaly alerts through the use of quantile regression for identification of anomalies, but it is limited to using the quantile interval to identify uncertainties in the data. In this paper, an improved time-series anomaly detection method called deep quantile regression anomaly detection (DQR-AD) is proposed. The proposed method goes further, using the quantile interval (QI) as an anomaly score and comparing it with a threshold to identify anomalous points in time-series data. Tests of the proposed method on publicly available anomaly benchmark datasets demonstrate its effective performance over other methods that assume a Gaussian distribution on the prediction or reconstruction cost for detection of anomalies. This shows that our method is potentially less sensitive to data distribution than existing approaches.
Petroleum Technology Development Fund (PTDF) PhD Scholarship, Nigeria (Award Number: PTDF/ED/PHD/IAT/884/16)
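The core idea above, scoring points by how far they fall outside a regression-estimated quantile interval rather than assuming Gaussian errors, can be sketched without a DNN. The following uses gradient-boosted quantile regressors on lagged features of a synthetic series; the lag length, quantile levels, and threshold are illustrative assumptions, not the DQR-AD method itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# synthetic series: a sine wave plus noise, with a few injected spikes
t = np.arange(500)
y = np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.1, 500)
y[[120, 300, 430]] += 2.0  # injected anomalies

# lagged features: predict y[i] from its previous 5 observations
L = 5
X = np.column_stack([y[i:len(y) - L + i] for i in range(L)])
target = y[L:]

# two quantile regressors bound a distribution-free prediction interval
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05, random_state=0).fit(X, target)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95, random_state=0).fit(X, target)
q_lo, q_hi = lo.predict(X), hi.predict(X)

# anomaly score: signed distance outside the interval (negative = inside)
score = np.maximum(target - q_hi, q_lo - target)
flagged = np.where(score > 0.5)[0] + L  # threshold chosen by eye (assumption)
print("flagged indices:", flagged)
```

Because the interval comes from quantile loss rather than a fitted Gaussian, the score makes no parametric assumption about the prediction error, which is the property the abstract emphasizes.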