
    Prognostic Reasoner based adaptive power management system for a more electric aircraft

    This research work presents a novel approach to the design and development of an adaptive power management system framed in the Prognostics and Health Monitoring (PHM) perspective of an Electrical Power Generation and Distribution System (EPGS). PHM algorithms were developed to detect the health status of EPGS components, accurately predict failures, calculate the Remaining Useful Life (RUL) and, in many cases, reconfigure around the identified system and subsystem faults. By introducing this approach into the electrical power management system controller, we gain a few minutes of lead time before failure, with an accurate prediction horizon, on critical system and subsystem components whose faults may cause catastrophic secondary damage, including loss of the aircraft. The warning time on critical components and the related system reconfiguration must, as a minimum criterion, permit a safe return to landing, and would enhance safety. A distributed architecture has been developed for dynamic power management of the electrical distribution system, by which all electrically supplied loads can be effectively controlled. A hybrid mathematical model based on the Direct-Quadrature (d-q) axis transformation of the generator has been formulated for studying various structural and parametric faults. The different failure modes were generated by injecting faults into the electrical power system using a fault injection mechanism. The data captured during these studies were recorded to form a "Failure Database" for the electrical system. A hardware-in-the-loop experimental study was carried out to validate the power management algorithm with an FPGA-DSP controller. In order to meet the reliability requirements, a tri-redundant electrical power management system based on DSP and FPGA has been developed.
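    The abstract refers to a generator model built on the Direct-Quadrature (d-q) axis transformation but does not give its formulation. For readers unfamiliar with the transform, the sketch below shows one common amplitude-invariant Park transformation in Python; the function name, sign convention and example values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def park_transform(i_abc: np.ndarray, theta: float) -> np.ndarray:
    """Map three-phase quantities (a, b, c) onto the rotating d-q-0 frame.

    Amplitude-invariant convention, shown only for illustration; the paper
    does not state which convention its generator model uses.
    """
    c = 2.0 * np.pi / 3.0
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta),  np.cos(theta - c),  np.cos(theta + c)],
        [-np.sin(theta), -np.sin(theta - c), -np.sin(theta + c)],
        [0.5,            0.5,                0.5],
    ])
    return T @ i_abc

# Example: a balanced 50 Hz three-phase set sampled at one instant.
t = 0.002
theta = 2.0 * np.pi * 50.0 * t
i_abc = np.array([np.cos(theta),
                  np.cos(theta - 2 * np.pi / 3),
                  np.cos(theta + 2 * np.pi / 3)])
print(park_transform(i_abc, theta))  # ~[1, 0, 0] for a balanced set
```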

    A survey of outlier detection methodologies

    Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of Computer Science and Statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
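    As a concrete, if elementary, example of the kind of statistical technique such a survey covers, the sketch below flags outliers using the modified z-score based on the median absolute deviation; it is a generic illustration, not a method proposed by the paper itself.

```python
import numpy as np

def mad_outliers(x: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Flag points whose modified z-score (based on the median absolute
    deviation) exceeds a threshold. A classic robust rule of thumb."""
    median = np.median(x)
    mad = np.median(np.abs(x - median))
    if mad == 0:
        return np.zeros_like(x, dtype=bool)
    modified_z = 0.6745 * (x - median) / mad
    return np.abs(modified_z) > threshold

# Invented sensor readings: only the 25.7 value is flagged.
data = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 25.7, 10.2])
print(mad_outliers(data))
```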

    A Methodology for the Diagnostic of Aircraft Engine Based on Indicators Aggregation

    Aircraft engine manufacturers collect large amounts of engine-related data during flights. These data are used to detect anomalies in the engines in order to help companies optimize their maintenance costs. This article introduces and studies a generic methodology that allows one to build automatic early signs of anomaly detection in a way that is understandable by the human operators who make the final maintenance decision. The main idea of the method is to generate a very large number of binary indicators based on parametric anomaly scores designed by experts, complemented by simple aggregations of those scores. The best indicators are selected via a classical forward scheme, leading to a much reduced number of indicators that are tuned to a data set. We illustrate the interest of the method on simulated data which contain realistic early signs of anomalies. Comment: Proceedings of the 14th Industrial Conference, ICDM 2014, St. Petersburg, Russian Federation (2014)
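    To make the indicator-generation and forward-selection idea concrete, the sketch below derives binary indicators by thresholding expert anomaly scores and then greedily selects a subset. The majority-vote accuracy criterion and all function names are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def threshold_indicators(scores: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Turn one expert anomaly score per flight into many binary indicators,
    one column per candidate threshold."""
    return (scores[:, None] > thresholds[None, :]).astype(int)

def forward_select(indicators: np.ndarray, labels: np.ndarray, k: int) -> list[int]:
    """Greedy forward selection: repeatedly add the indicator column whose
    inclusion most improves the accuracy of a simple majority vote against
    the known anomaly labels (a boolean array)."""
    selected: list[int] = []
    for _ in range(k):
        best_j, best_acc = None, -1.0
        for j in range(indicators.shape[1]):
            if j in selected:
                continue
            votes = indicators[:, selected + [j]].mean(axis=1) >= 0.5
            acc = float((votes == labels).mean())
            if acc > best_acc:
                best_j, best_acc = j, acc
        if best_j is None:
            break
        selected.append(best_j)
    return selected
```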

    Case-based reasoning combined with statistics for diagnostics and prognosis

    Many approaches used for diagnostics today are based on a precise model. This excludes diagnostics of many complex types of machinery that cannot be modelled and simulated easily or without great effort. Our aim is to show that, by including human experience, it is possible to diagnose complex machinery when no models or simulations, or only limited ones, are available. This also enables diagnostics in dynamic applications where conditions change and new cases are often added. In fact, every newly solved case increases the diagnostic power of the system. We present a number of successful projects in which we have used feature extraction together with case-based reasoning to diagnose faults in industrial robots, welding and cutting machinery, and we also present our latest project for diagnosing transmissions by combining Case-Based Reasoning (CBR) with statistics. We view the fault diagnosis process as three consecutive steps. In the first step, sensor fault signals from machines and/or input from human operators are collected. The second step consists of extracting relevant fault features. In the final diagnosis/prognosis step, status and faults are identified and classified. We view prognosis as a special case of diagnosis where the prognosis module predicts a stream of future features.
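    A minimal sketch of the three-step pipeline described above, assuming simple time-domain features and nearest-neighbour case retrieval; the feature set and distance measure are illustrative choices, not those of the cited projects.

```python
import numpy as np

def extract_features(signal: np.ndarray) -> np.ndarray:
    """A minimal feature vector: RMS, peak and crest factor of one sensor
    signal. Real CBR diagnostic systems use richer, domain-specific features."""
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    crest = peak / rms if rms > 0 else 0.0
    return np.array([rms, peak, crest])

def retrieve_case(query: np.ndarray,
                  case_features: np.ndarray,
                  case_labels: list[str]) -> str:
    """Retrieve the most similar stored case by Euclidean distance and reuse
    its diagnosis. Adding each newly solved case to the library is what makes
    the system improve over time, as the abstract notes."""
    distances = np.linalg.norm(case_features - query, axis=1)
    return case_labels[int(np.argmin(distances))]
```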

    Self-tuning routine alarm analysis of vibration signals in steam turbine generators

    This paper presents a self-tuning framework for knowledge-based diagnosis of routine alarms in steam turbine generators. The techniques provide a novel basis for initialising and updating the time-series feature extraction parameters used in automated decision support for vibration events caused by operational transients. The data-driven nature of the algorithms allows machine-specific characteristics of individual turbines to be learned and reasoned about. The paper provides a case study illustrating the routine alarm paradigm and the applicability of systems using such techniques.
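    As a rough illustration of data-driven, machine-specific tuning, the sketch below keeps running statistics of a vibration feature for one turbine and raises an alarm when a reading departs from that machine's learned baseline. It is only a toy stand-in for the paper's knowledge-based framework; the class name and 3-sigma rule are assumptions.

```python
class SelfTuningThreshold:
    """Track a running mean/variance of one vibration feature for a single
    turbine (Welford's method) and flag readings far from that machine's
    own baseline."""

    def __init__(self, n_sigma: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.n_sigma = n_sigma

    def update(self, x: float) -> bool:
        """Return True if x is alarming relative to the learned baseline,
        then fold x into the running statistics."""
        alarming = False
        if self.n > 1:
            std = (self.m2 / (self.n - 1)) ** 0.5
            alarming = std > 0 and abs(x - self.mean) > self.n_sigma * std
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return alarming
```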

    Growth, Income and Regulation: a Non-Linear Approach

    This paper analyzes the effect on GDP growth of income (GDP per capita) and economic regulation. A simple theoretical framework presents two opposing views. We analyze the empirical relation using a non-linear dynamic panel data model with fixed effects. The result shows that the effect of regulation on growth depends on income. For low-income countries, there is little effect of changing regulation. For highly regulated middle-income countries, deregulation can increase growth. For high-income countries, deregulation leads to higher growth. Holding regulation constant, there is catch-up growth with a maximum at an intermediate income level. Keywords: catch-up growth; economic freedom; fixed effects; GMM; specification tests
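    The abstract does not state the exact specification. One plausible form of a non-linear dynamic panel model with fixed effects in which the effect of regulation depends on income is sketched below; the symbols and functional form are assumptions made purely for illustration.

```latex
g_{it} = \alpha_i + \rho\, g_{i,t-1} + f(y_{it})
       + \bigl(\beta_1 + \beta_2\, y_{it}\bigr)\, r_{it} + \varepsilon_{it}
```

    Here g_it would be GDP growth of country i in period t, y_it log income per capita, r_it a regulation index, alpha_i a country fixed effect and f(.) a concave catch-up term; a positive beta_2 is what would make deregulation pay off more at higher income levels, in line with the abstract's findings.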

    Interpretable Aircraft Engine Diagnostic via Expert Indicator Aggregation

    Detecting early signs of failures (anomalies) in complex systems is one of the main goals of preventive maintenance. In particular, it makes it possible to avoid actual failures by (re)scheduling maintenance operations in a way that optimizes maintenance costs. Aircraft engine health monitoring is one representative example of a field in which anomaly detection is crucial. Manufacturers collect large amounts of engine-related data during flights, which are used, among other applications, to detect anomalies. This article introduces and studies a generic methodology that allows one to build automatic early signs of anomaly detection in a way that builds upon human expertise and remains understandable by the human operators who make the final maintenance decision. The main idea of the method is to generate a very large number of binary indicators based on parametric anomaly scores designed by experts, complemented by simple aggregations of those scores. A feature selection method is used to keep only the most discriminant indicators, which are used as inputs to a Naive Bayes classifier. This gives an interpretable classifier based on interpretable anomaly detectors whose parameters have been optimized indirectly by the selection process. The proposed methodology is evaluated on simulated data designed to reproduce some of the anomaly types observed in real-world engines. Comment: arXiv admin note: substantial text overlap with arXiv:1408.6214, arXiv:1409.4747, arXiv:1407.088
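    Complementing the indicator-selection sketch after the ICDM 2014 abstract above, the example below feeds a set of hypothetical selected binary indicators into a Bernoulli Naive Bayes classifier via scikit-learn. The data are synthetic and the column choices arbitrary, but the structure mirrors the interpretable pipeline the abstract describes.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Hypothetical data: rows are engine snapshots, columns are the binary
# indicators retained by the selection step (1 = expert anomaly score beyond
# its threshold); y marks which examples contain a seeded anomaly.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 12))
y = (X[:, [0, 3, 7]].sum(axis=1) >= 2).astype(int)  # toy ground truth

clf = BernoulliNB().fit(X, y)
print(clf.predict(X[:5]))
# Because the model is Naive Bayes over binary indicators, each indicator's
# learned log-odds can be inspected directly, which is what keeps the final
# classifier interpretable for the maintenance operator.
```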

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs are discussed. Based on the results of our survey, we suggest that the best current design for RAs is to embed a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, further improvements can still be made through the application of Machine Learning. By implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
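    As a small illustration of the risk-analysis (FMEA) function mentioned above, the sketch below models one FMEA worksheet row and ranks failure modes by the conventional risk priority number. The 1-10 rating scales and example entries are standard FMEA practice and invented data, not material from the thesis.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One row of a failure mode and effects analysis (FMEA) worksheet."""
    component: str
    mode: str
    severity: int     # 1 (negligible) .. 10 (catastrophic)
    occurrence: int   # 1 (rare) .. 10 (frequent)
    detection: int    # 1 (easily detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk priority number: the usual severity x occurrence x detection.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("coolant pump", "bearing seizure", 8, 3, 4),
    FailureMode("pressure sensor", "drift", 5, 6, 7),
]
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(fm.component, fm.mode, fm.rpn)
```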

    Machine learning techniques for fault isolation and sensor placement

    Fault isolation and sensor placement are vital for monitoring and diagnosis. A sensor conveys information about a system's state that guides troubleshooting if problems arise. We are using machine learning methods to uncover behavioral patterns over snapshots of system simulations that will aid fault isolation and sensor placement, with an eye towards minimality, fault coverage, and noise tolerance.
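    The abstract leaves the placement algorithm unspecified; as one hedged illustration of trading off minimality against fault coverage, the sketch below uses a greedy set-cover heuristic over a hypothetical sensor-to-fault coverage map.

```python
def greedy_sensor_placement(coverage: dict[str, set[str]],
                            faults: set[str]) -> list[str]:
    """Pick candidate sensors one at a time, always taking the sensor that
    distinguishes the largest number of still-uncovered faults. A generic
    greedy set-cover heuristic, shown only to make the minimality /
    fault-coverage trade-off concrete."""
    chosen: list[str] = []
    uncovered = set(faults)
    while uncovered:
        sensor = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[sensor] & uncovered
        if not gained:
            break  # remaining faults are unobservable by any candidate sensor
        chosen.append(sensor)
        uncovered -= gained
    return chosen

# Invented coverage map: which faults each candidate sensor can reveal.
coverage = {
    "s1": {"f1", "f2"},
    "s2": {"f2", "f3"},
    "s3": {"f4"},
}
print(greedy_sensor_placement(coverage, {"f1", "f2", "f3", "f4"}))
```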