
    Sensor failure detection system

    Advanced concepts for detecting, isolating, and accommodating sensor failures were studied to determine their applicability to the gas turbine control problem. Five concepts were formulated based upon techniques such as Kalman filters, and a screening process led to the selection of one advanced concept for further evaluation. The selected advanced concept uses a Kalman filter to generate residuals, a weighted sum-squared residuals technique to detect soft failures, likelihood ratio testing of a bank of Kalman filters for isolation, and reconfiguration of the normal-mode Kalman filter by eliminating the failed input to accommodate the failure. The advanced concept was compared to a baseline parameter synthesis technique and was shown to be viable for detecting, isolating, and accommodating sensor failures in gas turbine applications.
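
    The detection logic described above can be illustrated with a short sketch: a Kalman filter produces innovations (residuals), and a weighted sum of squared residuals (WSSR) over a sliding window is compared against a threshold to flag soft failures. The model matrices, window length, and threshold below are illustrative assumptions, not values from the study.

        import numpy as np

        def wssr_detector(z, F, H, Q, R, x0, P0, window=10, threshold=35.0):
            """Run a Kalman filter over measurements z and flag soft failures when
            the weighted sum of squared residuals over a sliding window exceeds a
            threshold."""
            x, P = x0.copy(), P0.copy()
            wssr_terms, flags = [], []
            for zk in z:
                # Predict
                x = F @ x
                P = F @ P @ F.T + Q
                # Innovation (residual) and its covariance
                r = zk - H @ x
                S = H @ P @ H.T + R
                # Weighted squared residual for this sample
                wssr_terms.append(float(r.T @ np.linalg.inv(S) @ r))
                flags.append(sum(wssr_terms[-window:]) > threshold)
                # Update
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ r
                P = (np.eye(len(x)) - K @ H) @ P
            return np.array(flags)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            F = H = np.array([[1.0]])                 # 1-state random-walk model
            Q, R = np.array([[1e-4]]), np.array([[1e-2]])
            z = rng.normal(0.0, 0.1, size=(200, 1))
            z[120:] += 0.5                            # inject a soft sensor bias
            flags = wssr_detector(z, F, H, Q, R, x0=np.zeros(1), P0=np.eye(1))
            print("first flagged sample:", int(np.argmax(flags)))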

    Ideas for Future GPS Timing Improvements

    Having recently met stringent criteria for full operational capability (FOC) certification, the Global Positioning System (GPS) now has higher customer expectations than ever before. In order to maintain customer satisfaction, and to meet the even higher customer demands of the future, the GPS Master Control Station (MCS) must play a critical role in carefully refining the performance and integrity of the GPS constellation, particularly in the area of timing. This paper presents an operational perspective on several ideas for improving timing in GPS. These ideas include improving MCS - US Naval Observatory (USNO) data connectivity, an improved GPS-Coordinated Universal Time (UTC) prediction algorithm, a more robust Kalman filter, and more features in the GPS reference time algorithm (the GPS composite clock), including frequency step resolution, a more explicit use of the basic time scale equation, and dynamic clock weighting. Current MCS software meets the exceptional challenge of managing an extremely complex constellation of 24 navigation satellites. The GPS community will, however, always seek to improve upon this performance and integrity.
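
    As a rough illustration of the dynamic clock weighting idea mentioned above, the sketch below combines individual clock offsets into an ensemble estimate using inverse-variance weights. The clock data and weighting rule are assumptions for illustration only; the operational GPS composite clock and basic time scale equation are considerably more elaborate.

        import numpy as np

        def composite_offset(clock_offsets, clock_variances):
            """Combine individual clock offsets (seconds) into one ensemble offset
            using weights inversely proportional to each clock's recent variance."""
            w = 1.0 / np.asarray(clock_variances)
            w = w / w.sum()                           # normalized dynamic weights
            return float(np.dot(w, clock_offsets)), w

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            true_offset = 2e-9                        # 2 ns common offset
            variances = np.array([1e-18, 4e-18, 9e-18])   # per-clock noise levels
            offsets = true_offset + rng.normal(0.0, np.sqrt(variances))
            ensemble, weights = composite_offset(offsets, variances)
            print(f"ensemble offset: {ensemble:.3e} s, weights: {np.round(weights, 3)}")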

    Diagnosis of Fault Modes Masked by Control Loops with an Application to Autonomous Hovercraft Systems

    This paper introduces a methodology for the design, testing, and assessment of incipient failure detection techniques for failing components/systems of an autonomous vehicle that are masked or hidden by feedback control loops. It is recognized that the optimum operation of critical assets (aircraft, autonomous systems, etc.) may be compromised by feedback control loops, which can mask severe fault modes while compensating for typical disturbances. Detrimental consequences of such occurrences include the inability to detect incipient failures expeditiously and accurately, loss of control, and inefficient operation of assets in the form of fuel overconsumption and adverse environmental impact. We pursue a systems engineering process to design, construct, and test an autonomous hovercraft instrumented appropriately for improved autonomy. Hidden fault modes are detected with performance guarantees by invoking a Bayesian estimation approach called particle filtering. Simulation and experimental studies are employed to demonstrate the efficacy of the proposed methods.
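
    A minimal sketch of the particle filtering idea invoked above: a bootstrap particle filter tracks a slowly growing fault parameter from noisy measurements. The fault-growth and measurement models, noise levels, and particle count are illustrative assumptions rather than the hovercraft models used in the paper.

        import numpy as np

        def particle_filter(z, n_particles=500, proc_std=0.02, meas_std=0.1, seed=0):
            """Bootstrap particle filter: track a hidden fault magnitude assumed to
            follow a random walk, observed through additive Gaussian noise."""
            rng = np.random.default_rng(seed)
            particles = rng.normal(0.0, 0.1, n_particles)   # initial fault belief
            estimates = []
            for zk in z:
                # Propagate particles through the random-walk fault-growth model
                particles = particles + rng.normal(0.0, proc_std, n_particles)
                # Weight particles by the Gaussian measurement likelihood
                w = np.exp(-0.5 * ((zk - particles) / meas_std) ** 2)
                w /= w.sum()
                # Multinomial resampling concentrates particles on likely values
                particles = particles[rng.choice(n_particles, n_particles, p=w)]
                estimates.append(particles.mean())
            return np.array(estimates)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            true_fault = np.concatenate([np.zeros(100), np.linspace(0.0, 0.8, 100)])
            z = true_fault + rng.normal(0.0, 0.1, 200)
            est = particle_filter(z)
            print(f"estimated fault at end of run: {est[-1]:.2f} (true value 0.80)")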

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, along with a range of techniques covering model-based approaches, 'programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour, and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
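
    The hybrid pattern described above can be sketched in a few lines: a rule component screens events against known misuse signatures, while a simple profile learned from normal traffic flags departures from expected behaviour. The features, rules, and thresholds below are purely illustrative assumptions.

        import numpy as np

        # Rules encoding known misuse signatures (illustrative assumptions)
        KNOWN_MISUSE_RULES = [
            lambda e: e["call_duration_s"] > 7200 and e["destination"] == "premium",
            lambda e: e["calls_per_hour"] > 120,
        ]

        class HybridDetector:
            def __init__(self, normal_rates):
                # Learn a crude profile of normal calling behaviour
                self.mean = float(np.mean(normal_rates))
                self.std = float(np.std(normal_rates)) or 1.0

            def classify(self, event):
                # Rule component: screen against known misuse signatures
                if any(rule(event) for rule in KNOWN_MISUSE_RULES):
                    return "misuse (matched known signature)"
                # Learned component: flag large departures from the normal profile
                z = abs(event["calls_per_hour"] - self.mean) / self.std
                return "anomaly (unseen pattern)" if z > 3.0 else "normal"

        if __name__ == "__main__":
            detector = HybridDetector(normal_rates=[3, 5, 4, 6, 5, 4])
            print(detector.classify({"call_duration_s": 8000,
                                     "destination": "premium", "calls_per_hour": 2}))
            print(detector.classify({"call_duration_s": 60,
                                     "destination": "local", "calls_per_hour": 40}))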

    Contributions on Automatic Recognition of Faces using Local Texture Features

    One of the most prominent topics in the computer vision field stems from automatic facial analysis. In particular, the accurate detection of human faces and their biometric analysis are problems that have generated special interest due to the large number of applications that currently make use of these mechanisms. This doctoral thesis separately analyses the problems of accurate face detection based on eye localisation and of face recognition from the extraction of local texture features. The developed algorithms address the problem of extracting identity from a face image (in frontal or semi-frontal view) in partially controlled scenarios. The goal is to develop robust algorithms that can be easily incorporated into real applications, such as advanced banking security or the definition of commercial strategies applied to the retail sector. Regarding local texture extraction, an exhaustive analysis of the most widespread descriptors has been carried out, with special emphasis on the study of Histograms of Oriented Gradients (HOG features). In normalized face representations, these descriptors offer discriminative information about facial elements (eyes, mouth, etc.) and are robust to illumination variations and small displacements. Different classification algorithms, all based on a supervised learning strategy, were chosen to perform face detection and recognition. In particular, boosting classifiers and Support Vector Machines (SVM) over HOG descriptors were used for eye localisation. For face recognition, a new algorithm was developed, HOG-EBGM (HOG over Elastic Bunch Graph Matching). Given a face image, the scheme followed by this algorithm can be summarized in a few steps: in a first stage… Monzó Ferrer, D. (2012). Contributions on Automatic Recognition of Faces using Local Texture Features [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16698
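
    A hedged sketch of the HOG-plus-SVM pattern the thesis builds on: HOG descriptors are extracted from small image patches and a linear SVM separates eye patches from non-eye patches. The random patches stand in for real training data, and the patch size and HOG parameters are illustrative assumptions, not the thesis configuration.

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import LinearSVC

        def hog_descriptor(patch):
            # 9-bin HOG over 8x8-pixel cells with 2x2-cell blocks (assumed settings)
            return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), block_norm="L2-Hys")

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Placeholder 32x32 grayscale patches; real use would crop them from
            # normalized face images around candidate eye locations.
            eye_patches = rng.random((50, 32, 32))
            background_patches = rng.random((50, 32, 32))
            X = np.array([hog_descriptor(p)
                          for p in np.concatenate([eye_patches, background_patches])])
            y = np.array([1] * 50 + [0] * 50)
            classifier = LinearSVC(C=1.0).fit(X, y)
            print("training accuracy:", classifier.score(X, y))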

    Fault Diagnosis and Failure Prognostics of Lithium-ion Battery based on Least Squares Support Vector Machine and Memory Particle Filter Framework

    A novel data-driven approach is developed for fault diagnosis and remaining useful life (RUL) prognostics for lithium-ion batteries using a Least Squares Support Vector Machine (LS-SVM) and a Memory Particle Filter (M-PF). Unlike traditional data-driven models for capacity fault diagnosis and failure prognosis, which require multidimensional physical characteristics, the proposed algorithm uses only two variables: Energy Efficiency (EE) and Work Temperature. The aim of this novel framework is to improve the accuracy of incipient and abrupt fault diagnosis and failure prognosis. First, the LS-SVM is used to generate a residual signal based on the capacity fade trends of the Li-ion batteries. Second, an adaptive threshold model is developed based on several factors, including input, output model error, disturbance, and drift parameters. The adaptive threshold is used to tackle the shortcoming of a fixed threshold. Third, the M-PF is proposed as a new method for failure prognosis to determine remaining useful life. The M-PF is based on the assumption of the availability of real-time observations and historical data, where the historical failure data can be used instead of a physical failure model within the particle filter. The feasibility of the framework is validated using Li-ion battery prognostic data obtained from the National Aeronautics and Space Administration (NASA) Ames Prognostics Center of Excellence (PCoE). The experimental results show the following: (1) fewer data dimensions for the input data are required compared to traditional empirical models; (2) the proposed diagnostic approach provides an effective way of diagnosing Li-ion battery faults; (3) the proposed prognostic approach can predict the RUL of Li-ion batteries with small error and high prediction accuracy; and (4) the proposed prognostic approach shows that historical failure data can be used instead of a physical failure model in the particle filter.
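
    The adaptive threshold step can be illustrated with a short sketch that flags a residual signal when it leaves a band adapted to its recent statistics. The residual model and band width are illustrative assumptions, not the LS-SVM residual generator or threshold model developed in the paper.

        import numpy as np

        def adaptive_threshold_flags(residual, window=20, k=3.0):
            """Mark samples where the residual leaves a rolling mean +/- k * std band
            computed over the preceding window."""
            flags = np.zeros(len(residual), dtype=bool)
            for i in range(window, len(residual)):
                recent = residual[i - window:i]
                flags[i] = abs(residual[i] - recent.mean()) > k * recent.std()
            return flags

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            residual = rng.normal(0.0, 0.01, 300)     # nominal model-vs-data residual
            residual[200:] += 0.08                    # abrupt capacity-fade fault
            flags = adaptive_threshold_flags(residual)
            print("first flagged cycle:", int(np.argmax(flags)))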

    Real-time fault identification for developmental turbine engine testing

    Hundreds of individual sensors produce an enormous amount of data during developmental turbine engine testing. The challenge is to ensure the validity of the data and to identify data and engine anomalies in a timely manner. An automated data validation, engine condition monitoring, and fault identification process that emulates typical engineering techniques has been developed for developmental engine testing.
    An automated data validation and fault identification approach employing engine cycle-matching principles is described. Engine cycle-matching is automated by using an adaptive nonlinear component-level computer model capable of simulating both steady-state and transient engine operation. Automated steady-state, transient, and real-time model calibration processes are also described. The model enables automation of traditional data validation, engine condition monitoring, and fault identification procedures. A distributed parallel computing approach enables the entire process to operate in real time.
    The result is a capability to detect data and engine anomalies in real time during developmental engine testing. The approach is shown to be successful in detecting and identifying sensor anomalies as they occur and distinguishing these anomalies from variations in component and overall engine aerothermodynamic performance. The component-level model-based engine performance and fault identification technique of the present research is capable of: identifying measurement errors on the order of 0.5 percent (e.g., sensor bias, drift, level shift, noise, or poor response) in facility fuel flow, airflow, and thrust measurements; identifying measurement errors in engine aerothermodynamic measurements (rotor speeds, gas path pressures and temperatures); identifying measurement errors in engine control sensors (e.g., a leaking/biased pressure sensor or a slowly responding pressure measurement) and variable geometry rigging (e.g., mis-set guide vanes or nozzle area) that would invalidate a test or series of tests; identifying abrupt faults (e.g., faults due to domestic object damage, foreign object damage, and control anomalies); and identifying slow faults (e.g., component or overall engine degradation, and sensor drift). Specifically, the technique is capable of identifying small changes in compressor (or fan) performance on the order of 0.5 percent, and of being easily extended to diagnose secondary failure modes and to verify any modeling assumptions that may arise for developmental engine tests (e.g., an increase in turbine flow capacity, inaccurate measurement of facility bleed flows, horsepower extraction, etc.).
    The component-level model-based engine performance and fault identification method developed in the present work brings together features which individually and collectively advance the state of the art. These features fall into three categories: advancements to effectively quantify off-nominal behavior, advancements to provide a fault detection capability that is practical from the viewpoint of analysis, implementation, tuning, and design, and advancements to provide a real-time fault detection capability that is reliable and efficient.
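
    The core model-based validation step can be sketched simply: measured engine parameters are compared against values predicted by the calibrated component-level model, and channels deviating beyond a tolerance are flagged. The parameter names, values, and the 0.5 percent tolerance below are illustrative placeholders.

        def flag_sensor_anomalies(measured, predicted, tol_pct=0.5):
            """Return channels whose measured value deviates from the model
            prediction by more than tol_pct percent."""
            flagged = []
            for name, meas in measured.items():
                deviation_pct = 100.0 * abs(meas - predicted[name]) / abs(predicted[name])
                if deviation_pct > tol_pct:
                    flagged.append((name, round(deviation_pct, 2)))
            return flagged

        if __name__ == "__main__":
            # Hypothetical model predictions and facility measurements
            predicted = {"fuel_flow_kg_s": 1.20, "thrust_kN": 85.0, "N1_rpm": 9800.0}
            measured = {"fuel_flow_kg_s": 1.21, "thrust_kN": 85.1, "N1_rpm": 9802.0}
            print(flag_sensor_anomalies(measured, predicted))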

    A sequential Bayesian approach to online power quality anomaly segmentation

    Increased observability on power distribution networks can reveal signs of incipient faults which can develop into costly and unexpected plant failures. While low-cost sensing and communications infrastructure is facilitating this, it is also highlighting the complex nature of fault signals, a challenge which entails precisely extracting anomalous regions from continuous data streams before classifying the underlying fault signature. Doing this incorrectly will result in the capture of uninformative data, and extraction processes can be confounded by operational noise on the network, including harmonics produced by embedded generation. In this paper, an online model is proposed: Bayesian Changepoint Power Quality anomaly Segmentation, which allows automated segmentation of anomalies from continuous current waveforms, irrespective of noise. The effectiveness of the proposed technique is demonstrated with operational field data as well as a challenging simulated network, highlighting its ability to accommodate noise from typical network penetration levels of power electronic devices.
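
    As a rough sketch of online Bayesian changepoint segmentation on a one-dimensional stream, the code below implements a simplified Adams-and-MacKay-style run-length recursion with Gaussian observations and a constant hazard rate; the paper's observation model, priors, and noise handling will differ.

        import numpy as np

        def bocpd(x, hazard=0.01, mu0=0.0, kappa0=1.0, obs_var=1.0):
            """Return the most probable run length (samples since the last
            changepoint) after each observation, using Gaussian observations with
            an unknown mean and a constant changepoint hazard."""
            T = len(x)
            R = np.zeros(T + 1)
            R[0] = 1.0                                # run length 0 is certain at start
            mu, kappa = np.array([mu0]), np.array([kappa0])
            map_run = []
            for t, xt in enumerate(x):
                # Predictive probability of xt under every current run length
                pred_var = obs_var * (1.0 + 1.0 / kappa)
                pred = np.exp(-0.5 * (xt - mu) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
                growth = R[:t + 1] * pred * (1.0 - hazard)    # run continues
                cp = np.sum(R[:t + 1] * pred * hazard)        # run resets to zero
                R[:t + 2] = np.concatenate(([cp], growth))
                R[:t + 2] /= R[:t + 2].sum()
                # Update the posterior mean estimate attached to each run length
                mu = np.concatenate(([mu0], (kappa * mu + xt) / (kappa + 1.0)))
                kappa = np.concatenate(([kappa0], kappa + 1.0))
                map_run.append(int(np.argmax(R[:t + 2])))
            return np.array(map_run)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(4.0, 1.0, 150)])
            runs = bocpd(x)
            # The run length collapsing near sample 150 marks the anomaly onset
            print("most probable run length at sample 160:", runs[160])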

    HYTESS 2: A Hypothetical Turbofan Engine Simplified Simulation with multivariable control and sensor analytical redundancy

    A hypothetical turbofan engine simplified simulation with multivariable control and sensor failure detection, isolation, and accommodation logic (HYTESS II) is presented. The digital program, written in FORTRAN, is self-contained, efficient, realistic, and easily used. Simulated engine dynamics were developed from linearized operating point models; however, essential nonlinear effects are retained. The simulation is representative of a hypothetical, low-bypass-ratio turbofan engine with advanced control and failure detection logic. Included is a description of the engine dynamics, the control algorithm, and the sensor failure detection logic. Details of the simulation, including block diagrams, variable descriptions, common block definitions, subroutine descriptions, and input requirements, are given. Example simulation results are also presented.
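
    A minimal sketch of simulating dynamics from a linearized operating-point model, the general approach described above. The state and input definitions and the matrices below are illustrative placeholders, not HYTESS 2 values.

        import numpy as np

        def simulate_linear_engine(A, B, x_op, u_op, u_seq, dt=0.01):
            """Integrate dx/dt = A (x - x_op) + B (u - u_op) with forward Euler,
            starting from the operating point."""
            x = x_op.copy()
            history = [x.copy()]
            for u in u_seq:
                x = x + dt * (A @ (x - x_op) + B @ (u - u_op))
                history.append(x.copy())
            return np.array(history)

        if __name__ == "__main__":
            # Two-state example: fan and core speeds responding to a fuel-flow step
            A = np.array([[-2.0, 0.5], [0.3, -3.0]])
            B = np.array([[1.5], [2.0]])
            x_op = np.array([100.0, 95.0])            # percent corrected speeds
            u_op = np.array([1.0])                    # nominal fuel flow
            u_seq = [np.array([1.1])] * 500           # 10 percent fuel-flow step held
            trajectory = simulate_linear_engine(A, B, x_op, u_op, u_seq)
            print("final speeds:", np.round(trajectory[-1], 2))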