43 research outputs found

    INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS

    Model-based reasoning is a powerful method for performing system monitoring and diagnosis, but building the models it requires is often a difficult and time-consuming process. The Inductive Monitoring System (IMS) software was developed to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or require computer models too complex to use for real-time monitoring. IMS processes nominal data sets, collected either directly from the system or from simulations, to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters, establishing constraints on the parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low-cost technique to produce system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS and present its application to a data set from the Space Shuttle Columbia STS-107 flight, in which IMS detected an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
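    The train-then-monitor loop described above can be sketched in a few lines; the simple K-means routine, the cluster count, and the spread-based weighting below are illustrative assumptions, not the exact IMS algorithm:

    ```python
    import numpy as np

    def kmeans(data, k, iters=50, seed=0):
        """Plain K-means: cluster nominal data into k groups."""
        rng = np.random.default_rng(seed)
        centers = data[rng.choice(len(data), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = data[labels == j].mean(axis=0)
        return centers, labels

    def build_knowledge_base(nominal, k=2):
        """Training phase: clusters of nominal vectors plus per-cluster spread."""
        centers, labels = kmeans(nominal, k)
        spreads = []
        for j in range(k):
            pts = nominal[labels == j]
            spreads.append(pts.std() + 1e-9 if len(pts) else 1.0)
        return centers, np.array(spreads)

    def deviation(x, centers, spreads):
        """Monitoring phase: spread-weighted distance to the nearest nominal cluster."""
        d = np.linalg.norm(centers - x, axis=1)
        j = int(np.argmin(d))
        return d[j] / spreads[j]
    ```

    A large deviation value flags a suspected anomaly for further investigation.
    
    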

    In situ analysis for intelligent control

    We report a pilot study on in situ analysis of backscatter data for intelligent control of a scientific instrument on an Autonomous Underwater Vehicle (AUV), carried out at the Monterey Bay Aquarium Research Institute (MBARI). The objective of the study is to investigate techniques which use machine intelligence to enable event-response scenarios. Specifically, we analyse a set of techniques for automated sample acquisition in the water-column using an electro-mechanical "Gulper", designed at MBARI. This is a syringe-like sampling device carried onboard an AUV. The techniques we use in this study are clustering algorithms, intended to identify the important distinguishing characteristics of bodies of points within a data sample. We demonstrate that the complementary features of two clustering approaches can offer robust identification of interesting features in the water-column, which, in turn, can support automatic event-response control in the use of the Gulper.

    Explaining Aviation Safety Incidents Using Deep Temporal Multiple Instance Learning

    Although aviation accidents are rare, safety incidents occur more frequently and require careful analysis to detect and mitigate risks in a timely manner. Analyzing safety incidents using operational data and producing event-based explanations is invaluable to airline companies as well as to governing organizations such as the Federal Aviation Administration (FAA) in the United States. However, this task is challenging because of the complexity involved in mining multidimensional heterogeneous time series data, the lack of time-step-wise annotation of events in a flight, and the lack of scalable tools to perform analysis over a large number of events. In this work, we propose a precursor mining algorithm that identifies events in the multidimensional time series that are correlated with the safety incident. Precursors are valuable to systems health and safety monitoring and in explaining and forecasting safety incidents. Current methods suffer from poor scalability to high-dimensional time series data and are inefficient in capturing temporal behavior. We propose an approach combining multiple-instance learning (MIL) and deep recurrent neural networks (DRNN) to take advantage of MIL's ability to learn using weakly supervised data and DRNN's ability to model temporal behavior. We describe the algorithm, the data, the intuition behind taking a MIL approach, and a comparative analysis of the proposed algorithm with baseline models. We also discuss the application to a real-world aviation safety problem using data from a commercial airline company and discuss the model's abilities and shortcomings, with some final remarks about possible deployment directions.
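    The multiple-instance intuition — a flight-level (bag) label constrains at least one time step (instance) — can be illustrated without the deep network; the max-pooling rule and the 0.5 threshold below are illustrative stand-ins for the learned DRNN instance scorer:

    ```python
    import numpy as np

    def bag_score(instance_scores):
        # MIL assumption: a flight (bag) is anomalous if at least one
        # time step (instance) is a precursor, so pool with max
        return float(np.max(instance_scores))

    def precursor_times(instance_scores, threshold=0.5):
        # time steps whose score exceeds the threshold serve as the
        # event-level explanation, even though only flight-level
        # labels were available during training
        return [t for t, s in enumerate(instance_scores) if s > threshold]
    ```

    Training only needs flight-level incident labels, yet the per-step scores localize the precursor events in time.
    
    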

    Trajectory Clustering and an Application to Airspace Monitoring

    This paper presents a framework aimed at monitoring the behavior of aircraft in a given airspace. Nominal trajectories are determined and learned using data-driven methods. Standard procedures are used by air traffic controllers (ATC) to guide aircraft, ensure the safety of the airspace, and maximize runway occupancy. Even though standard procedures are used by ATC, control of the aircraft remains with the pilots, leading to large variability in the observed flight patterns. Two methods to identify typical operations and their variability from recorded radar tracks are presented. This knowledge base is then used to monitor the conformance of current operations against operations previously identified as standard. A tool called AirTrajectoryMiner is presented, aimed at monitoring the instantaneous health of the airspace in real time. The airspace is "healthy" when all aircraft are flying according to the nominal procedures. A measure of complexity is introduced, quantifying the conformance of current flights to nominal flight patterns. When an aircraft does not conform, the complexity increases, as more attention from ATC is required to ensure safe separation between aircraft.
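    A minimal sketch of the conformance idea, assuming all tracks have been resampled to a common number of points; the mean pointwise distance and the tolerance are illustrative choices, not the paper's exact complexity metric:

    ```python
    import numpy as np

    def conformance(track, nominal_tracks):
        # mean pointwise distance to the closest learned nominal trajectory;
        # tracks are assumed resampled to the same number of points
        dists = [np.mean(np.linalg.norm(track - ref, axis=1)) for ref in nominal_tracks]
        return min(dists)

    def airspace_complexity(tracks, nominal_tracks, tol=1.0):
        # one rough notion of airspace "health": the fraction of current
        # flights not conforming to any learned nominal pattern
        return sum(conformance(t, nominal_tracks) > tol for t in tracks) / len(tracks)
    ```

    A complexity of 0 corresponds to a "healthy" airspace in which every flight matches a nominal pattern.
    
    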

    Inductive Monitoring Systems: A CubeSat Ground-Based Prototype

    Inductive Monitoring Systems (IMS) are the newest form of health monitoring available to the aerospace industry. IMS is a program that builds a knowledge base of nominal state vectors from a nominal data set using data mining techniques. The nominal knowledge base is then used to monitor new data vectors for off-nominal conditions within the system. IMS is designed to replace the current health monitoring process, referred to as model-based reasoning, by automating the classification of healthy states and the detection of anomalies. An IMS prototype was designed and implemented in MATLAB. A verification analysis then determined whether the IMS program could connect to a CubeSat in a testing environment and successfully monitor all sensors on board the CubeSat before in-flight use. This program consisted of two main algorithms, one for learning and one for monitoring. The learning algorithm creates the nominal knowledge bases and was developed using three data mining algorithms: the gap statistic method to find the optimal number of clusters, the K-means++ algorithm to initialize the centroids, and the K-means algorithm to partition the data vectors into the appropriate clusters. The monitoring algorithm employed a nearest neighbor search to find the closest cluster and compared the new data vector with that cluster. The clusters found were used to establish the knowledge bases. Any data vector within the boundaries of the clusters was deemed nominal, and any data vector outside the boundaries was deemed off-nominal. The learning and monitoring algorithms were then adapted to handle the data format used on a CubeSat and to monitor the data in real time. The developed algorithms were then integrated into a MATLAB GUI for ease of use. The learning and monitoring algorithms were verified with a 2-dimensional data set to ensure that they performed as expected. The final IMS CubeSat prototype was verified using 56-dimensional emulated data packages. Both verification methods confirmed that the IMS ground-based prototype was able to successfully identify all off-nominal conditions induced into the system.
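    The seeding and monitoring steps can be sketched as follows; this is a minimal illustration of K-means++ initialization and the nearest-cluster boundary check, not the MATLAB prototype itself (the gap-statistic step is omitted, and the fixed radii are an assumption):

    ```python
    import numpy as np

    def kmeans_pp_init(data, k, rng):
        # K-means++ seeding: first centroid chosen uniformly, later centroids
        # sampled with probability proportional to squared distance from the
        # centroids already chosen
        centers = [data[rng.integers(len(data))]]
        for _ in range(k - 1):
            d2 = np.min([np.sum((data - c) ** 2, axis=1) for c in centers], axis=0)
            centers.append(data[rng.choice(len(data), p=d2 / d2.sum())])
        return np.array(centers)

    def monitor(x, centers, radii):
        # nearest-neighbor search over centroids; inside the cluster
        # boundary -> nominal, outside -> off-nominal
        d = np.linalg.norm(centers - x, axis=1)
        j = int(np.argmin(d))
        return d[j] <= radii[j]
    ```

    The seeded centroids would then be refined by ordinary K-means before the cluster boundaries are fixed.
    
    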

    Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    Prior to the launch of STS-119, NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other, potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm, discusses results on a comprehensive set of Shuttle missions, and describes the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
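    The ensemble idea can be sketched with bootstrapped least-squares members; the actual Virtual Sensors ensemble is more sophisticated, and the model form, member count, and 4-sigma fault threshold here are assumptions:

    ```python
    import numpy as np

    def train_ensemble(X, y, n_models=20, seed=0):
        # bootstrap ensemble: each member is a least-squares fit to a
        # different resample of the nominal training data
        rng = np.random.default_rng(seed)
        Xb = np.column_stack([X, np.ones(len(X))])  # add bias column
        models = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), len(X))
            w, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
            models.append(w)
        return models

    def virtual_sensor(models, x):
        xb = np.append(x, 1.0)
        preds = np.array([xb @ w for w in models])
        # ensemble mean estimates the sensed quantity; the spread of
        # member predictions serves as the uncertainty estimate
        return preds.mean(), preds.std()

    def fault_detected(measured, estimate, sigma, k=4.0, floor=1e-3):
        # a measurement far outside the ensemble's uncertainty band
        # suggests a fault in the physical sensor or subsystem
        return abs(measured - estimate) > k * max(sigma, floor)
    ```

    The uncertainty term matters during transients such as engine startup, when the ensemble members legitimately disagree and the band widens.
    
    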

    A GAN Approach for Anomaly Detection in Spacecraft Telemetries

    In spacecraft health management, a large number of time series are acquired and used for on-board unit surveillance and for historical data analysis. The early detection of abnormal behaviors in telemetry data can prevent failures in the spacecraft equipment. In this paper we present an advanced monitoring system developed in partnership with Thales Alenia Space Italia S.p.A, a leading industry in the field of spacecraft manufacturing. In particular, we developed an anomaly detection algorithm based on Generative Adversarial Networks, which, thanks to their ability to model arbitrary distributions in high-dimensional spaces, make it possible to capture complex anomalies while avoiding the burden of hand-crafted feature extraction. We applied this method to detect anomalies in telemetry data collected from a simulator of a Low Earth Orbit satellite. One of the strengths of the proposed approach is that it does not require any previous knowledge of the signal. This is particularly useful in the context of anomaly detection, where we do not have a model of the anomaly. Hence the only assumption we make is that an anomaly is a pattern that lives in a lower-probability region of the data space.
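    The scoring side of such a detector can be sketched independently of GAN training; `toy_discriminator` below is a stand-in for a trained discriminator network, and the 0.5 threshold is an illustrative choice:

    ```python
    import numpy as np

    def anomaly_scores(windows, discriminator):
        # a trained GAN discriminator assigns high probability to samples
        # resembling nominal telemetry; a low output marks a suspect window
        return np.array([1.0 - discriminator(w) for w in windows])

    def flag_anomalies(windows, discriminator, threshold=0.5):
        return anomaly_scores(windows, discriminator) > threshold

    def toy_discriminator(window):
        # illustrative stand-in for a trained network: here nominal
        # telemetry is assumed to lie near zero mean
        return float(np.exp(-np.mean(window ** 2)))
    ```

    Only the low-probability assumption about anomalies enters the scoring rule; no model of the fault itself is required.
    
    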

    General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations

    Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, or model-based or rule-based methods, is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general-purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next-generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.

    Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine

    The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. Decision trees were chosen since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by searching through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne, known as the Detailed Real-Time Model (DRTM), was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise and a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations and tested using the remaining 45. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
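    The gain-based splitting that C4.5 inherits from ID3 can be sketched on categorical data; this toy builder omits C4.5's gain-ratio normalization, continuous-attribute handling, and pruning, and the two-feature leak example below is hypothetical:

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def info_gain(rows, labels, feat):
        # expected entropy reduction from splitting on one feature --
        # the criterion C4.5 inherits from ID3 (C4.5 normalizes it)
        total, n = entropy(labels), len(rows)
        for value in set(r[feat] for r in rows):
            sub = [lab for r, lab in zip(rows, labels) if r[feat] == value]
            total -= len(sub) / n * entropy(sub)
        return total

    def build_tree(rows, labels, feats):
        if len(set(labels)) == 1:
            return labels[0]                       # pure leaf
        if not feats:
            return Counter(labels).most_common(1)[0][0]  # majority leaf
        best = max(feats, key=lambda f: info_gain(rows, labels, f))
        branches = {}
        for value in set(r[best] for r in rows):
            idx = [i for i, r in enumerate(rows) if r[best] == value]
            branches[value] = build_tree([rows[i] for i in idx],
                                         [labels[i] for i in idx],
                                         [f for f in feats if f != best])
        return (best, branches)

    def classify(tree, row):
        while isinstance(tree, tuple):
            feat, branches = tree
            tree = branches[row[feat]]
        return tree
    ```

    On labeled nominal/leak examples, the learned tree routes an unseen data vector to a leaf naming the no-leak class or a leak location.
    
    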

    Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and the various techniques within each approach) uses a different representation of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results, and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect the resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.
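    Metrics of the kind described can be computed from paired ground-truth/diagnosis records; the exact metric set and record format below are illustrative assumptions, not the framework's standardized definitions:

    ```python
    def detection_metrics(results):
        # each record: (fault_injected, fault_detected, true_cause, diagnosed_cause)
        tp = sum(1 for inj, det, _, _ in results if inj and det)
        fp = sum(1 for inj, det, _, _ in results if not inj and det)
        fn = sum(1 for inj, det, _, _ in results if inj and not det)
        tn = sum(1 for inj, det, _, _ in results if not inj and not det)
        isolated = sum(1 for inj, det, truth, diag in results
                       if inj and det and truth == diag)
        return {
            "detection_rate": tp / max(tp + fn, 1),      # detected injected faults
            "false_alarm_rate": fp / max(fp + tn, 1),    # alarms on healthy runs
            "isolation_accuracy": isolated / max(tp, 1), # correct cause among detections
        }
    ```

    Running several algorithms over the same scenario set and comparing such scores is exactly the kind of head-to-head evaluation the framework is meant to enable.
    
    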