    Develop Advanced Nonlinear Signal Analysis Topographical Mapping System

    During the development of the SSME, a hierarchy of advanced signal analysis techniques for mechanical signature analysis was developed by NASA and AI Signal Research Inc. (ASRI) to improve the safety and reliability of Space Shuttle operations. These techniques can extract diagnostic information hidden in a measured signal that is often unidentifiable with conventional signal analysis methods. Because of the highly interactive processing requirements and the volume of dynamic data involved, detailed diagnostic analysis had been performed manually, requiring immense man-hours and extensive human interaction. To overcome this manual process, NASA implemented this program to develop an Advanced nonlinear signal Analysis Topographical Mapping System (ATMS) providing automatic, unsupervised engine diagnostic capabilities. The ATMS uses a rule-based CLIPS expert system to supervise a hierarchy of diagnostic signature analysis techniques in the Advanced Signal Analysis Library (ASAL). ASAL performs automatic signal processing, archiving, and anomaly detection/identification tasks in order to provide an intelligent and fully automated engine diagnostic capability. The ATMS has been successfully developed under this contract: the program objectives to design, develop, test, and conduct performance evaluation for an automated engine diagnostic system have been achieved, and software implementation of the entire ATMS on MSFC's OISPS computer has been completed. The significance of the ATMS lies in its fully automated coherence analysis capability for anomaly detection and identification, which can greatly enhance the power and reliability of engine diagnostic evaluation. The results have demonstrated that ATMS can significantly reduce the time and man-hours needed for engine test/flight data analysis and for performance evaluation of large volumes of dynamic test data.
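    The abstract gives no implementation details of the coherence analysis, but the idea it describes, automatically screening pairs of dynamic measurements for unexpectedly coherent frequency content, can be sketched in a few lines. The Python sketch below is illustrative only: the function name, threshold, and two-channel accelerometer setup are assumptions, not part of the ATMS.

```python
import numpy as np
from scipy.signal import coherence

def flag_coherent_bins(x, y, fs, threshold=0.8, nperseg=1024):
    """Flag frequency bins where two sensor channels are strongly coherent.

    High coherence at an unexpected frequency suggests a shared mechanical
    source (e.g., a developing rotor anomaly) rather than independent
    broadband noise.  Threshold and segment length are illustrative.
    """
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    mask = cxy > threshold
    return f[mask], cxy[mask]

# Synthetic example: two channels sharing a 400 Hz component buried in noise.
fs = 10_000
t = np.arange(0, 2.0, 1 / fs)
common = np.sin(2 * np.pi * 400 * t)
ch_a = common + 0.5 * np.random.randn(t.size)
ch_b = 0.8 * common + 0.5 * np.random.randn(t.size)

freqs, scores = flag_coherent_bins(ch_a, ch_b, fs)
print("coherent bins near (Hz):", freqs.round(1))
```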

    Anomaly detection and fault diagnostics for underwater gliders using deep learning

    Underwater Gliders (UGs) are a type of Autonomous Underwater Vehicle (AUV) used extensively for long-term observation of key physical oceanographic parameters [1]. They operate remotely at a low surge speed of approximately 0.3 m/s, with deployments lasting several months [2]. However, developing Near Real-Time (NRT) anomaly detection and fault diagnostics systems for such vehicles remains challenging, as decimated sensor data can only be transmitted off-board periodically, when the UG is on the surface. As part of an ongoing collaboration, the authors have previously developed anomaly detection systems for UGs via different approaches. In [3], a simple but effective system was developed to detect wing loss using the roll angle. In [4], system identification techniques were employed to detect changes in model parameters, which successfully revealed both simulated and natural marine growth. Anderlini et al. [5] further conducted a field test to validate a marine growth detection system for UGs using ensembles of regression trees. In [6], a range of deep learning techniques was investigated to achieve over-the-horizon anomaly detection for UGs. In [7], an anomaly detection system based on an improved Bi-directional Generative Adversarial Network (BiGAN) was prototyped to enable generic detection of different types of anomalies. For UGs operated over the horizon, some faults are only revealed when the faulty vehicles are recovered, and it is often unclear when the faults developed. Undetected faults can lead to critical failures and the loss of the vehicle and/or its data cargo. It is therefore essential to understand the actual causes of high anomaly scores during remote monitoring, so that operators can take appropriate mitigations to minimise subsequent risks and maximise the successful delivery of the remainder of the deployment. This paper further compares the results acquired in [7] with other baseline approaches. In addition, a new supervised fault diagnostics method for UGs is proposed: the BiGAN-based anomaly detection system is applied to estimate when faults developed, so that the training dataset for the supervised fault diagnostics model can be accurately annotated. The results suggest that the BiGAN-based anomaly detection system successfully detects different types of anomalies, in good agreement with model-based and rule-based approaches, and that the supervised fault diagnostics system achieves high accuracy on the available test dataset.
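    The improved BiGAN of [7] is not reproduced here, but the general BiGAN anomaly-scoring recipe, scoring a sample by how badly the generator reconstructs it from its encoding plus a discriminator feature-matching term, can be sketched compactly. Everything below, including the tiny untrained networks, feature dimensions, and the weighting lam, is an illustrative assumption, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for a trained BiGAN; in practice these would be trained
# on decimated glider telemetry (roll, pitch, depth rate, ...).
DIM_X, DIM_Z = 8, 4
E = nn.Sequential(nn.Linear(DIM_X, 16), nn.ReLU(), nn.Linear(16, DIM_Z))  # encoder
G = nn.Sequential(nn.Linear(DIM_Z, 16), nn.ReLU(), nn.Linear(16, DIM_X))  # generator
D_feat = nn.Sequential(nn.Linear(DIM_X + DIM_Z, 32), nn.ReLU())           # discriminator features

@torch.no_grad()
def anomaly_score(x, lam=0.1):
    """BiGAN-style score: L1 reconstruction error plus discriminator
    feature-matching error; higher means more anomalous."""
    z = E(x)
    x_hat = G(z)
    recon = (x - x_hat).abs().sum(dim=1)
    f_real = D_feat(torch.cat([x, z], dim=1))
    f_fake = D_feat(torch.cat([x_hat, z], dim=1))
    feat = (f_real - f_fake).abs().sum(dim=1)
    return (1 - lam) * recon + lam * feat

print(anomaly_score(torch.randn(5, DIM_X)))
```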

    Fast, Robust, and Versatile Event Detection through HMM Belief State Gradient Measures

    Event detection is a critical feature in data-driven systems, as it assists with the identification of nominal and anomalous behavior. It is increasingly relevant in robotics as robots operate with greater autonomy in increasingly unstructured environments. In this work, we present an accurate, robust, fast, and versatile measure for skill and anomaly identification. A theoretical proof establishes the link between the derivative of the log-likelihood of the HMM filtered belief state and the latest emission probabilities. The key insight is this inverse relationship, which allows gradient analysis to be used for skill and anomaly identification. Our measure showed better performance across all metrics than related state-of-the-art works. The result is broadly applicable to domains that use HMMs for event detection.
    Comment: 8 pages, 7 figures, double column, IEEE conference format
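    The paper's measure is derived analytically, but a rough numerical analogue is easy to state: run the HMM forward filter and watch the per-step increment of the log-likelihood (the log of each step's normalization constant); a sharp drop marks observations that the current belief state explains poorly. The sketch below uses a toy discrete HMM with assumed parameters and an assumed flagging threshold, not the paper's model.

```python
import numpy as np

def loglik_increments(obs, A, B, pi):
    """Forward filter for a discrete HMM; returns log p(o_t | o_1..t-1)
    at each step, i.e. a finite-difference 'gradient' of the filtered
    log-likelihood.  A sharp drop flags a likely skill change or anomaly."""
    belief = pi * B[:, obs[0]]
    c = belief.sum()
    belief /= c
    incs = [np.log(c)]
    for o in obs[1:]:
        belief = (belief @ A) * B[:, o]   # predict, then correct
        c = belief.sum()
        belief /= c
        incs.append(np.log(c))
    return np.array(incs)

# Two-state toy model; a sudden switch in emissions drives the increments down.
A = np.array([[0.95, 0.05], [0.05, 0.95]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
obs = np.array([0] * 20 + [1] * 3)
g = loglik_increments(obs, A, B, pi)
print("flagged steps:", np.where(g < np.log(0.2))[0])
```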

    On-line transformer condition monitoring through diagnostics and anomaly detection

    This paper describes the end-to-end components of an on-line system for diagnostics and anomaly detection. The system provides condition monitoring capabilities for two in-service transmission transformers in the UK. These transformers are nearing the end of their design life, and it is hoped that intensive monitoring will enable them to stay in service for longer. The paper discusses the requirements on a system for interpreting data from the sensors installed on site, and describes the operation of the specific diagnostic and anomaly detection techniques employed. The system is deployed on a substation computer, collecting and interpreting site data on-line.
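    The summary does not name the specific techniques used, so the snippet below shows only a generic first-line check of the kind such a monitoring system might run on slowly varying transformer telemetry: a trailing-window z-score on a sensor stream. The window length, threshold, and temperature signal are all assumptions.

```python
import numpy as np

def rolling_zscore_flags(x, window=144, z_thresh=4.0):
    """Flag samples that deviate strongly from the trailing window's mean;
    a simple screen for slowly varying telemetry such as top-oil temperature."""
    flags = np.zeros(x.size, dtype=bool)
    for i in range(window, x.size):
        w = x[i - window:i]
        sigma = w.std()
        if sigma > 0 and abs(x[i] - w.mean()) > z_thresh * sigma:
            flags[i] = True
    return flags

temps = 55 + 2 * np.random.randn(1000)   # synthetic temperature stream
temps[700] = 80.0                        # injected fault spike
print("flagged indices:", np.where(rolling_zscore_flags(temps))[0])
```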

    Artificial intelligence driven anomaly detection for big data systems

    The main goal of this thesis is to contribute to the research on automated performance anomaly detection and interference prediction by implementing Artificial Intelligence (AI) solutions for complex distributed systems, especially Big Data platforms within cloud computing environments. Late detection and manual resolution of performance anomalies and system interference in Big Data systems may lead to performance violations and financial penalties. Motivated by this issue, we propose AI-based methodologies for anomaly detection and interference prediction tailored to Big Data and containerized batch platforms, to better analyze system performance and effectively utilize computing resources within cloud environments. New, precise, and efficient performance management methods are therefore key to handling performance anomalies and interference impacts and to improving the efficiency of data center resources. The first part of this thesis contributes to performance anomaly detection for in-memory Big Data platforms. We examine the performance of Big Data platforms and justify our choice of the in-memory Apache Spark platform. An artificial neural network-driven methodology is proposed to detect and classify performance anomalies for batch workloads based on RDD characteristics and operating system monitoring metrics. Our method is evaluated against other popular machine learning (ML) algorithms, as well as on four different monitoring datasets. The results show that our proposed method outperforms the other ML methods, typically achieving 98–99% F-scores. Moreover, we demonstrate that a random start instant, a random duration, and overlapped anomalies do not significantly impact the performance of our methodology. The second contribution addresses the challenge of anomaly identification within an in-memory streaming Big Data platform by investigating agile hybrid learning techniques. We develop TRACK (neural neTwoRk Anomaly deteCtion in sparK) and TRACK-Plus, two methods to efficiently train a class of machine learning models for performance anomaly detection using a fixed number of experiments. Our model uses artificial neural networks with Bayesian Optimization (BO) to find the optimal training dataset size and configuration parameters, so that the anomaly detection model can be trained efficiently to high accuracy. The objective is to accelerate the search for the training dataset size, optimize neural network configurations, and improve the performance of anomaly classification. A validation based on several datasets from a real Apache Spark Streaming system demonstrates that the proposed methodology can efficiently identify performance anomalies, near-optimal configuration parameters, and a near-optimal training dataset size while reducing the number of experiments by up to 75% compared with naïve anomaly detection training. The last contribution overcomes the challenges of predicting the completion time of containerized batch jobs and proactively avoiding performance interference by introducing an automated prediction solution that estimates interference among colocated batch jobs within the same computing environment. An AI-driven model is implemented to predict interference among batch jobs before it occurs within the system. Our interference detection model can estimate and alleviate the task slowdown caused by interference. This model assists system operators in making accurate decisions to optimize job placement. Our model is agnostic to the business logic internal to each job; instead, it is learned from system performance data by applying artificial neural networks to predict the completion time of batch jobs within cloud environments. We compare our model with three baseline models (a queueing-theoretic model, operational analysis, and an empirical method) on historical measurements of job completion time and CPU run-queue size (i.e., the number of active threads in the system). The proposed model captures multithreading, operating system scheduling, sleeping time, and job priorities. A validation based on 4,500 experiments with the DaCapo benchmarking suite confirms the predictive efficiency and capabilities of the proposed model, which achieves up to 10% MAPE compared with the other models.
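    The thesis's own models (TRACK, TRACK-Plus, the interference predictor) are not reproduced here, but the basic pattern of the first contribution, a neural classifier over operating-system monitoring metrics scored by F-measure, can be sketched with synthetic data. The feature layout, mean shifts, and network size below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in for monitoring metrics (CPU, memory, I/O wait, ...);
# anomalous windows shift the mean of a few metrics.
X_norm = rng.normal(0.0, 1.0, size=(2000, 6))
X_anom = rng.normal(0.0, 1.0, size=(400, 6)) + np.array([2.0, 0, 1.5, 0, 0, 1.0])
X = np.vstack([X_norm, X_anom])
y = np.array([0] * 2000 + [1] * 400)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("F1 score:", round(f1_score(y_te, clf.predict(X_te)), 3))
```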

    Outlier detection techniques for wireless sensor networks: A survey

    In the field of wireless sensor networks, measurements that significantly deviate from the normal pattern of sensed data are considered outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and the specific requirements and limitations of these networks. This survey provides a comprehensive overview of existing outlier detection techniques developed specifically for wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline for selecting a technique suitable for the application at hand, based on characteristics such as data type, outlier type, outlier identity, and outlier degree.
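    As a concrete instance of one common family the survey covers, statistical neighbourhood-based detection, the sketch below flags a node whose reading deviates from its neighbourhood's median using the modified z-score (median absolute deviation). The threshold and the temperature example are assumptions for illustration, not drawn from the survey.

```python
import numpy as np

def spatial_outliers(readings, z_thresh=3.5):
    """Flag readings far from the neighbourhood median via the modified
    z-score (MAD); suits WSNs where neighbouring nodes sense a spatially
    correlated field."""
    r = np.asarray(readings, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    if mad == 0:
        return np.zeros(r.size, dtype=bool)
    mod_z = 0.6745 * (r - med) / mad
    return np.abs(mod_z) > z_thresh

# Temperatures reported within one neighbourhood; index 4 is a faulty node.
print(spatial_outliers([21.2, 21.4, 21.1, 21.3, 35.0, 21.2]))
```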