
    A Dual-Stream architecture based on Neural Turing Machine and Attention for the Remaining Useful Life Estimation problem

    Reliably estimating the Remaining Useful Life (RUL) of a mechanical component is a fundamental task in the field of Prognostics and Health Management (PHM). In recent years, greater availability of high-quality sensors and ease of data gathering have given rise to data-driven deep learning models for this task, which has recently seen the introduction of "dual-stream" architectures. In this paper we propose a dual-stream architecture that addresses the RUL estimation problem by exploiting a Neural Turing Machine (NTM) and a Multi-Head Attention (MHA) mechanism. The NTM is a content-based memory addressing system that gives each of the streams the ability to access and interact with the memory, and acts as a fusion technique. The MHA is an attention mechanism added as a means for our architecture to identify relations between different sensor data and thereby reveal hidden patterns among them. To evaluate the performance of our model, we considered the C-MAPSS dataset, a benchmark dataset published by NASA consisting of several time series related to the life of turbofan engines. We show that our approach achieves the best prediction score (which measures the safety of the predictions) in the available literature on two of the C-MAPSS subdatasets.
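    The paper does not give implementation details here, but the multi-head self-attention component it names is standard. The following is a minimal NumPy sketch of scaled dot-product multi-head self-attention over a window of sensor readings; the random projection matrices, window length, and feature dimension are placeholder assumptions, standing in for learned parameters.

    ```python
    import numpy as np

    def softmax(s):
        # Numerically stable softmax over the last axis.
        e = np.exp(s - s.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def multi_head_self_attention(x, num_heads, rng):
        """Scaled dot-product multi-head self-attention.

        x: (seq_len, d_model) window of sensor features.
        Returns the attended output and the per-head attention maps.
        Projection weights are random placeholders for learned matrices.
        """
        seq_len, d_model = x.shape
        d_k = d_model // num_heads
        Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                          for _ in range(4))

        def split(h):  # (seq, d_model) -> (heads, seq, d_k)
            return h.reshape(seq_len, num_heads, d_k).transpose(1, 0, 2)

        Q, K, V = split(x @ Wq), split(x @ Wk), split(x @ Wv)
        attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_k))  # (heads, seq, seq)
        out = attn @ V                                           # (heads, seq, d_k)
        out = out.transpose(1, 0, 2).reshape(seq_len, d_model)   # concatenate heads
        return out @ Wo, attn
    ```

    Each head attends over the full window independently, so different heads can pick up different cross-sensor relations, which is the role the abstract assigns to the MHA block.
    
    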

    ART Neural Networks for Remote Sensing Image Analysis

    ART and ARTMAP neural networks for adaptive recognition and prediction have been applied to a variety of problems, including automatic mapping from remote sensing satellite measurements, parts design retrieval at the Boeing Company, medical database prediction, and robot vision. This paper features a self-contained introduction to ART and ARTMAP dynamics. An application of these networks to image processing is illustrated by means of a remote sensing example. The basic ART and ARTMAP networks feature winner-take-all (WTA) competitive coding, which groups inputs into discrete recognition categories. WTA coding in these networks enables fast learning, which allows the network to encode important rare cases but which may lead to inefficient category proliferation with noisy training inputs. This problem is partially solved by ART-EMAP, which uses WTA coding for learning but distributed category representations for test-set prediction. Recently developed ART models (dART and dARTMAP) retain stable coding, recognition, and prediction, but allow arbitrarily distributed category representation during learning as well as performance.
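    The WTA choice-and-vigilance cycle the abstract describes can be sketched concretely. Below is a minimal fuzzy-ART-style step, not the paper's own implementation: the winning category is the one with the highest choice activation, a vigilance test decides whether it may learn the input, and a new category is committed if every existing one fails. The parameter values (`rho`, `alpha`, `beta`) and the function name are illustrative assumptions.

    ```python
    import numpy as np

    def fuzzy_art_step(x, weights, rho=0.75, alpha=0.001, beta=1.0):
        """One presentation of input x (components in [0, 1]) to a
        fuzzy-ART-style category layer.

        weights: list of category weight vectors (possibly empty).
        Returns (winning_category_index, updated_weights).
        """
        if not weights:
            # No categories yet: commit the input as the first category.
            return 0, [x.copy()]
        # Winner-take-all choice: rank categories by choice activation
        # T_j = |x ^ w_j| / (alpha + |w_j|), where ^ is component-wise min.
        T = [np.minimum(x, w).sum() / (alpha + w.sum()) for w in weights]
        for j in np.argsort(T)[::-1]:
            w = weights[j]
            # Vigilance test: does the winner match the input closely enough?
            if np.minimum(x, w).sum() / x.sum() >= rho:
                # Learn: move the template toward x ^ w (fast learning at beta=1).
                weights[j] = beta * np.minimum(x, w) + (1 - beta) * w
                return j, weights
        # All categories failed vigilance: commit a new category for x.
        weights.append(x.copy())
        return len(weights) - 1, weights
    ```

    With high vigilance, novel inputs that match no template commit new categories, which illustrates both the fast encoding of rare cases and the category-proliferation risk under noise that the abstract mentions.
    
    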