
    Predicting Remaining Useful Life using Time Series Embeddings based on Recurrent Neural Networks

    We consider the problem of estimating the remaining useful life (RUL) of a system or a machine from sensor data. Many approaches to RUL estimation from sensor data make assumptions about how machines degrade. Additionally, sensor data from machines are noisy and often suffer from missing values in many practical settings. We propose Embed-RUL: a novel approach for RUL estimation from sensor data that does not rely on any degradation-trend assumptions, is robust to noise, and handles missing values. Embed-RUL utilizes a sequence-to-sequence model based on Recurrent Neural Networks (RNNs) to generate embeddings for multivariate time series subsequences. The embeddings for normal and degraded machines tend to differ, and are therefore useful for RUL estimation. We show that the embeddings capture the overall pattern in the time series while filtering out the noise, so that the embeddings of two machines with similar operational behavior are close to each other, even when their sensor readings have significant and varying levels of noise. We perform experiments on a publicly available turbofan engine dataset and a proprietary real-world dataset, and demonstrate that Embed-RUL outperforms the previously reported state-of-the-art on several metrics.
    Comment: Presented at the 2nd ML for PHM Workshop at SIGKDD 2017, Halifax, Canada
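    As a rough illustration of the encoder half of such a model (not the authors' implementation: Embed-RUL trains the weights through sequence-to-sequence reconstruction, whereas here they are random for demonstration), a GRU can map a multivariate subsequence to a fixed-length embedding:

```python
import numpy as np

def gru_encoder(X, Wz, Uz, Wr, Ur, Wh, Uh):
    """Encode a multivariate time series X (T, d) into a fixed-length
    embedding: the final hidden state of a single-layer GRU. In a
    trained seq2seq model, the weight matrices come from learning to
    reconstruct the input subsequence."""
    h = np.zeros(Wz.shape[0])
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    for x in X:                               # iterate over time steps
        z = sigmoid(Wz @ x + Uz @ h)          # update gate
        r = sigmoid(Wr @ x + Ur @ h)          # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
        h = (1 - z) * h + z * h_tilde
    return h                                  # the time-series embedding

# toy demo: d=3 sensor channels, hidden (embedding) size 8, T=50 steps
rng = np.random.default_rng(0)
d, H_dim, T = 3, 8, 50
W = {k: rng.normal(scale=0.3, size=(H_dim, d)) for k in "zrh"}
U = {k: rng.normal(scale=0.3, size=(H_dim, H_dim)) for k in "zrh"}
X = np.column_stack([np.sin(np.linspace(0, 6, T)),
                     np.cos(np.linspace(0, 6, T)),
                     np.linspace(0, 1, T)])
emb = gru_encoder(X, W["z"], U["z"], W["r"], U["r"], W["h"], U["h"])
print(emb.shape)   # (8,)
```

    After training, subsequences with similar operational behavior would map to nearby embeddings, which is what makes the distance between a machine's current embedding and "normal" embeddings usable for RUL estimation.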

    Deep learning for time series classification: a review

    Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) for this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
    Comment: Accepted at Data Mining and Knowledge Discovery

    MultiWave: Multiresolution Deep Architectures through Wavelet Decomposition for Multivariate Time Series Prediction

    The analysis of multivariate time series data is challenging due to the various frequencies of signal changes that can occur over both short and long terms. Furthermore, standard deep learning models are often unsuitable for such datasets, as signals are typically sampled at different rates. To address these issues, we introduce MultiWave, a novel framework that enhances deep learning time series models by incorporating components that operate at the intrinsic frequencies of signals. MultiWave uses wavelets to decompose each signal into subsignals of varying frequencies and groups them into frequency bands. Each frequency band is handled by a different component of our model. A gating mechanism combines the output of the components to produce sparse models that use only specific signals at specific frequencies. Our experiments demonstrate that MultiWave accurately identifies informative frequency bands and improves the performance of various deep learning models, including LSTM, Transformer, and CNN-based models, for a wide range of applications. It attains top performance in stress and affect detection from wearables. It also increases the AUC of the best-performing model by 5% for in-hospital COVID-19 mortality prediction from patient blood samples and for human activity recognition from accelerometer and gyroscope data. We show that MultiWave consistently identifies critical features and their frequency components, thus providing valuable insights into the applications studied.
    Comment: Published in the Conference on Health, Inference, and Learning (CHIL 2023)
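    A minimal sketch of the decomposition step (using the Haar wavelet for brevity; MultiWave's actual wavelet choice, band grouping, and per-band model components are not reproduced here):

```python
import numpy as np

def haar_decompose(x, levels):
    """Split signal x (length divisible by 2**levels) into subsignals
    of decreasing frequency via the Haar wavelet: one detail band per
    level plus a final low-frequency approximation. A MultiWave-style
    model would route each band to its own component."""
    bands = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2)   # high-frequency content
        approx = (even + odd) / np.sqrt(2)   # low-frequency residue
        bands.append(detail)
    bands.append(approx)
    return bands   # [finest detail, ..., coarsest detail, approximation]

# a fast oscillation lands in the finest detail band, the slow trend
# in the approximation band
t = np.arange(64)
x = np.sin(2 * np.pi * t / 4) + 0.05 * t
bands = haar_decompose(x, levels=3)
print([b.shape[0] for b in bands])   # [32, 16, 8, 8]
```

    Because the Haar transform is orthonormal, the bands partition the signal's energy exactly, so no information is lost before the per-band components and the gating mechanism decide what to use.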

    Statistical signal processing for echo signals from ultrasound linear and nonlinear scatterers


    Methods for cleaning the BOLD fMRI signal

    Available online 9 December 2016. http://www.sciencedirect.com/science/article/pii/S1053811916307418?via%3Dihub
    Blood oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI) has rapidly become a popular technique for investigating brain function in healthy individuals and patients, as well as in animal studies. However, the BOLD signal arises from a complex mixture of neuronal, metabolic and vascular processes; it is therefore an indirect measure of neuronal activity that is further severely corrupted by multiple non-neuronal fluctuations of instrumental, physiological or subject-specific origin. This review aims to provide a comprehensive summary of existing methods for cleaning the BOLD fMRI signal. The description is given from a methodological point of view, focusing on how the different techniques operate and pointing out the advantages and limitations of their application. Since motion-related and physiological noise fluctuations are two of the main noise components of the signal, techniques targeting their removal are addressed first, including both data-driven approaches and approaches using external recordings. Data-driven approaches, which are less specific in the assumed model and can simultaneously reduce multiple noise fluctuations, are mainly based on data decomposition techniques such as principal and independent component analysis. Importantly, the usefulness of strategies that exploit the information available in the phase component of the signal, or in multiple signal echoes, is also highlighted. The use of global signal regression for denoising is also addressed. Finally, practical recommendations regarding the optimization of the preprocessing pipeline for denoising, and future avenues of research, are indicated. Throughout the review, we emphasize the importance of signal denoising as an essential step in the analysis pipeline of task-based and resting-state fMRI studies.
    This work was supported by the Spanish Ministry of Economy and Competitiveness [Grant PSI 2013–42343 Neuroimagen Multimodal] and the Severo Ochoa Programme for Centres/Units of Excellence in R & D [SEV-2015-490]; the research and writing of the paper were supported by the NIMH and NINDS Intramural Research Programs (ZICMH002888) of the NIH/HHS.
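    The regressor-based denoising techniques the review covers (global signal regression, motion-parameter regression, and similar) all reduce, in their simplest form, to ordinary least squares against nuisance time courses. A hedged numpy sketch, with invented toy data rather than real fMRI:

```python
import numpy as np

def regress_out(Y, nuisance):
    """Remove nuisance time courses (e.g. motion parameters or the
    global mean signal) from voxel time series Y (T, V) by ordinary
    least squares, returning the residuals. This is the core operation
    behind regressor-based fMRI denoising."""
    X = np.column_stack([np.ones(len(Y)), nuisance])  # add intercept
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Y - X @ beta

# toy example: two "voxels" contaminated by a shared global drift
rng = np.random.default_rng(1)
T = 200
drift = np.linspace(0, 1, T)
Y = np.outer(drift, [2.0, -1.0]) + 0.01 * rng.standard_normal((T, 2))
clean = regress_out(Y, drift[:, None])  # residuals orthogonal to drift
```

    The same mechanism accepts any regressor matrix, which is why component time courses from PCA or ICA decompositions, or physiological recordings, can be plugged in unchanged; the debates the review discusses (e.g. around global signal regression) concern which regressors are safe to remove, not the regression itself.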

    Wind power prediction based on WT-BiGRU-attention-TCN model

    Accurate wind power prediction is crucial for the safe and stable operation of the power grid. However, wind power generation exhibits large random volatility and intermittency, which increases the difficulty of prediction. In order to construct an effective prediction model for wind power generation and achieve stable grid dispatch after wind power is connected to the grid, a wind power prediction model based on WT-BiGRU-Attention-TCN is proposed. First, the wavelet transform (WT) is used to denoise the sample data. Then, a temporal attention mechanism is incorporated into the bi-directional gated recurrent unit (BiGRU) model to highlight the impact of key time steps on the prediction results while fully extracting the temporal features of the context. Finally, model performance is enhanced by extracting further high-level temporal features through a temporal convolutional network (TCN). The results show that our proposed model outperforms the other baseline models, achieving a root mean square error of 0.066 MW, a mean absolute percentage error of 18.876%, and a coefficient of determination (R2) of 0.976. These results indicate that the WT noise-reduction step significantly improves model performance, and that the temporal attention mechanism and TCN further improve prediction accuracy.
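    A toy sketch of the temporal attention step (illustrative only: in the paper the BiGRU hidden states and the scoring parameters are learned, whereas here both are hand-picked):

```python
import numpy as np

def temporal_attention(H, w):
    """Softmax temporal attention over hidden states H (T, d):
    score each time step with vector w, normalize the scores into
    weights, and return the weighted sum, so that influential time
    steps dominate the sequence summary."""
    scores = H @ w                        # one score per time step, (T,)
    alpha = np.exp(scores - scores.max()) # numerically stable softmax
    alpha /= alpha.sum()                  # attention weights sum to 1
    return alpha @ H, alpha               # context (d,), weights (T,)

# toy: time step 2 has the largest score, so it gets the most weight
H = np.array([[0.1, 0.0], [0.2, 0.1], [1.5, 1.0], [0.0, 0.2]])
w = np.array([1.0, 1.0])
context, alpha = temporal_attention(H, w)
print(np.argmax(alpha))   # 2
```

    In the full model the context vector would then feed the TCN layers, which extract the higher-level temporal features the abstract mentions.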