111 research outputs found
GAF-CNN-LSTM for Multivariate Time-Series Images Forecasting
Forecasting multivariate time series is challenging for many reasons, including the presence of multiple input variables, the need for time series preparation, and the need to perform the same type of prediction at multiple physical sites. While the literature on time series forecasting focuses largely on 1D signals, we use Gramian Angular Fields (GAFs) to encode time series as 2D texture images and then take advantage of a deep CNN-LSTM architecture in which the LSTM uses a CNN as its front end. We thus propose a novel unified framework for forecasting multivariate time series by encoding them as images. Preliminary experimental results on the UEA multivariate time series forecasting archive demonstrate competitive forecast accuracy (RMSE and MAPE) of the proposed approach compared to existing deep approaches such as LSTM, CRNN, and 1D-MTCNN.
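The GAF encoding this abstract relies on can be sketched in a few lines. The minimal pure-Python version below computes the Gramian Angular Summation Field (GASF) variant, assuming min-max rescaling to [-1, 1]; the function name and details are illustrative, not taken from the paper:

```python
import math

def gasf(series):
    """Gramian Angular Summation Field: encode a 1-D series as a 2-D image.

    Steps: min-max rescale to [-1, 1], map each value to an angle via
    arccos, then fill the image with cos(phi_i + phi_j).
    """
    lo, hi = min(series), max(series)
    x = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]   # rescale
    phi = [math.acos(max(-1.0, min(1.0, v))) for v in x]     # polar angles
    return [[math.cos(pa + pb) for pb in phi] for pa in phi]
```

The resulting matrix is symmetric, and its diagonal preserves the original values (cos(2·phi_i) = 2x_i² − 1), which is what makes the image a lossless texture representation of the rescaled series.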
Classification of Time-Series Images Using Deep Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have achieved great success in image
recognition tasks by automatically learning a hierarchical feature
representation from raw data. While the majority of the Time-Series
Classification (TSC) literature focuses on 1D signals, this paper uses
Recurrence Plots (RPs) to transform time series into 2D texture images and
then takes advantage of a deep CNN classifier. The image representation of a
time series introduces feature types that are not available for 1D signals,
so TSC can be treated as a texture image recognition task. The CNN model also
allows different levels of representation to be learned together with a
classifier, jointly and automatically. Therefore, using RPs and CNNs in a
unified framework is expected to boost the recognition rate of TSC.
Experimental results on the UCR time-series classification archive
demonstrate competitive accuracy of the proposed approach, compared not only
to existing deep architectures but also to state-of-the-art TSC algorithms.
Comment: The 10th International Conference on Machine Vision (ICMV 2017)
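The recurrence-plot transformation described above can be sketched as follows. This is a minimal 1-D version with a fixed distance threshold; real RPs usually embed the series into phase space first, and the `eps` parameter here is illustrative rather than a value from the paper:

```python
def recurrence_plot(series, eps):
    """Binary recurrence plot: R[i][j] = 1 iff |x_i - x_j| <= eps.

    Minimal 1-D sketch; phase-space embedding is omitted.
    """
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0
             for j in range(n)]
            for i in range(n)]
```

The plot is symmetric with an all-ones main diagonal; its texture patterns (diagonal lines, blocks) are exactly what a CNN can pick up as 2D features.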
Human activity recognition with inertial sensors using a deep learning approach
This research focuses on deep learning approaches to the human activity recognition (HAR) scenario, in which the inputs are multichannel time series signals acquired from a set of body-worn inertial sensors and the outputs are predefined human activities. We present a feature learning method that deploys convolutional neural networks (CNNs) to automate feature learning from the raw inputs in a systematic way. The influence of important hyper-parameters, such as the number of convolutional layers and the kernel size, on the performance of the CNN was monitored. Experimental results indicate that the CNNs achieved a significant speed-up in computing and deciding the final class, and a marginal improvement in overall classification accuracy, compared to baseline models such as Support Vector Machines and multi-layer perceptron networks.
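Before a CNN can learn features from raw inertial signals, the stream is typically segmented into fixed-width windows. A minimal sketch of that standard preprocessing step is shown below; the window width and step are illustrative, not values from the paper:

```python
def sliding_windows(signal, width, step):
    """Segment a time series into fixed-width, possibly overlapping windows.

    `signal` is a list of per-timestep samples (scalars or per-channel
    lists); each returned window becomes one CNN input example.
    """
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, step)]
```

With `step < width` the windows overlap, a common choice in HAR pipelines because it yields more training examples from the same recording.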
Deep HMResNet Model for Human Activity-Aware Robotic Systems
Endowing robotic systems with the cognitive capability to recognize the daily
activities of humans is an important challenge that requires sophisticated
and novel approaches. Most proposed approaches explore pattern recognition
techniques that are generally based on hand-crafted or learned features. In
this paper, a novel Hierarchical Multichannel Deep Residual Network
(HMResNet) model is proposed for robotic systems to recognize daily human
activities in ambient environments. The introduced model is comprised of
multilevel fusion layers. The proposed Multichannel 1D Deep Residual Network
model is, at the feature level, combined with a bottleneck MLP neural network
to automatically extract robust features regardless of the hardware
configuration and, at the decision level, is fully connected with an MLP
neural network to recognize daily human activities. Empirical experiments on
real-world datasets and an online demonstration are used to validate the
proposed model. The results demonstrate that the proposed model outperforms
the baseline models in daily human activity recognition.
Comment: Presented at AI-HRI AAAI-FSS, 2018 (arXiv:1809.06606)
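As one concrete illustration of decision-level fusion over multiple channels, the sketch below averages per-channel softmax probabilities. HMResNet's actual fusion layers are more elaborate, so treat the function names and the simple averaging rule as assumptions:

```python
import math

def softmax(logits):
    """Numerically stable softmax over one channel's class logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decision_fusion(channel_logits):
    """Average per-channel class probabilities into a single decision.

    One simple decision-level fusion rule; the paper's model learns its
    fusion with an MLP instead.
    """
    probs = [softmax(logits) for logits in channel_logits]
    n_channels = len(probs)
    n_classes = len(probs[0])
    return [sum(p[c] for p in probs) / n_channels
            for c in range(n_classes)]
```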
Understanding and Improving Recurrent Networks for Human Activity Recognition by Continuous Attention
Deep neural networks, including recurrent networks, have been successfully
applied to human activity recognition. Unfortunately, the final
representation learned by recurrent networks may encode noise (irrelevant
signal components, unimportant sensor modalities, etc.). Moreover, recurrent
networks are difficult to interpret, which makes it hard to gain insight into
the models' behavior. To address these issues, we propose two attention
models for human activity recognition: temporal attention and sensor
attention. These two mechanisms adaptively focus on important signals and
sensor modalities. To further improve understandability and the mean F1
score, we add continuity constraints, considering that continuous sensor
signals are more robust than discrete ones. We evaluate the approaches on
three datasets and obtain state-of-the-art results. Furthermore, qualitative
analysis shows that the attention learned by the models agrees well with
human intuition.
Comment: 8 pages. Published in The International Symposium on Wearable
Computers (ISWC) 201
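Temporal attention of the kind this abstract describes amounts to a softmax-weighted pooling of the recurrent hidden states. A minimal sketch under that assumption follows; the names are illustrative, and the paper's continuity constraints are omitted:

```python
import math

def temporal_attention(hidden_states, scores):
    """Softmax-weight each timestep's hidden vector, then sum over time.

    `hidden_states` is a list of per-timestep vectors; `scores` holds one
    learned attention score per timestep.
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # softmax over timesteps
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]
```

Inspecting `weights` is what makes attention interpretable: the timesteps the model relies on are exactly those with large weights.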