
    Classification of Time-Series Images Using Deep Convolutional Neural Networks

    Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature focuses on 1D signals, this paper uses Recurrence Plots (RP) to transform time series into 2D texture images and then takes advantage of a deep CNN classifier. The image representation of a time series introduces feature types that are not available for 1D signals, so TSC can be treated as a texture image recognition task. The CNN model also allows different levels of representation to be learned jointly and automatically, together with a classifier. Using RP and CNN in a unified framework is therefore expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to existing deep architectures but also to state-of-the-art TSC algorithms. (The 10th International Conference on Machine Vision, ICMV 2017.)
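    The recurrence-plot transformation described above can be sketched in a few lines. This is an illustrative NumPy version only: the distance threshold heuristic is an assumption, not the paper's exact thresholding choice.

```python
import numpy as np

def recurrence_plot(x, eps=None):
    # pairwise absolute distances between all time points of a 1D series
    d = np.abs(x[:, None] - x[None, :])
    if eps is None:
        eps = 0.1 * d.max()  # assumed heuristic threshold
    # binary texture image: 1 where the trajectory recurs within eps
    return (d <= eps).astype(np.uint8)

x = np.sin(np.linspace(0, 4 * np.pi, 100))
rp = recurrence_plot(x)  # 100x100 binary image, ready for a 2D CNN
```

    The resulting 2D array can be fed to any off-the-shelf image classifier, which is the core idea of the paper.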

    Characterisation of Dynamic Process Systems by Use of Recurrence Texture Analysis

    This thesis proposes a method for analysing the dynamic behaviour of process systems using sets of textural features extracted from distance matrices obtained from time series data. Algorithms based on grey-level co-occurrence matrices, wavelet transforms, local binary patterns, textons, and pretrained convolutional neural networks (AlexNet and VGG16) were used to extract features. The method was shown to effectively capture the dynamics of mineral process systems and could outperform competing approaches.
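    As an illustration of one of the textural feature extractors mentioned above, here is a minimal grey-level co-occurrence matrix computed over a distance matrix. The quantization level and horizontal pixel offset are assumptions; texture statistics (contrast, energy, etc.) would then be derived from the resulting matrix.

```python
import numpy as np

def glcm(img, levels=8):
    # grey-level co-occurrence matrix for horizontally adjacent pixels
    q = (img * (levels - 1)).astype(int)   # quantize an image scaled to [0, 1]
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()                      # normalized co-occurrence probabilities

# distance matrix of a toy time series, as in the thesis's pipeline
d = np.abs(np.subtract.outer(np.sin(np.arange(32.0)), np.sin(np.arange(32.0))))
p = glcm(d / d.max())
```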

    Structured Sequence Modeling with Graph Convolutional Recurrent Networks

    This paper introduces the Graph Convolutional Recurrent Network (GCRN), a deep learning model able to predict structured sequences of data. More precisely, GCRN is a generalization of classical recurrent neural networks (RNNs) to data structured by an arbitrary graph. Such structured sequences can represent series of frames in videos, spatio-temporal measurements on a network of sensors, or random walks on a vocabulary graph for natural language modeling. The proposed model combines convolutional neural networks (CNNs) on graphs, to identify spatial structures, with RNNs, to find dynamic patterns. We study two possible architectures of GCRN and apply the models to two practical problems: predicting moving MNIST data and modeling natural language with the Penn Treebank dataset. Experiments show that simultaneously exploiting graph spatial and dynamic information about the data can improve both precision and learning speed.
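    One way to picture a GCRN cell is a recurrent update in which both the input and hidden-state transforms are graph convolutions. The sketch below is a minimal NumPy caricature, not the paper's exact architecture: the identity adjacency is a placeholder and the weights are random.

```python
import numpy as np

rng = np.random.default_rng(0)
n, f_in, f_h = 5, 3, 4            # nodes, input features, hidden size
A = np.eye(n)                     # normalized adjacency (identity as a placeholder)
Wx = rng.standard_normal((f_in, f_h)) * 0.1
Wh = rng.standard_normal((f_h, f_h)) * 0.1

def gcrn_step(x_t, h_prev):
    # one recurrent step where both the input transform and the state
    # transform are one-hop graph convolutions of the form A @ X @ W
    return np.tanh(A @ x_t @ Wx + A @ h_prev @ Wh)

h = np.zeros((n, f_h))
for t in range(10):                # run the cell over a short random sequence
    h = gcrn_step(rng.standard_normal((n, f_in)), h)
```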

    Monitoring In-Home Emergency Situation and Preserve Privacy using Multi-modal Sensing and Deep Learning

    Videos and images are commonly used in home monitoring systems. However, detecting in-home emergencies while preserving privacy is a challenging task in Human Activity Recognition (HAR). In recent years, HAR combined with deep learning has drawn much attention from the general public. Moreover, relying entirely on a single sensor modality is not promising. In this paper, depth images and radar presence data were used to investigate whether such sensor data can enable a system to detect abnormal and normal situations while preserving privacy. Recurrence plots and wavelet transformations were used to make a two-dimensional representation of the presence radar data. We fused data from both sensors using data-level, feature-level, and decision-level fusion; decision-level fusion proved superior to the other two techniques. For the decision-level fusion, a combination of the depth images and presence-data recurrence plots was first used to train convolutional neural networks (CNNs). The output was then fed into support vector machines, which yielded the best accuracy of 99.98%.
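    A decision-level fusion step can be illustrated by combining per-modality class scores. The probability-averaging rule below is a simplified stand-in for illustration only; the paper itself feeds CNN outputs into support vector machines.

```python
import numpy as np

def decision_fusion(p_depth, p_radar):
    # decision-level fusion: average each modality's class probabilities,
    # then pick the class with the highest fused score per sample
    fused = (np.asarray(p_depth) + np.asarray(p_radar)) / 2
    return np.argmax(fused, axis=-1)

# two samples, two classes (e.g. normal vs. emergency); scores are made up
labels = decision_fusion([[0.9, 0.1], [0.3, 0.7]],
                         [[0.6, 0.4], [0.1, 0.9]])
```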

    Neural activity classification with machine learning models trained on interspike interval series data

    The flow of information through the brain is reflected by the activity patterns of neural cells. Indeed, these firing patterns are widely used as input data to predictive models that relate stimuli and animal behavior to the activity of a population of neurons. However, relatively little attention has been paid to single-neuron spike trains as predictors of cell or network properties in the brain. In this work, we introduce an approach to neuronal spike train data mining that enables effective classification and clustering of neuron types and network activity states based on single-cell spiking patterns. This approach centers on applying state-of-the-art time series classification/clustering methods to sequences of interspike intervals recorded from single neurons. We demonstrate good performance of these methods in tasks involving classification of neuron type (e.g. excitatory vs. inhibitory cells) and/or neural circuit activity state (e.g. awake vs. REM sleep vs. non-REM sleep) on an open-access cortical spiking activity dataset.
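    Extracting the interspike-interval sequence that such classifiers consume is essentially a one-liner; the spike timestamps below are made up for illustration.

```python
import numpy as np

def interspike_intervals(spike_times):
    # convert sorted spike timestamps (seconds) into an ISI sequence,
    # the 1D series handed to a time-series classifier or clusterer
    return np.diff(np.asarray(spike_times))

isi = interspike_intervals([0.01, 0.05, 0.12, 0.30])
```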

    MMF-DRL: Multimodal Fusion-Deep Reinforcement Learning Approach with Domain-Specific Features for Classifying Time Series Data

    This research addresses two pertinent problems in machine learning (ML): (a) the supervised classification of time series and (b) the need for large amounts of labeled images for training supervised classifiers. The novel contributions are two-fold. The first problem, time series classification, is addressed by transforming time series into domain-specific 2D features such as scalograms and recurrence plot (RP) images. The second problem, the need for large amounts of labeled image data, is tackled by proposing a new way of using a reinforcement learning (RL) technique as a supervised classifier operating on multimodal (joint-representation) scalogram and RP images. The motivation for using such domain-specific features is that they provide additional information to the ML models by capturing domain-specific patterns, and they make it possible to exploit state-of-the-art image classifiers for learning from these textured images. Thus, this research proposes a multimodal fusion (MMF) - deep reinforcement learning (DRL) approach as an alternative to traditional supervised image classifiers for the classification of time series. The proposed MMF-DRL approach produces improved accuracy over state-of-the-art supervised learning models while needing less training data. Results show the merit of using multiple modalities and RL in achieving better performance than training on a single modality. Moreover, the proposed approach yields accuracies of 90.20% and 89.63% on two physiological time series datasets with less training data, in contrast to the state-of-the-art supervised learning model ChronoNet, which achieved 87.62% and 88.02% on the same datasets with more training data.

    Time Series Classification Using Images

    This work is a contribution to the field of time series classification. We propose a novel method that transforms time series into multi-channel images, which are then classified using Convolutional Neural Networks as a readily available classifier. We present different variants of the proposed method. Time series with different characteristics are studied: univariate, multivariate, and of varying lengths. Several selected methods of time-series-to-image transformation are considered, taking into account the original series values, value changes (first differences), and changes in value changes (second differences). We present an empirical study demonstrating the quality of time series classification using the proposed approach.
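    The value/first-difference/second-difference channels described above can be sketched as a three-channel stack. The edge padding used here to keep all channels the same length is an assumption, not necessarily the paper's choice.

```python
import numpy as np

def series_to_channels(x):
    # stack the original values, first differences, and second differences
    # as three equal-length "image" channels, padding at the left edge
    d1 = np.diff(x, n=1, prepend=x[0])
    d2 = np.diff(x, n=2, prepend=[x[0], x[0]])
    return np.stack([x, d1, d2])

x = np.linspace(0.0, 1.0, 64)
img = series_to_channels(x)   # shape (3, 64), one row per channel
```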

    Discriminative Recurrent Sparse Auto-Encoders

    We present the discriminative recurrent sparse auto-encoder model, comprising a recurrent encoder of rectified linear units, unrolled for a fixed number of iterations and connected to two linear decoders that reconstruct the input and predict its supervised classification. Training via backpropagation-through-time initially minimizes an unsupervised sparse reconstruction error; the loss function is then augmented with a discriminative term on the supervised classification. The depth implicit in the temporally unrolled form allows the system to exhibit all the power of deep networks while substantially reducing the number of trainable parameters. From an initially unstructured network, the hidden units differentiate into categorical-units, each of which represents an input prototype with a well-defined class, and part-units, which represent deformations of these prototypes. The learned organization of the recurrent encoder is hierarchical: part-units are driven directly by the input, whereas the activity of categorical-units builds up over time through interactions with the part-units. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders achieve excellent performance on MNIST.
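    The unrolled recurrent ReLU encoder with two linear decoders can be caricatured as follows. Dimensions, weight scales, and the fixed iteration count T are arbitrary illustrative choices, and the sparsity and discriminative losses are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h, n_cls, T = 16, 32, 10, 5
We = rng.standard_normal((d_in, d_h)) * 0.1   # encoder input weights
Wr = rng.standard_normal((d_h, d_h)) * 0.1    # recurrent weights, shared across iterations
Wd = rng.standard_normal((d_h, d_in)) * 0.1   # linear reconstruction decoder
Wc = rng.standard_normal((d_h, n_cls)) * 0.1  # linear classification decoder

def encode(x):
    # unroll the recurrent ReLU encoder for a fixed number of iterations
    h = np.zeros(d_h)
    for _ in range(T):
        h = np.maximum(0.0, x @ We + h @ Wr)
    return h

x = rng.standard_normal(d_in)
h = encode(x)
recon, logits = h @ Wd, h @ Wc   # the two linear decoder heads
```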