Deep learning for time series classification: a review
Time Series Classification (TSC) is an important and challenging problem in
data mining. With the increase of time series data availability, hundreds of
TSC algorithms have been proposed. Among these methods, only a few have
considered Deep Neural Networks (DNNs) to perform this task. This is surprising
as deep learning has seen very successful applications in recent years. DNNs
have indeed revolutionized the field of computer vision especially with the
advent of novel deeper architectures such as Residual and Convolutional Neural
Networks. Apart from images, sequential data such as text and audio can also be
processed with DNNs to reach state-of-the-art performance for document
classification and speech recognition. In this article, we study the current
state-of-the-art performance of deep learning algorithms for TSC by presenting
an empirical study of the most recent DNN architectures for TSC. We give an
overview of the most successful deep learning applications in various time
series domains under a unified taxonomy of DNNs for TSC. We also provide an
open source deep learning framework to the TSC community where we implemented
each of the compared approaches and evaluated them on a univariate TSC
benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By
training 8,730 deep learning models on 97 time series datasets, we propose the
most exhaustive study of DNNs for TSC to date. (Accepted at Data Mining and Knowledge Discovery.)
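As a toy illustration of the convolutional approach such architectures build on (not the paper's actual models), a random 1D filter bank with ReLU and global average pooling already turns variable-length series into fixed-size features for a simple nearest-centroid classifier; all sizes and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    # x: (T,) univariate series; kernels: (K, w) random filter bank
    K, w = kernels.shape
    windows = np.lib.stride_tricks.sliding_window_view(x, w)  # (T - w + 1, w)
    return np.maximum(windows @ kernels.T, 0.0)               # ReLU feature maps

def features(x, kernels):
    # Global average pooling collapses the time axis -> (K,) per series
    return conv1d(x, kernels).mean(axis=0)

# Two toy classes: noisy sine waves vs. pure noise
kernels = rng.standard_normal((16, 9))
t = np.linspace(0, 4 * np.pi, 128)
X = [np.sin(t) + 0.1 * rng.standard_normal(128) for _ in range(20)] + \
    [rng.standard_normal(128) for _ in range(20)]
y = np.array([0] * 20 + [1] * 20)

F = np.stack([features(x, kernels) for x in X])               # (40, 16)
centroids = np.stack([F[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((F[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
```

The pooled features are length-independent, which is the property that lets convolutional TSC models handle archives with series of different lengths.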
Reservoir computing approaches for representation and classification of multivariate time series
Classification of multivariate time series (MTS) has been tackled with a
large variety of methodologies and applied to a wide range of scenarios.
Reservoir Computing (RC) provides efficient tools to generate a vectorial,
fixed-size representation of the MTS that can be further processed by standard
classifiers. Despite their unrivaled training speed, MTS classifiers based on a
standard RC architecture fail to achieve the same accuracy as fully trainable
neural networks. In this paper we introduce the reservoir model space, an
unsupervised approach based on RC to learn vectorial representations of MTS.
Each MTS is encoded within the parameters of a linear model trained to predict
a low-dimensional embedding of the reservoir dynamics. Compared to other RC
methods, our model space yields better representations and attains comparable
computational performance, thanks to an intermediate dimensionality reduction
procedure. As a second contribution we propose a modular RC framework for MTS
classification, with an associated open-source Python library. The framework
provides different modules to seamlessly implement advanced RC architectures.
The architectures are compared to other MTS classifiers, including deep
learning models and time series kernels. Results obtained on benchmark and
real-world MTS datasets show that RC classifiers are dramatically faster and,
when implemented using our proposed representation, also achieve superior
classification accuracy.
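A minimal sketch of the reservoir model space idea as the abstract describes it, assuming a leaky echo state network and a random projection as the low-dimensional embedding (all sizes and the ridge penalty are illustrative, not the library's defaults):

```python
import numpy as np

rng = np.random.default_rng(1)

def reservoir_states(x, W_in, W, leak=0.3):
    # x: (T, V) multivariate series -> (T, N) leaky reservoir activations
    N = W.shape[0]
    h = np.zeros(N)
    H = np.empty((len(x), N))
    for t, u in enumerate(x):
        h = (1 - leak) * h + leak * np.tanh(W_in @ u + W @ h)
        H[t] = h
    return H

def model_space_repr(x, W_in, W, P, ridge=1e-3):
    # Encode the series as the weights of a ridge-regression model that
    # predicts a projected (low-dimensional) version of the next reservoir state.
    H = reservoir_states(x, W_in, W)
    Z, Znext = H[:-1], H[1:] @ P.T                 # targets: projected next states
    A = np.hstack([Z, np.ones((len(Z), 1))])       # bias column
    Wout = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ Znext)
    return Wout.ravel()                            # fixed-size vector per series

N, V, D = 50, 3, 5
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1
W_in = rng.standard_normal((N, V))
P = rng.standard_normal((D, N))                    # dimensionality reduction

x = rng.standard_normal((100, V))
r = model_space_repr(x, W_in, W, P)
# r has length (N + 1) * D regardless of the series length T
```

Because the representation lives in the readout's parameter space rather than in the raw activations, any standard vector classifier can be trained on top of it.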
Optimal Input Representation in Neural Systems at the Edge of Chaos
Shedding light on how biological systems represent, process and store information in noisy
environments is a key and challenging goal. A stimulating, though controversial, hypothesis poses
that operating in dynamical regimes near the edge of a phase transition, i.e., at criticality or the "edge
of chaos", can provide information-processing living systems with important operational advantages,
creating, e.g., an optimal trade-off between robustness and flexibility. Here, we elaborate on a recent
theoretical result, which establishes that the spectrum of covariance matrices of neural networks
representing complex inputs in a robust way needs to decay as a power-law of the rank, with an
exponent close to unity, a result that has indeed been experimentally verified in neurons of the mouse
visual cortex. Aimed at understanding and mimicking these results, we construct an artificial neural
network and train it to classify images. We find that the best performance in such a task is obtained
when the network operates near the critical point, at which the eigenspectrum of the covariance
matrix follows the very same statistics as actual neurons do. Thus, we conclude that operating near
criticality can also have, besides the usually alleged virtues, the advantage of allowing for flexible,
robust and efficient input representations.
Funding: the Spanish Ministry and Agencia Estatal de Investigación (AEI) through grant FIS2017-84256-P (European Regional Development Fund); Consejería de Conocimiento, Investigación y Universidad, Junta de Andalucía; and the European Regional Development Fund, Project Ref. A-FQM-175-UGR18 and Project Ref. P20-0017.
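The power-law claim can be checked numerically. The sketch below uses synthetic data (not the paper's network): it draws samples whose covariance eigenvalues decay as a power law of the rank with exponent one, then recovers that exponent from a log-log fit of the empirical spectrum; sample counts and the tolerance are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthesize n_units "neural responses" whose covariance eigenvalues
# decay as lambda_k ~ k^(-alpha) with alpha = 1 (the critical-like case).
n_units, n_samples, alpha = 100, 20000, 1.0
eigvals = np.arange(1, n_units + 1) ** (-alpha)
Q, _ = np.linalg.qr(rng.standard_normal((n_units, n_units)))
L = Q * np.sqrt(eigvals)                 # covariance = Q diag(eigvals) Q.T
X = rng.standard_normal((n_samples, n_units)) @ L.T

# Empirical covariance spectrum, sorted by rank
lam = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
ranks = np.arange(1, n_units + 1)

# Estimate the decay exponent from a log-log linear fit
slope, _ = np.polyfit(np.log(ranks), np.log(lam), 1)
alpha_hat = -slope                       # should sit close to alpha = 1
```

The same fit applied to the covariance of a trained network's hidden activations is one way to probe whether its representations sit near the critical regime the abstract describes.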
Deep Learning for Time Series Classification and Extrinsic Regression: A Current Survey
Time Series Classification and Extrinsic Regression are important and
challenging machine learning tasks. Deep learning has revolutionized natural
language processing and computer vision and holds great promise in other fields
such as time series analysis where the relevant features must often be
abstracted from the raw data but are not known a priori. This paper surveys the
current state of the art in the fast-moving field of deep learning for time
series classification and extrinsic regression. We review different network
architectures and training methods used for these tasks and discuss the
challenges and opportunities when applying deep learning to time series data.
We also summarize two critical applications of time series classification and
extrinsic regression, human activity recognition and satellite earth
observation.
Deep learning for time series classification
Time series analysis is a field of data science which is interested in
analyzing sequences of numerical values ordered in time. Time series are
particularly interesting because they allow us to visualize and understand the
evolution of a process over time. Their analysis can reveal trends,
relationships and similarities across the data. There exist numerous fields
containing data in the form of time series: health care (electrocardiogram,
blood sugar, etc.), activity recognition, remote sensing, finance (stock market
price), industry (sensors), etc. Time series classification consists of
constructing algorithms dedicated to automatically label time series data. The
sequential aspect of time series data requires the development of algorithms
that are able to harness this temporal property, thus making the existing
off-the-shelf machine learning models for traditional tabular data suboptimal
for solving the underlying task. In this context, deep learning has emerged in
recent years as one of the most effective methods for tackling the supervised
classification task, particularly in the field of computer vision. The main
objective of this thesis was to study and develop deep neural networks
specifically constructed for the classification of time series data. We thus
carried out the first large scale experimental study allowing us to compare the
existing deep methods and to position them compared to other non-deep learning
based state-of-the-art methods. Subsequently, we made numerous contributions in
this area, notably in the context of transfer learning, data augmentation,
ensembling and adversarial attacks. Finally, we have also proposed a novel
architecture, based on the famous Inception network (Google), which ranks among
the most efficient to date. (PhD thesis.)
Time series classification in reservoir- and model-space
We evaluate two approaches for time series classification based on reservoir computing. In the first, classical approach, time series are represented by reservoir activations. In the second approach, on top of the reservoir activations, a predictive model in the form of a readout for one-step-ahead prediction is trained for each time series. This learning step lifts the reservoir features to a more sophisticated model space. Classification is then based on the predictive model parameters describing each time series. We provide an in-depth analysis on time series classification in reservoir- and model-space. The approaches are evaluated on 43 univariate and 18 multivariate time series. The results show that representing multivariate time series in the model space leads to lower classification errors compared to using the reservoir activations directly as features. The classification accuracy on the univariate datasets can be improved by combining reservoir- and model-space.
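The two feature types the abstract compares can be sketched for a univariate series, assuming a small random echo state network; the classifier on top is omitted and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 30
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1
w_in = rng.standard_normal(N)

def run_reservoir(x):
    # x: (T,) univariate series -> (T, N) reservoir activations
    h, H = np.zeros(N), []
    for u in x:
        h = np.tanh(w_in * u + W @ h)
        H.append(h)
    return np.array(H)

def reservoir_space(x):
    # Classical approach: time-averaged reservoir activations as features
    return run_reservoir(x).mean(axis=0)

def model_space(x, ridge=1e-2):
    # Model-space approach: fit a one-step-ahead readout h[t] -> x[t+1]
    # and use its weights (plus bias) as the feature vector
    H = run_reservoir(x)
    A = np.hstack([H[:-1], np.ones((len(x) - 1, 1))])
    return np.linalg.solve(A.T @ A + ridge * np.eye(N + 1), A.T @ x[1:])

x = np.sin(np.linspace(0, 8 * np.pi, 200))
feat = np.concatenate([reservoir_space(x), model_space(x)])  # combined features
```

Concatenating the two vectors, as in the last line, mirrors the combination that the abstract reports improves univariate accuracy.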
Time Series Classification in Reservoir- and Model-Space: A Comparison
Aswolinskiy W, Reinhart F, Steil JJ. Time Series Classification in Reservoir- and Model-Space: A Comparison. In: Proceedings of the 7th IAPR Workshop on Artificial Neural Networks in Pattern Recognition. 2016.