Streaming Adaptation of Deep Forecasting Models using Adaptive Recurrent Units
We present ARU, an Adaptive Recurrent Unit for streaming adaptation of deep
globally trained time-series forecasting models. The ARU combines the
advantages of learning complex data transformations across multiple time series
from deep global models, with per-series localization offered by closed-form
linear models. Unlike existing adaptation methods, which are either
memory-intensive or non-responsive after training, ARUs require only a
fixed-size state and adapt to streaming data via a simple RNN-like update operation.
The core principle driving ARU is simple --- maintain sufficient statistics of
conditional Gaussian distributions and use them to compute local parameters in
closed form. Our contribution is in embedding such local linear models in
globally trained deep models while allowing end-to-end training on the one
hand, and easy RNN-like updates on the other. Across several datasets we show
that ARU is more effective than recently proposed local adaptation methods that
tax the global network to compute local parameters.

Comment: 9 pages, 4 figures
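The closed-form principle can be sketched in miniature: keep exponentially decayed sufficient statistics of the (input, target) pairs and read the local linear parameter off them on demand. The scalar class below uses assumed names (`StreamingLinearUnit`, `decay`, `ridge` are illustrative) and is not the paper's actual ARU, which embeds this idea inside the features of a globally trained deep model.

```python
class StreamingLinearUnit:
    """Toy adaptive unit: decayed sufficient statistics of a conditional
    Gaussian (x, y) yield the local slope in closed form.
    Illustrative sketch only, not the paper's ARU equations."""

    def __init__(self, decay=0.99, ridge=1e-6):
        self.decay = decay  # forgetting factor for streaming data
        self.ridge = ridge  # regularizer keeps the division well-posed
        self.sxx = 0.0      # decayed running sum of x * x
        self.sxy = 0.0      # decayed running sum of x * y

    def update(self, x, y):
        # Fixed-size state, RNN-like update: decay old stats, add new point.
        self.sxx = self.decay * self.sxx + x * x
        self.sxy = self.decay * self.sxy + x * y

    def predict(self, x):
        # Local parameter computed in closed form from the statistics.
        w = self.sxy / (self.sxx + self.ridge)
        return w * x
```

With `decay < 1` the statistics forget old data, so the unit tracks per-series drift while keeping O(1) state, mirroring the fixed-size, RNN-like update the abstract describes.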
Data streams classification using deep learning under different speeds and drifts
Processing data streams arriving at high speed requires the development of models that can provide fast and accurate
predictions. Although deep neural networks are the state-of-the-art for many machine learning tasks, their performance in
real-time data streaming scenarios is a research area that has not yet been fully addressed. Nevertheless, much effort has
been put into the adaptation of complex deep learning (DL) models to streaming tasks by reducing the processing time. The
design of the asynchronous dual-pipeline DL framework allows making predictions of incoming instances and updating the
model simultaneously, using two separate layers. The aim of this work is to assess the performance of different types of DL
architectures for data streaming classification using this framework. We evaluate models such as multi-layer perceptrons,
recurrent, convolutional and temporal convolutional neural networks over several time series datasets that are simulated as
streams at different speeds. In addition, we evaluate how the different architectures react to concept drifts typically found in
evolving data streams. The obtained results indicate that convolutional architectures achieve a higher performance in terms
of accuracy and efficiency, but are also the most sensitive to concept drifts.

Ministerio de Ciencia, Innovación y Universidades PID2020-117954RB-C22; Junta de Andalucía US-1263341; Junta de Andalucía P18-RT-277
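The dual-pipeline design described above can be sketched with two plain threads sharing one model: one serves predictions with the latest weights while the other keeps training on labelled instances. Everything here (the class name, the trivial threshold "model", and the update rule) is a hypothetical stand-in for the framework, not its real API.

```python
import threading

class DualPipelineClassifier:
    """Sketch of an asynchronous dual-pipeline: predictions and model
    updates happen concurrently on a shared, lock-protected model."""

    def __init__(self):
        self._lock = threading.Lock()
        self._threshold = 0.0  # trivial 1-D "model": a sign threshold

    def predict(self, x):
        with self._lock:  # read the latest trained snapshot
            return 1 if x > self._threshold else 0

    def train_step(self, x, y):
        with self._lock:  # small online nudge of the threshold
            direction = -1 if y == 1 else 1
            self._threshold += 0.1 * direction * abs(x - self._threshold)

def run_stream(instances):
    """Drive the two pipelines over the same stream of (x, y) pairs."""
    clf = DualPipelineClassifier()
    preds = []

    def predict_loop():  # fast path: classify every arriving instance
        for x, _ in instances:
            preds.append(clf.predict(x))

    def train_loop():    # slow path: update the model as labels arrive
        for x, y in instances:
            clf.train_step(x, y)

    t1 = threading.Thread(target=predict_loop)
    t2 = threading.Thread(target=train_loop)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return clf, preds
```

The point of the design is that the prediction path never blocks on training beyond a brief lock, so latency stays low even while the model is being updated.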
On the performance of deep learning models for time series classification in streaming
Processing data streams arriving at high speed requires the development of
models that can provide fast and accurate predictions. Although deep neural
networks are the state-of-the-art for many machine learning tasks, their
performance in real-time data streaming scenarios is a research area that has
not yet been fully addressed. Nevertheless, there have been recent efforts to
adapt complex deep learning models for streaming tasks by reducing their
processing time. The design of the asynchronous dual-pipeline deep learning
framework allows predicting on incoming instances and updating the model
simultaneously using two separate layers. The aim of this work is to assess the
performance of different types of deep architectures for data streaming
classification using this framework. We evaluate models such as multi-layer
perceptrons, recurrent, convolutional and temporal convolutional neural
networks over several time-series datasets that are simulated as streams. The
obtained results indicate that convolutional architectures achieve a higher
performance in terms of accuracy and efficiency.

Comment: Paper submitted to the 15th International Conference on Soft
Computing Models in Industrial and Environmental Applications (SOCO 2020)
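The evaluation setup above replays stored time-series datasets as streams and scores models as instances arrive. A common way to do this is a small replay harness plus a prequential (test-then-train) loop, sketched below; the function names, the throttling scheme, and the baseline model are assumptions for illustration, not the authors' actual setup.

```python
import time

def simulate_stream(series, labels, instances_per_second=None):
    """Replay a stored, labelled time series as a data stream.
    instances_per_second throttles arrival speed; None replays instantly."""
    delay = 1.0 / instances_per_second if instances_per_second else 0.0
    for x, y in zip(series, labels):
        if delay:
            time.sleep(delay)  # emulate a slower stream
        yield x, y

def prequential_accuracy(stream, model):
    """Test-then-train: predict each instance first, then let the model
    learn from the revealed label, so every instance is used for both."""
    correct = total = 0
    for x, y in stream:
        correct += int(model.predict(x) == y)
        model.partial_fit(x, y)
        total += 1
    return correct / max(total, 1)

class MajorityClass:
    """Trivial baseline model so the loop above is runnable."""
    def __init__(self):
        self.counts = {0: 0, 1: 0}
    def predict(self, x):
        return 0 if self.counts[0] >= self.counts[1] else 1
    def partial_fit(self, x, y):
        self.counts[y] += 1
```

Varying `instances_per_second` reproduces the "different speeds" dimension of the evaluation: a model whose per-instance processing time exceeds the inter-arrival gap falls behind the stream.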
Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks
The emergence of brain-inspired neuromorphic computing as a paradigm for edge
AI is motivating the search for high-performance and efficient spiking neural
networks to run on this hardware. However, compared to classical neural
networks in deep learning, current spiking neural networks lack competitive
performance in compelling areas. Here, for sequential and streaming tasks, we
demonstrate how a novel type of adaptive spiking recurrent neural network
(SRNN) achieves state-of-the-art performance among spiking neural
networks and almost reaches or exceeds the performance of classical
recurrent neural networks (RNNs) while exhibiting sparse activity. From this,
we calculate a 100x energy improvement for our SRNNs over classical RNNs on
the harder tasks. To achieve this, we model standard and adaptive
multiple-timescale spiking neurons as self-recurrent neural units, and leverage
surrogate gradients and auto-differentiation in the PyTorch Deep Learning
framework to efficiently implement backpropagation-through-time, including
learning of the important spiking neuron parameters to adapt our spiking
neurons to the tasks.

Comment: 11 pages, 5 figures
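The multiple-timescale neuron model can be sketched as a fast-decaying membrane potential paired with a slowly adapting threshold that rises after each spike, which is what produces the sparse activity noted above. This is a didactic Euler-step version with assumed constants; the paper additionally trains through the hard spike nonlinearity with surrogate gradients in PyTorch, which this forward-only sketch omits.

```python
import math

def adaptive_lif_step(v, b, x, tau_m=20.0, tau_adp=200.0,
                      v_th=1.0, beta=1.8, dt=1.0):
    """One Euler step of an adaptive leaky integrate-and-fire neuron.
    Two timescales: fast membrane decay (tau_m) and slow threshold
    adaptation (tau_adp). Constants are illustrative assumptions."""
    alpha = math.exp(-dt / tau_m)    # fast membrane decay factor
    rho = math.exp(-dt / tau_adp)    # slow adaptation decay factor
    v = alpha * v + (1 - alpha) * x  # leaky integration of input current
    theta = v_th + beta * b          # effective, adaptive threshold
    spike = 1.0 if v >= theta else 0.0
    v = v - spike * theta            # soft reset after a spike
    b = rho * b + (1 - rho) * spike  # spiking raises future thresholds
    return v, b, spike
```

Under constant input the adaptation variable `b` accumulates with each spike, so the firing rate decays over time; it is this activity sparsity that underlies the energy-improvement estimate in the abstract.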