Short-term Memory of Deep RNN
The extension of deep learning towards temporal data processing is gaining an
increasing research interest. In this paper we investigate the properties of
state dynamics developed in successive levels of deep recurrent neural networks
(RNNs) in terms of short-term memory abilities. Our results reveal interesting
insights that shed light on the nature of layering as a factor of RNN design.
Notably, higher layers in a hierarchically organized RNN architecture
turn out to be inherently biased towards longer memory spans, even prior to
training of the recurrent connections. Moreover, in the context of the Reservoir
Computing framework, our analysis also points out the benefit of a layered
recurrent organization as an efficient approach to improve the memory skills of
reservoir models. Comment: This is a pre-print (pre-review) version of the paper accepted for
presentation at the 26th European Symposium on Artificial Neural Networks,
Computational Intelligence and Machine Learning (ESANN), Bruges (Belgium),
25-27 April 2018
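A minimal sketch, assuming an illustrative echo-state-style setup (layer sizes, spectral radius, and leak rate below are assumptions, not the paper's configuration), of how the memory span of successive untrained reservoir layers can be probed via linear-readout memory capacity:

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir(n_in, n_res, spectral_radius=0.9, scale_in=0.5):
    # Untrained recurrent weights, rescaled to a fixed spectral radius.
    W_in = rng.uniform(-scale_in, scale_in, (n_res, n_in))
    W = rng.standard_normal((n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run_layer(W_in, W, inputs, leak=0.3):
    # Leaky-integrator state update, driven by the given input sequence.
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states[t] = x
    return states

# Drive layer 1 with a random signal, then feed its states into layer 2.
u = rng.uniform(-1, 1, 2000)
W_in1, W1 = reservoir(1, 100)
S1 = run_layer(W_in1, W1, u)
W_in2, W2 = reservoir(100, 100)
S2 = run_layer(W_in2, W2, S1)

def memory_capacity(states, u, max_delay=40, washout=100):
    # Sum over delays of the squared correlation between the delayed input
    # and its least-squares reconstruction from the layer's states.
    mc = 0.0
    for d in range(1, max_delay + 1):
        X, y = states[washout:, :], u[washout - d:len(u) - d]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        mc += np.corrcoef(X @ w, y)[0, 1] ** 2
    return mc

print("layer 1 memory capacity:", memory_capacity(S1, u))
print("layer 2 memory capacity:", memory_capacity(S2, u))
```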
Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks
Recurrent neural networks (RNNs) in the brain and in silico excel at solving
tasks with intricate temporal dependencies. Long timescales required for
solving such tasks can arise from properties of individual neurons
(single-neuron timescale, $\tau$, e.g., membrane time constant in biological
neurons) or recurrent interactions among them (network-mediated timescale).
However, the contribution of each mechanism for optimally solving
memory-dependent tasks remains poorly understood. Here, we train RNNs to solve
$N$-parity and $N$-delayed match-to-sample tasks with increasing memory
requirements controlled by $N$, by simultaneously optimizing recurrent weights
and $\tau$s. We find that for both tasks RNNs develop longer timescales with
increasing $N$, but depending on the learning objective, they use different
mechanisms. Two distinct curricula define learning objectives: sequential
learning of a single-$N$ (single-head) or simultaneous learning of multiple
$N$s (multi-head). Single-head networks increase their $\tau$ with $N$ and are
able to solve tasks for large $N$, but they suffer from catastrophic
forgetting. However, multi-head networks, which are explicitly required to hold
multiple concurrent memories, keep $\tau$ constant and develop longer
timescales through recurrent connectivity. Moreover, we show that the
multi-head curriculum increases training speed and network stability to
ablations and perturbations, and allows RNNs to generalize better to tasks
beyond their training regime. This curriculum also significantly improves
training GRUs and LSTMs for large-$N$ tasks. Our results suggest that adapting
timescales to task requirements via recurrent interactions allows learning more
complex objectives and improves the RNN's performance.
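A minimal sketch, assuming an illustrative leaky-RNN parametrization and a single-head training loop (network sizes, the $\tau$ parametrization, and the task generator are assumptions, not the paper's exact configuration), of jointly optimizing recurrent weights and single-neuron timescales on an $N$-parity task:

```python
import torch
import torch.nn as nn

class LeakyRNN(nn.Module):
    def __init__(self, n_in=1, n_hid=64, n_out=2):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hid)
        self.w_rec = nn.Linear(n_hid, n_hid, bias=False)
        self.readout = nn.Linear(n_hid, n_out)
        # Per-neuron timescale: tau = 1 + softplus(raw) keeps alpha in (0, 1].
        self.tau_raw = nn.Parameter(torch.zeros(n_hid))

    def forward(self, x):                              # x: (batch, time, n_in)
        tau = 1.0 + nn.functional.softplus(self.tau_raw)
        alpha = 1.0 / tau                              # leak rate = 1 / tau
        h = x.new_zeros(x.shape[0], self.w_rec.in_features)
        for t in range(x.shape[1]):
            h = (1 - alpha) * h + alpha * torch.tanh(
                self.w_in(x[:, t]) + self.w_rec(h))
        return self.readout(h)                         # parity logits at the end

def n_parity_batch(batch=128, seq_len=50, N=3):
    # Label = parity of the last N input bits.
    bits = torch.randint(0, 2, (batch, seq_len, 1)).float()
    labels = bits[:, -N:, 0].sum(dim=1).long() % 2
    return bits, labels

model = LeakyRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)    # updates weights and taus
for step in range(200):
    x, y = n_parity_batch()
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```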
Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory
The effectiveness of recurrent neural networks can be largely influenced by
their ability to store, in their dynamical memory, information extracted from
input sequences at different frequencies and timescales. Such a feature can be
introduced into a neural architecture by an appropriate modularization of the
dynamic memory. In this paper we propose a novel incrementally trained
recurrent architecture targeting explicitly multi-scale learning. First, we
show how to extend the architecture of a simple RNN by separating its hidden
state into different modules, each subsampling the network hidden activations
at different frequencies. Then, we discuss a training algorithm where new
modules are iteratively added to the model to learn progressively longer
dependencies. Each new module works at a slower frequency than the previous
ones and it is initialized to encode the subsampled sequence of hidden
activations. Experimental results on synthetic and real-world datasets on
speech recognition and handwritten characters show that the modular
architecture and the incremental training algorithm improve the ability of
recurrent neural networks to capture long-term dependencies. Comment: accepted @ ECML 2020. arXiv admin note: substantial text overlap with
arXiv:2001.1177
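A minimal sketch, under assumed module sizes and power-of-two subsampling rates, of a hidden state partitioned into modules that update at progressively slower frequencies; the incremental, module-by-module training and the initialization from subsampled hidden activations described above are omitted for brevity:

```python
import torch
import torch.nn as nn

class MultiScaleRNN(nn.Module):
    def __init__(self, n_in, module_size=32, n_modules=3):
        super().__init__()
        self.rates = [2 ** k for k in range(n_modules)]    # update every 1, 2, 4 steps
        self.cells = nn.ModuleList([
            nn.RNNCell(n_in + k * module_size, module_size)
            for k in range(n_modules)])

    def forward(self, x):                                  # x: (batch, time, n_in)
        batch, T, _ = x.shape
        hs = [x.new_zeros(batch, cell.hidden_size) for cell in self.cells]
        for t in range(T):
            for k, (cell, rate) in enumerate(zip(self.cells, self.rates)):
                if t % rate == 0:                          # slower modules subsample the sequence
                    # Each module also sees the states of the faster modules.
                    inp = torch.cat([x[:, t]] + hs[:k], dim=-1)
                    hs[k] = cell(inp, hs[k])
        return torch.cat(hs, dim=-1)                       # multi-scale memory at the final step

out = MultiScaleRNN(n_in=8)(torch.randn(4, 20, 8))
print(out.shape)                                           # (4, 96)
```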
An extension of transformer neural networks in the context of multivariate stochastic processes
Increasingly, artificial neural networks are explored to learn relationships among temporal sequence data for purposes of classification, prediction, and anomaly detection, with the hope of exceeding the performance of more traditional machine learning algorithms. While the underlying Long Short-Term Memory or Gated Recurrent Unit networks are still the preferred choices of many researchers, such recurrent networks are sub-optimal for learning relationships within and across longer sequences. Transformer neural networks, originally designed to improve the performance of natural language processing tasks, pose an interesting alternative, as their attention mechanisms are more capable of capturing context and meaning within longer sequences. Such features present opportunities to apply transformer networks also to temporal sequence data of financial asset prices. This thesis introduces an extension of the original transformer neural network that is capable of multivariate time series representation learning in a supervised learning context and attempts to train it on temporal sequences of financial asset prices. The prediction accuracy of the transformer extension exceeds that of two of the most popular recurrent neural networks used for temporal sequence data prediction. The experiments are conducted in the context of a trading algorithm that showcases the practical potential and its implications. As the model is not specific to its input data, opportunities exist to transfer enhancements to other domains.
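A minimal sketch, with assumed dimensions and a last-step readout, of a transformer encoder adapted to supervised multivariate time-series prediction of the kind described above (the class name, positional-embedding choice, and hyperparameters are illustrative, not the thesis's architecture):

```python
import torch
import torch.nn as nn

class TimeSeriesTransformer(nn.Module):
    def __init__(self, n_features, d_model=64, n_heads=4, n_layers=2,
                 max_len=256, n_outputs=1):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)   # per-step feature projection
        self.pos_emb = nn.Embedding(max_len, d_model)       # learned positional embedding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_outputs)

    def forward(self, x):                                   # x: (batch, time, n_features)
        pos = torch.arange(x.shape[1], device=x.device)
        z = self.input_proj(x) + self.pos_emb(pos)
        z = self.encoder(z)                                  # self-attention over the sequence
        return self.head(z[:, -1])                           # predict from the last time step

# e.g. predicting the next value from a window of 60 steps of 5 asset features
model = TimeSeriesTransformer(n_features=5)
pred = model(torch.randn(32, 60, 5))                         # (32, 1)
```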
Deep Learning algorithms for solving high dimensional nonlinear Backward Stochastic Differential Equations
We study deep learning-based schemes for solving high dimensional nonlinear
backward stochastic differential equations (BSDEs). First we show how to
improve the performance of the scheme proposed in [W. E, J. Han, and A.
Jentzen, Commun. Math. Stat., 5 (2017), pp. 349-380] regarding computational
time by using a single neural network architecture instead of the stacked deep
neural networks. Furthermore, those schemes can get stuck in poor local minima
or diverge, especially for a complex solution structure and longer terminal
time. To address this problem, we reformulate the problem by
including local losses and exploit Long Short-Term Memory (LSTM) networks,
which are a type of recurrent neural network (RNN). Finally, in order to study
numerical convergence and thus illustrate the improved performance of the
proposed methods, we provide numerical results for several 100-dimensional
nonlinear BSDEs, including nonlinear pricing problems in finance. Comment: 21 pages, 5 figures, 16 tables
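A minimal sketch, with an illustrative placeholder driver and terminal condition, of the LSTM-based idea: a single recurrent network produces $Z_t$ at each time step, $Y_t$ is propagated forward with the Euler discretization $Y_{t+1} = Y_t - f(t, X_t, Y_t, Z_t)\,\Delta t + Z_t \cdot \Delta W_t$, and the terminal mismatch $Y_T - g(X_T)$ is penalized (the local losses mentioned above would add intermediate penalty terms; all dimensions and functions below are assumptions):

```python
import torch
import torch.nn as nn

d, T, n_steps, batch = 100, 1.0, 20, 256
dt = T / n_steps

def f(t, x, y, z):            # driver; illustrative placeholder, not a specific BSDE from the paper
    return -y

def g(x):                     # terminal condition; illustrative placeholder
    return (x ** 2).sum(dim=1, keepdim=True)

class ZNet(nn.Module):        # one LSTM shared across all time steps
    def __init__(self, d, hidden=64):
        super().__init__()
        self.lstm = nn.LSTMCell(d, hidden)
        self.out = nn.Linear(hidden, d)
    def forward(self, x, state):
        h, c = self.lstm(x, state)
        return self.out(h), (h, c)

znet = ZNet(d)
y0 = nn.Parameter(torch.zeros(1))                 # Y_0 is learned directly
opt = torch.optim.Adam(list(znet.parameters()) + [y0], lr=1e-3)

for step in range(500):
    x = torch.zeros(batch, d)                     # forward process X = W (Brownian motion) here
    y = y0.expand(batch, 1)
    state = None
    for k in range(n_steps):
        dw = torch.randn(batch, d) * dt ** 0.5
        z, state = znet(x, state)
        y = y - f(k * dt, x, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
        x = x + dw
    loss = ((y - g(x)) ** 2).mean()               # terminal (global) loss
    opt.zero_grad(); loss.backward(); opt.step()
```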