Predictive-State Decoders: Encoding the Future into Recurrent Networks
Recurrent neural networks (RNNs) are a vital modeling technique that rely on
internal states learned indirectly by optimization of a supervised,
unsupervised, or reinforcement training loss. RNNs are used to model dynamic
processes that are characterized by underlying latent states whose form is
often unknown, precluding its analytic representation inside an RNN. In the
Predictive-State Representation (PSR) literature, latent state processes are
modeled by an internal state representation that directly models the
distribution of future observations, and most recent work in this area has
relied on explicitly representing and targeting sufficient statistics of this
probability distribution. We seek to combine the advantages of RNNs and PSRs by
augmenting existing state-of-the-art recurrent neural networks with
Predictive-State Decoders (PSDs), which add supervision to the network's
internal state representation to target predicting future observations.
Predictive-State Decoders are simple to implement and easily incorporated into
existing training pipelines via additional loss regularization. We demonstrate
the effectiveness of PSDs with experimental results in three different domains:
probabilistic filtering, Imitation Learning, and Reinforcement Learning. In
each, our method improves statistical performance of state-of-the-art recurrent
baselines and does so with fewer iterations and less data.
Comment: NIPS 201
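As a rough sketch of the idea described above, the following adds a predictive-state auxiliary loss to a toy recurrent model: from each internal state, a decoder predicts the next few observations, and the resulting penalty would be added to the primary training loss. The dimensions, the tanh RNN cell, the linear decoder, and the weighting `lam` are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only).
obs_dim, hidden_dim, horizon = 3, 8, 2

# Toy RNN cell: h_t = tanh(W h_{t-1} + U x_t)
W = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
U = rng.normal(scale=0.1, size=(hidden_dim, obs_dim))

# Predictive-State Decoder: a linear map from the internal state to the
# next `horizon` observations, i.e. statistics of the future-observation
# distribution that the internal state is supervised to predict.
D = rng.normal(scale=0.1, size=(horizon * obs_dim, hidden_dim))

def psd_loss(observations, lam=0.5):
    """Auxiliary regularizer: from each internal state h_t, predict the
    concatenated future observations x_{t+1..t+horizon}."""
    h = np.zeros(hidden_dim)
    total, count = 0.0, 0
    for t in range(len(observations) - horizon):
        h = np.tanh(W @ h + U @ observations[t])
        future = np.concatenate(observations[t + 1 : t + 1 + horizon])
        total += np.sum((D @ h - future) ** 2)
        count += 1
    return lam * total / count

xs = list(rng.normal(size=(10, obs_dim)))
reg = psd_loss(xs)  # would be added to the task loss before backprop
```

In a real pipeline the decoder weights `D` would be trained jointly with the RNN, which is what makes PSDs a drop-in loss regularization rather than a new architecture.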
The Power of Linear Recurrent Neural Networks
Recurrent neural networks are a powerful means to cope with time series. We
show how a type of linearly activated recurrent neural networks, which we call
predictive neural networks, can approximate any time-dependent function f(t)
given by a number of function values. The approximation can effectively be
learned by simply solving a linear equation system; no backpropagation or
similar methods are needed. Furthermore, the network size can be reduced by
taking only most relevant components. Thus, in contrast to others, our approach
not only learns network weights but also the network architecture. The networks
have interesting properties: They end up in ellipse trajectories in the long
run and allow the prediction of further values and compact representations of
functions. We demonstrate this by several experiments, among them multiple
superimposed oscillators (MSO), robotic soccer, and predicting stock prices.
Predictive neural networks outperform the previous state-of-the-art for the MSO
task with a minimal number of units.
Comment: 22 pages, 14 figures and tables, revised implementation
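In the spirit of this abstract, a linear predictor for a superposition of oscillators can be learned by solving a single least-squares system, with no backpropagation: sums of sinusoids satisfy an exact linear recurrence, so a linear autoregressive model recovers them. This is an illustration of that principle, not the authors' exact network construction; the model order and the MSO-style frequencies are assumptions.

```python
import numpy as np

def fit_linear_predictor(series, order=4):
    """Fit an order-`order` linear autoregressive predictor by solving
    one least-squares system (no gradient-based training)."""
    X = np.array([series[i : i + order] for i in range(len(series) - order)])
    y = np.array(series[order:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict(series, w, steps):
    """Roll the learned recurrence forward to predict further values."""
    s = list(series)
    order = len(w)
    for _ in range(steps):
        s.append(float(np.dot(w, s[-order:])))
    return s[len(series):]

# Two superimposed oscillators, in the style of the MSO benchmark.
t = np.arange(200)
f = np.sin(0.2 * t) + np.sin(0.311 * t)
w = fit_linear_predictor(f, order=8)
future = predict(f, w, steps=20)
```

Because the signal lies in a low-dimensional shift-invariant subspace, the least-squares fit is essentially exact and the rolled-out predictions track the true continuation closely.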
Modular Echo State Neural Networks in Time Series Prediction
Echo State neural networks (ESN), a special case of recurrent neural networks, are studied from the viewpoint of their learning ability, with the goal of achieving greater predictive ability. In this paper we study the influence of memory length on the predictive ability of Echo State neural networks. We conclude that Echo State neural networks with a fixed memory length can have trouble adapting their intrinsic dynamics to the dynamics of the prediction task. We therefore create a complex prediction system that combines local expert Echo State neural networks with different memory lengths and one special gating Echo State neural network. This approach was tested on laser fluctuation and turbojet gas temperature prediction. The prediction error achieved by this approach was substantially smaller than that achieved by standard Echo State neural networks.
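The expert-plus-gate scheme can be sketched roughly as follows. This is a hedged illustration, not the authors' system: the reservoir size, the leak rates (standing in for different memory lengths), the heuristic spectral scaling, and the error-weighted softmax used here in place of a trained gating ESN are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def reservoir_states(u, size=50, leak=0.5, rho=0.9):
    """Leaky echo-state reservoir; a smaller `leak` gives longer memory."""
    Win = rng.uniform(-0.5, 0.5, size=size)
    W = rng.uniform(-0.5, 0.5, size=(size, size))
    W *= rho / max(abs(np.linalg.eigvals(W)))  # heuristic echo-state scaling
    h = np.zeros(size)
    H = []
    for x in u:
        h = (1 - leak) * h + leak * np.tanh(Win * x + W @ h)
        H.append(h.copy())
    return np.array(H)

def ridge_readout(H, y, alpha=1e-6):
    """Standard ESN readout: ridge regression on collected states."""
    return np.linalg.solve(H.T @ H + alpha * np.eye(H.shape[1]), H.T @ y)

# One-step-ahead prediction task on a slowly modulated oscillation.
t = np.arange(300)
u = np.sin(0.15 * t) * np.sin(0.02 * t)
u_in, y = u[:-1], u[1:]

experts = []
for leak in (0.9, 0.3):  # short-memory vs long-memory expert
    H = reservoir_states(u_in, leak=leak)
    w = ridge_readout(H[:200], y[:200])
    experts.append(H @ w)

# Crude stand-in for the gating network: weight each expert by a
# softmax of its negated error on a held-out window.
errs = np.array([np.mean((p[200:250] - y[200:250]) ** 2) for p in experts])
g = np.exp(-errs / errs.sum())
g /= g.sum()
combined = g[0] * experts[0] + g[1] * experts[1]
```

Because squared error is convex in the prediction, the gated convex combination can do no worse than the worst expert, which is the basic safety property such mixtures rely on.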
Echo State Networks: analysis, training and predictive control
The goal of this paper is to investigate the theoretical properties, the
training algorithm, and the predictive control applications of Echo State
Networks (ESNs), a particular kind of Recurrent Neural Networks. First, a
condition guaranteeing incremental global asymptotic stability is devised. Then,
a modified training algorithm allowing for dimensionality reduction of ESNs is
presented. Finally, a model predictive controller is designed to solve the
tracking problem, relying on ESNs as the model of the system. Numerical results
concerning the predictive control of a nonlinear process for pH neutralization
confirm the effectiveness of the proposed algorithms for the identification,
dimensionality reduction, and the control design for ESNs.
Comment: 6 pages, 5 figures, submitted to European Control Conference (ECC)
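One common way to reduce an ESN's effective dimension, sketched below, is to project the collected reservoir states onto their top principal components before fitting the readout. This is a rough analogue of the kind of reduction the abstract mentions, not the paper's actual training algorithm; the reservoir size, number of components, and one-step-ahead task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def collect_states(u, size=80, rho=0.9):
    """Run a random reservoir over the input and stack its states."""
    Win = rng.uniform(-0.5, 0.5, size=size)
    W = rng.uniform(-0.5, 0.5, size=(size, size))
    W *= rho / max(abs(np.linalg.eigvals(W)))  # heuristic spectral scaling
    h = np.zeros(size)
    H = []
    for x in u:
        h = np.tanh(Win * x + W @ h)
        H.append(h.copy())
    return np.array(H)

t = np.arange(400)
u = np.sin(0.1 * t)
y = np.sin(0.1 * (t + 1))  # one-step-ahead target

H = collect_states(u)
mu = H.mean(axis=0)
_, _, Vt = np.linalg.svd(H - mu, full_matrices=False)
k = 10
Z = (H - mu) @ Vt[:k].T                    # reduced-order states
Zb = np.hstack([Z, np.ones((len(Z), 1))])  # bias column for the readout
w, *_ = np.linalg.lstsq(Zb, y, rcond=None)
mse = float(np.mean((Zb @ w - y) ** 2))
```

A lower-dimensional state is what makes the subsequent model predictive control problem tractable, since the controller's online optimization scales with the model's state dimension.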