Modelling and control of chaotic processes through their Bifurcation Diagrams generated with the help of Recurrent Neural Network models: Part 1—simulation studies
Many real-world processes tend to be chaotic and do not lend themselves to satisfactory analytical modelling. It is shown here that for such chaotic processes, represented through short, noisy chaotic time series, a multi-input multi-output recurrent neural network model can be built that captures the process trends and predicts future values from any given starting condition. It is further shown that the recurrent neural network model achieves this capability when it is trained to a very low mean squared error. Such a model can then be used to construct the bifurcation diagram of the process, leading to the determination of desirable operating conditions. Furthermore, the multi-input multi-output model makes the process accessible to control using open-loop/closed-loop approaches, bifurcation control, and so on. All these studies have been carried out using a low-dimensional discrete chaotic system, the Hénon map, as a representative of real-world processes.
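The Hénon map test-bed is straightforward to simulate directly; a minimal sketch of building its bifurcation diagram by parameter sweep (parameter values and sweep range are the classical ones for this map, not taken from the paper, which instead generates the diagram from a trained RNN model):

```python
import numpy as np

# Hénon map: x_{n+1} = 1 - a*x_n^2 + y_n,  y_{n+1} = b*x_n
def henon_orbit(a, b=0.3, n=1000, x0=0.1, y0=0.1, discard=500):
    """Iterate the map and return the long-run x values (transient discarded)."""
    x, y = x0, y0
    pts = []
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= discard:
            pts.append(x)
    return np.array(pts)

# Sweep the bifurcation parameter a: for each a, record the x values the
# settled orbit visits -- plotting these against a gives the bifurcation diagram.
a_values = np.linspace(1.0, 1.4, 200)
diagram = [(a, henon_orbit(a)) for a in a_values]

# At the classical chaotic parameters (a=1.4, b=0.3) the orbit visits
# many distinct x values rather than a small periodic cycle.
chaotic = henon_orbit(1.4)
```

The paper's point is that the same sweep can be run on an RNN surrogate trained from short noisy time series, rather than on the analytical map itself.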
An Introduction to Quaternion-Valued Recurrent Projection Neural Networks
Hypercomplex-valued neural networks, including quaternion-valued neural
networks, can treat multi-dimensional data as a single entity. In this paper,
we introduce the quaternion-valued recurrent projection neural networks
(QRPNNs). Briefly, QRPNNs are obtained by combining the non-local projection
learning with the quaternion-valued recurrent correlation neural network
(QRCNNs). We show that QRPNNs overcome the cross-talk problem of QRCNNs. Thus,
they are appropriate to implement associative memories. Furthermore,
computational experiments reveal that QRPNNs exhibit greater storage capacity
and noise tolerance than their corresponding QRCNNs.
Comment: Accepted to be published in: Proceedings of the 8th Brazilian
Conference on Intelligent Systems (BRACIS 2019), October 15-18, 2019,
Salvador, BA, Brazil
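The basic algebraic ingredient of quaternion-valued networks is the Hamilton product, which lets a 4D quantity be treated as a single entity; a minimal sketch of that arithmetic (not the QRPNN learning rule itself):

```python
import numpy as np

# A quaternion q = w + x*i + y*j + z*k stored as a length-4 vector.
# The Hamilton product is the non-commutative multiplication that
# quaternion-valued networks apply to weights and states.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# The unit quaternions i, j, k as basis vectors.
i, j, k = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]
ij = qmul(i, j)   # i * j = k
ji = qmul(j, i)   # j * i = -k  (non-commutative)
```

Because the product is non-commutative, the order of weight-state multiplication matters in these networks, which is one reason hypercomplex models are defined carefully rather than component-wise.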
Generative Image Modeling Using Spatial LSTMs
Modeling the distribution of natural images is challenging, partly because of
strong statistical dependencies which can extend over hundreds of pixels.
Recurrent neural networks have been successful in capturing long-range
dependencies in a number of problems but only recently have found their way
into generative image models. We here introduce a recurrent image model based
on multi-dimensional long short-term memory units which are particularly suited
for image modeling due to their spatial structure. Our model scales to images
of arbitrary size and its likelihood is computationally tractable. We find that
it outperforms the state of the art in quantitative comparisons on several
image datasets and produces promising results when used for texture synthesis
and inpainting.
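The spatial recurrence behind a multi-dimensional LSTM can be sketched in plain numpy: each pixel's state depends on its left and top neighbours, with one forget gate per direction. Weights are random and sizes are illustrative; this shows the recurrence pattern only, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, hid = 3, 4  # input channels, hidden units per pixel (assumed toy sizes)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gate blocks: input, forget-left, forget-top, output, candidate (5 * hid).
W = rng.normal(size=(5 * hid, d + 2 * hid)) * 0.1
b = np.zeros(5 * hid)

def mdlstm_step(x, h_left, c_left, h_top, c_top):
    """One 2D-LSTM update at a pixel, given its left and top neighbour states."""
    z = W @ np.concatenate([x, h_left, h_top]) + b
    i_g, f_l, f_t, o_g, g = np.split(z, 5)
    i_g, f_l, f_t, o_g = map(sigmoid, (i_g, f_l, f_t, o_g))
    c = f_l * c_left + f_t * c_top + i_g * np.tanh(g)
    return o_g * np.tanh(c), c

# Scan an image row by row, left to right, so every pixel's state can
# depend (transitively) on all pixels above and to its left.
H, Wd = 5, 6
img = rng.normal(size=(H, Wd, d))
h_grid = np.zeros((H, Wd, hid))
c_grid = np.zeros((H, Wd, hid))
for r in range(H):
    for cidx in range(Wd):
        h_l = h_grid[r, cidx - 1] if cidx > 0 else np.zeros(hid)
        c_l = c_grid[r, cidx - 1] if cidx > 0 else np.zeros(hid)
        h_t = h_grid[r - 1, cidx] if r > 0 else np.zeros(hid)
        c_t = c_grid[r - 1, cidx] if r > 0 else np.zeros(hid)
        h_grid[r, cidx], c_grid[r, cidx] = mdlstm_step(
            img[r, cidx], h_l, c_l, h_t, c_t)
```

The raster scan is what makes the likelihood tractable: each pixel is conditioned only on already-visited pixels, as in the abstract's generative model.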
A hierarchy of recurrent networks for speech recognition
Generative models for sequential data based on directed graphs of Restricted Boltzmann Machines (RBMs) are able to accurately model high-dimensional sequences, as recently shown. In these models, temporal dependencies in the input are discovered either by buffering previous visible variables or through recurrent connections of the hidden variables. Here we propose a modification of these models, the Temporal Reservoir Machine (TRM). It utilizes a recurrent artificial neural network (ANN) for integrating information from the input over time. This information is then fed into an RBM at each time step. To avoid the difficulties of recurrent network learning, the ANN remains untrained and hence can be thought of as a random feature extractor. Using the architecture of multi-layer RBMs (Deep Belief Networks), TRMs can be used as a building block for complex hierarchical models. This approach unifies RBM-based approaches to sequential data modeling with the Echo State Network, a powerful approach for black-box system identification. The TRM is tested on a spoken-digits task under noisy conditions, and competitive performance compared to previous models is observed.
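The untrained recurrent feature extractor at the heart of the TRM is essentially an echo-state reservoir; a minimal sketch with assumed sizes (the RBM stacked on top of the reservoir states is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 2, 50  # input and reservoir sizes (illustrative assumptions)

# Random, fixed recurrent weights; rescaling the spectral radius below 1
# gives the echo state property (the state forgets the distant past),
# so the network needs no training at all.
W_in = rng.normal(size=(n_res, n_in)) * 0.5
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def run_reservoir(inputs):
    """Integrate an input sequence; each state is a random temporal feature."""
    s = np.zeros(n_res)
    states = []
    for u in inputs:
        s = np.tanh(W_in @ u + W_res @ s)
        states.append(s)
    return np.array(states)

T = 20
states = run_reservoir(rng.normal(size=(T, n_in)))
# In a TRM, each row of `states` would be fed to an RBM at that time step.
```

Only the readout (here, the RBM) is learned, which is exactly how the abstract sidesteps the difficulties of recurrent network training.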