Product Reservoir Computing: Time-Series Computation with Multiplicative Neurons
Echo state networks (ESNs), a type of reservoir computing (RC) architecture,
are efficient and accurate artificial neural systems for time-series processing
and learning. An ESN consists of a recurrent neural network core, called a
reservoir, with a small number of tunable parameters to generate a
high-dimensional representation of an input, and a readout layer which is
easily trained using regression to produce a desired output from the reservoir
states. Certain computational tasks involve real-time calculation of high-order
time correlations, which requires nonlinear transformation either in the
reservoir or the readout layer. A traditional ESN employs a reservoir of
sigmoid or tanh neurons. In contrast, some types of biological neurons obey
response curves better described as a product unit than as a sum and
threshold. Inspired by this class of neurons, we introduce an RC
architecture with a reservoir of product nodes for time-series computation. We
find that the product RC shows many properties of standard ESN such as
short-term memory and nonlinear capacity. On standard benchmarks for chaotic
prediction tasks, the product RC maintains the performance of a standard
nonlinear ESN while being more amenable to mathematical analysis. Our study
provides evidence that such networks are powerful in highly nonlinear tasks
owing to high-order statistics generated by the recurrent product node
reservoir.
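As a sketch of the idea, one possible product-node reservoir replaces the usual tanh update with neurons that raise the previous states to weighted powers. Everything below (the positive-state restriction, the exp/log computation, the weight scales, and the toy input) is an illustrative assumption, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # reservoir size (illustrative)

# Random input and recurrent weights; scaling the spectral radius below 1
# is the usual ESN heuristic, applied here in the log-domain.
w_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def product_update(x, u):
    # Each product node raises the previous states (and the input) to
    # weighted powers: x_i(t+1) = u^{w_in_i} * prod_j x_j(t)^{W_ij}.
    # Computed via exp/log, which requires positive states and inputs.
    return np.exp(W @ np.log(x) + w_in * np.log(u))

x = np.ones(N)
states = []
for t in range(200):
    u = 1.5 + np.sin(0.1 * t)            # strictly positive toy input
    x = product_update(x, u)
    states.append(x.copy())

states = np.array(states)                # (T, N): feed this to a linear readout
```

In log-coordinates this particular update is linear, `log x(t+1) = W log x(t) + w_in log u(t)`, which is one way to see why product reservoirs can be more amenable to analysis; a regression readout on `states` would complete the ESN pipeline.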
Combining a recurrent neural network and a PID controller for prognostic purposes.
In the maintenance field, prognostics is recognized as a key feature, as predicting the remaining useful life of a system makes it possible to avoid inopportune maintenance spending. Since it can be difficult to provide models for that purpose, artificial neural networks appear to be well suited. In this paper, an approach combining a Recurrent Radial Basis Function network (RRBF) and a proportional-integral-derivative (PID) controller is proposed in order to improve the accuracy of predictions. The PID controller attempts to correct the error between the real process variable and the neural network predictions. The approach and its performance are illustrated using two classical prediction benchmarks: the Mackey–Glass chaotic time series and the Box–Jenkins furnace data.
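The correction idea can be sketched as a PID term acting on the error between the observed value and the previous corrected prediction. The gains, the constant-bias toy "predictor", and the constant target below are all illustrative stand-ins, not the paper's RRBF setup:

```python
class PIDCorrector:
    """Adds a PID feedback term to a predictor's raw output.

    The error fed to `correct` is the last observed value minus the
    last corrected prediction; gains are illustrative, not tuned.
    """

    def __init__(self, kp=0.6, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def correct(self, raw_prediction, error):
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return (raw_prediction
                + self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Toy loop: the "neural network" always predicts 0.8 for a process whose
# true value is 1.0; the PID feedback gradually removes the bias.
pid = PIDCorrector()
corrected = 0.8
for _ in range(300):
    error = 1.0 - corrected      # observed minus last corrected prediction
    corrected = pid.correct(0.8, error)
```

The integral term is what removes the systematic bias here; the proportional and derivative terms mainly shape how quickly and smoothly the correction settles.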
Impact of noise on a dynamical system: prediction and uncertainties from a swarm-optimized neural network
In this study, an artificial neural network (ANN) based on particle swarm
optimization (PSO) was developed for time-series prediction. The hybrid
ANN+PSO algorithm was applied to the Mackey–Glass chaotic time series in the
short term. The prediction performance was evaluated and compared with
other studies available in the literature. We also presented properties of
the dynamical system via the study of chaotic behaviour obtained from the
predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with
a Gaussian stochastic procedure (called "stochastic" hybrid ANN+PSO) in
order to obtain a new estimator of the predictions, which also allowed us to
compute prediction uncertainties for noisy Mackey–Glass chaotic time
series. Thus, we studied the impact of noise for several cases with a white
noise level from 0.01 to 0.1.
Comment: 11 pages, 8 figures
Function approximation in high-dimensional spaces using lower-dimensional Gaussian RBF networks.
by Jones Chui.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1992.
Includes bibliographical references (leaves 62-[66]).
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Fundamentals of Artificial Neural Networks --- p.2
Chapter 1.1.1 --- Processing Unit --- p.2
Chapter 1.1.2 --- Topology --- p.3
Chapter 1.1.3 --- Learning Rules --- p.4
Chapter 1.2 --- Overview of Various Neural Network Models --- p.6
Chapter 1.3 --- Introduction to the Radial Basis Function Networks (RBFs) --- p.8
Chapter 1.3.1 --- Historical Development --- p.9
Chapter 1.3.2 --- Some Intrinsic Problems --- p.9
Chapter 1.4 --- Objective of the Thesis --- p.10
Chapter 2 --- Low-dimensional Gaussian RBF networks (LowD RBFs) --- p.13
Chapter 2.1 --- Architecture of LowD RBF Networks --- p.13
Chapter 2.1.1 --- Network Structure --- p.13
Chapter 2.1.2 --- Learning Rules --- p.17
Chapter 2.2 --- Construction of LowD RBF Networks --- p.19
Chapter 2.2.1 --- Growing Heuristic --- p.19
Chapter 2.2.2 --- Pruning Heuristic --- p.27
Chapter 2.2.3 --- Summary --- p.31
Chapter 3 --- Application examples --- p.34
Chapter 3.1 --- Chaotic Time Series Prediction --- p.35
Chapter 3.1.1 --- Performance Comparison --- p.39
Chapter 3.1.2 --- Sensitivity Analysis of MSE THRESHOLDS --- p.41
Chapter 3.1.3 --- Effects of Increased Embedding Dimension --- p.41
Chapter 3.1.4 --- Comparison with Tree-Structured Network --- p.46
Chapter 3.1.5 --- Overfitting Problem --- p.46
Chapter 3.2 --- Nonlinear prediction of speech signal --- p.49
Chapter 3.2.1 --- Comparison with Linear Predictive Coding (LPC) --- p.54
Chapter 3.2.2 --- Performance Test in Noisy Conditions --- p.55
Chapter 3.2.3 --- Iterated Prediction of Speech --- p.59
Chapter 4 --- Conclusion --- p.60
Chapter 4.1 --- Discussions --- p.60
Chapter 4.2 --- Limitations and Suggestions for Further Research --- p.61
Bibliography --- p.6
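The technique named in the title, approximating a function with Gaussian RBF units and a linear readout, can be sketched as follows. The target function, the fixed grid of centers, and the shared width are illustrative choices; the thesis's growing/pruning construction is not reproduced here:

```python
import numpy as np

x = np.linspace(-3, 3, 200)
y = np.sinc(x)                        # target function (illustrative)

centers = np.linspace(-3, 3, 25)      # fixed grid of Gaussian centers
width = 0.4                           # shared width for all units

def design(pts):
    # One Gaussian basis function per center, evaluated at each point
    return np.exp(-((pts[:, None] - centers[None, :]) ** 2)
                  / (2 * width ** 2))

Phi = design(x)
# The output weights are the only trained parameters, so fitting
# reduces to a linear least-squares problem.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
max_err = np.max(np.abs(Phi @ w - y))
```

Because only the linear readout is fitted, training is a single least-squares solve; the thesis's contribution lies in choosing fewer, lower-dimensional units instead of a dense grid like this one.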
Multilayered feed forward Artificial Neural Network model to predict the average summer-monsoon rainfall in India
In the present research, the possibility of predicting average summer-monsoon
rainfall over India has been analyzed with Artificial Neural Network models.
In formulating the Artificial Neural Network based predictive model, three-
layered networks have been constructed with sigmoid non-linearity. The models
under study differ in the number of hidden neurons. After a thorough
training and test procedure, the neural net with three nodes in the hidden
layer is found to be the best predictive model.
Comment: 19 pages, 1 table, 3 figures
Learning and predicting time series by neural networks
Artificial neural networks trained on a time series are supposed to
achieve two abilities: firstly, to predict the series many time steps ahead,
and secondly, to learn the rule which has produced the series. It is shown
that prediction and learning are not necessarily related to each other:
chaotic sequences can be learned but not predicted, while quasiperiodic
sequences can be well predicted but not learned.
Comment: 5 pages