A Dynamic Parameter Tuning Algorithm For Rbf Neural Networks
The objective of this thesis is to present a methodology for fine-tuning the parameters of radial basis function (RBF) neural networks, thus improving their performance. Three main parameters affect the performance of an RBF network: the centers and widths of the RBF nodes and the weights associated with each node. A gridded-center and orthogonal search algorithm has been used to initially determine the parameters of the RBF network. A parameter tuning algorithm has been developed to optimize these parameters and improve the performance of the RBF network. When necessary, the recursive least squares solution may be used to add new nodes to the network architecture. To study the behavior of the proposed network, six months of real data at fifteen-minute intervals have been collected from a North American pulp and paper company. The data have been used to evaluate the performance of the proposed network in approximating the relationship between the optical properties of base sheet paper and the process variables. The experiments have been very successful, and Pearson correlation coefficients of up to 0.98 have been obtained for the approximation.
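As a rough illustration of the linear-in-the-parameters structure this abstract relies on, the sketch below fits the output weights of a small Gaussian RBF network by least squares. The gridded-center initialisation and the recursive tuning of centers and widths are not reproduced; the function names and the toy sine target are invented for illustration, not taken from the thesis.

```python
import numpy as np

def rbf_design_matrix(X, centers, widths):
    """Gaussian RBF activations for each input/center pair."""
    # squared distances between every input and every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fit_rbf_weights(X, y, centers, widths):
    """Least-squares solution for the output-layer weights."""
    Phi = rbf_design_matrix(X, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# toy usage: approximate sin on [0, 2*pi] with gridded centers
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0])
centers = np.linspace(0, 2 * np.pi, 10)[:, None]  # gridded centers
widths = np.full(10, 0.7)                         # fixed widths
w = fit_rbf_weights(X, y, centers, widths)
pred = rbf_design_matrix(X, centers, widths) @ w
```

Once the centers and widths are fixed, the weight fit is a plain linear least-squares problem, which is what makes recursive least-squares updates possible when nodes are added.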
Approximation with Random Bases: Pro et Contra
In this work we discuss the problem of selecting suitable approximators from
families of parameterized elementary functions that are known to be dense in a
Hilbert space of functions. We consider and analyze published procedures, both
randomized and deterministic, for selecting elements from these families that
have been shown to ensure the rate of convergence in norm of order
$O(1/\sqrt{N})$, where $N$ is the number of elements. We show that both randomized and
deterministic procedures are successful if additional information about the
families of functions to be approximated is provided. In the absence of such
additional information one may observe exponential growth of the number of
terms needed to approximate the function and/or extreme sensitivity of the
outcome of the approximation to parameters. Implications of our analysis for
applications of neural networks in modeling and control are illustrated with
examples.
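A minimal sketch of the randomized procedure discussed here: draw the inner parameters of the basis elements at random, fit the outer coefficients by least squares, and observe how the approximation error behaves as the number of elements N grows. The cosine features, target function, and parameter ranges below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 400)[:, None]
y = np.sin(3 * X[:, 0])                       # function to approximate

def random_feature_error(n_terms):
    """Fit outer coefficients over n_terms random basis elements;
    return the root-mean-square approximation error."""
    W = rng.normal(size=(1, n_terms)) * 3.0   # random inner weights
    b = rng.uniform(-np.pi, np.pi, n_terms)   # random biases
    Phi = np.cos(X @ W + b)                   # random cosine features
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sqrt(np.mean((Phi @ c - y) ** 2))

# error for an increasing number of randomly drawn elements
errs = [random_feature_error(n) for n in (5, 20, 80)]
```

For this smooth one-dimensional target the error shrinks as N grows; the point of the paper is that without prior information about the target, the number of terms needed can grow exponentially and the outcome can be extremely sensitive to the random draw.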
Greedy Shallow Networks: An Approach for Constructing and Training Neural Networks
We present a greedy-based approach to construct an efficient single hidden
layer neural network with the ReLU activation that approximates a target
function. In our approach we obtain a shallow network by utilizing a greedy
algorithm with the prescribed dictionary provided by the available training
data and a set of possible inner weights. To facilitate the greedy selection
process we employ an integral representation of the network, based on the
ridgelet transform, that significantly reduces the cardinality of the
dictionary and hence promotes feasibility of the greedy selection. Our approach
allows for the construction of efficient architectures which can be treated
either as improved initializations to be used in place of random-based
alternatives, or as fully-trained networks in certain cases, thus potentially
nullifying the need for backpropagation training. Numerical experiments
demonstrate the tenability of the proposed concept and its advantages compared
to the conventional techniques for selecting architectures and initializations
for neural networks.
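The greedy selection step can be sketched in the spirit of orthogonal matching pursuit over a fixed ReLU dictionary. The ridgelet-based dictionary reduction described above is not reproduced; the dictionary, target, and function names below are illustrative assumptions.

```python
import numpy as np

def greedy_shallow_fit(X, y, W, b, n_select):
    """Greedily pick ReLU atoms from a fixed dictionary: at each step
    add the atom most correlated with the current residual, then refit
    the outer coefficients over all chosen atoms by least squares."""
    D = np.maximum(X @ W + b, 0.0)              # ReLU dictionary
    D /= np.linalg.norm(D, axis=0) + 1e-12      # normalise atoms
    chosen, residual = [], y.copy()
    for _ in range(n_select):
        scores = np.abs(D.T @ residual)
        scores[chosen] = -np.inf                # never re-pick an atom
        chosen.append(int(np.argmax(scores)))
        c, *_ = np.linalg.lstsq(D[:, chosen], y, rcond=None)
        residual = y - D[:, chosen] @ c
    return chosen, y - residual                 # indices and fitted values

# toy usage: approximate |x| with 8 atoms chosen from 200 candidates
rng = np.random.default_rng(2)
X = np.linspace(-1, 1, 300)[:, None]
y = np.abs(X[:, 0])
W = rng.normal(size=(1, 200))                   # candidate inner weights
b = rng.uniform(-1, 1, 200)                     # candidate biases
idx, fit = greedy_shallow_fit(X, y, W, b, n_select=8)
```

The chosen atoms and their coefficients define a single-hidden-layer ReLU network that can serve as an initialization, or, when the residual is already small, as the trained network itself.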
Forecasting the geomagnetic activity of the Dst Index using radial basis function networks
The Dst index is a key parameter which characterises the disturbance of the geomagnetic field in magnetic storms. Modelling of the Dst index is thus very important for the analysis of the geomagnetic field. A data-based modelling approach, aimed at obtaining efficient models based on limited input-output observational data, provides a powerful tool for analysing and forecasting geomagnetic activities including the prediction of the Dst index. Radial basis function (RBF) networks are an important and popular network model for nonlinear system identification and dynamical modelling. A novel generalised multiscale RBF (MSRBF) network is introduced for Dst index modelling. The proposed MSRBF network can easily be converted into a linear-in-the-parameters form and the training of the linear network model can easily be implemented using an orthogonal least squares (OLS) type algorithm. One advantage of the new MSRBF network, compared with traditional single scale RBF networks, is that the new network is more flexible for describing complex nonlinear dynamical systems.
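The OLS-type training mentioned above can be illustrated with the classical forward-selection scheme that ranks candidate regressors by their error reduction ratio (ERR), i.e. the fraction of output variance each orthogonalised regressor explains. The Gaussian dictionary and sine target below are toy assumptions, not the MSRBF model or the Dst data.

```python
import numpy as np

def ols_err_ranking(Phi, y, n_select):
    """OLS forward selection: orthogonalise each candidate column of Phi
    against the regressors already chosen (Gram-Schmidt), rank candidates
    by error reduction ratio, and pick greedily."""
    selected, errs, Q = [], [], []
    for _ in range(n_select):
        best, best_err, best_w = None, -1.0, None
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            w = Phi[:, j].copy()
            for q in Q:                          # orthogonalise vs chosen
                w -= (q @ Phi[:, j]) / (q @ q) * q
            denom = (w @ w) * (y @ y)
            err = (w @ y) ** 2 / denom if denom > 1e-12 else 0.0
            if err > best_err:
                best, best_err, best_w = j, err, w
        selected.append(best)
        errs.append(best_err)
        Q.append(best_w)
    return selected, errs

# toy usage: select 8 Gaussian regressors (centres on the data) for sin
X = np.linspace(0, 2 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
centers = X[::5]                                 # 20 candidate centres
Phi = np.exp(-(X - centers.T) ** 2 / (2 * 0.8 ** 2))
sel, errs = ols_err_ranking(Phi, y, n_select=8)
```

Because the model is linear in the parameters, each ERR value is cheap to compute, and the running sum of ERRs tells you directly how much output variance the selected terms jointly explain.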
Lattice dynamical wavelet neural networks implemented using particle swarm optimisation for spatio-temporal system identification
Starting from the basic concept of coupled map lattices, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNN), is introduced for spatio-temporal system identification, by combining an efficient wavelet representation with a coupled map lattice model. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimisation (PSO) algorithm, is proposed for augmenting the proposed network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, by applying the orthogonal projection pursuit algorithm, significant wavelet-neurons are adaptively and successively recruited into the network, where adjustable parameters of the associated wavelet-neurons are optimised using a particle swarm optimiser. The resultant network model, obtained in the first stage, may however be redundant. In the second stage, an orthogonal least squares (OLS) algorithm is then applied to refine and improve the initially trained network by removing redundant wavelet-neurons from the network. The proposed two-stage hybrid training procedure can generally produce a parsimonious network model, in which wavelet-neurons are ranked according to the capability of each neuron to represent the total variance in the system output signal. Two spatio-temporal system identification examples are presented to demonstrate the performance of the proposed new modelling framework.
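For readers unfamiliar with the PSO component, a minimal particle swarm optimiser is sketched below, applied to tuning the scale and translation of a single Mexican-hat wavelet neuron. The inertia and acceleration coefficients, bounds, and target are illustrative assumptions, not the LDWNN procedure itself.

```python
import numpy as np

def pso_minimise(f, bounds, n_particles=20, n_iters=60, seed=3):
    """Minimal particle swarm optimiser: each particle tracks its own
    best position and the swarm best, with the velocity update
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)               # keep particles in bounds
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# hypothetical use: tune the (scale, translation) of one Mexican-hat
# wavelet neuron to match a bump centred at t = 1
t = np.linspace(-4, 4, 200)
target = (1 - (t - 1) ** 2) * np.exp(-(t - 1) ** 2 / 2)

def loss(p):
    a, b = p                                     # scale, translation
    u = (t - b) / a
    return np.mean(((1 - u ** 2) * np.exp(-u ** 2 / 2) - target) ** 2)

best, best_loss = pso_minimise(loss, bounds=[(0.2, 3.0), (-3.0, 3.0)])
```

In the two-stage scheme described above, an optimiser of this kind would tune each recruited wavelet-neuron's adjustable parameters before the OLS pruning pass removes redundant neurons.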