Radial Basis Function Neural Networks: A Review
Radial Basis Function neural networks (RBFNNs) represent an attractive alternative to other neural network models. One reason is that they form a unifying link between function approximation, regularization, noisy interpolation, classification, and density estimation. Training RBF neural networks is also faster than training multi-layer perceptron networks. RBFNN learning is usually split into an unsupervised part, where the centers and widths of the Gaussian basis functions are set, and a linear supervised part, where the output weights are computed. This paper reviews various learning methods for determining the centers, widths, and synaptic weights of RBFNNs. In addition, we point to some applications of RBFNNs in various fields and, finally, name software that can be used for implementing RBFNNs.
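The two-stage learning scheme described in the abstract can be sketched as follows. This is a minimal illustration, not a method from the review itself: the random subsample standing in for the unsupervised clustering step and the average-distance width heuristic are assumptions chosen for brevity.

```python
import numpy as np

def train_rbf(X, y, n_centers=10, seed=0):
    """Two-stage RBFNN training: unsupervised centers/widths, then linear weights."""
    rng = np.random.default_rng(seed)
    # Stage 1 (unsupervised): choose centers, e.g. by k-means; here a simple
    # random subsample of the data stands in for the clustering step.
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    # A common width heuristic (an assumption here): average inter-center distance.
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    width = d.sum() / (n_centers * (n_centers - 1))
    # Stage 2 (supervised): with centers and widths fixed, the output weights
    # solve a linear least-squares problem.
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1)**2
                 / (2 * width**2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, width, w

def predict_rbf(X, centers, width, w):
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1)**2
                 / (2 * width**2))
    return Phi @ w

# Example: approximate a 1-D sine function.
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X).ravel()
c, s, w = train_rbf(X, y, n_centers=15)
```

Because only the output layer is trained supervised, the fit reduces to one linear solve, which is why RBFNN training is typically faster than back-propagating through a multi-layer perceptron.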
The geometry of nonlinear least squares with applications to sloppy models and optimization
Parameter estimation by nonlinear least squares minimization is a common
problem with an elegant geometric interpretation: the possible parameter values
of a model induce a manifold in the space of data predictions. The minimization
problem is then to find the point on the manifold closest to the data. We show
that the model manifolds of a large class of models, known as sloppy models,
have many universal features; they are characterized by a geometric series of
widths, extrinsic curvatures, and parameter-effects curvatures. A number of
common difficulties in optimizing least squares problems are due to this common
structure. First, algorithms tend to run into the boundaries of the model
manifold, causing parameters to diverge or become unphysical. We introduce the
model graph as an extension of the model manifold to remedy this problem. We
argue that appropriate priors can remove the boundaries and improve convergence
rates. We show that typical fits will have many evaporated parameters. Second,
bare model parameters are usually ill-suited to describing model behavior; cost
contours in parameter space tend to form hierarchies of plateaus and canyons.
Geometrically, we understand this inconvenient parametrization as an extremely
skewed coordinate basis and show that it induces a large parameter-effects
curvature on the manifold. Using coordinates based on geodesic motion, these
narrow canyons are transformed in many cases into a single quadratic, isotropic
basin. We interpret the modified Gauss-Newton and Levenberg-Marquardt fitting
algorithms as an Euler approximation to geodesic motion in these natural
coordinates on the model manifold and the model graph respectively. By adding a
geodesic acceleration adjustment to these algorithms, we alleviate the
difficulties from parameter-effects curvature, improving both efficiency and
success rates at finding good fits.
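The geodesic-acceleration idea in this abstract can be sketched as a second-order correction to the ordinary Levenberg-Marquardt step, with the directional second derivative of the residuals estimated by finite differences. This is an illustrative sketch under those assumptions, not the authors' exact algorithm; the damping-update constants and step size `h` are arbitrary choices.

```python
import numpy as np

def lm_geodesic(residuals, jac, theta, lam=1e-3, n_iter=50, h=0.1):
    """Levenberg-Marquardt with a geodesic-acceleration correction (sketch)."""
    theta = np.asarray(theta, float)
    for _ in range(n_iter):
        r, J = residuals(theta), jac(theta)
        A = J.T @ J + lam * np.eye(len(theta))
        v = np.linalg.solve(A, -J.T @ r)          # first-order LM step (velocity)
        # Second directional derivative of the residuals along v, by central
        # finite differences.
        rpp = (residuals(theta + h * v) - 2 * r + residuals(theta - h * v)) / h**2
        a = np.linalg.solve(A, -J.T @ rpp)        # acceleration correction
        new = theta + v + 0.5 * a
        if np.sum(residuals(new)**2) < np.sum(r**2):
            theta, lam = new, lam / 3             # accept step, relax damping
        else:
            lam *= 3                              # reject step, increase damping
    return theta

# Example: fit A * exp(-k * t) to noiseless data with unknown A and k.
t = np.linspace(0, 3, 30)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(-p[1] * t) - y
jac = lambda p: np.stack([np.exp(-p[1] * t),
                          -p[0] * t * np.exp(-p[1] * t)], axis=1)
p = lm_geodesic(res, jac, [1.0, 0.5])
```

The acceleration term compensates for the parameter-effects curvature that bends the otherwise-narrow canyons in parameter space, which is the mechanism the abstract credits for improved efficiency and success rates.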
Separable Gaussian Neural Networks: Structure, Analysis, and Function Approximations
The Gaussian-radial-basis function neural network (GRBFNN) has been a popular
choice for interpolation and classification. However, it is computationally
intensive when the dimension of the input vector is high. To address this
issue, we propose a new feedforward network - Separable Gaussian Neural Network
(SGNN) by taking advantage of the separable property of Gaussian functions,
which splits input data into multiple columns and sequentially feeds them into
parallel layers formed by uni-variate Gaussian functions. This structure
reduces the number of neurons from O(N^d) of GRBFNN to O(dN), which
exponentially improves the computational speed of SGNN and makes it scale
linearly as the input dimension increases. In addition, SGNN can preserve the
dominant subspace of the Hessian matrix of GRBFNN in gradient descent training,
leading to a similar level of accuracy to GRBFNN. It is experimentally
demonstrated that SGNN can achieve 100 times speedup with a similar level of
accuracy over GRBFNN on tri-variate function approximations. The SGNN also has
better trainability and is more tuning-friendly than DNNs with ReLU and sigmoid
functions. For approximating functions with complex geometry, SGNN can lead to
three orders of magnitude more accurate results than a ReLU-DNN with twice the
number of layers and the number of neurons per layer.
Application of a radial basis function neural network for diagnosis of diabetes mellitus
In this article an attempt is made to study the applicability of a general-purpose, supervised feed-forward neural network with one hidden layer, namely the Radial Basis Function (RBF) neural network. It uses a relatively small number of locally tuned units and is adaptive in nature. RBFs are suitable for pattern recognition and classification. The performance of the RBF neural network was also compared with the most commonly used multilayer perceptron network model and classical logistic regression. A diabetes database was used for the empirical comparison, and the results show that the RBF network performs better than the other models.
Neural networks for characterizing magnetic flux leakage signals
http://www.worldcat.org/oclc/3278680
Application of Wilcoxon Norm for increased Outlier Insensitivity in Function Approximation Problems
In system theory, characterization and identification are fundamental problems. When the plant behavior is completely unknown, it may be characterized using a certain model, and its identification may then be carried out with artificial neural networks (ANNs), such as the multilayer perceptron (MLP) or the functional link artificial neural network (FLANN), or with Radial Basis Functions (RBFs), using learning rules such as the back-propagation (BP) algorithm. These offer flexibility, adaptability, and versatility, allowing a variety of approaches to meet a specific goal depending upon the circumstances and the requirements of the design specifications. The first aim of the present thesis is to provide a framework for the systematic design of adaptation laws for nonlinear system identification and channel equalization. While constructing an artificial neural network or a radial basis function neural network, the designer is often faced with the problem of choosing a network of the right size for the task. Using a smaller neural network decreases the cost of computation and increases generalization ability; however, a network which is too small may never solve the problem, while a larger network might. Since transmission bandwidth is one of the most precious resources in digital communication, communication channels are usually modeled as band-limited linear finite impulse response (FIR) filters with a low-pass frequency response.
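The title's Wilcoxon norm can be illustrated with one common rank-based definition from the robust-estimation literature: residuals are weighted by a score that depends on their rank rather than their magnitude, so a single gross outlier contributes linearly instead of quadratically. This sketch assumes that standard definition; the thesis's exact formulation may differ.

```python
import numpy as np

def wilcoxon_norm(v):
    """Wilcoxon (pseudo-)norm: sum of values weighted by the rank-based
    score a(i) = sqrt(12) * (i/(n+1) - 1/2), a common choice in
    rank-based robust estimation (an assumption here)."""
    v = np.asarray(v, float)
    n = len(v)
    ranks = np.argsort(np.argsort(v)) + 1              # ranks 1..n
    scores = np.sqrt(12) * (ranks / (n + 1) - 0.5)
    return np.sum(scores * v)

errors = np.array([0.1, -0.2, 0.05, -0.1, 0.15])
outlier = np.array([0.1, -0.2, 0.05, -0.1, 5.0])       # one gross outlier
# The squared-error cost explodes with the outlier; the Wilcoxon norm
# grows far more slowly, which is the source of the outlier insensitivity.
print(np.sum(errors**2), wilcoxon_norm(errors))
print(np.sum(outlier**2), wilcoxon_norm(outlier))
```

Minimizing such a rank-weighted cost instead of the usual squared error is what makes the resulting function approximation less sensitive to outliers.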
A Dynamic Parameter Tuning Algorithm for RBF Neural Networks
The objective of this thesis is to present a methodology for fine-tuning the parameters of radial basis function (RBF) neural networks, thus improving their performance. Three main parameters affect the performance of an RBF network: the centers and widths of the RBF nodes and the weights associated with each node. A gridded center and orthogonal search algorithm have been used to initially determine the parameters of the RBF network. A parameter tuning algorithm has been developed to optimize these parameters and improve the performance of the RBF network. When necessary, the recursive least squares solution may be used to include new nodes in the network architecture. To study the behavior of the proposed network, six months of real data at fifteen-minute intervals have been collected from a North American pulp and paper company. The data have been used to evaluate the performance of the proposed network in approximating the relationship between the optical properties of base sheet paper and the process variables. The experiments have been very successful, and Pearson correlation coefficients of up to 0.98 have been obtained for the approximation.
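Tuning all three parameter groups named in the abstract (centers, widths, weights) can be sketched as joint gradient descent on a squared-error cost. This is a generic sketch of that idea, not the thesis's actual tuning algorithm, and the learning rate and iteration count are arbitrary assumptions.

```python
import numpy as np

def tune_rbf(X, y, c, s, w, lr=0.01, n_iter=500):
    """Jointly fine-tune RBF centers c (m, d), widths s (m,), and weights
    w (m,) by gradient descent on the mean squared error (sketch)."""
    for _ in range(n_iter):
        diff = X[:, None, :] - c[None, :, :]           # (n, m, d)
        d2 = np.sum(diff**2, axis=-1)                  # squared distances (n, m)
        Phi = np.exp(-d2 / (2 * s**2))                 # hidden activations (n, m)
        e = Phi @ w - y                                # prediction errors (n,)
        grad_w = Phi.T @ e
        g = (e[:, None] * Phi) * w[None, :]            # shared factor e * phi * w
        grad_c = np.einsum('nm,nmd->md', g / s[None, :]**2, diff)
        grad_s = np.sum(g * d2, axis=0) / s**3
        w -= lr * grad_w / len(X)
        c -= lr * grad_c / len(X)
        s -= lr * grad_s / len(X)
    return c, s, w

# Example: fine-tune an 8-node RBF network on a sine target.
X = np.linspace(0, 2 * np.pi, 100)[:, None]
y = np.sin(X).ravel()
c = np.linspace(0, 2 * np.pi, 8)[:, None]
s, w = np.ones(8), np.zeros(8)
c, s, w = tune_rbf(X, y, c, s, w)
```

Unlike the two-stage scheme where only the weights are trained, moving the centers and widths as well lets the basis functions migrate toward the regions where the approximation error is largest.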
3-D defect profile reconstruction from magnetic flux leakage signatures using wavelet basis function neural networks
The most popular technique for inspecting natural gas pipelines involves the use of magnetic flux leakage (MFL) methods. The measured MFL signal is interpreted to obtain information concerning the structural integrity of the pipe. Defect characterization involves the task of calculating the shape and size of defects based on the information contained in the signal. An accurate estimate of the defect profile allows assessment of the safe operating pressure of the pipe. Artificial neural networks (ANNs) have been employed for characterizing defects in the past. However, conventional neural networks such as radial basis function neural networks are not always suitable for the following reasons: (1) It is difficult to quantify and measure the confidence level associated with the profile estimates. (2) They do not provide adequate control over the trade-off between output accuracy and network complexity. (3) Optimal center selection schemes typically use an optimization technique such as the least-mean-square (LMS) algorithm, a tedious and computationally intensive procedure. These disadvantages can be overcome by employing a wavelet basis function (WBF) neural network. Such networks allow multiple scales of approximation. For the specific application at hand, Gaussian radial basis functions and Mexican hat wavelet frames are used as scaling functions and wavelets, respectively. The proposed basis function centers are calculated using a dyadic expansion scheme and the k-means clustering algorithm. The validity of the proposed approach is demonstrated by predicting defect profiles from simulation data as well as experimental magnetic flux leakage signals. The results demonstrate that wavelet basis function neural networks can successfully map MFL signatures to three-dimensional defect profiles. The center selection scheme requires minimal effort compared to conventional methods. Also, the accuracy of the output can be controlled by varying the number of network resolutions. It is also shown that the use of a priori information, such as estimates of the geometric parameters of the defect, helps improve characterization results.
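The two ingredients named in the abstract, the Mexican hat wavelet and the dyadic placement of centers, can be sketched in a few lines. The particular dyadic layout below (2^j equally spaced centers at level j) is an illustrative assumption, not necessarily the thesis's exact expansion scheme.

```python
import numpy as np

def mexican_hat(x):
    """Mexican hat (Ricker) wavelet: the second derivative of a Gaussian,
    up to sign and normalization."""
    return (1 - x**2) * np.exp(-x**2 / 2)

def dyadic_centers(lo, hi, level):
    """2**level equally spaced centers on [lo, hi]; finer levels double
    the number of centers, giving the multiple scales of approximation."""
    step = (hi - lo) / 2**level
    return lo + step * (np.arange(2**level) + 0.5)

print(dyadic_centers(0.0, 1.0, 2))  # prints: [0.125 0.375 0.625 0.875]
```

Adding or removing resolution levels changes the number of wavelet nodes, which is how such a network trades output accuracy against complexity.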