    Networks and the Best Approximation Property

    Networks can be considered as approximation schemes. Multilayer networks of the backpropagation type can approximate continuous functions arbitrarily well (Cybenko, 1989; Funahashi, 1989; Stinchcombe and White, 1989). We prove that networks derived from regularization theory, including Radial Basis Function networks (Poggio and Girosi, 1989), have a similar property. From the point of view of approximation theory, however, the ability to approximate continuous functions arbitrarily well is not sufficient to characterize a good approximation scheme. More critical is the best approximation property. The main result of this paper is that multilayer networks of the type used in backpropagation do not have the best approximation property. For regularization networks (in particular, Radial Basis Function networks) we prove existence and uniqueness of the best approximation.
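    For reference, the best approximation property invoked here is the standard one from approximation theory; a minimal statement (not quoted from the paper itself) is: given a normed space (X, \|\cdot\|) and an approximating set A \subseteq X, an element a^* \in A is a best approximation to f \in X if

        \|f - a^*\| = \inf_{a \in A} \|f - a\|.

    A scheme has the best approximation property when such an a^* exists for every f; the uniqueness result additionally requires the minimizer to be unique.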

    A Connection Between GRBF and MLP

    Both multilayer perceptrons (MLP) and Generalized Radial Basis Functions (GRBF) have good approximation properties, theoretically and experimentally. Are they related? The main point of this paper is to show that for normalized inputs, multilayer perceptron networks are radial function networks (albeit with a non-standard radial function). This provides an interpretation of the weights w as centers t of the radial function network, and therefore as equivalent to templates. This insight may be useful for practical applications, including better initialization procedures for MLP. In the remainder of the paper, we discuss the relation between the radial functions that correspond to the sigmoid for normalized inputs and well-behaved radial basis functions, such as the Gaussian. In particular, we observe that the radial function associated with the sigmoid is a good approximation to Gaussian basis functions for a range of values of the bias parameter. The implication is that an MLP network can always simulate a Gaussian GRBF network (with the same number of units but fewer parameters); the converse is true only for certain values of the bias parameter. Numerical experiments indicate that this constraint is not always satisfied in practice by MLP networks trained with backpropagation. Multiscale GRBF networks, on the other hand, can approximate MLP networks with a similar number of parameters.
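    The identity behind this equivalence is elementary; as a sketch in assumed notation (not quoted from the paper), for a unit-norm input x and unit-norm weight vector w,

        \|x - w\|^2 = \|x\|^2 + \|w\|^2 - 2\, x \cdot w = 2 - 2\, x \cdot w
        \quad\Longrightarrow\quad x \cdot w = 1 - \tfrac{1}{2}\|x - w\|^2,

    so a sigmoidal unit \sigma(x \cdot w + \theta) = \sigma(1 + \theta - \tfrac{1}{2}\|x - w\|^2) depends on x only through the distance \|x - w\|, i.e. it acts as a radial function centered at w.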

    Function Approximation Using Wavelet And Radial Basis Function Networks

    The Wavelet Neural Network has been introduced as a special feedforward neural network supported by wavelet theory. Such networks can be used directly in function approximation problems. In this dissertation, wavelet networks are proven to be, like many other neural paradigms, a specific case of a generic paradigm named the Weighted Radial Basis Functions (WRBF) Network. The dissertation also investigates the WRBF network of order 2 (WRBF-2), which is also known as the standard RBF network because its exponential function behaves as a Gaussian activation function when the exponent is n = 2.
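    A plausible parametrization of a WRBF unit of order n, written here for illustration only (the dissertation's exact definition may differ), is

        g_n(x) = \exp\!\left( -\sum_i w_i \, |x_i - t_i|^n \right),

    which for n = 2 and uniform weights w_i = 1/(2\sigma^2) reduces to the Gaussian basis function \exp(-\|x - t\|^2 / (2\sigma^2)), consistent with the WRBF-2 remark above.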

    Solving high-order partial differential equations with indirect radial basis function networks

    This paper reports a new numerical method based on radial basis function networks (RBFNs) for solving high-order partial differential equations (PDEs). The variables and their derivatives in the governing equations are represented by integrated RBFNs. The use of integration in constructing neural networks allows the straightforward implementation of multiple boundary conditions and the accurate approximation of high-order derivatives. The proposed RBFN method is verified successfully through the solution of thin-plate bending and viscous flow problems, which are governed by biharmonic equations. For thermally driven cavity flows, solutions are obtained up to high Rayleigh numbers.
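    To make the integrated-RBFN idea concrete, here is a minimal one-dimensional sketch in Python (my own illustration, not the paper's code): the highest derivative u'' is expanded in Gaussian RBFs, the expansion is integrated analytically twice, and the integration constants absorb the boundary conditions, which are imposed as extra collocation rows.

    import numpy as np
    from scipy.special import erf

    # Model problem: u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
    # Choosing f(x) = -pi^2 sin(pi x) gives the exact solution u(x) = sin(pi x).
    f = lambda x: -np.pi**2 * np.sin(np.pi * x)
    u_exact = lambda x: np.sin(np.pi * x)

    centers = np.linspace(0.0, 1.0, 20)[None, :]   # RBF centers (row vector)
    s = 0.15                                       # width (shape) parameter
    x = np.linspace(0.0, 1.0, 40)                  # collocation points

    def phi(x, c):
        # Gaussian basis representing the highest derivative u''
        return np.exp(-((x - c) / s) ** 2)

    def I2(x, c):
        # Second antiderivative of phi, obtained analytically via erf
        u = (x - c) / s
        return 0.5 * np.sqrt(np.pi) * s * (x - c) * erf(u) + 0.5 * s**2 * np.exp(-u**2)

    # Unknowns: RBF weights w plus integration constants c1, c2 in
    #   u(x) = sum_i w_i * I2(x, c_i) + c1 * x + c2.
    A_pde = np.hstack([phi(x[:, None], centers), np.zeros((len(x), 2))])  # u'' rows
    xb = np.array([[0.0], [1.0]])
    A_bc = np.hstack([I2(xb, centers), xb, np.ones((2, 1))])              # boundary rows
    A = np.vstack([A_pde, A_bc])
    b = np.concatenate([f(x), [0.0, 0.0]])

    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, c1, c2 = coef[:-2], coef[-2], coef[-1]
    u_num = I2(x[:, None], centers) @ w + c1 * x + c2
    print("max error:", np.abs(u_num - u_exact(x)).max())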

    The effect of several parameters on radial basis function networks for time series prediction

    In this study, several radial basis function networks are compared according to their approximation ability in time series forecasting problems. Optimal values for the tested parameters are obtained from computer simulation runs. The effects of width selection in Gaussian kernels, of the number of neurons in the hidden layer, and of the choice of kernel function are investigated.
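    As an illustration of the width effect examined here (a sketch under assumptions of my own, not the study's actual data or setup), the following Python snippet fits an RBF network with fixed centers to lagged samples of a toy series by least squares and compares the one-step-ahead test error across Gaussian widths:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy series: noisy sinusoid; predict y[t] from the previous `lags` values.
    t = np.arange(400)
    y = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)
    lags = 4
    X = np.array([y[i : i + lags] for i in range(len(y) - lags)])
    target = y[lags:]

    n_train = 300
    X_tr, y_tr = X[:n_train], target[:n_train]
    X_te, y_te = X[n_train:], target[n_train:]

    # Fixed centers: a random subsample of the training inputs.
    centers = X_tr[rng.choice(len(X_tr), size=25, replace=False)]

    def design(X, centers, width):
        # Gaussian kernel design matrix: G[j, i] = exp(-||x_j - c_i||^2 / (2 width^2))
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width**2))

    for width in (0.05, 0.3, 1.0, 3.0):
        w, *_ = np.linalg.lstsq(design(X_tr, centers, width), y_tr, rcond=None)
        rmse = np.sqrt(np.mean((design(X_te, centers, width) @ w - y_te) ** 2))
        print(f"width={width:4.2f}  test RMSE={rmse:.4f}")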

    Priors, Stabilizers and Basis Functions: From Regularization to Radial, Tensor and Additive Splines

    We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions, and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
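    The regularization-network result this abstract builds on can be stated compactly (a standard formulation, paraphrased rather than quoted): one minimizes the regularized functional

        H[f] = \sum_{i=1}^{N} (y_i - f(x_i))^2 + \lambda \, \|P f\|^2,

    where the stabilizer P encodes the smoothness prior and \lambda > 0 balances data fit against smoothness; the minimizer has the form

        f(x) = \sum_{i=1}^{N} c_i \, G(x; x_i),

    where G is the Green's function of \hat{P} P (\hat{P} the adjoint of P). Different stabilizers yield different Green's functions, hence different basis functions: radial (e.g. Gaussian), tensor-product, or additive splines.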

    The Effectiveness of the Learning Algorithm of Radial Basis Networks with Relation to the Transfer Functions Applied on the Example of Mapping of the Lie of the Land of Zielona Gora City

    The article presents problems connected with the use of radial basis networks for the approximation of the ground surface. The main goal of this paper is to investigate the precision of topographic profile representation in relation to the transfer functions applied. The paper contains a description of the structure of a radial basis network and of network learning by means of the hybrid method, using the notion of the Green matrix pseudoinverse. Special attention is given to the choice of transfer functions: the Gauss function, the exponential function, the Hardy function, spline functions of the third and fourth degree, as well as bicentral functions with an independent slope and rotation. The result of the article is an example of the operation of a network in relation to the transfer functions under discussion.
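    A minimal sketch of the hybrid scheme described above (my illustration; the article's data and exact function definitions may differ): the centers are fixed, the output weights are computed from the pseudoinverse of the design (Green) matrix, and two of the listed transfer functions are compared on a toy surface.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "ground surface" samples on [0, 1]^2, standing in for survey data.
    pts = rng.uniform(size=(200, 2))
    z = np.sin(3 * pts[:, 0]) * np.cos(2 * pts[:, 1])

    # Fixed centers: a random subsample of the data points.
    centers = pts[rng.choice(len(pts), size=40, replace=False)]
    c = 0.2  # shape parameter shared by both transfer functions

    transfer = {
        "Gauss": lambda r: np.exp(-((r / c) ** 2)),
        "Hardy (multiquadric)": lambda r: np.sqrt(r**2 + c**2),
    }

    # Pairwise distances between data points and centers.
    r = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=-1)

    for name, g in transfer.items():
        G = g(r)                    # design ("Green") matrix
        w = np.linalg.pinv(G) @ z   # hybrid step: weights via the pseudoinverse
        rmse = np.sqrt(np.mean((G @ w - z) ** 2))
        print(f"{name:22s} training RMSE = {rmse:.4e}")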