
    Theoretical Interpretations and Applications of Radial Basis Function Networks

    Medical applications have usually treated Radial Basis Function Networks (RBFNs) simply as Artificial Neural Networks. However, RBFNs are knowledge-based networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, together with a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
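
    As a concrete anchor for these interpretations, the sketch below fits a small Gaussian RBFN to noisy one-dimensional data with NumPy. It is a minimal illustration, not taken from the paper: the centres, width, and least-squares training are illustrative choices, and the same fitted model can be read as a one-hidden-layer ANN or as kernel regression on a fixed dictionary.

```python
# Minimal Gaussian RBF network sketch (illustrative, not from the paper).
import numpy as np

def rbf_design_matrix(x, centres, width):
    """Gaussian basis phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))."""
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

centres = np.linspace(0.0, 1.0, 10)   # hidden "neurons" / kernel centres
width = 0.1
Phi = rbf_design_matrix(x, centres, width)

# Least-squares output weights: read this as training the linear output
# layer of an ANN, or as regression on a fixed kernel dictionary.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```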

    Computation of transient viscous flows using indirect radial basis function networks

    In this paper, an indirect/integrated radial-basis-function network (IRBFN) method is further developed to solve transient partial differential equations (PDEs) governing fluid flow problems. Spatial derivatives are discretized using one- and two-dimensional IRBFN interpolation schemes, whereas temporal derivatives are approximated using a method of lines and a finite-difference technique. In the case of moving interface problems, the IRBFN method is combined with the level set method to capture the evolution of the interface. The accuracy of the method is investigated by considering several benchmark test problems, including the classical lid-driven cavity flow. Very accurate results are achieved using relatively small numbers of data points.
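
    The paper's indirect (integrated) RBFN discretisation is more elaborate than can be shown here; the toy sketch below instead uses plain direct Gaussian RBF collocation for the spatial derivative of the 1-D heat equation, with an explicit Euler step in time, to illustrate the overall method-of-lines structure. The grid size, shape parameter, and time step are illustrative assumptions.

```python
# Toy method-of-lines solver: RBF collocation in space, Euler in time.
import numpy as np

# 1-D heat equation u_t = u_xx on [0, 1], u(0)=u(1)=0, u(x,0)=sin(pi x).
N, eps = 21, 20.0                      # grid size and RBF shape parameter
x = np.linspace(0.0, 1.0, N)
r = x[:, None] - x[None, :]
Phi = np.exp(-(eps * r) ** 2)          # Gaussian kernel matrix
Phi_xx = (4 * eps**4 * r**2 - 2 * eps**2) * Phi   # second x-derivative
D2 = np.linalg.solve(Phi.T, Phi_xx.T).T           # differentiation matrix

u = np.sin(np.pi * x)
dt, steps = 1e-5, 1000                 # explicit Euler in time
for _ in range(steps):
    u = u + dt * (D2 @ u)
    u[0] = u[-1] = 0.0                 # Dirichlet boundary conditions

exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * dt * steps)
print("max error at t = 0.01:", np.abs(u - exact).max())
```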

    A new neural network technique for the design of multilayered microwave shielded bandpass filters

    In this work, we propose a novel neural-network-based technique for the design of microwave filters in shielded printed technology. The technique uses radial basis function neural networks to represent the nonlinear relations between the quality factors and coupling coefficients and the geometrical dimensions of the resonators. Radial basis function neural networks are employed for the first time in the design of shielded printed filters, and they permit fast and precise operation with only a limited set of training data. Thanks to a new cascade configuration, a set of two neural networks provides the dimensions of the complete filter in a fast and accurate way. To improve the calculation of the geometrical dimensions, the neural networks can take as inputs both electrical parameters and physical dimensions computed by other neural networks. The neural network technique is combined with gradient-based optimization methods to further improve the response of the filters. Results are presented to demonstrate the usefulness of the proposed technique for the design of practical microwave printed coupled-line and hairpin filters.
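
    The cascade idea can be sketched with two tiny Gaussian RBF regressors in NumPy: the first maps electrical targets (quality factor Q and coupling coefficient k) to one geometric dimension, and the second takes both the electrical targets and that predicted dimension to produce another. Everything here, including the synthetic Q/k-to-geometry mappings and the variable names, is an illustrative assumption rather than the paper's data or model.

```python
# Hypothetical cascade of two Gaussian RBF regressors (synthetic data).
import numpy as np

class RBFNet:
    """Gaussian RBF regressor: centres at training points, linear weights."""
    def __init__(self, width):
        self.width = width
    def _phi(self, X):
        d2 = ((X[:, None, :] - self.C[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))
    def fit(self, X, y):
        self.C = X
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self
    def predict(self, X):
        return self._phi(X) @ self.w

rng = np.random.default_rng(1)
Q = rng.uniform(50, 200, 200)                # quality factor (synthetic)
k = rng.uniform(0.01, 0.1, 200)              # coupling coefficient (synthetic)
E = np.column_stack([Q / 200.0, k * 10.0])   # scaled electrical inputs

gap = 0.2 + 0.5 * k + 1e-3 * Q               # pretend geometry: coupling gap (mm)
length = 5.0 + 20.0 * k + 0.01 * Q + 0.3 * gap   # resonator length (mm)

net1 = RBFNet(width=0.3).fit(E, gap)
# Cascade: the second net also sees the first net's predicted dimension.
X2 = np.column_stack([E, net1.predict(E)[:, None]])
net2 = RBFNet(width=0.3).fit(X2, length)

test = np.array([[120 / 200.0, 0.05 * 10.0]])
g = net1.predict(test)
l = net2.predict(np.column_stack([test, g[:, None]]))
print("predicted gap, length:", g[0], l[0])
```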

    Solving high-order partial differential equations with indirect radial basis function networks

    This paper reports a new numerical method based on radial basis function networks (RBFNs) for solving high-order partial differential equations (PDEs). The variables and their derivatives in the governing equations are represented by integrated RBFNs. The use of integration in constructing the networks allows the straightforward implementation of multiple boundary conditions and the accurate approximation of high-order derivatives. The proposed RBFN method is verified successfully through the solution of thin-plate bending and viscous flow problems, which are governed by biharmonic equations. For thermally driven cavity flows, the solutions are obtained up to a high Rayleigh number.
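
    To make the integrated construction concrete, the sketch below solves a fourth-order model problem, u'''' = 24 on [0, 1] with the clamped conditions u(0) = u(1) = u'(0) = u'(1) = 0 (exact solution x^2 (1 - x)^2): the highest derivative is expanded in Gaussian RBFs, the basis is integrated four times, and the four integration constants absorb the four boundary conditions directly. This is an illustrative simplification of the paper's scheme, with the analytic integration of the basis replaced by numerical quadrature via SciPy's cumulative_trapezoid.

```python
# Integrated-RBF sketch for a fourth-order ODE (illustrative simplification).
import numpy as np
from scipy.integrate import cumulative_trapezoid

N = 201
x = np.linspace(0.0, 1.0, N)
centres = np.linspace(0.0, 1.0, 25)
eps = 10.0
Phi = np.exp(-(eps * (x[:, None] - centres[None, :])) ** 2)  # basis for u''''

def integrate_cols(A):
    """Cumulative integral of each basis column from x = 0."""
    return cumulative_trapezoid(A, x, axis=0, initial=0.0)

I1 = integrate_cols(Phi); I2 = integrate_cols(I1)
I3 = integrate_cols(I2); I4 = integrate_cols(I3)

# u(x) = I4 @ w + c3 x^3/6 + c2 x^2/2 + c1 x + c0, with u''''(x) = Phi @ w.
P  = np.column_stack([x**3 / 6.0, x**2 / 2.0, x, np.ones_like(x)])
dP = np.column_stack([x**2 / 2.0, x, np.ones_like(x), np.zeros_like(x)])
U  = np.hstack([I4, P])          # rows evaluate u at each grid point
dU = np.hstack([I3, dP])         # rows evaluate u' at each grid point

f = 24.0 * np.ones(N)            # u'''' = 24 has exact solution x^2 (1 - x)^2
A_pde = np.hstack([Phi, np.zeros((N, 4))])

# Stack the PDE collocation rows with the four boundary conditions,
# u(0) = u(1) = u'(0) = u'(1) = 0, and solve in the least-squares sense.
A = np.vstack([A_pde, U[[0, -1]], dU[[0, -1]]])
b = np.concatenate([f, np.zeros(4)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

u = U @ coef
exact = x**2 * (1.0 - x)**2
print("max error:", np.abs(u - exact).max())
```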

    Reinforcement Learning using Augmented Neural Networks

    Neural networks allow Q-learning reinforcement learning agents such as deep Q-networks (DQN) to approximate complex mappings from state spaces to value functions. However, this also brings drawbacks compared with other function approximators such as tile coding or its generalisation, radial basis functions (RBF): neural networks introduce instability as a side effect of their globalised updates. This instability does not vanish even in neural networks without hidden layers. In this paper, we show that simple modifications to the structure of the neural network can improve the stability of DQN learning when a multi-layer perceptron is used for function approximation.
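
    The locality argument can be seen directly in code: with Gaussian RBF features, a linear Q-learning update at one state barely moves the value of a distant state, whereas a global approximator would shift it. The sketch below is a minimal illustration (states, centres, and the learning constants are all assumed for the example), not the paper's experimental setup.

```python
# Locality of RBF features under a single Q-learning update (illustrative).
import numpy as np

centres = np.linspace(0.0, 1.0, 11)
width = 0.07

def phi(s):
    """Gaussian RBF feature vector for a scalar state s."""
    return np.exp(-(s - centres) ** 2 / (2.0 * width ** 2))

n_actions = 2
w = np.zeros((n_actions, centres.size))

def q(s):
    """Linear value function Q(s, a) = w[a] . phi(s), for all actions."""
    return w @ phi(s)

# One semi-gradient Q-learning step at s = 0.3 (alpha, gamma illustrative).
alpha, gamma = 0.5, 0.99
s, a, r, s_next = 0.3, 0, 1.0, 0.35
td_error = r + gamma * q(s_next).max() - q(s)[a]
w[a] += alpha * td_error * phi(s)     # gradient of Q(s, a) w.r.t. w[a]

print("Q near update (s=0.30):", q(0.30)[0])
print("Q far away    (s=0.90):", q(0.90)[0])   # essentially unchanged: locality
```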