157,565 research outputs found

    Research on an online self-organizing radial basis function neural network

    This paper proposes a new growing and pruning algorithm, named self-organizing RBF (SORBF), for designing the structure of radial basis function (RBF) neural networks. The structure of the RBF neural network is introduced first, and the growing and pruning algorithm is then used to design that structure automatically. The growing and pruning approach is based on the radius of the receptive field of the RBF nodes, and parameter-adjustment algorithms are proposed for the whole network. The performance of the proposed method is evaluated on function approximation and dynamic system identification, and the method is then used to estimate the biochemical oxygen demand (BOD) concentration in a wastewater treatment system. Experimental results show that the proposed method is efficient for network structure optimization and achieves better performance than several existing algorithms.
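
    The abstract does not give implementation details, so the following is only a minimal sketch of a radius-based growing-and-pruning RBF network in the spirit of SORBF, not the paper's algorithm: the Gaussian nodes, the shared width, the growth threshold, and the weight-magnitude pruning rule are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a radius-based growing rule for a Gaussian RBF network.
# This is NOT the SORBF algorithm from the paper; the node width, the growth
# threshold `grow_radius`, and the least-squares weight fit are assumptions.

class GrowingRBF:
    def __init__(self, grow_radius=0.5, width=0.5):
        self.grow_radius = grow_radius   # add a node if no centre is this close
        self.width = width               # shared Gaussian width (assumption)
        self.centres = np.empty((0, 0))
        self.weights = np.empty(0)

    def _phi(self, X):
        # Gaussian activations of all nodes for inputs X of shape (n_samples, n_dims)
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        # grow: add a centre whenever a sample falls outside every receptive field
        for x in X:
            if self.centres.size == 0:
                self.centres = x[None, :]
            else:
                dists = np.linalg.norm(self.centres - x, axis=1)
                if dists.min() > self.grow_radius:
                    self.centres = np.vstack([self.centres, x])
        # fit output weights by least squares on all data
        Phi = self._phi(X)
        self.weights = np.linalg.lstsq(Phi, y, rcond=None)[0]
        return self

    def prune(self, X, y, tol=1e-3):
        # drop nodes whose output weight magnitude is negligible (assumed
        # criterion, not the paper's receptive-field-based rule), then refit
        keep = np.abs(self.weights) > tol
        self.centres = self.centres[keep]
        Phi = self._phi(X)
        self.weights = np.linalg.lstsq(Phi, y, rcond=None)[0]
        return self

    def predict(self, X):
        return self._phi(X) @ self.weights
```

    A full SORBF implementation would additionally adjust the centres, widths, and weights during training, which this sketch omits.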

    Function Approximation with Randomly Initialized Neural Networks for Approximate Model Reference Adaptive Control

    Classical results in neural network approximation theory show how arbitrary continuous functions can be approximated by networks with a single hidden layer, under mild assumptions on the activation function. However, the classical theory does not give a constructive means of generating the network parameters that achieve a desired accuracy. Recent results have demonstrated that for specialized activation functions, such as ReLUs and some classes of analytic functions, high accuracy can be achieved via linear combinations of randomly initialized activations. These recent works rely on specialized integral representations of target functions that depend on the specific activation functions used. This paper defines mollified integral representations, which provide a means of forming integral representations of target functions using activations for which no direct integral representation is currently known. The new construction yields approximation guarantees for randomly initialized networks with a variety of widely used activation functions.
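
    As a rough illustration of the setting (not the paper's mollified-integral construction), the sketch below approximates a 1D target by a linear combination of randomly initialized tanh units, fitting only the outer weights by ridge regression; the target function, sampling ranges, and regularization are assumptions.

```python
import numpy as np

# Sketch of the general setting: approximate a scalar target by a linear
# combination of randomly initialised hidden units, training only the outer
# weights. The tanh activation, sampling ranges, and ridge penalty are
# illustrative assumptions, not the paper's construction.

rng = np.random.default_rng(0)
target = lambda x: np.sin(3 * x) + 0.5 * x      # hypothetical target function

n_hidden = 200
W = rng.normal(scale=2.0, size=n_hidden)        # random inner weights (frozen)
b = rng.uniform(-3.0, 3.0, size=n_hidden)       # random biases (frozen)

x_train = np.linspace(-2, 2, 400)
H = np.tanh(np.outer(x_train, W) + b)           # hidden features, shape (400, 200)

# ridge regression for the outer weights only
lam = 1e-6
c = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ target(x_train))

x_test = np.linspace(-2, 2, 101)
approx = np.tanh(np.outer(x_test, W) + b) @ c
print("max abs error:", np.abs(approx - target(x_test)).max())
```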

    Multi-level Neural Networks for Accurate Solutions of Boundary-Value Problems

    The solution of partial differential equations using deep learning approaches has shown promising results for several classes of initial and boundary-value problems. However, the ability of these approaches to surpass classical discretization methods such as the finite element method, particularly in terms of accuracy, remains a significant challenge, and deep learning methods usually struggle to reliably decrease the error in their approximate solutions. A new methodology to better control the error of deep learning methods is presented here. The main idea is to compute an initial approximation to the problem with a simple neural network and then to estimate, iteratively, a correction by solving the problem for the residual error with a new network of increasing complexity. This sequential reduction of the residual of the partial differential equation allows one to decrease the solution error, which in some cases can be reduced to machine precision. The underlying explanation is that, at each level, the method captures smaller scales of the solution with a new network. Numerical examples in 1D and 2D demonstrate the effectiveness of the proposed approach, which applies not only to physics-informed neural networks but also to other neural network solvers based on weak or strong formulations of the residual. Comment: 34 pages, 20 figures
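
    The sketch below illustrates the sequential residual-correction idea on a plain 1D regression problem rather than a PDE residual: each level fits the normalized residual left by the previous levels with a larger network and adds the correction back. The network sizes, the use of scikit-learn's MLPRegressor, and the target function are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Multilevel residual correction on a toy 1D regression task (assumed setup):
# each level fits the residual of the previous levels with a wider network.

target = lambda x: np.sin(2 * np.pi * x) + 0.1 * np.sin(40 * np.pi * x)
X = np.linspace(0.0, 1.0, 512).reshape(-1, 1)
residual = target(X).ravel()

prediction = np.zeros_like(residual)
for level, width in enumerate([8, 32, 128]):           # increasing complexity
    scale = np.abs(residual).max()                      # normalise the residual
    net = MLPRegressor(hidden_layer_sizes=(width,), activation='tanh',
                       max_iter=5000, tol=1e-7, random_state=level)
    net.fit(X, residual / scale)
    correction = scale * net.predict(X)
    prediction += correction                            # accumulated approximation
    residual -= correction
    print(f"level {level}: max residual = {np.abs(residual).max():.3e}")
```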

    Spiking Neural Network Data Reduction via Interval Arithmetic

    Approximate Computing (AxC) allows the accuracy required by the user and the precision provided by the computing system to be reduced in order to optimize the whole system in terms of performance, energy, and area. Spiking Neural Networks (SNNs) are the new frontier for artificial intelligence because they better represent the influence of timing on decision making and allow for more reliable hardware designs. Unfortunately, such designs require area-minimization strategies when the target hardware is deployed at the computing edge. This seminal work introduces a model of the approximation applied to the data storage supporting an SNN via Interval Arithmetic (IA): the computation graph of the SNN is extracted, and IA is then used to quickly evaluate the impact of the approximation in terms of accuracy loss without executing the network each time. Experimental results comparing our model to the real network confirm the quality of the approach.
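
    The following toy sketch shows the interval-arithmetic idea only, not the paper's SNN tool: each reduced-precision stored weight is replaced by an interval covering its possible values, and the intervals are propagated through a toy weighted spike accumulation to bound the output without re-executing the network for every approximation setting. The Interval class, the quantization step, and the neuron model are assumptions.

```python
from dataclasses import dataclass

# Toy interval arithmetic: bound the effect of storing weights at reduced
# precision without re-running the computation per setting (assumed model,
# not the paper's tool).

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def quantised(w, step=1 / 16):
    """Interval covering all values w could take after rounding to `step`."""
    return Interval(w - step / 2, w + step / 2)

# Toy integrate-and-fire style accumulation: membrane potential as a weighted
# sum of binary input spikes, with weights stored at reduced precision.
weights = [0.8, -0.3, 0.55]
spikes = [1, 0, 1]

potential = Interval(0.0, 0.0)
for w, s in zip(weights, spikes):
    potential = potential + quantised(w) * Interval(float(s), float(s))

print(f"membrane potential in [{potential.lo:.4f}, {potential.hi:.4f}]")
# If the whole interval lies on one side of the firing threshold, this level
# of approximation provably does not change whether the neuron spikes.
```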

    Optimal approximation of piecewise smooth functions using deep ReLU neural networks

    We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, required for approximating classifier functions in $L^2$. As a model class, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve the optimal approximation rate, one needs ReLU networks of a certain depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given, up to a multiplicative constant, by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension-reducing feature map $\tau$ and a classifier function $g$, defined on a low-dimensional feature space, as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension. Comment: Generalized some estimates to $L^p$ norms for $0 < p < \infty$.
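
    As a worked instance of the stated rate (illustrative values, not taken from the paper), fixing $d = 2$ and $\beta = 1$ gives:

```latex
% Illustrative instance, not from the paper: d = 2, \beta = 1.
\[
  W(\varepsilon) \;=\; O\!\bigl(\varepsilon^{-2(d-1)/\beta}\bigr)
  \;=\; O\!\bigl(\varepsilon^{-2}\bigr),
  \qquad
  \text{minimal depth} \;\sim\; \beta/d \;=\; \tfrac{1}{2},
\]
% so halving the target L^2 error roughly quadruples the number of nonzero
% weights, while the required depth stays bounded by a constant (up to the
% log factor mentioned in the abstract).
```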