
    Representation of Functional Data in Neural Networks

    Functional Data Analysis (FDA) is an extension of traditional data analysis to functional data, for example spectra, temporal series, spatio-temporal images, or gesture recognition data. Functional data are rarely known exactly in practice; usually only a regular or irregular sampling is known. For this reason, some processing is needed in order to benefit from the smooth character of functional data in the analysis methods. This paper shows how to extend the Radial-Basis Function Network (RBFN) and Multi-Layer Perceptron (MLP) models to functional data inputs, in particular when the latter are known through lists of input-output pairs. Various possibilities for functional processing are discussed, including projection on smooth bases, Functional Principal Component Analysis, functional centering and reduction, and the use of differential operators. It is shown how to incorporate this functional processing into the RBFN and MLP models. The functional approach is illustrated on a benchmark of spectrometric data analysis.
    Comment: Also available online from: http://www.sciencedirect.com/science/journal/0925231
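    As a rough sketch of the projection idea described in the abstract (not the paper's implementation; the truncated Fourier basis, the sizes, and the randomly weighted one-hidden-layer MLP are illustrative assumptions), an irregularly sampled curve can be projected on a smooth basis by least squares and the resulting coefficient vector fed to an ordinary MLP:

        import numpy as np

        rng = np.random.default_rng(0)

        # One functional observation x(t), irregularly sampled on [0, 1]
        t = np.sort(rng.uniform(0.0, 1.0, size=50))
        x = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)

        # Smooth basis evaluated at the sampling points (constant + sines + cosines)
        K = 5
        B = np.column_stack(
            [np.ones_like(t)]
            + [np.sin(2 * np.pi * k * t) for k in range(1, K + 1)]
            + [np.cos(2 * np.pi * k * t) for k in range(1, K + 1)]
        )

        # Projection on the smooth basis: least-squares coefficients represent the curve
        coef, *_ = np.linalg.lstsq(B, x, rcond=None)

        # The coefficient vector becomes the input of a standard one-hidden-layer MLP
        # (random, untrained weights here, just to show the reduction to a plain vector)
        W1, b1 = rng.normal(size=(coef.size, 10)), np.zeros(10)
        W2 = rng.normal(size=(10, 1))
        y = np.tanh(coef @ W1 + b1) @ W2
        print("basis coefficients:", np.round(coef, 3))
        print("MLP output:", y.item())

    In the same spirit, functional centering and reduction or a differential operator would act on the basis representation before the coefficients reach the network.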

    Differentiable approximation by means of the Radon transformation and its applications to neural networks

    We treat the problem of simultaneously approximating a several-times differentiable function of several variables and its derivatives by a superposition of a function, say g, of one variable. In our theory, the domain of approximation can be either a compact subset or the whole Euclidean space R^d. We prove that if the domain is compact, the function g can be used without scaling, and that even when the domain of approximation is the whole space R^d, g can be used without scaling if it satisfies a certain condition. Moreover, g can be chosen from a wide class of functions. The basic tool is the inverse Radon transform. As a neural network can output a superposition of g, our results extend well-known neural approximation theorems that are useful in neural computation theory.
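    To make the statement concrete, the superposition in question is the kind of ridge expansion a one-hidden-layer network with activation g can output,

        f(x) \approx \sum_{i=1}^{n} c_i \, g(a_i \cdot x - b_i), \qquad x \in \mathbb{R}^d,

    and the claim is that such sums can approximate f together with its derivatives; the precise meaning of "without scaling" and the exact conditions on g are as stated in the paper.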

    Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

    The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks satisfy these conditions as a special case, though weight sharing is not the main reason for their exponential advantage.
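    For orientation, the kind of compositional structure such reviews consider can be illustrated (this specific binary-tree example is only indicative) by a function of eight variables built from functions of two variables,

        f(x_1, \dots, x_8) = h_3\big(h_{21}(h_{11}(x_1,x_2),\, h_{12}(x_3,x_4)),\; h_{22}(h_{13}(x_5,x_6),\, h_{14}(x_7,x_8))\big),

    where a deep network mirroring the graph only needs to approximate low-dimensional constituent functions, while a shallow network must approximate f directly in all eight variables.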