114,926 research outputs found

    Representation and statistical properties of deep neural networks on structured data

    The significant success of deep learning has brought unprecedented challenges to conventional wisdom in statistics, optimization, and applied mathematics. In many high-dimensional applications, e.g., image data with hundreds of thousands of pixels, deep learning is remarkably scalable and mysteriously generalizes well. Although such appealing behavior stimulates wide applications, a fundamental theoretical challenge -- the curse of data dimensionality -- naturally arises: roughly put, the sample complexity observed in practice is significantly smaller than that predicted by theory. It is a common belief that deep neural networks are good at learning the various geometric structures hidden in data sets, yet little theory has been established to explain this power. This thesis aims to bridge the gap between theory and practice by studying function approximation and statistical theories of deep neural networks that exploit geometric structures in data.

    -- Function Approximation Theories on Low-dimensional Manifolds using Deep Neural Networks. We first develop an efficient universal approximation theory for functions on a low-dimensional Riemannian manifold. A feedforward network architecture is constructed for function approximation, where the size of the network grows with the manifold dimension. Furthermore, we prove an efficient approximation theory for convolutional residual networks approximating Besov functions. Lastly, we demonstrate the benefit of overparameterized neural networks in function approximation: large neural networks can accurately approximate a target function while the network itself enjoys Lipschitz continuity.

    -- Statistical Theories on Low-dimensional Data using Deep Neural Networks. Efficient approximation theories of neural networks provide valuable guidelines for choosing network architectures when data exhibit geometric structures. In combination with statistical tools, we prove that neural networks can circumvent the curse of data dimensionality and enjoy fast statistical convergence in various learning problems, including nonparametric regression/classification, generative distribution estimation, and doubly-robust policy learning. (Ph.D. thesis)
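    To make "circumventing the curse of data dimensionality" concrete, results of this flavor (stated here in an illustrative form, not as the thesis's exact theorem) replace the ambient dimension D by the intrinsic manifold dimension d in the statistical rate. For nonparametric regression of a β-Hölder target f* supported on a d-dimensional manifold embedded in R^D, a suitably sized deep network estimator typically attains

        \mathbb{E}\,\big\|\hat{f}_n - f^*\big\|_{L^2}^2 \;\lesssim\; n^{-\frac{2\beta}{2\beta + d}},
        \qquad \text{rather than the ambient-dimension rate } \; n^{-\frac{2\beta}{2\beta + D}},

    which is a dramatic improvement when d \ll D.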

    Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks

    Convolutional neural networks (CNNs) have been shown to achieve optimal approximation and estimation error rates (in the minimax sense) in several function classes. However, the previously analyzed optimal CNNs are unrealistically wide and, owing to sparsity constraints, difficult to obtain via optimization in important function classes, including the Hölder class. We show that a ResNet-type CNN can attain the minimax optimal error rates in these classes in more plausible situations -- it can be dense, and its width, channel size, and filter size are constant with respect to the sample size. The key idea is that tailored CNNs can replicate the learning ability of fully-connected neural networks (FNNs) as long as the FNNs have block-sparse structures. Our theory is general in the sense that any approximation rate achieved by block-sparse FNNs automatically translates into one achieved by CNNs. As an application, we derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes with the same strategy.
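    As a rough sketch of the architecture family described above (under assumed hyperparameters, not the paper's exact construction), the snippet below builds a ResNet-type CNN whose channel size, filter size, and per-block structure stay constant as depth grows; the module names and the choices channels=16, filter_size=3, depth=8 are illustrative.

        # Minimal sketch of a ResNet-type CNN with constant width, channel size,
        # and filter size (illustrative hyperparameters; not the paper's construction).
        import torch
        import torch.nn as nn

        class ResidualBlock(nn.Module):
            def __init__(self, channels: int, filter_size: int):
                super().__init__()
                # Same-padding convolutions keep the spatial length unchanged.
                pad = filter_size // 2
                self.conv1 = nn.Conv1d(channels, channels, filter_size, padding=pad)
                self.conv2 = nn.Conv1d(channels, channels, filter_size, padding=pad)
                self.act = nn.ReLU()

            def forward(self, x):
                # Identity skip connection: output = x + F(x).
                return x + self.conv2(self.act(self.conv1(x)))

        class ResNetCNN(nn.Module):
            def __init__(self, in_channels=1, channels=16, filter_size=3, depth=8):
                super().__init__()
                self.lift = nn.Conv1d(in_channels, channels, 1)   # lift to a fixed channel size
                self.blocks = nn.Sequential(
                    *[ResidualBlock(channels, filter_size) for _ in range(depth)]
                )
                self.head = nn.Linear(channels, 1)                # scalar regression output

            def forward(self, x):                                 # x: (batch, in_channels, length)
                h = self.blocks(self.lift(x))
                return self.head(h.mean(dim=-1))                  # global average pooling

    For example, ResNetCNN()(torch.randn(4, 1, 32)) returns a tensor of shape (4, 1); increasing the depth adds residual blocks without widening any layer.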

    Approximation of Continuous Functions by Artificial Neural Networks

    An artificial neural network is a biologically inspired system that can be trained to perform computations. Recently, techniques from machine learning have been used to train neural networks to perform a variety of tasks. It can be shown that any continuous function can be approximated by an artificial neural network to arbitrary precision; this is known as the universal approximation theorem. In this thesis, we introduce neural networks and one of the first versions of this theorem, due to Cybenko, who modeled artificial neural networks using sigmoidal activation functions and proved the result with tools from measure theory and functional analysis.
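    As a small numerical companion to the universal approximation theorem (an illustration, not Cybenko's proof), the sketch below fits the outer coefficients of a one-hidden-layer sigmoidal network to a continuous target on [0, 1] by least squares; the target function, the number of hidden units, and the random inner weights and biases are all illustrative choices.

        # Approximate a continuous function on [0, 1] with a one-hidden-layer
        # sigmoidal network (illustrative example; not Cybenko's proof).
        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: np.cos(2 * np.pi * x)            # continuous target on [0, 1]

        N = 50                                         # number of hidden units
        w = rng.normal(scale=10.0, size=N)             # random inner weights
        b = rng.uniform(-10.0, 10.0, size=N)           # random biases
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        x = np.linspace(0.0, 1.0, 200)
        H = sigmoid(np.outer(x, w) + b)                # hidden-layer features, shape (200, N)

        # Fit the outer coefficients c by least squares: minimize ||H c - f(x)||_2.
        c, *_ = np.linalg.lstsq(H, f(x), rcond=None)
        print("max |f - network| on the grid:", np.max(np.abs(H @ c - f(x))))

    Increasing N and re-fitting typically drives the grid error toward zero, informally mirroring the density of sigmoidal networks that the theorem guarantees.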