    Learning a Sparse Representation of Barron Functions with the Inverse Scale Space Flow

    This paper presents a method for finding a sparse representation of Barron functions. Specifically, given an $L^2$ function $f$, the inverse scale space flow is used to find a sparse measure $\mu$ minimising the $L^2$ loss between the Barron function associated to the measure $\mu$ and the function $f$. The convergence properties of this method are analysed in an ideal setting and in the cases of measurement noise and sampling bias. In the ideal setting, the objective decreases strictly monotonically in time to a minimiser at rate $\mathcal{O}(1/t)$; in the case of measurement noise or sampling bias, the optimum is achieved up to a multiplicative or additive constant. This convergence is preserved under discretisation of the parameter space, and the minimisers on increasingly fine discretisations converge to the optimum on the full parameter space.
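    The setup described in the abstract can be sketched in standard inverse scale space notation. This is a sketch, not the paper's exact formulation: the activation $\sigma$, the parameter pairing $(w, b)$, and the total variation regulariser are assumptions based on the usual Barron-space setting.

```latex
% Barron function associated with a parameter measure \mu (assumed form;
% \sigma denotes the activation function):
f_\mu(x) = \int \sigma(\langle w, x \rangle + b) \, d\mu(w, b)

% L^2 data-fidelity objective for the target function f:
J(\mu) = \tfrac{1}{2} \, \| f_\mu - f \|_{L^2}^2

% Inverse scale space flow: evolve a dual variable p(t) along the negative
% gradient of J, constrained to be a subgradient of the total variation
% norm of the current measure \mu(t), which promotes sparsity of \mu:
\partial_t p(t) = -\nabla J(\mu(t)), \qquad p(t) \in \partial \|\mu(t)\|_{\mathrm{TV}}
```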

    Duality for Neural Networks through Reproducing Kernel Banach Spaces

    Reproducing Kernel Hilbert spaces (RKHS) have been a very successful tool in various areas of machine learning. Recently, Barron spaces have been used to prove bounds on the generalisation error for neural networks. Unfortunately, Barron spaces cannot be understood in terms of RKHS due to the strong nonlinear coupling of the weights. This can be solved by using the more general Reproducing Kernel Banach spaces (RKBS). We show that these Barron spaces belong to a class of integral RKBS, which can also be understood as an infinite union of RKHSs. Furthermore, since an RKBS is not a Hilbert space, it is not its own dual; we show that the dual space of such an RKBS is again an RKBS in which the roles of the data and the parameters are interchanged, forming an adjoint pair of RKBSs including a reproducing kernel. This allows us to construct the saddle point problem for neural networks, which can be used in the whole field of primal-dual optimisation.
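    For reference, the integral structure behind Barron spaces can be sketched as follows. This is the standard formulation, assumed rather than quoted from the paper: $\sigma$ denotes the activation function and $\Omega$ the parameter space.

```latex
% Functions in the Barron / integral-RKBS class are integral combinations
% of neurons over a parameter measure \mu:
f(x) = \int_{\Omega} \sigma(\langle w, x \rangle) \, d\mu(w)

% The norm takes the most economical representing measure; this infimum
% over measures is the nonlinear coupling of the weights that prevents
% an RKHS description and motivates the Banach-space (RKBS) framework:
\|f\|_{\mathcal{B}} = \inf \Big\{ \|\mu\|_{\mathrm{TV}} \, : \, f = \int_{\Omega} \sigma(\langle w, \cdot \rangle) \, d\mu(w) \Big\}
```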