2 research outputs found

    Optimal quadrature-sparsification for integral operator approximation

    The design of sparse quadratures for the approximation of integral operators related to symmetric positive-semidefinite kernels is addressed. Particular emphasis is placed on the approximation of the main eigenpairs of an initial operator and on the assessment of the approximation accuracy. Special attention is drawn to the design of sparse quadratures with support included in fixed finite sets of points (that is, quadrature-sparsification), this framework encompassing the approximation of kernel matrices. For a given kernel, the accuracy of a quadrature approximation is assessed through the squared Hilbert-Schmidt norm (for operators acting on the underlying reproducing kernel Hilbert space) of the difference between the integral operators related to the initial and approximate measures; by analogy with the notion of kernel discrepancy, the underlying criterion is referred to as the squared-kernel discrepancy between the two measures. In the quadrature-sparsification framework, sparsity of the approximate quadrature is promoted through the introduction of an ℓ1-type penalization, and the computation of a penalized squared-kernel-discrepancy-optimal approximation then consists in a convex quadratic minimization problem; such quadratic programs can in particular be interpreted as the Lagrange dual formulations of distorted one-class support-vector machines related to the squared kernel. Error bounds on the induced spectral approximations are derived, and the connection between penalization, sparsity, and accuracy of the spectral approximation is investigated. Numerical strategies for solving large-scale penalized squared-kernel-discrepancy minimization problems are discussed, and the efficiency of the approach is illustrated by a series of examples. In particular, the ability of the proposed methodology to lead to accurate approximations of the main eigenpairs of kernel matrices related to large-scale datasets is demonstrated.
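
    As a concrete illustration of the discrete (kernel-matrix) setting, the sketch below sets up the penalized problem in a minimal form: with S = K * K the entrywise-squared kernel matrix and p the weights of the initial quadrature, the penalized squared-kernel discrepancy is assumed here to reduce to the quadratic program min over w >= 0 of w'Sw - 2p'Sw + lam*sum(w) (for nonnegative weights the ℓ1 penalty is just the linear term lam*sum(w)). The function name and the plain projected-gradient solver are illustrative stand-ins, not the large-scale numerical strategies the paper actually develops.

        import numpy as np

        def sparsify_quadrature(K, p, lam=1e-3, n_iter=5000):
            """Hypothetical sketch: penalized squared-kernel-discrepancy QP.

            Minimizes over w >= 0 the objective
                w @ S @ w - 2 * (S @ p) @ w + lam * w.sum(),
            where S = K * K is the entrywise-squared kernel matrix and p
            holds the weights of the initial quadrature. Solved by plain
            projected gradient descent (not the paper's solvers).
            """
            S = K * K                      # squared-kernel matrix
            b = S @ p                      # linear term from the initial measure
            w = p.copy()                   # feasible starting point
            # 1/L step size, L = 2*lambda_max(S) = Lipschitz constant of the gradient
            step = 1.0 / (2.0 * np.linalg.eigvalsh(S)[-1])
            for _ in range(n_iter):
                grad = 2.0 * (S @ w - b) + lam        # gradient of the penalized objective
                w = np.maximum(w - step * grad, 0.0)  # project onto the nonnegative orthant
            return w

        # Toy usage: Gaussian kernel matrix on random points, uniform initial weights.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 2))
        K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        p = np.full(200, 1.0 / 200)
        w = sparsify_quadrature(K, p, lam=1e-4)
        print("support size:", int((w > 1e-8).sum()))

    Increasing lam drives more weights to exactly zero on the boundary of the nonnegative orthant, trading sparsity of the quadrature against discrepancy, which mirrors the penalization-sparsity-accuracy trade-off the abstract describes.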

    Kernel embedding of measures and low-rank approximation of integral operators

    We describe a natural coisometry from the Hilbert space of all Hilbert-Schmidt operators on a separable reproducing kernel Hilbert space H onto the RKHS G associated with the squared-modulus of the reproducing kernel of H. Through this coisometry, trace-class integral operators defined by general measures and the reproducing kernel of H are isometrically represented as potentials in G, and the quadrature approximation of these operators is equivalent to the approximation of integral functionals on G. We then discuss the extent to which the approximation of potentials in RKHSs with squared-modulus kernels can be regarded as a differentiable surrogate for the characterisation of low-rank approximation of integral operators.
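
    As a minimal worked identity (the notation H, G and the potential g_\mu below are reconstructions for illustration, not verbatim from the paper): writing the squared-modulus kernel as G(x,t) = |K(x,t)|^2, in LaTeX,

        g_\mu = \int |K(\cdot,t)|^2 \, d\mu(t) \in \mathcal{G},
        \qquad
        \|L_\mu - L_\nu\|_{\mathrm{HS}}^2
          = \|g_\mu - g_\nu\|_{\mathcal{G}}^2
          = \iint |K(x,t)|^2 \, d(\mu-\nu)(x) \, d(\mu-\nu)(t),

    so choosing a sparse discrete measure \nu to approximate the operator L_\mu amounts to approximating the potential g_\mu in G; the right-hand side is precisely the squared-kernel discrepancy appearing in the first output above.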
