
    Spectral approach for kernel-based interpolation

    http://afst.cedram.org/afst-bin/fitem?id=AFST_2012_6_21_3_439_0
    We describe how the resolution of a kernel-based interpolation problem can be associated with a spectral problem. An integral operator is defined from the embedding of the considered Hilbert subspace into an auxiliary Hilbert space of square-integrable functions. We finally obtain a spectral representation of the interpolating elements, which allows their approximation by spectral truncation. As an illustration, we show how this approach can be used to enforce boundary conditions in kernel-based interpolation models, and why it offers an interesting alternative for dimension reduction.
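
    A minimal NumPy sketch of the spectral-truncation idea, under assumptions not taken from the paper (a Gaussian kernel, synthetic one-dimensional data, illustrative bandwidth and rank): the interpolant's coefficients are rebuilt from the leading eigenpairs of the kernel matrix, and truncating at rank m yields a low-dimensional approximation of the interpolating element.

        import numpy as np

        # illustrative data: interpolate f(x) = sin(2*pi*x) at n scattered points
        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(0, 1, 30))
        y = np.sin(2 * np.pi * x)

        # Gaussian kernel matrix (an assumed kernel; the framework is kernel-generic)
        K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))

        # full interpolant: coefficients solve K alpha = y (may be ill-conditioned)
        alpha_full = np.linalg.solve(K, y)

        # spectral representation: K = V diag(lam) V^T, alpha = V diag(1/lam) V^T y
        lam, V = np.linalg.eigh(K)
        lam, V = lam[::-1], V[:, ::-1]      # sort eigenpairs by decreasing eigenvalue

        m = 10                              # truncation rank (illustrative)
        alpha_m = V[:, :m] @ ((V[:, :m].T @ y) / lam[:m])

        print(np.linalg.norm(K @ alpha_full - y))  # ~ 0: exact interpolation
        print(np.linalg.norm(K @ alpha_m - y))     # shrinks as m grows toward n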

    Family and substance abuse treatment among adolescents: a case study

    There is a general consensus on the importance of involving the family in the rehabilitation treatment of adolescent substance abuse, but the nature of the family's influence remains poorly documented. Objective. The goal of this case study is to better understand the contribution of family involvement to this rehabilitation process. Method. Semi-structured individual interviews were conducted with two youths undergoing substance abuse treatment, with their parents, and with the clinicians who followed them. Complementary pre- and post-treatment quantitative data on the severity of the family and substance use problems of the two youths were also collected. Results. The accounts of these various stakeholders underline the benefits of constant parental involvement throughout the stages of treatment.

    Kernel embedding of measures and low-rank approximation of integral operators

    We describe a natural coisometry from the Hilbert space of all Hilbert-Schmidt operators on a separable reproducing kernel Hilbert space H onto the RKHS G associated with the squared modulus of the reproducing kernel of H. Through this coisometry, trace-class integral operators defined by general measures and the reproducing kernel of H are isometrically represented as potentials in G, and the quadrature approximation of these operators is equivalent to the approximation of integral functionals on G. We then discuss the extent to which the approximation of potentials in RKHSs with squared-modulus kernels can be regarded as a differentiable surrogate for the characterisation of low-rank approximations of integral operators.
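
    A small NumPy check of the identity underlying this squared-kernel representation, under the simplifying assumption of a finite-dimensional feature map and synthetic data (none of it from the paper): for operators A_mu = sum_i mu_i phi_i phi_i^T built from features with kernel K_ij = phi_i . phi_j, the Hilbert-Schmidt inner product reduces to a double sum against the squared kernel, <A_mu, A_nu>_HS = sum_ij mu_i nu_j K_ij^2.

        import numpy as np

        rng = np.random.default_rng(1)
        n, d = 6, 4
        Phi = rng.normal(size=(n, d))        # synthetic feature vectors phi_i
        K = Phi @ Phi.T                      # kernel matrix K_ij = <phi_i, phi_j>

        mu = rng.uniform(size=n)             # weights of two discrete measures
        nu = rng.uniform(size=n)

        # integral operators as weighted sums of rank-one terms phi_i phi_i^T
        A_mu = sum(m * np.outer(p, p) for m, p in zip(mu, Phi))
        A_nu = sum(m * np.outer(p, p) for m, p in zip(nu, Phi))

        lhs = np.sum(A_mu * A_nu)            # Hilbert-Schmidt (Frobenius) inner product
        rhs = mu @ (K ** 2) @ nu             # same quantity through the squared kernel
        print(np.isclose(lhs, rhs))          # True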

    The Curse of Unrolling: Rate of Differentiating Through Optimization

    Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few. Unrolled differentiation is a popular heuristic that approximates the solution using an iterative solver and differentiates through the solver's computational path. This work provides a non-asymptotic convergence-rate analysis of this approach on quadratic objectives for gradient descent and the Chebyshev method. We show that to ensure convergence of the Jacobian, we can either 1) choose a large learning rate, leading to fast asymptotic convergence but accepting that the algorithm may have an arbitrarily long burn-in phase, or 2) choose a smaller learning rate, leading to immediate but slower convergence. We refer to this phenomenon as the curse of unrolling. Finally, we discuss open problems related to this approach, such as deriving a practical update rule for the optimal unrolling strategy and making novel connections with the field of Sobolev orthogonal polynomials.
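
    A minimal NumPy sketch of unrolled differentiation on a quadratic objective (synthetic data; it illustrates the setting, not the paper's rate analysis): for gradient descent on f(x) = x'Ax/2 - b'x, one step is x_{t+1} = x_t - eta*(A x_t - b), so the Jacobian dx_t/db can be unrolled alongside the iterates via the chain rule and compared with the exact Jacobian A^{-1}.

        import numpy as np

        rng = np.random.default_rng(2)
        d = 5
        M = rng.normal(size=(d, d))
        A = M @ M.T + np.eye(d)              # positive-definite quadratic
        b = rng.normal(size=d)

        eta = 1.0 / np.linalg.eigvalsh(A).max()  # illustrative learning rate
        x = np.zeros(d)
        J = np.zeros((d, d))                 # Jacobian dx_t/db, unrolled with the iterates
        for t in range(500):
            J = (np.eye(d) - eta * A) @ J + eta * np.eye(d)  # chain rule through one step
            x = x - eta * (A @ x - b)

        # the exact solution is x* = A^{-1} b, with Jacobian dx*/db = A^{-1}
        print(np.linalg.norm(x - np.linalg.solve(A, b)))
        print(np.linalg.norm(J - np.linalg.inv(A)))          # both errors shrink with t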

    Beyond L1: Faster and Better Sparse Models with skglm

    We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties. Our algorithm is able to solve problems with millions of samples and features in seconds, by relying on coordinate descent, working sets, and Anderson acceleration. It handles previously unaddressed models and is shown in extensive experiments to improve on state-of-the-art algorithms. We provide a flexible, scikit-learn-compatible package that easily handles customized datafits and penalties.
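
    A usage sketch based on the package's documented public API, to the best of our knowledge (the estimator, datafit, and penalty names below are how skglm exposes them; the data and hyperparameters are illustrative): a non-convex MCP-penalized least-squares model fit through the generic estimator.

        import numpy as np
        from skglm import GeneralizedLinearEstimator
        from skglm.datafits import Quadratic
        from skglm.penalties import MCPenalty

        # synthetic sparse regression problem
        rng = np.random.default_rng(3)
        X = rng.normal(size=(100, 300))
        w_true = np.zeros(300)
        w_true[:10] = 1.0
        y = X @ w_true + 0.01 * rng.normal(size=100)

        # least-squares datafit + non-convex MCP penalty, solved by the package's
        # coordinate-descent / working-set machinery
        model = GeneralizedLinearEstimator(
            datafit=Quadratic(),
            penalty=MCPenalty(alpha=0.1, gamma=3.0),
        )
        model.fit(X, y)
        print(np.sum(model.coef_ != 0))      # number of selected features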

    Energy-based sequential sampling for low-rank PSD-matrix approximation

    We introduce a pseudoconvex differentiable relaxation of the column-sampling problem for the Nyström approximation of positive-semidefinite (PSD) matrices. The relaxation is based on the interpretation of PSD matrices as integral operators and relies on the supports of measures to characterise samples of columns. We describe a class of gradient-based sequential sampling strategies that leverage the properties of the considered framework, and we demonstrate their ability to produce accurate Nyström approximations. The time complexity of the stochastic variants of the discussed strategies is linear in the order of the considered PSD matrices, and the underlying computations can be easily parallelised.
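
    For context, a minimal NumPy sketch of the Nyström column-sampling approximation the paper targets, with uniform random sampling as a stand-in (the paper's energy-based sequential strategies are not reproduced here; kernel and sizes are illustrative): sampling m columns C and the corresponding principal submatrix W gives K ~ C W^+ C'.

        import numpy as np

        rng = np.random.default_rng(4)
        n, m = 500, 40
        x = rng.uniform(0, 1, n)
        K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.05)   # synthetic PSD kernel matrix

        idx = rng.choice(n, size=m, replace=False)           # stand-in: uniform column sampling
        C = K[:, idx]                                        # sampled columns
        W = K[np.ix_(idx, idx)]                              # principal submatrix

        K_nys = C @ np.linalg.pinv(W) @ C.T                  # Nystrom approximation
        print(np.linalg.norm(K - K_nys) / np.linalg.norm(K)) # relative Frobenius error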

    Tools for primal degenerate linear programs: IPS, DCA, and PE

    This paper describes three recent tools for dealing with primal degeneracy in linear programming. The first is the improved primal simplex (IPS) algorithm, which turns degeneracy into a possible advantage. The constraints of the original problem are dynamically partitioned based on the numerical values of the current basic variables. The idea is to work only with those constraints that correspond to nondegenerate basic variables. This leads to a row-reduced problem, which decreases the size of the current working basis. The main feature of IPS is that it provides a nondegenerate pivot at every iteration of the solution process until optimality is reached. To achieve such a result, a convex combination of the variables at their bounds with negative reduced cost is selected, if one exists. This pricing step provides a necessary and sufficient optimality condition for linear programming. The second tool is dynamic constraint aggregation (DCA), a constructive strategy specifically designed for set partitioning constraints. It heuristically aims to achieve the properties provided by the IPS methodology. We examine the similarities and differences of IPS and DCA on set partitioning models. The final tool is the positive edge (PE) rule. It capitalizes on the compatibility definition to determine the status of a column vector and the associated variable during the reduced-cost computation. Within IPS, the selection of a compatible variable to enter the basis ensures a nondegenerate pivot, so PE permits a trade-off between strictly improving pivots and degenerate pivots with high reduced costs. This added value is obtained without explicitly computing the updated column components in the simplex tableau. Ultimately, we establish tight links between these three tools by going back to the linear-algebra framework from which the concept of a subspace basis emanates.
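
    A toy NumPy sketch of the row-reduction idea behind IPS, under strong simplifications not drawn from the paper (a known basis of a tiny standard-form LP, an illustrative tolerance, and no pricing step): degenerate basic variables are detected from the basic solution, and the working problem keeps only the rows paired with nondegenerate ones.

        import numpy as np

        # toy standard-form LP data: A x = b, x >= 0, with a given basis (illustrative)
        A = np.array([[1.0, 1.0, 1.0, 0.0],
                      [1.0, 2.0, 0.0, 1.0]])
        b = np.array([4.0, 4.0])
        basis = [0, 1]                        # indices of the current basic variables

        B = A[:, basis]
        x_B = np.linalg.solve(B, b)           # basic solution: here x_B = [4, 0]

        tol = 1e-9
        nondeg = np.abs(x_B) > tol            # nondegenerate basic variables
        print("degenerate positions:", np.where(~nondeg)[0])

        # row-reduced working problem: keep only the rows associated with
        # nondegenerate basic variables in the transformed system B^{-1} A x = B^{-1} b
        T = np.linalg.solve(B, A)
        A_reduced, b_reduced = T[nondeg], x_B[nondeg]
        print(A_reduced.shape)                # fewer rows than the original system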

    Optimal quadrature-sparsification for integral operator approximation

    The design of sparse quadratures for the approximation of integral operators related to symmetric positive-semidefinite kernels is addressed. Particular emphasis is placed on the approximation of the main eigenpairs of an initial operator and on the assessment of the approximation accuracy. Special attention is drawn to the design of sparse quadratures with support included in fixed finite sets of points (that is, quadrature-sparsification), this framework encompassing the approximation of kernel matrices. For a given kernel, the accuracy of a quadrature approximation is assessed through the squared Hilbert-Schmidt norm (for operators acting on the underlying reproducing kernel Hilbert space) of the difference between the integral operators related to the initial and approximate measures; by analogy with the notion of kernel discrepancy, the underlying criterion is referred to as the squared-kernel discrepancy between the two measures. In the quadrature-sparsification framework, sparsity of the approximate quadrature is promoted through the introduction of an ℓ1-type penalization, and the computation of a penalized squared-kernel-discrepancy-optimal approximation then consists in a convex quadratic minimization problem; such quadratic programs can in particular be interpreted as the Lagrange dual formulations of distorted one-class support-vector machines related to the squared kernel. Error bounds on the induced spectral approximations are derived, and the connection between penalization, sparsity, and accuracy of the spectral approximation is investigated. Numerical strategies for solving large-scale penalized squared-kernel-discrepancy minimization problems are discussed, and the efficiency of the approach is illustrated by a series of examples. In particular, the ability of the proposed methodology to lead to accurate approximations of the main eigenpairs of kernel matrices related to large-scale datasets is demonstrated.
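
    A schematic NumPy sketch of penalized squared-kernel-discrepancy minimization under strong simplifications (a fixed candidate support, an initial uniform measure, and a plain projected-gradient loop in place of the paper's large-scale numerical strategies; kernel and parameters are illustrative): with S the entrywise-squared kernel matrix, the criterion for weights w against initial weights v is the quadratic (w - v)' S (w - v), and the ℓ1-type penalty on nonnegative weights is lam * sum(w).

        import numpy as np

        rng = np.random.default_rng(5)
        n = 200
        x = rng.uniform(0, 1, n)                       # fixed candidate support points
        K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.05)
        S = K ** 2                                     # squared-kernel matrix

        v = np.full(n, 1.0 / n)                        # initial (uniform) quadrature weights
        lam = 1e-3                                     # penalty level (illustrative)

        # projected gradient on D(w) = (w - v)' S (w - v) + lam * sum(w), w >= 0
        w = v.copy()
        step = 1.0 / (2 * np.linalg.eigvalsh(S).max()) # 1/L for the quadratic part
        for _ in range(2000):
            grad = 2 * S @ (w - v) + lam
            w = np.maximum(0.0, w - step * grad)       # projection onto w >= 0

        print("support size:", np.sum(w > 0), "of", n) # penalization promotes sparsity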