
    Importance sampling strategy for non-convex randomized block-coordinate descent

    As the number of samples and the dimensionality of optimization problems arising in statistics and machine learning explode, block coordinate descent algorithms have gained popularity because they reduce the original problem to several smaller ones. The coordinates to be optimized are usually selected at random according to a given probability distribution. We introduce an importance sampling strategy that helps randomized coordinate descent algorithms focus on blocks that are still far from convergence. The framework applies to problems composed of the sum of two possibly non-convex terms, one of which is separable and non-smooth. We compare our algorithm to a full-gradient proximal approach, to a randomized block coordinate algorithm with uniform sampling, and to cyclic block coordinate descent. Experimental evidence shows the clear benefit of using an importance sampling strategy.
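
    A minimal sketch of this kind of scheme, assuming a smooth loss given through its gradient grad_f, an l1-like separable non-smooth term handled by a soft-threshold prox, and block sampling probabilities driven by the size of each block's last proximal step; the function names, step size, and scoring rule below are illustrative assumptions rather than the paper's exact strategy:

        import numpy as np

        def soft_threshold(v, t):
            # Prox of t * ||.||_1, standing in for the separable non-smooth term
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def importance_sampled_bcd(grad_f, x0, blocks, step=0.1, lam=0.01,
                                   n_iter=1000, eps=1e-8, seed=None):
            rng = np.random.default_rng(seed)
            x = x0.copy()
            scores = np.ones(len(blocks))             # per-block "distance from convergence"
            for _ in range(n_iter):
                probs = scores / scores.sum()         # importance sampling distribution
                i = rng.choice(len(blocks), p=probs)  # favor blocks that are still moving
                idx = blocks[i]
                # Full gradient sliced to the block (a real implementation would
                # compute only the block-restricted part)
                g = grad_f(x)[idx]
                x_new = soft_threshold(x[idx] - step * g, step * lam)
                scores[i] = np.linalg.norm(x_new - x[idx]) + eps  # bigger move -> sampled more often
                x[idx] = x_new
            return x

    On a least-squares loss, for example, one would pass grad_f = lambda x: A.T @ (A @ x - b) and a list of index arrays as blocks; only the scoring rule distinguishes this loop from uniform-sampling block coordinate descent.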

    DC Proximal Newton for Non-Convex Optimization Problems

    We introduce a novel algorithm for solving learning problems in which both the loss function and the regularizer are non-convex but belong to the class of difference-of-convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm able to deal with such a situation. The algorithm consists of obtaining a descent direction from an approximation of the loss function and then performing a line search to ensure sufficient descent. A theoretical analysis shows that the limit points of the iterates of the proposed algorithm are stationary points of the DC objective function. Numerical experiments show that our approach is more efficient than the current state of the art on a problem with a convex loss function and a non-convex regularizer. We also illustrate the benefit of our algorithm on a high-dimensional transductive learning problem where both the loss function and the regularizer are non-convex.
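
    As a rough illustration of this descent-direction-plus-line-search structure, the sketch below treats the special case F(x) = f(x) + lam*||x||_1 - h(x) with f and h convex and smooth, uses a diagonal Hessian model of f so the proximal Newton subproblem has a closed-form solution, and backtracks until a sufficient-decrease condition holds. The DC split, the diagonal model, and all parameter names are simplifying assumptions for illustration, not the paper's general algorithm:

        import numpy as np

        def dc_prox_newton(F, grad_f, hess_diag_f, grad_h, x0, lam=0.1,
                           n_iter=50, beta=0.5, sigma=1e-4):
            # F: full non-convex objective (used only in the line search)
            # grad_f, hess_diag_f: gradient and diagonal Hessian of the convex smooth loss f
            # grad_h: gradient of the convex part h subtracted in the DC decomposition
            x = x0.copy()
            for _ in range(n_iter):
                g = grad_f(x) - grad_h(x)               # gradient of the convex surrogate at x
                d = np.maximum(hess_diag_f(x), 1e-8)    # diagonal second-order model of f
                u = x - g / d
                z = np.sign(u) * np.maximum(np.abs(u) - lam / d, 0.0)  # scaled soft-threshold
                direction = z - x                       # descent direction from the surrogate
                # Predicted decrease used in the sufficient-descent test
                delta = g @ direction + lam * (np.abs(z).sum() - np.abs(x).sum())
                t, Fx = 1.0, F(x)
                while F(x + t * direction) > Fx + sigma * t * delta and t > 1e-10:
                    t *= beta                           # backtracking line search
                x = x + t * direction
            return x

    A full proximal Newton method would replace the diagonal model with a richer Hessian approximation and an inner solver for the resulting subproblem; the outer structure of direction finding followed by a line search is the part this sketch is meant to convey.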

    Active set strategy for high-dimensional non-convex sparse optimization problems

    The use of non-convex sparse regularization has attracted much interest for estimating very sparse models on high-dimensional data. In this work we express the optimality conditions of the optimization problem for a large class of non-convex regularizers. From those conditions, we derive an efficient active set strategy that avoids computing unnecessary gradients. Numerical experiments on both synthetic and real-life datasets show a clear gain in computational cost over the state of the art when our method is used to obtain very sparse solutions.
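
    A minimal sketch of such an active set loop, assuming a least-squares loss, the l0 penalty (through its hard-threshold prox) as the non-convex regularizer, and a simple gradient-magnitude screening rule standing in for the paper's optimality conditions; none of these choices are claimed to match the authors' implementation:

        import numpy as np

        def hard_threshold(u, t):
            # Prox of t * ||.||_0, one example of a non-convex separable regularizer
            out = u.copy()
            out[np.abs(u) <= np.sqrt(2.0 * t)] = 0.0
            return out

        def active_set_solver(A, b, lam, prox_r=hard_threshold, n_outer=20, n_inner=100):
            n = A.shape[1]
            x = np.zeros(n)
            active = np.zeros(n, dtype=bool)
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the quadratic loss
            for _ in range(n_outer):
                grad = A.T @ (A @ x - b)             # full gradient, computed only once per outer pass
                # Stand-in screening test at zero; the real rule would come from the
                # optimality conditions derived for the chosen non-convex regularizer
                violators = (~active) & (np.abs(grad) > lam)
                if not violators.any():
                    break
                active |= violators
                idx = np.flatnonzero(active)
                A_a, xa = A[:, idx], x[idx]
                for _ in range(n_inner):             # proximal gradient restricted to the active set
                    ga = A_a.T @ (A_a @ xa - b)
                    xa = prox_r(xa - step * ga, step * lam)
                x[idx] = xa
                active = x != 0                      # coordinates pushed back to zero leave the set
            return x

    The gain comes from the structure: gradients for inactive coordinates are evaluated only in the occasional outer screening step, while the inner iterations touch only the columns of A indexed by the current active set.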