
    On the Triality Theory for a Quartic Polynomial Optimization Problem

    This paper presents a detailed proof of the triality theorem for a class of fourth-order polynomial optimization problems. The method is based on linear algebra, but it solves an open problem on the double-min duality that had been left open since 2003. Results show that the triality theory holds strongly in a tri-duality form if the primal problem and its canonical dual have the same dimension; otherwise, both the canonical min-max duality and the double-max duality still hold strongly, but the double-min duality holds weakly in a symmetrical form. Four numerical examples are presented to illustrate that this theory can be used to identify not only the global minimum, but also the largest local minimum and local maximum.
    Comment: 16 pages, 1 figure; J. Industrial and Management Optimization, 2011. arXiv admin note: substantial text overlap with arXiv:1104.297
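
    The abstract gives no formulas, so here is a one-dimensional toy instance to make the landscape concrete. The sketch below is not taken from the paper: it assumes the standard canonical-duality setup for a double-well quartic, P(x) = (α/2)(x²/2 − λ)² − f·x with canonical dual P^d(ς) = −f²/(2ς) − λς − ς²/(2α), and the parameter values α, λ, f are hypothetical. It computes the critical points of the dual, maps each back to a primal point via x = f/ς, and classifies it by the sign of P''(x).

    import numpy as np

    alpha, lam, f = 1.0, 2.0, 0.3            # hypothetical double-well parameters

    def P(x):                                # primal quartic objective
        return 0.5 * alpha * (0.5 * x**2 - lam)**2 - f * x

    def dP(x):                               # first derivative, to verify stationarity
        return alpha * (0.5 * x**2 - lam) * x - f

    def d2P(x):                              # second derivative, to classify critical points
        return alpha * (1.5 * x**2 - lam)

    # Stationarity of the canonical dual reduces to a cubic in the dual variable s:
    # 2*s^3 + 2*alpha*lam*s^2 - alpha*f^2 = 0.
    dual_roots = np.roots([2.0, 2.0 * alpha * lam, 0.0, -alpha * f**2])
    dual_roots = dual_roots[np.isreal(dual_roots)].real

    for s in sorted(dual_roots):
        x = f / s                            # map each dual critical point back to the primal
        kind = "local min" if d2P(x) > 0 else "local max"
        print(f"s = {s:+.4f}  ->  x = {x:+.4f},  P(x) = {P(x):+.5f},  |P'(x)| = {abs(dP(x)):.1e}  ({kind})")

    With these particular values the dual has three real critical points: the positive one maps to the global minimizer, and the two negative ones map to a local minimizer and a local maximizer of the quartic, which is the pattern the triality statements describe.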

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. It has long been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring the primal and the dual problems into play is, however, a more recent idea that has generated many important new contributions in recent years. These developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization, with an emphasis on sparsity issues. In this paper, we aim to present the principles of primal-dual approaches while giving an overview of the numerical methods that have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms, both for solving large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
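
    A representative member of this algorithmic family is the first-order primal-dual (Chambolle-Pock type) iteration for problems of the form min_x f(x) + g(Lx), which alternates a proximal step on the conjugate of g with a proximal step on f. The sketch below is only an illustration under assumed data: it applies the iteration to a small one-dimensional total-variation denoising problem min_x ½‖x − b‖² + μ‖Dx‖₁, where the signal b, the weight μ, and the step sizes are hypothetical choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, mu = 100, 2.0
    # hypothetical piecewise-constant signal plus noise
    b = np.repeat([0.0, 1.0, -0.5], [30, 40, 30]) + 0.1 * rng.standard_normal(n)

    D = np.diff(np.eye(n), axis=0)           # forward-difference operator, shape (n-1, n)
    tau = sigma = 0.49                        # valid since tau*sigma*||D||^2 <= 0.96 < 1

    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(n - 1)

    for _ in range(500):
        # dual step: prox of the conjugate of mu*||.||_1 is a clip onto [-mu, mu]
        y = np.clip(y + sigma * D @ x_bar, -mu, mu)
        # primal step: prox of 0.5*||. - b||^2 has a closed form
        x_new = (x - tau * D.T @ y + tau * b) / (1.0 + tau)
        # over-relaxation of the primal iterate
        x_bar = 2.0 * x_new - x
        x = x_new

    print("final objective:", 0.5 * np.sum((x - b)**2) + mu * np.sum(np.abs(D @ x)))

    The same splitting pattern extends to sums of more than two terms and to block-separable, parallel updates, which is what makes such schemes attractive for the large-scale problems the paper discusses.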

    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection, but numerous extensions have since emerged, such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present, from a general perspective, the optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and we provide an extensive set of experiments comparing the various algorithms from a computational point of view.
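
    As a concrete instance of the first class of methods listed above (proximal methods), the sketch below runs a plain proximal-gradient (ISTA-type) iteration on an ℓ1-regularized least-squares problem; the synthetic data, the step size, and the weight μ are hypothetical choices for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_features = 50, 200
    A = rng.standard_normal((n_samples, n_features))
    x_true = np.zeros(n_features)
    x_true[rng.choice(n_features, 5, replace=False)] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(n_samples)

    mu = 0.1
    step = 1.0 / np.linalg.norm(A, 2)**2     # 1 / Lipschitz constant of the smooth part

    def soft_threshold(v, t):
        # proximal operator of t*||.||_1 (coordinate-wise soft-thresholding)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(n_features)
    for _ in range(1000):
        grad = A.T @ (A @ x - b)             # gradient of the smooth least-squares term
        x = soft_threshold(x - step * grad, step * mu)

    print("nonzero coefficients recovered:", np.count_nonzero(np.abs(x) > 1e-6))

    The same soft-thresholding proximal step is the building block that accelerated and block-coordinate variants of the method reuse.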