395 research outputs found

    An Infeasible-Point Subgradient Method Using Adaptive Approximate Projections

    We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations we use adaptive approximate projections, which only require the iterates to move within a certain distance of the exact projections (a distance that decreases over the course of the algorithm). In particular, the iterates in our method can be infeasible throughout the whole procedure. Nevertheless, we provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two other results handle dynamic Polyak-type step sizes depending on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum l1-norm solution to an underdetermined linear system, an important problem in Compressed Sensing.
    Comment: 36 pages, 3 figures
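
    To make the idea concrete, here is a minimal sketch of such an iteration, assuming user-supplied callables subgradient and approx_project (both hypothetical names); approx_project(y, eps) is only required to return a point within distance eps of the exact projection onto the feasible set.

    import numpy as np

    def inexact_projected_subgradient(x0, subgradient, approx_project,
                                      steps, eps0=1.0, eps_decay=0.9):
        # Iterate x_{k+1} ~= P_C(x_k - a_k * g_k), where the projection is
        # only computed to (shrinking) accuracy eps_k; iterates may remain
        # infeasible throughout, as in the abstract above.
        x, eps = np.asarray(x0, dtype=float), eps0
        for a_k in steps:                         # step sizes fixed a priori
            g = subgradient(x)                    # any subgradient of f at x
            x = approx_project(x - a_k * g, eps)  # within eps of the true projection
            eps *= eps_decay                      # tighten projection accuracy
        return x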

    Getting Feasible Variable Estimates From Infeasible Ones: MRF Local Polytope Study

    This paper proposes a method for constructing approximate feasible primal solutions from dual ones for large-scale optimization problems possessing certain separability properties. Whereas infeasible primal estimates can typically be produced from (sub-)gradients of the dual function, it is often not easy to project them onto the primal feasible set, since the projection itself has a complexity comparable to that of the initial problem. We propose an alternative efficient method to obtain feasibility and show that the properties influencing its convergence to the optimum are similar to those of the Euclidean projection. We apply our method to the local polytope relaxation of inference problems for Markov Random Fields and demonstrate its superiority over existing methods.
    Comment: 20 pages, 4 figures
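
    For orientation, the following generic dual-subgradient loop illustrates where such infeasible primal estimates come from; the ergodic averaging shown is the standard recipe the paper improves upon, not the paper's method, and the callables argmin_lagrangian and residual are hypothetical names.

    import numpy as np

    def dual_subgradient_with_averaging(lmbda0, argmin_lagrangian, residual, steps):
        # Each dual (sub-)gradient is the constraint residual at a primal
        # minimizer of the Lagrangian; that minimizer is the (typically
        # infeasible) primal estimate mentioned in the abstract.
        lmbda = np.asarray(lmbda0, dtype=float)
        x_avg, n = None, 0
        for a_k in steps:
            x = argmin_lagrangian(lmbda)   # infeasible primal estimate
            g = residual(x)                # (sub-)gradient of the dual
            lmbda = lmbda + a_k * g        # dual ascent step
            n += 1
            x_avg = x if x_avg is None else x_avg + (x - x_avg) / n  # ergodic average
        return x_avg, lmbda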

    MAP inference via Block-Coordinate Frank-Wolfe Algorithm

    We present a new proximal bundle method for Maximum-A-Posteriori (MAP) inference in structured energy minimization problems. The method optimizes a Lagrangean relaxation of the original energy minimization problem using a multi-plane block-coordinate Frank-Wolfe method that takes advantage of the specific structure of the Lagrangean decomposition. We show empirically that our method outperforms state-of-the-art Lagrangean-decomposition-based algorithms on challenging Markov Random Field, multi-label discrete tomography, and graph matching problems.
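
    A plain block-coordinate Frank-Wolfe step looks roughly as follows; this is only a sketch of the underlying template, not the paper's multi-plane variant, and grad_block and lmo_block (a per-block linear minimization oracle) are hypothetical names.

    import numpy as np

    def block_coordinate_frank_wolfe(x_blocks, grad_block, lmo_block, iters,
                                     rng=np.random.default_rng(0)):
        # One block is updated per iteration by moving toward the minimizer
        # of the linearized objective over that block's polytope.
        x = [np.asarray(b, dtype=float) for b in x_blocks]
        for k in range(iters):
            i = int(rng.integers(len(x)))       # sample one block uniformly
            s = lmo_block(i, grad_block(x, i))  # linear minimization oracle
            gamma = 2.0 / (k + 2.0)             # simple open-loop step size
            x[i] = (1.0 - gamma) * x[i] + gamma * s
        return x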

    The Convergence Guarantees of a Non-convex Approach for Sparse Recovery

    In the area of sparse recovery, numerous studies suggest that non-convex penalties might induce better sparsity than convex ones, but until now the corresponding non-convex algorithms have lacked convergence guarantees from the initial solution to the global optimum. This paper aims to provide performance guarantees for a non-convex approach to sparse recovery. Specifically, the concept of weak convexity is incorporated into a class of sparsity-inducing penalties to characterize the non-convexity. Borrowing the idea of the projected subgradient method, an algorithm is proposed to solve the non-convex optimization problem. In addition, a uniform approximate projection is adopted in the projection step to make the algorithm computationally tractable for large-scale problems. The convergence analysis is carried out in the noisy scenario. It is shown that if the non-convexity of the penalty is below a threshold (which is inversely proportional to the distance between the initial solution and the sparse signal), the recovery error of the solution is linear in both the step size and the noise term. Numerical simulations are performed to test the proposed approach and verify the theoretical analysis.
    Comment: 33 pages, 7 figures
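
    The algorithmic template described above can be sketched as follows, with penalty_subgrad and approx_project as hypothetical placeholders for a subgradient oracle of the weakly convex penalty and the uniform approximate projection.

    import numpy as np

    def nonconvex_sparse_recovery(x0, penalty_subgrad, approx_project, step, iters):
        # Projected-subgradient template: the recovery-error bound quoted
        # above concerns the limit points of this iteration when the
        # penalty's non-convexity is small enough.
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            g = penalty_subgrad(x)            # subgradient of the weakly convex penalty
            x = approx_project(x - step * g)  # uniform approximate projection
        return x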

    Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

    The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, we show that the sequential subgradient projection method is perturbation resilient. By this we mean that under appropriate conditions the sequence generated by the method converges weakly, and sometimes also strongly, to a point in the intersection of the given subsets of the feasibility problem, despite certain perturbations which are allowed in each iterative step. Unlike previous works on the convex feasibility problem, the involved functions, which induce the feasibility problem's subsets, need not be convex. Instead, we allow them to belong to a wider and richer class of functions satisfying a weaker condition that we call "zero-convexity". This class, which is introduced and discussed here, holds promise for solving optimization problems in various areas, especially in non-smooth and non-convex optimization. The relevance of this study to approximate minimization and to the recent superiorization methodology for constrained optimization is explained.
    Comment: Mathematical Programming Series A, accepted for publication
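
    For sets of the form C_i = {x : g_i(x) <= 0}, the classical sequential (cyclic) subgradient projection method can be sketched as follows, with an optional perturbation term standing in for the resilience studied above; g and g_subgrad are assumed callables, and perturb=None recovers the unperturbed method.

    import numpy as np

    def cyclic_subgradient_projections(x0, g, g_subgrad, m, iters, perturb=None):
        # Sweep the m sets C_i = {x : g_i(x) <= 0} cyclically; a violated
        # constraint triggers a subgradient projection step.
        x = np.asarray(x0, dtype=float)
        for k in range(iters):
            i = k % m
            v = g(i, x)
            if v > 0:                           # constraint i is violated
                s = g_subgrad(i, x)
                x = x - (v / np.dot(s, s)) * s  # subgradient projection step
            if perturb is not None:
                x = x + perturb(k)              # bounded perturbation term
        return x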

    A generalized projection-based scheme for solving convex constrained optimization problems

    In this paper we present a new algorithmic realization of a projection-based scheme for general convex constrained optimization problems. The general idea is to transform the original optimization problem into a sequence of feasibility problems by iteratively constraining the objective function from above until the feasibility problem becomes inconsistent. Any of the existing projection methods may be applied to each of the feasibility problems. In particular, the scheme allows the use of subgradient projections and does not require exact projections onto the constraint sets, as existing similar methods do. We also apply the recently introduced concept of superiorization to this optimization formulation and compare its performance to that of our scheme. We provide numerical results for convex quadratic test problems as well as for real-life optimization problems arising in medical treatment planning.
    Comment: Accepted for publication in Computational Optimization and Applications
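
    A bisection-style sketch of the level-set idea (a simplified variant of the scheme described above) follows; solve_feasibility is a hypothetical oracle that runs any projection method on the constraints augmented with {x : f(x) <= tau} and returns a feasible point or None.

    def level_set_scheme(solve_feasibility, tau_lo, tau_hi, tol=1e-6):
        # Bisect on the objective level tau: each call asks whether the
        # constraints together with {x : f(x) <= tau} are consistent.
        x_best = None
        while tau_hi - tau_lo > tol:
            tau = 0.5 * (tau_lo + tau_hi)
            x = solve_feasibility(tau)   # a feasible point, or None
            if x is not None:            # level tau is attainable
                x_best, tau_hi = x, tau
            else:                        # level tau is inconsistent
                tau_lo = tau
        return x_best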

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. It has long been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring into play the primal and the dual problems is, however, a more recent idea which has generated many important contributions in recent years. These developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization, with an emphasis on sparsity issues. In this paper, we aim to present the principles of primal-dual approaches while giving an overview of the numerical methods that have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms for solving both large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
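
    As one widely used member of this family, a Chambolle-Pock-style primal-dual iteration for min_x g(x) + f(Kx) can be sketched as follows, assuming the proximal operators prox_g and prox_fstar (of g and of the convex conjugate of f) are available as callables.

    import numpy as np

    def chambolle_pock(K, prox_g, prox_fstar, x0, y0, iters):
        # Alternate a proximal ascent step on the dual variable with a
        # proximal descent step on the primal one, plus extrapolation.
        L = np.linalg.norm(K, 2)    # spectral norm of the coupling operator
        tau = sigma = 0.9 / L       # step sizes satisfying tau*sigma*L**2 < 1
        x = np.asarray(x0, dtype=float)
        y = np.asarray(y0, dtype=float)
        x_bar = x.copy()
        for _ in range(iters):
            y = prox_fstar(y + sigma * (K @ x_bar), sigma)  # dual step
            x_new = prox_g(x - tau * (K.T @ y), tau)        # primal step
            x_bar = 2.0 * x_new - x                         # extrapolation
            x = x_new
        return x, y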