
    Large-scale Binary Quadratic Optimization Using Semidefinite Relaxation and Applications

    In computer vision, many problems, such as image segmentation, pixel labelling, and scene parsing, can be formulated as binary quadratic programs (BQPs). For submodular problems, cut-based methods can be employed to efficiently solve large-scale instances. General non-submodular problems, however, are significantly more challenging, and finding a solution for problems of practically interesting size typically requires relaxation. Two standard relaxation methods are widely used for solving general BQPs: spectral methods and semidefinite programming (SDP), each with its own advantages and disadvantages. Spectral relaxation is simple and easy to implement, but its bound is loose. Semidefinite relaxation has a tighter bound, but its computational complexity is high, especially for large-scale problems. In this work, we present a new SDP formulation for BQPs with two desirable properties. First, it has a relaxation bound similar to that of conventional SDP formulations. Second, compared with conventional SDP methods, the new formulation leads to a significantly more efficient and scalable dual optimization approach, which has the same degree of complexity as spectral methods. We then propose two solvers for the dual problem, namely quasi-Newton and smoothing Newton methods. Both are significantly more efficient than standard interior-point methods. In practice, the smoothing Newton solver is faster for dense or medium-sized problems, while the quasi-Newton solver is preferable for large sparse/structured problems. Our experiments on several computer vision applications, including clustering, image segmentation, co-segmentation, and registration, show the potential of our SDP formulation for solving large-scale BQPs.
    Comment: Fixed some typos. 18 pages. Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
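    As a point of comparison for the two relaxations contrasted in the abstract, the spectral relaxation of a BQP can be sketched in a few lines: relax x ∈ {-1,+1}^n to ||x||² = n, take the leading eigenvector, and round by sign. This is a toy illustration of the baseline method, not the paper's SDP formulation; the matrix and sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # symmetric objective matrix

# Spectral relaxation of max x^T A x over x in {-1,+1}^n:
# relax the binary constraint to ||x||^2 = n, so the maximizer is
# sqrt(n) times the leading eigenvector; round it back by sign.
w, V = np.linalg.eigh(A)
v = V[:, -1]                      # eigenvector of the largest eigenvalue
x = np.sign(v)
x[x == 0] = 1.0                   # break ties arbitrarily
bound = n * w[-1]                 # relaxation bound: n * lambda_max(A)
value = x @ A @ x                 # objective value of the rounded solution
assert value <= bound + 1e-9      # the bound dominates any binary point
```

    The gap between `value` and `bound` is exactly the looseness the abstract attributes to spectral relaxation.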

    On the finite termination of an entropy function based smoothing Newton method for vertical linear complementarity problems

    By using a smooth entropy function to approximate the non-smooth max-type function, a vertical linear complementarity problem (VLCP) can be treated as a family of parameterized smooth equations. A Newton-type method with a testing procedure is proposed to solve such a system. We show that the proposed algorithm finds an exact solution of the VLCP in a finite number of iterations, under conditions milder than those assumed in the literature. Some computational results are included to illustrate the potential of this approach.
    Keywords: Newton method; finite termination; entropy function; smoothing approximation; vertical linear complementarity problems
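    The smoothing idea can be illustrated with the standard entropy (log-sum-exp) approximation of the max-type function, which satisfies max_i x_i ≤ μ log Σ_i exp(x_i/μ) ≤ max_i x_i + μ log m. The parameter values below are illustrative choices, not taken from the paper.

```python
import numpy as np

def smooth_max(x, mu):
    """Entropy (log-sum-exp) smoothing of the non-smooth max function:
    max(x) <= smooth_max(x, mu) <= max(x) + mu * log(len(x))."""
    x = np.asarray(x, dtype=float)
    m = x.max()                   # shift for numerical stability
    return m + mu * np.log(np.sum(np.exp((x - m) / mu)))

x = np.array([1.0, 3.0, 2.0])
for mu in (1.0, 0.1, 0.01):
    s = smooth_max(x, mu)
    # the approximation tightens as the smoothing parameter mu -> 0
    assert x.max() <= s <= x.max() + mu * np.log(x.size)
```

    Driving μ toward zero recovers the exact max, which is why the VLCP can be treated as a family of smooth equations parameterized by μ.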

    The Reduced Order Method for Solving the Linear Complementarity Problem with an M-Matrix

    In this paper, by seeking the zero and positive entry positions of the solution, we provide a direct method, called the reduced order method, for solving the linear complementarity problem with an M-matrix. This method transforms the problem into a lower-order linear complementarity problem together with some lower-order systems of linear equations; the solution is then constructed from the solution of the lower-order complementarity problem and the solutions of these linear systems. Numerical experiments are reported to demonstrate the accuracy and effectiveness of the method.
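    For context, the problem being solved is LCP(q, M): find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. The sketch below solves a small M-matrix instance with projected Gauss–Seidel, a standard iterative method used here only for illustration; it is not the reduced order method of the paper, and the data is made up.

```python
import numpy as np

def pgs_lcp(M, q, iters=500):
    """Projected Gauss-Seidel for LCP(q, M): find z >= 0 with
    w = M z + q >= 0 and z . w = 0."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual at i excluding the diagonal term, then project to >= 0
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

# An M-matrix: nonpositive off-diagonals, strictly diagonally dominant.
M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
q = np.array([-1.0, 2.0, -3.0])

z = pgs_lcp(M, q)
w = M @ z + q
assert (z >= -1e-10).all() and (w >= -1e-10).all()
assert abs(z @ w) < 1e-8          # complementarity: z_i w_i = 0 for all i
```

    Notice that the solution has z₂ = 0: identifying such zero positions in advance is exactly what lets the reduced order method shrink the problem.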

    Conic Optimization: Optimal Partition, Parametric, and Stability Analysis

    A linear conic optimization problem consists of minimizing a linear objective function over the intersection of an affine space and a closed convex cone. In recent years, linear conic optimization has received significant attention, partly because it can be used to reformulate and approximate intractable optimization problems. Steady advances in computational optimization have enabled us to approximately solve a wide variety of linear conic optimization problems in polynomial time. Nevertheless, preprocessing methods, rounding procedures, and sensitivity analysis tools are still missing from conic optimization solvers. Given the output of a conic optimization solver, we need methodologies to generate approximate complementary solutions or to speed up the convergence to an exact optimal solution. A preprocessing method reduces the size of a problem by finding the minimal face of the cone that contains the set of feasible solutions; however, such a method assumes knowledge of an exact solution. More importantly, we need robust sensitivity and post-optimal analysis tools for an optimal solution of a linear conic optimization problem. Motivated by the vital importance of linear conic optimization, we take active steps to fill this gap.
    This thesis is concerned with several aspects of a linear conic optimization problem, from algorithms through solution identification to parametric analysis, which have not been fully addressed in the literature. We specifically focus on three special classes of linear conic optimization problems, namely semidefinite and second-order conic optimization, and their common generalization, symmetric conic optimization. We propose a polynomial-time algorithm for symmetric conic optimization problems. We show how to approximate/identify the optimal partition of semidefinite and second-order conic optimization, a concept which has its origin in linear optimization. Further, we use the optimal partition information either to generate an approximate optimal solution or to speed up the convergence of a solution identification process to the unique optimal solution of the problem. Finally, we study the parametric analysis of semidefinite and second-order conic optimization problems, investigating the behavior of the optimal partition and the optimal set mapping under perturbation of the objective function vector.
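    The optimal partition mentioned above originates in linear optimization, where it is easy to read off a strictly complementary optimum: B collects the indices where the primal variable is positive, N the indices where the dual slack is positive. A small illustrative example (the LP data, solver choice, and tolerances are our own):

```python
import numpy as np
from scipy.optimize import linprog

# Optimal partition in linear optimization, the concept the abstract
# generalizes to conic problems: split the variable indices into
# B = {j : x*_j > 0} and N = {j : s*_j > 0}, where s* = c - A^T y* are
# the dual slacks (reduced costs) at a strictly complementary optimum.
c = np.array([1.0, 2.0, 3.0])            # minimize c @ x
A_eq = np.array([[1.0, 1.0, 1.0]])       # subject to x1 + x2 + x3 = 1
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3, method="highs")
x = res.x
y = res.eqlin.marginals                  # dual multipliers of the equalities
s = c - A_eq.T @ y                       # dual slacks (reduced costs)

B = {j for j in range(3) if x[j] > 1e-8}
N = {j for j in range(3) if s[j] > 1e-8}
# Here x* = (1, 0, 0) and s* = (0, 1, 2), so B = {0} and N = {1, 2}:
# B and N are disjoint and cover all indices (strict complementarity).
```

    In semidefinite and second-order conic optimization the partition is defined analogously through the ranks and faces of the optimal solutions, which is what makes its identification nontrivial there.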

    Fast Recovery and Approximation of Hidden Cauchy Structure

    We derive an algorithm of optimal complexity which determines whether a given matrix is a Cauchy matrix, and which exactly recovers the Cauchy points defining a Cauchy matrix from the matrix entries. Moreover, we study how to approximate a given matrix by a Cauchy matrix, with a particular focus on the recovery of Cauchy points from noisy data. We derive an approximation algorithm of optimal complexity for this task, and prove approximation bounds. Numerical examples illustrate our theoretical results.
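    A minimal sketch of the exact point-recovery idea, assuming the convention C[i, j] = 1/(x_i − y_j) and fixing the inherent translation ambiguity by normalizing y_0 = 0. Only the first row and column of the matrix are read, which is the source of the optimal (linear) complexity; the paper's full algorithm and its noisy-data variant are more involved.

```python
import numpy as np

# Build a Cauchy matrix C[i, j] = 1 / (x[i] - y[j]) from hidden points.
rng = np.random.default_rng(1)
x_true = np.sort(rng.uniform(5.0, 10.0, size=4))
y_true = np.sort(rng.uniform(0.0, 4.0, size=4))
C = 1.0 / (x_true[:, None] - y_true[None, :])

# The points are only determined up to a common shift, so recover them
# relative to the normalization y[0] = 0, using one row and one column:
shift = y_true[0]
x_rec = 1.0 / C[:, 0]               # x[i] - y[0] = 1 / C[i, 0]
y_rec = x_rec[0] - 1.0 / C[0, :]    # y[j] = x[0] - 1 / C[0, j]

assert np.allclose(x_rec, x_true - shift)
assert np.allclose(y_rec, y_true - shift)
# The recovered points reproduce every entry, not just the row/column read:
assert np.allclose(1.0 / (x_rec[:, None] - y_rec[None, :]), C)
```

    The final check is also the natural membership test: a matrix is Cauchy exactly when the points recovered from one row and one column reproduce all remaining entries.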