    Large-scale Binary Quadratic Optimization Using Semidefinite Relaxation and Applications

    In computer vision, many problems, such as image segmentation, pixel labelling, and scene parsing, can be formulated as binary quadratic programs (BQPs). For submodular problems, cut-based methods can be employed to solve large-scale instances efficiently. General non-submodular problems, however, are significantly more challenging, and finding a solution at problem sizes of practical interest typically requires relaxation. Two relaxation methods are widely used for solving general BQPs: spectral methods and semidefinite programming (SDP), each with its own advantages and disadvantages. Spectral relaxation is simple and easy to implement, but its bound is loose. Semidefinite relaxation has a tighter bound, but its computational complexity is high, especially for large-scale problems. In this work, we present a new SDP formulation for BQPs with two desirable properties. First, it has a relaxation bound similar to that of conventional SDP formulations. Second, compared with conventional SDP methods, the new formulation leads to a significantly more efficient and scalable dual optimization approach, which has the same degree of complexity as spectral methods. We then propose two solvers for the dual problem, a quasi-Newton method and a smoothing Newton method, both of which are significantly more efficient than standard interior-point methods. In practice, the smoothing Newton solver is faster for dense or medium-sized problems, while the quasi-Newton solver is preferable for large sparse/structured problems. Our experiments on several computer vision applications, including clustering, image segmentation, co-segmentation, and registration, show the potential of our SDP formulation for solving large-scale BQPs.
    Comment: Fixed some typos. 18 pages. Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
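
    The spectral baseline this abstract contrasts with SDP is easy to sketch. The snippet below is a minimal illustration in R, not the authors' method: for a BQP of the form min x'Ax over x in {-1,+1}^n, the spectral relaxation replaces the binary constraint by ||x||^2 = n, whose minimizer is sqrt(n) times the eigenvector of A for its smallest eigenvalue; the result is then rounded back to binary by taking signs. The cost matrix A here is synthetic.

        # Spectral relaxation of a binary quadratic program:
        #   minimize  x' A x   subject to  x in {-1, +1}^n
        # Relaxing the constraint to ||x||^2 = n gives sqrt(n) times the
        # eigenvector of A associated with its smallest eigenvalue.
        set.seed(1)
        n <- 50
        M <- matrix(rnorm(n * n), n, n)
        A <- (M + t(M)) / 2                  # symmetric cost matrix (synthetic)

        eig <- eigen(A, symmetric = TRUE)    # eigenvalues in decreasing order
        v <- eig$vectors[, n]                # eigenvector of the smallest eigenvalue
        x_relaxed <- sqrt(n) * v             # continuous relaxed solution
        x_binary  <- sign(x_relaxed)         # round back to {-1, +1}

        cat("relaxed objective:", drop(t(x_relaxed) %*% A %*% x_relaxed), "\n")
        cat("rounded objective:", drop(t(x_binary)  %*% A %*% x_binary),  "\n")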

    A new primal-dual path-following interior-point algorithm for semidefinite optimization

    In this paper we present a new primal-dual path-following interior-point algorithm for semidefinite optimization. The algorithm is based on a new technique for finding the search direction and on the strategy of the central path. At each iteration, we use only full Nesterov–Todd steps. Moreover, we obtain the currently best known iteration bound for a small-update method, namely O(√n log(n/ε)), which is as good as the bound for the linear optimization analogue.
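
    For readers unfamiliar with path-following methods, the setting behind such algorithms can be stated compactly; the system below is the textbook central path for semidefinite optimization, not a construction specific to this paper.

        \[
        \begin{aligned}
        \text{(P)}\quad & \min\ \langle C, X\rangle \quad \text{s.t.}\quad \langle A_i, X\rangle = b_i,\ i = 1,\dots,m,\quad X \succeq 0,\\
        \text{(D)}\quad & \max\ b^{\mathsf T} y \quad \text{s.t.}\quad \textstyle\sum_{i=1}^m y_i A_i + S = C,\quad S \succeq 0.
        \end{aligned}
        \]
        The central path is the curve of solutions $(X(\mu), y(\mu), S(\mu))$, $\mu > 0$, of
        the perturbed optimality conditions
        \[
        \langle A_i, X\rangle = b_i, \qquad \sum_{i=1}^m y_i A_i + S = C, \qquad XS = \mu I,
        \]
        and a path-following method drives $\mu \to 0$ while keeping the iterates close to
        the path; with full Nesterov--Todd steps and small updates of $\mu$, this yields the
        $O(\sqrt{n}\,\log(n/\epsilon))$ iteration bound quoted above.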

    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection, but numerous extensions have since emerged, such as structured sparsity and kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments comparing various algorithms from a computational point of view.
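
    Of the techniques listed, proximal methods are the simplest to demonstrate. The sketch below, which is illustrative and not taken from the paper, applies the proximal gradient method (ISTA) to the lasso problem min_w 0.5*||Xw - y||^2 + lambda*||w||_1; the proximal operator of the ℓ1 penalty is coordinate-wise soft-thresholding, and all data here are synthetic.

        # Proximal gradient (ISTA) for the lasso:
        #   minimize  0.5 * ||X w - y||^2 + lambda * ||w||_1
        # The proximal operator of the l1 penalty is soft-thresholding.
        soft_threshold <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

        set.seed(1)
        n <- 100; p <- 20
        X <- matrix(rnorm(n * p), n, p)
        w_true <- c(3, -2, rep(0, p - 2))    # sparse ground truth (synthetic)
        y <- X %*% w_true + rnorm(n)

        lambda <- 5
        L <- max(eigen(t(X) %*% X, symmetric = TRUE)$values)  # Lipschitz constant
        w <- rep(0, p)
        for (k in 1:500) {
          grad <- drop(t(X) %*% (X %*% w - y))   # gradient of the smooth part
          w <- soft_threshold(w - grad / L, lambda / L)
        }
        print(round(w, 2))   # estimate concentrates on the first two coordinates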

    Lifts of convex sets and cone factorizations

    In this paper we address the basic geometric question of when a given convex set is the image under a linear map of an affine slice of a given closed convex cone. Such a representation or 'lift' of the convex set is especially useful if the cone admits an efficient algorithm for linear optimization over its affine slices. We show that the existence of a lift of a convex set to a cone is equivalent to the existence of a factorization of an operator associated to the set and its polar via elements in the cone and its dual. This generalizes a theorem of Yannakakis that established a connection between polyhedral lifts of a polytope and nonnegative factorizations of its slack matrix. Symmetric lifts of convex sets can also be characterized similarly. When the cones live in a family, our results lead to the definition of the rank of a convex set with respect to this family. We present results about this rank in the context of cones of positive semidefinite matrices. Our methods provide new tools for understanding cone lifts of convex sets.
    Comment: 20 pages, 2 figures.
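
    As a concrete instance of the Yannakakis connection, the slack matrix of a polytope P = {x : Ax <= b} with vertices v_1, ..., v_k has entries S[i, j] = b_i - a_i'v_j, and polyhedral lifts of P correspond to nonnegative factorizations of S. The toy computation below, using the unit square, merely constructs this matrix; it is illustrative and not drawn from the paper.

        # Slack matrix of the unit square {x : 0 <= x1 <= 1, 0 <= x2 <= 1}.
        # Facets are written as a_i' x <= b_i; columns of V are the vertices.
        A <- rbind(c(-1,  0),   # -x1 <= 0
                   c( 1,  0),   #  x1 <= 1
                   c( 0, -1),   # -x2 <= 0
                   c( 0,  1))   #  x2 <= 1
        b <- c(0, 1, 0, 1)
        V <- cbind(c(0, 0), c(1, 0), c(1, 1), c(0, 1))  # vertices as columns

        # S[i, j] = b_i - a_i' v_j is entrywise nonnegative; its nonnegative
        # rank governs the smallest polyhedral lift (Yannakakis).
        S <- b - A %*% V   # b recycles down the rows of the 4 x 4 product
        print(S)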

    On Minimal Valid Inequalities for Mixed Integer Conic Programs

    We study disjunctive conic sets involving a general regular (closed, convex, full-dimensional, and pointed) cone K, such as the nonnegative orthant, the Lorentz cone, or the positive semidefinite cone. In a unified framework, we introduce K-minimal inequalities and show that, under mild assumptions, these inequalities together with the trivial cone-implied inequalities are sufficient to describe the convex hull. We study the properties of K-minimal inequalities by establishing algebraic necessary conditions for an inequality to be K-minimal. This characterization leads to a broader, algebraically defined class of K-sublinear inequalities. We establish a close connection between K-sublinear inequalities and the support functions of sets with a particular structure. This connection results in practical ways of showing that a given inequality is K-sublinear and K-minimal. Our framework generalizes some of the results from the mixed integer linear case. It is well known that the minimal inequalities for mixed integer linear programs are generated by sublinear (positively homogeneous, subadditive, and convex) functions that are also piecewise linear; this result is easily recovered by our analysis. Whenever possible we highlight connections to the existing literature. However, our study unveils that such a cut-generating function view, treating the data associated with each individual variable independently, is not possible for general cones other than the nonnegative orthant, even when the cone involved is the Lorentz cone.

    ROI: An extensible R Optimization Infrastructure

    Optimization plays an important role in many methods routinely used in statistics, machine learning, and data science. Often, implementations of these methods rely on highly specialized optimization algorithms designed to be applicable only within a specific application. However, in many instances recent advances, in particular in the field of convex optimization, make it possible to conveniently and straightforwardly use modern solvers instead, with the advantage of enabling broader usage scenarios and thus promoting reusability. This paper introduces the R Optimization Infrastructure, which provides an extensible infrastructure to model linear, quadratic, conic, and general nonlinear optimization problems in a consistent way. Furthermore, the infrastructure administers many different solvers, reformulations, problem collections, and functions to read and write optimization problems in various formats.
    Series: Research Report Series / Department of Statistics and Mathematics.
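
    To give a flavour of the modelling interface, the sketch below builds and solves a small linear program with ROI's documented constructors (OP, L_objective, L_constraint, ROI_solve, solution); the coefficients are a standard textbook-style LP chosen for illustration, and solving it requires an installed solver plugin such as ROI.plugin.glpk.

        # A small LP modelled with ROI:
        #   maximize  2 x1 + 4 x2 + 3 x3
        #   subject to three linear constraints and default bounds x >= 0.
        library(ROI)

        lp <- OP(objective   = L_objective(c(2, 4, 3)),
                 constraints = L_constraint(
                   L   = rbind(c(3, 4, 2),
                               c(2, 1, 2),
                               c(1, 3, 2)),
                   dir = c("<=", "<=", "<="),
                   rhs = c(60, 40, 80)),
                 maximum = TRUE)

        # Needs a solver plugin, e.g. install.packages("ROI.plugin.glpk")
        sol <- ROI_solve(lp, solver = "glpk")
        solution(sol)   # optimal primal solution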