
    An SDP Approach For Solving Quadratic Fractional Programming Problems

    This paper considers a fractional programming problem (P) which minimizes a ratio of quadratic functions subject to a two-sided quadratic constraint. As is well known, the fractional objective function can be replaced by a parametric family of quadratic functions, which makes (P) highly related to, but more difficult than, a single quadratic programming problem subject to a similar constraint set. The task is to find the optimal parameter $\lambda^*$ and then look for the optimal solution if $\lambda^*$ is attained. In contrast to the classical Dinkelbach method, which iterates over the parameter, we propose a suitable constraint qualification under which a new version of the S-lemma with an equality can be proved, so that $\lambda^*$ can be computed directly via an exact SDP relaxation. When the constraint set of (P) degenerates to a one-sided inequality, the same SDP approach can be applied to solve (P) {\it without any condition}. We observe that the difference between a two-sided problem and a one-sided problem lies in the fact that the S-lemma with an equality has no natural Slater point, which makes the former essentially more difficult than the latter. Neither does this work assume the existence of a positive-definite linear combination of the quadratic terms (also known as the dual Slater condition, or a positive-definite matrix pencil); our result thus provides a novel extension to the so-called "hard case" of the generalized trust region subproblem subject to the upper and the lower level set of a quadratic function.
    Comment: 26 pages
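
    As a concrete illustration of the direct SDP route, the sketch below lifts the problem to a matrix variable, normalizes the homogenized denominator to one, and reads $\lambda^*$ off the optimal value whenever the relaxation is exact. This is a minimal sketch of the standard lifted relaxation, not the authors' code: the helper names (homogenize, sdp_fractional), the cvxpy modeling, and the toy instance are assumptions made for illustration.

```python
# Minimal sketch (assumed names/data): lifted SDP relaxation for
#   min  (x'A1 x + 2 b1'x + c1) / (x'A2 x + 2 b2'x + c2)
#   s.t. l <= x'G x + 2 h'x <= u,
# assuming the denominator is positive on the feasible set.
import numpy as np
import cvxpy as cp

def homogenize(A, b, c):
    """Return M with [x; t]' M [x; t] = x'A x + 2 t b'x + c t^2."""
    n = len(b)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = M[n, :n] = b
    M[n, n] = c
    return M

def sdp_fractional(A1, b1, c1, A2, b2, c2, G, h, l, u):
    n = len(b1)
    M1, M2 = homogenize(A1, b1, c1), homogenize(A2, b2, c2)
    Mg = homogenize(G, h, 0.0)
    Y = cp.Variable((n + 1, n + 1), symmetric=True)
    cons = [Y >> 0,
            cp.trace(M2 @ Y) == 1,            # Charnes-Cooper-style normalization
            cp.trace(Mg @ Y) >= l * Y[n, n],  # homogenized two-sided constraint
            cp.trace(Mg @ Y) <= u * Y[n, n]]
    prob = cp.Problem(cp.Minimize(cp.trace(M1 @ Y)), cons)
    prob.solve()
    return prob.value, Y.value

# Toy instance: min (||x||^2 + 1) / (||x||^2 + 2 x_1 + 3), 0.5 <= ||x||^2 <= 2.
n = 3
b2 = np.zeros(n); b2[0] = 1.0
lam, Y = sdp_fractional(np.eye(n), np.zeros(n), 1.0,
                        np.eye(n), b2, 3.0,
                        np.eye(n), np.zeros(n), 0.5, 2.0)
print("lambda* ~", lam)
```

    When the relaxation is exact and the optimal Y has rank one, an optimal x can be recovered by scaling the top eigenvector of Y so that its last coordinate equals one.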

    A note on the paper "Fractional programming with convex quadratic forms and functions" by H.P. Benson

    In this technical note we give a short proof, based on standard results in convex analysis, of some important characterization results listed in Theorems 3 and 4 of [1]. In fact, our result is slightly more general, since we do not specify the convex set X. For clarity we use the same notation for the different equivalent optimization problems as in [1].

    Max-sum diversity via convex programming

    Diversity maximization is an important concept in information retrieval, computational geometry, and operations research. Usually, it is a variant of the following problem: given a ground set, constraints, and a function $f(\cdot)$ that measures the diversity of a subset, the task is to select a feasible subset $S$ such that $f(S)$ is maximized. The \emph{sum-dispersion} function $f(S) = \sum_{x,y \in S} d(x,y)$, the sum of the pairwise distances in $S$, is a prominent diversification measure in this context. The corresponding diversity maximization problem is called \emph{max-sum} or \emph{sum-sum diversification}. Many recent results deal with the design of constant-factor approximation algorithms for diversification problems involving the sum-dispersion function under a matroid constraint. In this paper, we present a PTAS for the max-sum diversification problem under a matroid constraint for distances $d(\cdot,\cdot)$ of \emph{negative type}. Distances of negative type include, for example, metric distances stemming from the $\ell_2$ and $\ell_1$ norms, as well as the cosine, spherical, and Jaccard distances, which are popular similarity measures in web and image search.
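
    The sketch below is not the paper's convex-programming PTAS; it merely illustrates the sum-dispersion objective and a simple greedy baseline under a cardinality constraint (the uniform-matroid special case). The function names and data are assumptions, and squared Euclidean distance is used as a standard example of a negative-type distance.

```python
# Assumed illustration: the sum-dispersion objective and a greedy baseline
# for max-sum diversification under a cardinality (uniform-matroid) constraint.
import numpy as np

def sum_dispersion(D, S):
    """f(S) = sum of pairwise distances within S (each unordered pair once)."""
    idx = np.array(sorted(S))
    return D[np.ix_(idx, idx)].sum() / 2.0  # diagonal is zero

def greedy_max_sum(D, k):
    """Repeatedly add the point with the largest marginal gain in f."""
    n = D.shape[0]
    S = set()
    for _ in range(k):
        best = max((i for i in range(n) if i not in S),
                   key=lambda i: sum(D[i, j] for j in S))
        S.add(best)
    return S

# Points in the plane; squared Euclidean distance is of negative type.
rng = np.random.default_rng(0)
P = rng.standard_normal((30, 2))
D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
S = greedy_max_sum(D, 5)
print(S, sum_dispersion(D, S))
```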

    MM Algorithms for Geometric and Signomial Programming

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function whose parameters are separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithms derived here can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
    Comment: 16 pages, 1 figure
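
    To make the geometric-arithmetic mean separation concrete, here is a toy construction (mine, not taken from the paper) for the unconstrained posynomial f(x, y) = xy + 1/x + 1/y, minimized at x = y = 1. At the current iterate (x_k, y_k), the AM-GM inequality gives xy <= (y_k/(2 x_k)) x^2 + (x_k/(2 y_k)) y^2 with equality at the iterate, so the surrogate separates into two one-dimensional problems with closed-form minimizers.

```python
# Toy MM iteration (assumed example) for f(x, y) = x*y + 1/x + 1/y, x, y > 0.
# AM-GM majorization at (xk, yk):  x*y <= (yk/(2*xk))*x**2 + (xk/(2*yk))*y**2,
# so the surrogate splits into g(x) = (yk/(2*xk))*x**2 + 1/x and its mirror in y.
def mm_step(xk, yk):
    # argmin of g:  g'(x) = (yk/xk)*x - 1/x**2 = 0  =>  x = (xk/yk)**(1/3);
    # the update for y is symmetric.
    return (xk / yk) ** (1.0 / 3.0), (yk / xk) ** (1.0 / 3.0)

x, y = 2.0, 0.5
for _ in range(40):
    x, y = mm_step(x, y)
print(x, y, x * y + 1.0 / x + 1.0 / y)  # -> 1.0 1.0 3.0, at a linear rate
```

    Each step decreases f because the surrogate majorizes f and is tangent to it at the current iterate, which is exactly the descent guarantee of the MM principle.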