An SDP Approach For Solving Quadratic Fractional Programming Problems
This paper considers a fractional programming problem (P) which minimizes a
ratio of quadratic functions subject to a two-sided quadratic constraint. As is
well-known, the fractional objective function can be replaced by a parametric
family of quadratic functions, which makes (P) highly related to, but more
difficult than, a single quadratic programming problem subject to a similar
constraint set. The task is to find the optimal parameter $\mu^*$ and then
look for the optimal solution if $\mu^*$ is attained. Contrasted with the
classical Dinkelbach method that iterates over the parameter, we propose a
suitable constraint qualification under which a new version of the S-lemma with
an equality can be proved so as to compute $\mu^*$ directly via an exact
SDP relaxation. When the constraint set of (P) degenerates to a
one-sided inequality, the same SDP approach can be applied to solve (P) {\it
without any condition}. We observe that the difference between a two-sided
problem and a one-sided problem lies in the fact that the S-lemma with an
equality lacks a natural Slater point, which makes the former
essentially more difficult than the latter. Moreover, this work does not assume
the existence of a positive-definite linear combination of the quadratic terms
(also known as the dual Slater condition, or a positive-definite matrix
pencil); our result thus provides a novel extension to the so-called "hard
case" of the generalized trust region subproblem subject to the upper and the
lower level set of a quadratic function. Comment: 26 pages
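As a small illustration of the classical Dinkelbach method that the abstract contrasts with (a hedged sketch of the generic iteration, not the paper's SDP approach), the parameter update can be coded as follows; the toy instance and the use of a generic local NLP solver for the subproblem are our own assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def dinkelbach(f_num, f_den, x0, constraints, tol=1e-8, max_iter=50):
    """Classical Dinkelbach iteration for min f_num(x)/f_den(x) with f_den > 0.

    Each step solves the parametric subproblem
        min_x  f_num(x) - mu * f_den(x)   subject to the constraints,
    then updates mu to the ratio at the new point; mu converges to the
    optimal value of the fractional program.
    """
    x = np.asarray(x0, dtype=float)
    mu = f_num(x) / f_den(x)
    for _ in range(max_iter):
        res = minimize(lambda z: f_num(z) - mu * f_den(z), x,
                       constraints=constraints)
        x = res.x
        new_mu = f_num(x) / f_den(x)
        if abs(new_mu - mu) < tol:
            return x, new_mu
        mu = new_mu
    return x, mu

# Toy instance: minimize (x'x + 1)/(x'x + 2) over the unit ball.
# The ratio (t + 1)/(t + 2) is increasing in t = x'x, so the optimum
# is x = 0 with value 1/2.
ball = [{"type": "ineq", "fun": lambda z: 1.0 - z @ z}]
x_star, mu_star = dinkelbach(lambda z: z @ z + 1.0,
                             lambda z: z @ z + 2.0,
                             [0.5, 0.5], ball)
```

The subproblem solver here is a generic local routine; the paper's point is precisely that, for the two-sided quadratic constraint, these parametric subproblems are hard and are instead handled exactly through an SDP relaxation.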
A note on the paper "Fractional programming with convex quadratic forms and functions" by H.P. Benson
In this technical note we give a short proof based on standard results in convex analysis of
some important characterization results listed in Theorems 3 and 4 of [1]. In fact, our result is
slightly more general, since we do not specify the convex set X. For clarity we use the same notation
for the different equivalent optimization problems as in [1].
Max-sum diversity via convex programming
Diversity maximization is an important concept in information retrieval,
computational geometry and operations research. Usually, it is a variant of the
following problem: Given a ground set $X$, constraints, and a function $f$
that measures the diversity of a subset, the task is to select a feasible subset
$S \subseteq X$ such that $f(S)$ is maximized. The \emph{sum-dispersion} function
$f(S) = \sum_{\{u,v\} \subseteq S} d(u,v)$, which is the sum of the pairwise distances in $S$, is
in this context a prominent diversification measure. The corresponding
diversity maximization is the \emph{max-sum} or \emph{sum-sum diversification}.
Many recent results deal with the design of constant-factor approximation
algorithms for diversification problems involving the sum-dispersion function under
a matroid constraint. In this paper, we present a PTAS for the max-sum
diversification problem under a matroid constraint for distances
of \emph{negative type}. Distances of negative type are, for
example, metric distances stemming from the $\ell_1$ and $\ell_2$ norms, as well
as the cosine, spherical, and Jaccard distances, which are popular similarity
measures in web and image search.
A Framework for Globally Optimizing Mixed-Integer Signomial Programs
Mixed-integer signomial optimization problems have broad applicability in engineering. Extending the Global Mixed-Integer Quadratic Optimizer, GloMIQO (Misener, Floudas in J. Glob. Optim., 2012. doi:10.1007/s10898-012-9874-7), this manuscript documents a computational framework for deterministically addressing mixed-integer signomial optimization problems to ε-global optimality. This framework generalizes the GloMIQO strategies of (1) reformulating user input, (2) detecting special mathematical structure, and (3) globally optimizing the mixed-integer nonconvex program. Novel contributions of this paper include: flattening an expression tree towards term-based data structures; introducing additional nonconvex terms to interlink expressions; integrating a dynamic implementation of the reformulation-linearization technique into the branch-and-cut tree; designing term-based underestimators that specialize relaxation strategies according to variable bounds in the current tree node. Computational results are presented, along with a comparison of the computational framework to several state-of-the-art solvers. © 2013 Springer Science+Business Media New York
MM Algorithms for Geometric and Signomial Programming
This paper derives new algorithms for signomial programming, a generalization
of geometric programming. The algorithms are based on a generic principle for
optimization called the MM algorithm. In this setting, one can apply the
geometric-arithmetic mean inequality and a supporting hyperplane inequality to
create a surrogate function with parameters separated. Thus, unconstrained
signomial programming reduces to a sequence of one-dimensional minimization
problems. Simple examples demonstrate that the derived MM algorithm can
converge to a boundary point or to one point of a continuum of minimum points.
Conditions under which the minimum point is unique or occurs in the interior of
parameter space are proved for geometric programming. Convergence to an
interior point occurs at a linear rate. Finally, the MM framework easily
accommodates equality and inequality constraints of signomial type. For the
most important special case, constrained quadratic programming, the MM
algorithm involves very simple updates. Comment: 16 pages, 1 figure
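A minimal sketch of the MM principle the abstract describes, on a toy unconstrained posynomial (the instance and its closed-form updates are our own illustration, not taken from the paper): majorizing the coupling term x*y via the geometric-arithmetic mean inequality yields a surrogate with the variables separated, so each MM step reduces to one-dimensional problems solved in closed form.

```python
def mm_step(x, y):
    """One MM update for f(x, y) = x*y + 1/x + 1/y over x, y > 0.

    The AM-GM majorization, tight at the current iterate (x, y),
        u*v <= (x*y/2) * ((u/x)**2 + (v/y)**2),
    replaces the coupling term, leaving a surrogate that separates:
        min_t  a*t**2 + 1/t  has the closed-form minimizer t = (1/(2a))**(1/3).
    """
    ax = y / (2 * x)      # coefficient of x**2 in the surrogate
    ay = x / (2 * y)      # coefficient of y**2 in the surrogate
    return (1 / (2 * ax)) ** (1 / 3), (1 / (2 * ay)) ** (1 / 3)

def mm_minimize(x, y, iters=60):
    """Iterate MM steps; the objective decreases monotonically at each step."""
    for _ in range(iters):
        x, y = mm_step(x, y)
    return x, y

# The unique minimizer of x*y + 1/x + 1/y is (1, 1) with value 3.
x_star, y_star = mm_minimize(2.0, 0.5)
f_star = x_star * y_star + 1 / x_star + 1 / y_star
```

Because the surrogate majorizes f and touches it at the current iterate, each update can only decrease f, which is the descent guarantee underlying the MM framework.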