Approximate Convex Optimization by Online Game Playing
Lagrangian relaxation and approximate optimization algorithms have received
much attention in the last two decades. Typically, the running time of these
methods to obtain an $\varepsilon$-approximate solution is proportional to
$1/\varepsilon^2$. Recently, Bienstock and Iyengar, following Nesterov,
gave an algorithm for fractional packing linear programs which runs in
$O^*(1/\varepsilon)$ iterations. The latter algorithm requires solving a
convex quadratic program in every iteration, an optimization subroutine which
dominates the theoretical running time.
We give an algorithm for convex programs with strictly convex constraints
which runs in time proportional to $1/\varepsilon$. The algorithm does NOT
require solving any quadratic program, but uses only gradient steps and
elementary operations. Problems which have strictly convex constraints include
maximum entropy frequency estimation, portfolio optimization with loss risk
constraints, and various computational problems in signal processing.
As a side product, we also obtain a simpler version of Bienstock and
Iyengar's result for general linear programming, with similar running time.
We derive these algorithms using a new framework for deriving convex
optimization algorithms from online game playing algorithms, which may be of
independent interest.
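The framework in the last paragraph pits a primal player, taking gradient steps, against a dual player that reweights constraints. Below is a minimal sketch of that game dynamic for finding a point approximately satisfying convex constraints f_i(x) <= 0; the step sizes, update rules, and the feasibility formulation are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def approx_feasible(constraints, grads, x0, steps=300, eta=0.5):
    """Online-game sketch: a primal player takes gradient steps on a
    weighted sum of constraint functions f_i, while a dual player
    multiplicatively reweights constraints toward the most violated
    ones; the averaged primal iterate is approximately feasible."""
    x = np.array(x0, dtype=float)
    m = len(constraints)
    w = np.ones(m) / m                     # dual weights over constraints
    avg = np.zeros_like(x)
    for _ in range(steps):
        # primal step: gradient of sum_i w_i * f_i at the current x
        g = sum(wi * gi(x) for wi, gi in zip(w, grads))
        x = x - eta * g
        # dual step: boost the weights of violated constraints
        viol = np.array([f(x) for f in constraints])
        w = w * np.exp(eta * viol)
        w = w / w.sum()
        avg = avg + x
    return avg / steps
```

For instance, with the two constraints f_1(x) = x - 1 and f_2(x) = -x, the averaged iterate settles near the feasible interval [0, 1].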
Max-sum diversity via convex programming
Diversity maximization is an important concept in information retrieval,
computational geometry and operations research. Usually, it is a variant of the
following problem: Given a ground set, constraints, and a function $f$
that measures the diversity of a subset, the task is to select a feasible
subset $S$ such that $f(S)$ is maximized. The \emph{sum-dispersion} function, which is the sum of the pairwise distances in $S$, is
in this context a prominent diversification measure. The corresponding
diversity maximization is the \emph{max-sum} or \emph{sum-sum diversification}.
Many recent results deal with the design of constant-factor approximation
algorithms for diversification problems involving the sum-dispersion function under
a matroid constraint. In this paper, we present a PTAS for the max-sum
diversification problem under a matroid constraint for distances
of \emph{negative type}. Distances of negative type are, for
example, metric distances stemming from the $\ell_1$ and $\ell_2$ norms, as well
as the cosine, spherical, and Jaccard distances, which are popular similarity
metrics in web and image search.
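To make the objective concrete, here is the sum-dispersion function together with a plain greedy heuristic for the cardinality-constrained special case (a cardinality bound is the simplest matroid constraint); the greedy rule is a standard baseline for illustration only, not the PTAS of this paper:

```python
import numpy as np
from itertools import combinations

def sum_dispersion(points, subset):
    """disp(S): the sum of pairwise Euclidean distances inside subset S."""
    return sum(np.linalg.norm(points[i] - points[j])
               for i, j in combinations(subset, 2))

def greedy_max_sum(points, k):
    """Greedy baseline: seed with point 0, then repeatedly add the point
    whose total distance to the current selection is largest, until
    |S| = k."""
    chosen = [0]
    while len(chosen) < k:
        best, best_gain = None, -1.0
        for c in range(len(points)):
            if c in chosen:
                continue
            gain = sum(np.linalg.norm(points[c] - points[i]) for i in chosen)
            if gain > best_gain:
                best, best_gain = c, gain
        chosen.append(best)
    return sorted(chosen)
```

On a small instance the heuristic behaves as expected: among two nearly coincident points and two far-away ones, it keeps the far-apart triple.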
An SDP Approach For Solving Quadratic Fractional Programming Problems
This paper considers a fractional programming problem (P) which minimizes a
ratio of quadratic functions subject to a two-sided quadratic constraint. As is
well-known, the fractional objective function can be replaced by a parametric
family of quadratic functions, which makes (P) highly related to, but more
difficult than, a single quadratic programming problem subject to a similar
constraint set. The task is to find the optimal parameter and then
look for the optimal solution, provided it is attained. Contrasted with the
classical Dinkelbach method that iterates over the parameter, we propose a
suitable constraint qualification under which a new version of the S-lemma with
an equality can be proved, so as to compute the optimal parameter directly via an exact
SDP relaxation. When the constraint set of (P) degenerates to a
one-sided inequality, the same SDP approach can be applied to solve (P) {\it
without any condition}. We observe that the difference between a two-sided
problem and a one-sided problem lies in the fact that the S-lemma with an
equality does not have a natural Slater point to hold, which makes the former
essentially more difficult than the latter. Moreover, this work does not assume
the existence of a positive-definite linear combination of the quadratic terms
(also known as the dual Slater condition, or a positive-definite matrix
pencil); our result thus provides a novel extension to the so-called "hard
case" of the generalized trust region subproblem subject to the upper and the
lower level set of a quadratic function. Comment: 26 pages.
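The classical Dinkelbach method referenced above alternates between solving the parametric problem min f(x) - lam*g(x) and resetting lam to the ratio at the minimizer. A minimal sketch over a finite grid follows; the quadratic-over-linear ratio in the usage note is an illustrative choice, not an instance from the paper:

```python
import numpy as np

def dinkelbach(f, g, xs, tol=1e-9, max_iter=100):
    """Dinkelbach iteration for min f(x)/g(x) over a finite grid xs,
    assuming g > 0 on xs: minimize the parametric function
    f(x) - lam*g(x), update lam to the ratio at the minimizer, and
    stop once the parametric optimum reaches zero."""
    lam = f(xs[0]) / g(xs[0])
    for _ in range(max_iter):
        vals = f(xs) - lam * g(xs)
        i = int(np.argmin(vals))
        if vals[i] > -tol:
            break                      # parametric optimum is ~0: lam is optimal
        lam = f(xs[i]) / g(xs[i])
    return lam, xs[i]
```

For f(x) = x^2 + 1 and g(x) = x + 2 on the interval [0, 4], the iteration converges to the optimal ratio 2*sqrt(5) - 4 attained at x = sqrt(5) - 2.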
A new algorithm for generalized fractional programs
A new dual problem for convex generalized fractional programs with no duality gap is presented and it is shown how this dual problem can be efficiently solved using a parametric approach. The resulting algorithm can be seen as “dual” to the Dinkelbach-type algorithm for generalized fractional programs since it approximates the optimal objective value of the dual (primal) problem from below. Convergence results for this algorithm are derived and an easy condition to achieve superlinear convergence is also established. Moreover, under some additional assumptions the algorithm also recovers at the same time an optimal solution of the primal problem. We also consider a variant of this new algorithm, based on scaling the “dual” parametric function. The numerical results, in case of quadratic-linear ratios and linear constraints, show that the performance of the new algorithm and its scaled version is superior to that of the Dinkelbach-type algorithms. From the computational results it also appears that, contrary to the primal approach, the “dual” approach is less influenced by scaling.

Keywords: fractional programming; generalized fractional programming; Dinkelbach-type algorithms; quasiconvexity; Karush-Kuhn-Tucker conditions; duality
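For comparison, the primal Dinkelbach-type scheme that the “dual” algorithm above mirrors handles the generalized fractional program min over x of max_i f_i(x)/g_i(x), and approximates the optimal value from above. A grid-based sketch follows; the specific ratios in the test are illustrative assumptions:

```python
import numpy as np

def dinkelbach_type(fs, gs, xs, tol=1e-9, max_iter=100):
    """Dinkelbach-type iteration for the generalized fractional program
    min_x max_i f_i(x)/g_i(x) over a finite grid xs (all g_i > 0 on xs).
    The parameter lam decreases monotonically to the optimal value,
    i.e. the optimum is approached from above."""
    lam = max(f(xs[0]) / g(xs[0]) for f, g in zip(fs, gs))
    for _ in range(max_iter):
        # parametric problem: minimize max_i (f_i - lam * g_i)
        vals = np.max([f(xs) - lam * g(xs) for f, g in zip(fs, gs)], axis=0)
        i = int(np.argmin(vals))
        if vals[i] > -tol:
            break
        lam = max(f(xs[i]) / g(xs[i]) for f, g in zip(fs, gs))
    return lam, xs[i]
```

With the two ratios (x^2 + 1)/(x + 2) and (2 - x)/1 on [0, 4], the minimum of the pointwise maximum sits at their crossing x = sqrt(3/2), with value 2 - sqrt(3/2).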
MM Algorithms for Geometric and Signomial Programming
This paper derives new algorithms for signomial programming, a generalization
of geometric programming. The algorithms are based on a generic principle for
optimization called the MM algorithm. In this setting, one can apply the
geometric-arithmetic mean inequality and a supporting hyperplane inequality to
create a surrogate function with parameters separated. Thus, unconstrained
signomial programming reduces to a sequence of one-dimensional minimization
problems. Simple examples demonstrate that the MM algorithm derived can
converge to a boundary point or to one point of a continuum of minimum points.
Conditions under which the minimum point is unique or occurs in the interior of
parameter space are proved for geometric programming. Convergence to an
interior point occurs at a linear rate. Finally, the MM framework easily
accommodates equality and inequality constraints of signomial type. For the
most important special case, constrained quadratic programming, the MM
algorithm involves very simple updates. Comment: 16 pages, 1 figure.
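The geometric-arithmetic mean step that separates parameters can be illustrated on a toy posynomial f(x, y) = xy + 1/x + 1/y: the cross term is majorized by xy <= (ym/(2xm))x^2 + (xm/(2ym))y^2, with equality at the current iterate (xm, ym), so each MM update splits into two one-dimensional problems solved in closed form. This toy objective is an illustrative assumption, not one of the paper's examples:

```python
def mm_step(xm, ym):
    """One MM update for f(x, y) = x*y + 1/x + 1/y. The AM-GM
    majorization x*y <= (ym/(2*xm))*x**2 + (xm/(2*ym))*y**2 separates
    the variables; each 1-D surrogate, e.g. (ym/(2*xm))*x**2 + 1/x,
    is minimized in closed form at x = (xm/ym)**(1/3)."""
    x = (xm / ym) ** (1.0 / 3.0)
    y = (ym / xm) ** (1.0 / 3.0)
    return x, y

def mm_minimize(x0, y0, iters=60):
    """Iterate the MM update; by the descent property of MM, f decreases
    monotonically toward its minimum f(1, 1) = 3."""
    x, y = x0, y0
    for _ in range(iters):
        x, y = mm_step(x, y)
    return x, y
```

Starting from (4, 0.25), the iterates converge to the unique stationary point (1, 1), where the objective equals 3.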