Approximate Convex Optimization by Online Game Playing
Lagrangian relaxation and approximate optimization algorithms have received
much attention in the last two decades. Typically, the running time of these
methods to obtain an ε-approximate solution is proportional to
1/ε². Recently, Bienstock and Iyengar, following Nesterov,
gave an algorithm for fractional packing linear programs which runs in
O(1/ε) iterations. The latter algorithm requires solving a
convex quadratic program in every iteration - an optimization subroutine which
dominates the theoretical running time.
We give an algorithm for convex programs with strictly convex constraints
which runs in time proportional to 1/ε. The algorithm does NOT
require solving any quadratic program, but uses only gradient steps and
elementary operations. Problems which have strictly convex constraints include
maximum entropy frequency estimation, portfolio optimization with loss risk
constraints, and various computational problems in signal processing.
As a side product, we also obtain a simpler version of Bienstock and
Iyengar's result for general linear programming, with a similar running time.
We derive these algorithms using a new framework for deriving convex
optimization algorithms from online game playing algorithms, which may be of
independent interest.
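The game-playing view of constrained optimization can be sketched in a toy form: a dual player runs multiplicative weights over the constraints while a primal player takes projected gradient steps against the weighted constraint, and the averaged iterate approximately satisfies all constraints. The instance, constants, and step sizes below are purely illustrative, not the paper's algorithm.

```python
import numpy as np

# Hypothetical toy feasibility problem: find x in [0,1]^2 with g_i(x) <= 0.
def g1(x): return x[0] ** 2 + x[1] ** 2 - 1.0   # inside the unit disc
def g2(x): return 0.5 - x[0] - x[1]             # on or above the line x + y = 0.5

def grad_g1(x): return np.array([2.0 * x[0], 2.0 * x[1]])
def grad_g2(x): return np.array([-1.0, -1.0])

def online_game_solver(T=2000, eta_x=0.05, eta_p=0.05):
    x = np.zeros(2)
    p = np.array([0.5, 0.5])  # dual player's weights over the two constraints
    avg = np.zeros(2)
    for _ in range(T):
        # Dual player: multiplicative-weights update on observed violations.
        losses = np.array([g1(x), g2(x)])
        p = p * np.exp(eta_p * losses)
        p /= p.sum()
        # Primal player: projected gradient step on the weighted constraint.
        grad = p[0] * grad_g1(x) + p[1] * grad_g2(x)
        x = np.clip(x - eta_x * grad, 0.0, 1.0)
        avg += x
    return avg / T  # averaged iterate is approximately feasible

x_bar = online_game_solver()
```

Note that the primal side uses only gradient steps and clipping, matching the abstract's point that no quadratic-program subroutine is needed per iteration.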
On the Minimization of Convex Functionals of Probability Distributions Under Band Constraints
The problem of minimizing convex functionals of probability distributions is
solved under the assumption that the density of every distribution is bounded
from above and below. A system of sufficient and necessary first-order
optimality conditions as well as a bound on the optimality gap of feasible
candidate solutions are derived. Based on these results, two numerical
algorithms are proposed that iteratively solve the system of optimality
conditions on a grid of discrete points. Both algorithms use a block coordinate
descent strategy and terminate once the optimality gap falls below the desired
tolerance. While the first algorithm is conceptually simpler and more
efficient, it is not guaranteed to converge for objective functions that are
not strictly convex. This shortcoming is overcome in the second algorithm,
which uses an additional outer proximal iteration and is proven to
converge under mild assumptions. Two examples are given to demonstrate the
theoretical usefulness of the optimality conditions as well as the high
efficiency and accuracy of the proposed numerical algorithms.
Comment: 13 pages, 5 figures, 2 tables, published in the IEEE Transactions on
Signal Processing. In previous versions, the example in Section VI.B
contained some mistakes and inaccuracies, which have been fixed in this
version.
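For a strictly convex separable objective, the band-constrained first-order conditions have a clipped-level structure that can be solved directly. The sketch below is not the paper's algorithm; it illustrates the idea on the hypothetical toy objective Σᵢ pᵢ², whose KKT conditions force pᵢ = clip(c, lᵢ, uᵢ) for a common level c located by bisection on the mass constraint Σᵢ pᵢ = 1 (bounds l, u are made-up example data).

```python
import numpy as np

def band_constrained_min(l, u, tol=1e-10):
    """Minimize sum(p**2) subject to l <= p <= u and sum(p) == 1.

    KKT conditions give p_i = clip(c, l_i, u_i); the mass sum(clip(c, l, u))
    is nondecreasing in c, so bisection recovers the level c.
    """
    lo, hi = l.min(), u.max()
    while hi - lo > tol:
        c = 0.5 * (lo + hi)
        if np.clip(c, l, u).sum() < 1.0:
            lo = c  # too little mass: raise the level
        else:
            hi = c  # too much mass: lower the level
    return np.clip(0.5 * (lo + hi), l, u)

# Illustrative band constraints (feasible: sum(l) <= 1 <= sum(u)).
l = np.array([0.05, 0.05, 0.05, 0.30])
u = np.array([0.40, 0.20, 0.30, 0.50])
p = band_constrained_min(l, u)
```

Coordinates pinned at their bounds stay there, while the free coordinates share a common level, which is the same qualitative structure the abstract's optimality conditions describe for general convex functionals.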