Online algorithms for covering and packing problems with convex objectives
We present online algorithms for covering and packing problems with (non-linear) convex objectives. The convex covering problem is defined as ...
Fast Algorithms for Online Stochastic Convex Programming
We introduce the online stochastic Convex Programming (CP) problem, a very
general version of stochastic online problems which allows arbitrary concave
objectives and convex feasibility constraints. Many well-studied problems like
online stochastic packing and covering, online stochastic matching with concave
returns, etc. are special cases of online stochastic CP. We present fast
algorithms for these problems, which achieve near-optimal regret guarantees for
both the i.i.d. and the random permutation models of stochastic inputs. When
applied to the special case of online packing, our ideas yield a simpler and
faster primal-dual algorithm for this well-studied problem, which achieves the
optimal competitive ratio. Our techniques make explicit the connection of the
primal-dual paradigm and online learning to online stochastic CP.

Comment: To appear in SODA 201
Approximate Convex Optimization by Online Game Playing
Lagrangian relaxation and approximate optimization algorithms have received
much attention in the last two decades. Typically, the running time of these
methods to obtain an $\varepsilon$-approximate solution is proportional to
$1/\varepsilon^2$. Recently, Bienstock and Iyengar, following Nesterov,
gave an algorithm for fractional packing linear programs which runs in
$O^*(1/\varepsilon)$ iterations. The latter algorithm requires solving a
convex quadratic program in every iteration - an optimization subroutine which
dominates the theoretical running time.
We give an algorithm for convex programs with strictly convex constraints
which runs in time proportional to $1/\varepsilon$. The algorithm does NOT
require solving any quadratic program, but uses only gradient steps and
elementary operations. Problems which have strictly convex constraints include
maximum entropy frequency estimation, portfolio optimization with loss risk
constraints, and various computational problems in signal processing.
As a side product, we also obtain a simpler version of Bienstock and
Iyengar's result for general linear programming, with similar running time.
We derive these algorithms using a new framework for deriving convex
optimization algorithms from online game playing algorithms, which may be of
independent interest.
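The game-playing template the abstract describes can be sketched as a two-player loop: a constraint player runs multiplicative weights over the constraints while an $x$-player takes gradient steps on the weighted Lagrangian. This is a minimal, hedged illustration of the general pattern, not the paper's algorithm; the constraints, step sizes, and iteration count are assumptions.

```python
import math

def solve_min_max(fs, grads, x0, T=500, eta_x=0.1, eta_w=0.05):
    """Approximately minimize max_i fs[i](x) via online game playing (sketch).

    fs: constraint functions; grads: their gradients; x0: starting point.
    The constraint player reweights toward violated constraints; the x-player
    descends the weighted Lagrangian. Returns the averaged iterate.
    """
    m, n = len(fs), len(x0)
    x = list(x0)
    w = [1.0] * m                      # weights of the constraint player
    avg = [0.0] * n
    for _ in range(T):
        total = sum(w)
        p = [wi / total for wi in w]
        # gradient of the weighted Lagrangian sum_i p[i] * fs[i](x)
        g = [sum(p[i] * grads[i](x)[k] for i in range(m)) for k in range(n)]
        x = [x[k] - eta_x * g[k] for k in range(n)]
        for i in range(m):             # reward constraints with larger value
            w[i] *= math.exp(eta_w * min(1.0, max(-1.0, fs[i](x))))
        avg = [avg[k] + x[k] / T for k in range(n)]
    return avg
```

For instance, with the two strictly convex constraints $(x-1)^2$ and $(x+1)^2$, the averaged iterate approaches the min-max point $x = 0$.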
How the Experts Algorithm Can Help Solve LPs Online
We consider the problem of solving packing/covering LPs online, when the
columns of the constraint matrix are presented in random order. This problem
has received much attention and the main focus is to figure out how large the
right-hand sides of the LPs have to be (compared to the entries on the
left-hand side of the constraints) to allow $(1 \pm \varepsilon)$-approximations
online. It is known that the right-hand sides have to be
$\Omega(\varepsilon^{-2} \log m)$ times the left-hand sides, where $m$ is the
number of constraints.
In this paper we give a primal-dual algorithm that achieves this bound for
mixed packing/covering LPs. Our algorithms construct dual solutions using a
regret-minimizing online learning algorithm in a black-box fashion, and use
them to construct primal solutions. The adversarial guarantee that holds for
the constructed duals helps us to take care of most of the correlations that
arise in the algorithm; the remaining correlations are handled via martingale
concentration and maximal inequalities. These ideas lead to conceptually simple
and modular algorithms, which we hope will be useful in other contexts.

Comment: An extended abstract appears in the 22nd European Symposium on
Algorithms (ESA 2014).
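The regret-minimizing learner used as a black box above could plausibly be instantiated with the standard Hedge (multiplicative-weights) algorithm; the construction feeds it per-constraint losses and reads off dual candidates from its distributions. The class below and the loss sequence in the example are illustrative, not the paper's.

```python
import math

class Hedge:
    """Standard Hedge / multiplicative-weights learner over m experts."""

    def __init__(self, m, eta):
        self.w = [1.0] * m   # one weight per expert (here: per constraint)
        self.eta = eta

    def distribution(self):
        """Current probability distribution over experts."""
        s = sum(self.w)
        return [wi / s for wi in self.w]

    def update(self, losses):
        """Exponentially down-weight experts by their losses in [0, 1]."""
        self.w = [wi * math.exp(-self.eta * l)
                  for wi, l in zip(self.w, losses)]
```

Fed a sequence of losses, the distribution concentrates on the low-loss expert, and the adversarial (worst-case) regret guarantee is what lets such duals absorb most correlations in an online analysis.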
Learning to Approximate a Bregman Divergence
Bregman divergences generalize measures such as the squared Euclidean
distance and the KL divergence, and arise throughout many areas of machine
learning. In this paper, we focus on the problem of approximating an arbitrary
Bregman divergence from supervision, and we provide a well-principled approach
to analyzing such approximations. We develop a formulation and algorithm for
learning arbitrary Bregman divergences based on approximating their underlying
convex generating function via a piecewise linear function. We provide
theoretical approximation bounds using our parameterization and show that the
generalization error for metric learning using our framework
matches the known generalization error in the strictly less general Mahalanobis
metric learning setting. We further demonstrate empirically that our method
performs well in comparison to existing metric learning methods, particularly
for clustering and ranking problems.

Comment: 19 pages, 4 figures
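The parameterization described above, approximating the convex generating function by a piecewise-linear (max-affine) surrogate, can be sketched as follows. The affine pieces here are hand-picked tangents to $x^2$ for illustration; the paper learns them from supervision.

```python
def max_affine(x, pieces):
    """phi(x) = max_k (a_k . x + b_k); returns (value, slope of active piece)."""
    best_val, best_a = None, None
    for a, b in pieces:
        v = sum(ai * xi for ai, xi in zip(a, x)) + b
        if best_val is None or v > best_val:
            best_val, best_a = v, a
    return best_val, best_a

def bregman(x, y, pieces):
    """Bregman divergence induced by the max-affine surrogate:
    D(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>,
    using the active piece's slope as a subgradient at y."""
    phi_x, _ = max_affine(x, pieces)
    phi_y, g = max_affine(y, pieces)
    return phi_x - phi_y - sum(gi * (xi - yi) for gi, xi, yi in zip(g, x, y))

# Illustrative pieces: tangents to phi(x) = x^2 at x = -1, 0, 1.
pieces = [([-2.0], -1.0), ([0.0], 0.0), ([2.0], -1.0)]
```

Because the surrogate is convex, the resulting divergence is nonnegative and vanishes at $x = y$, matching the defining properties of a Bregman divergence.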