366 research outputs found
On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms
We give a lower bound on the iteration complexity of a natural class of
Lagrangean-relaxation algorithms for approximately solving packing/covering
linear programs. We show that, given an input with m random 0/1-constraints
on n variables, with high probability, any such algorithm requires
Ω(ρ log(m)/ε²) iterations to compute a
(1 + ε)-approximate solution, where ρ is the width of the input.
The bound is tight for a range of the parameters (m, n, ρ, ε).
The algorithms in the class include Dantzig-Wolfe decomposition, Benders'
decomposition, Lagrangean relaxation as developed by Held and Karp [1971] for
lower-bounding TSP, and many others (e.g. by Plotkin, Shmoys, and Tardos [1988]
and Grigoriadis and Khachiyan [1996]). To prove the bound, we use a discrepancy
argument to show an analogous lower bound on the support size of
ε-approximate mixed strategies for random two-player zero-sum
0/1-matrix games.
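To make the class of algorithms concrete, here is a minimal sketch of a generic multiplicative-weights solver for a zero-sum 0/1-matrix game in which each iteration makes exactly one best-response oracle call, the access pattern shared by Dantzig-Wolfe-style methods. The function name `mwu_game` and the toy instance are my own illustration, not the paper's construction; note that the support size of the returned mixed strategy is bounded by the number of iterations, which is the quantity the abstract's discrepancy argument lower-bounds.

```python
import numpy as np

def mwu_game(A, eps, T):
    """Approximately solve the zero-sum 0/1-matrix game via multiplicative
    weights; each round makes one best-response oracle call (a column
    argmax), as in Dantzig-Wolfe-style Lagrangean-relaxation algorithms."""
    m, n = A.shape
    w = np.ones(m)                     # row player's weights
    cols = []
    for _ in range(T):
        p = w / w.sum()                # current mixed strategy over rows
        j = int(np.argmax(p @ A))      # oracle: column best response to p
        cols.append(j)
        w *= (1.0 - eps) ** A[:, j]    # shrink rows that incur loss 1
    # the empirical distribution of oracle responses is an approximate
    # column strategy; its support size is at most T
    return np.bincount(cols, minlength=n) / T

# toy instance: 20 random 0/1 constraints over 30 columns
A = (np.random.default_rng(0).random((20, 30)) < 0.5).astype(float)
y = mwu_game(A, eps=0.1, T=200)
```

The quality of `y` improves with `T`; the abstract's result says that for random 0/1 inputs no algorithm with this oracle access pattern can get away with asymptotically fewer iterations.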
The Computational Power of Optimization in Online Learning
We consider the fundamental problem of prediction with expert advice where
the experts are "optimizable": there is a black-box optimization oracle that
can be used to compute, in constant time, the leading expert in retrospect at
any point in time. In this setting, we give a novel online algorithm that
attains vanishing regret with respect to N experts in total Õ(√N)
computation time. We also give a lower bound showing
that this running time cannot be improved (up to log factors) in the oracle
model, thereby exhibiting a quadratic speedup as compared to the standard,
oracle-free setting, where the required time for vanishing regret is
Θ̃(N). These results demonstrate an exponential gap between
the power of optimization in online learning and its power in statistical
learning: in the latter, an optimization oracle---i.e., an efficient empirical
risk minimizer---allows one to learn a finite hypothesis class of size N in
time O(log N). We also study the implications of our results for learning in
repeated zero-sum games, in a setting where the players have access to oracles
that compute, in constant time, their best-response to any mixed strategy of
their opponent. We show that the runtime required for approximating the minimax
value of the game in this setting is Θ̃(√(N + M)), yielding
again a quadratic improvement upon the oracle-free setting, where
Θ(N·M) is known to be tight.
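The oracle model can be illustrated with a toy simulation. The sketch below implements plain follow-the-leader using a constant-time leader oracle; it is my own illustration of the oracle interface only, not the paper's Õ(√N)-time algorithm, and follow-the-leader by itself does not guarantee vanishing regret against adaptive adversaries. All names (`leader_oracle`, the i.i.d. loss stream) are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 8, 200                      # experts and rounds (toy sizes)
losses = rng.random((T, N))        # per-round losses in [0, 1]

cum = np.zeros(N)                  # cumulative losses seen so far

def leader_oracle():
    # stand-in for the black-box oracle: returns the leading expert in
    # retrospect; the model charges O(1) time for this call
    return int(np.argmin(cum))

alg_loss = 0.0
for t in range(T):
    i = leader_oracle()            # follow the current leader
    alg_loss += losses[t, i]
    cum += losses[t]

best_loss = losses.sum(axis=0).min()
regret = alg_loss - best_loss      # small for i.i.d. losses
```

The point of the abstract is the per-round computation: an oracle-free algorithm must touch all N experts, whereas in the oracle model the total work for vanishing regret drops quadratically, to Õ(√N).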
- …