Information-based complexity, feedback and dynamics in convex programming
We study the intrinsic limitations of sequential convex optimization through
the lens of feedback information theory. In the oracle model of optimization,
an algorithm queries an {\em oracle} for noisy information about the unknown
objective function, and the goal is to (approximately) minimize every function
in a given class using as few queries as possible. We show that, in order for a
function to be optimized, the algorithm must be able to accumulate enough
information about the objective. This, in turn, puts limits on the speed of
optimization under specific assumptions on the oracle and the type of feedback.
Our techniques are akin to those used in the statistical literature to obtain
minimax lower bounds on the risks of estimation procedures; the notable
difference is that, unlike in the case of i.i.d. data, a sequential
optimization algorithm can gather observations in a {\em controlled} manner, so
that the amount of information at each step is allowed to change in time. In
particular, we show that optimization algorithms often obey the law of
diminishing returns: the signal-to-noise ratio drops as the optimization
algorithm approaches the optimum. To underscore the generality of the tools, we
use our approach to derive fundamental lower bounds for a certain active
learning problem. Overall, the present work connects the intuitive notions of
information in optimization, experimental design, estimation, and active
learning to the quantitative notion of Shannon information. (Comment: final version; to appear in IEEE Transactions on Information Theory.)
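To make the oracle model above concrete, here is a minimal, purely illustrative sketch (not the paper's construction): noisy gradient descent on a simple quadratic with a Gaussian first-order oracle. The objective, noise level, and step size are all assumptions chosen for illustration; the point is only the shrinking signal-to-noise ratio as the iterate nears the optimum.

```python
import numpy as np

# Illustrative only: noisy gradient descent on f(x) = 0.5 * ||x||^2 with a
# Gaussian first-order oracle. Objective, noise level and step size are assumed.
rng = np.random.default_rng(0)
sigma = 1.0            # oracle noise level (assumed)
x = np.full(10, 5.0)   # starting point (assumed)

for t in range(1, 201):
    true_grad = x.copy()                         # gradient of 0.5*||x||^2 is x
    noisy_grad = true_grad + sigma * rng.standard_normal(x.shape)
    x = x - (1.0 / t) * noisy_grad               # standard 1/t step size
    if t % 50 == 0:
        snr = np.linalg.norm(true_grad) / sigma  # "signal" relative to oracle noise
        print(f"t={t:3d}  f(x)={0.5 * x @ x:.4f}  grad-SNR={snr:.3f}")
```

As the iterate approaches the minimizer, the true gradient shrinks while the oracle noise does not, which is the law of diminishing returns described in the abstract.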
No Internal Regret via Neighborhood Watch
We present an algorithm which attains O(\sqrt{T}) internal (and thus
external) regret for finite games with partial monitoring under the local
observability condition. Recently, this condition has been shown by Bartok,
Pal, and Szepesvari (2011) to imply the O(\sqrt{T}) rate for partial-monitoring
games against an i.i.d. opponent, and the authors conjectured that the same
holds for non-stochastic adversaries. Our result settles this conjecture in the affirmative and
completes the characterization of possible rates for finite partial-monitoring
games, an open question stated by Cesa-Bianchi, Lugosi, and Stoltz (2006). Our
regret guarantees also hold for the more general model of partial monitoring
with random signals.
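For readers who want the two regret notions spelled out, the following toy sketch (made-up losses and an arbitrary play sequence, not the algorithm of the paper) computes external regret against the best fixed action and internal regret against the best single action-for-action swap.

```python
import numpy as np

# Toy illustration of external vs. internal regret (made-up data).
rng = np.random.default_rng(1)
T, K = 1000, 4
losses = rng.random((T, K))          # losses[t, a] = loss of action a at round t
plays = rng.integers(0, K, size=T)   # an arbitrary sequence of plays

incurred = losses[np.arange(T), plays].sum()

# External regret: compare with the best fixed action in hindsight.
external = incurred - losses.sum(axis=0).min()

# Internal regret: best gain from retroactively replacing every play of i by j.
internal = 0.0
for i in range(K):
    rounds_i = plays == i
    for j in range(K):
        if j != i:
            gain = (losses[rounds_i, i] - losses[rounds_i, j]).sum()
            internal = max(internal, gain)

print(f"external regret = {external:.2f}, internal regret = {internal:.2f}")
```

Low internal regret implies low external regret, which is the "(and thus external)" in the abstract.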
Optimization, Learning, and Games with Predictable Sequences
We provide several applications of Optimistic Mirror Descent, an online
learning algorithm based on the idea of predictable sequences. First, we
recover the Mirror Prox algorithm for offline optimization, prove an extension
to Hölder-smooth functions, and apply the results to saddle-point-type
problems. Next, we prove that a version of Optimistic Mirror Descent (which has
a close relation to the Exponential Weights algorithm) can be used by two
strongly-uncoupled players in a finite zero-sum matrix game to converge to the
minimax equilibrium at the rate of O((log T)/T). This addresses a question of
Daskalakis et al. (2011). Further, we consider a partial-information version of
the problem. We then apply the results to convex programming and exhibit a
simple algorithm for the approximate Max Flow problem.
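As a rough illustration of the self-play result, here is a minimal sketch, assuming a random payoff matrix, a fixed step size, and a fixed horizon (all illustrative, not the tuning from the paper): both players run Optimistic Hedge, i.e. Optimistic Mirror Descent with the entropic regularizer where the prediction is the last observed loss vector, and their averaged strategies approach a minimax pair.

```python
import numpy as np

# Toy sketch: Optimistic Hedge in self-play on a random zero-sum matrix game.
# The game, step size and horizon are assumed for illustration.
rng = np.random.default_rng(2)
A = rng.random((3, 3))   # row player pays A[i, j] to the column player
eta, T = 0.5, 5000

cum_x = np.zeros(3); pred_x = np.zeros(3)   # row player's cumulative / predicted loss
cum_y = np.zeros(3); pred_y = np.zeros(3)   # column player's cumulative / predicted loss
avg_x = np.zeros(3); avg_y = np.zeros(3)

def optimistic_hedge(cum_loss, predicted_loss):
    w = np.exp(-eta * (cum_loss + predicted_loss))
    return w / w.sum()

for _ in range(T):
    x = optimistic_hedge(cum_x, pred_x)   # row player's mixed strategy (minimizer)
    y = optimistic_hedge(cum_y, pred_y)   # column player's mixed strategy (maximizer)
    loss_x = A @ y                        # losses observed by the row player
    loss_y = -A.T @ x                     # losses observed by the column player
    cum_x += loss_x; cum_y += loss_y
    pred_x, pred_y = loss_x, loss_y       # optimism: assume the last loss repeats
    avg_x += x / T; avg_y += y / T

print("value of the averaged strategy pair:", avg_x @ A @ avg_y)
```

The value of the averaged pair should be close to the minimax value of A; the (log T)/T rate itself depends on the specific step-size choice analyzed in the paper.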
Hierarchies of Relaxations for Online Prediction Problems with Evolving Constraints
We study online prediction where the regret of the algorithm is measured against
a benchmark defined via evolving constraints. This framework captures online
prediction on graphs, as well as other prediction problems with combinatorial
structure. A key aspect here is that finding the optimal benchmark predictor
(even in hindsight, given all the data) might be computationally hard due to
the combinatorial nature of the constraints. Despite this, we provide
polynomial-time \emph{prediction} algorithms that achieve low regret against
combinatorial benchmark sets. We do so by building improper learning algorithms
based on two ideas that work together. The first is to alleviate part of the
computational burden through random playout, and the second is to employ
Lasserre semidefinite hierarchies to approximate the resulting integer program.
Interestingly, for our prediction algorithms, we only need to compute the
values of the semidefinite programs and not the rounded solutions. However, the
integrality gap for the Lasserre hierarchy \emph{does} enter the generic regret
bound in terms of Rademacher complexity of the benchmark set. This establishes
a trade-off between the computation time and the regret bound of the algorithm.
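To convey the random-playout idea in isolation (leaving the Lasserre relaxation aside entirely), here is a toy sketch with a tiny hand-made benchmark set: to predict round t, the learner completes the unseen future with random labels, finds the benchmark that is best on the completed sequence, and follows it. The labels, benchmark predictors, and horizon are all assumptions for illustration, not the paper's construction.

```python
import numpy as np

# Toy sketch of random playout only (no semidefinite relaxation).
rng = np.random.default_rng(3)
T = 200
y = (np.arange(T) % 3 == 0).astype(int)   # true binary labels (made up)

benchmarks = np.array([                   # each row is one fixed benchmark predictor
    np.zeros(T, dtype=int),               # always predict 0
    np.ones(T, dtype=int),                # always predict 1
    np.arange(T) % 2,                     # alternate 0, 1, 0, 1, ...
])

mistakes = 0
for t in range(T):
    playout = rng.integers(0, 2, size=T - t)        # random guess for rounds t..T-1
    completed = np.concatenate([y[:t], playout])    # observed prefix + random future
    best = benchmarks[np.argmin((benchmarks != completed).sum(axis=1))]
    mistakes += int(best[t] != y[t])                # follow that benchmark at round t

best_in_hindsight = (benchmarks != y).sum(axis=1).min()
print(f"algorithm mistakes: {mistakes}, best benchmark in hindsight: {best_in_hindsight}")
```

In the paper the benchmark set is combinatorial, so the inner minimization is not solved exactly; its value is approximated through the Lasserre semidefinite relaxation, as described above.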
- …