A Contextual Bandit Bake-off
Contextual bandit algorithms are essential for solving many real-world
interactive machine learning problems. Despite multiple recent successes in developing
statistically and computationally efficient methods, the practical behavior of
these algorithms is still poorly understood. We leverage the availability of
large numbers of supervised learning datasets to empirically evaluate
contextual bandit algorithms, focusing on practical methods that learn by
relying on optimization oracles from supervised learning. We find that a recent
method (Foster et al., 2018) using optimism under uncertainty works the best
overall. A surprisingly close second is a simple greedy baseline that only
explores implicitly through the diversity of contexts, followed by a variant of
Online Cover (Agarwal et al., 2014) which tends to be more conservative but
robust to problem specification by design. Along the way, we also evaluate
various components of contextual bandit algorithm design such as loss
estimators. Overall, this is a thorough study and review of contextual bandit
methodology.
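As a concrete illustration of the oracle-based approach evaluated in this paper, here is a minimal sketch of the greedy baseline: one supervised regression oracle per arm, with no explicit exploration. This is a sketch under assumptions, not the paper's code; the class and method names are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

class GreedyBandit:
    """Greedy contextual bandit: one regression oracle per arm;
    always plays the arm with the lowest predicted loss.
    Contexts are assumed to be 1-D numpy arrays."""

    def __init__(self, n_arms, n_features):
        self.oracles = [SGDRegressor() for _ in range(n_arms)]
        # Warm-start each oracle so predict() is usable before any feedback.
        for oracle in self.oracles:
            oracle.partial_fit(np.zeros((1, n_features)), [0.0])

    def act(self, x):
        # No explicit exploration: the algorithm explores only implicitly,
        # through the diversity of the contexts it observes.
        preds = [o.predict(x.reshape(1, -1))[0] for o in self.oracles]
        return int(np.argmin(preds))

    def update(self, x, arm, loss):
        # Bandit feedback: only the oracle of the played arm sees its loss.
        self.oracles[arm].partial_fit(x.reshape(1, -1), [loss])
```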
Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
Multi-armed bandit problems are the most basic examples of sequential
decision problems with an exploration-exploitation trade-off. This is the
balance between staying with the option that gave the highest payoffs in the past
and exploring new options that might give higher payoffs in the future.
Although the study of bandit problems dates back to the 1930s,
exploration-exploitation trade-offs arise in several modern applications, such
as ad placement, website optimization, and packet routing. Mathematically, a
multi-armed bandit is defined by the payoff process associated with each
option. In this survey, we focus on two extreme cases in which the analysis of
regret is particularly simple and elegant: i.i.d. payoffs and adversarial
payoffs. Besides the basic setting of finitely many actions, we also analyze
some of the most important variants and extensions, such as the contextual
bandit model.
Comment: To appear in Foundations and Trends in Machine Learning.
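To make the stochastic regime concrete, here is a minimal UCB1 sketch for the i.i.d. case surveyed above (the adversarial case would call for EXP3 instead). The `pull` callback standing in for the environment is purely illustrative.

```python
import math

def ucb1(n_arms, pull, horizon):
    """Run UCB1 for `horizon` rounds; pull(arm) -> payoff in [0, 1].
    Returns the empirical mean payoff of each arm."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: play each arm once
        else:
            # Optimism in the face of uncertainty: empirical mean
            # plus a confidence radius that shrinks as pulls accumulate.
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                                    + math.sqrt(2 * math.log(t) / counts[a]))
        payoff = pull(arm)
        counts[arm] += 1
        sums[arm] += payoff
    return [s / c for s, c in zip(sums, counts)]
```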
RELEAF: An Algorithm for Learning and Exploiting Relevance
Recommender systems, medical diagnosis, network security, etc., require
ongoing learning and decision-making in real time. These -- and many others --
represent perfect examples of the opportunities and difficulties presented by
Big Data: the available information often arrives from a variety of sources and
has diverse features so that learning from all the sources may be valuable but
integrating what is learned is subject to the curse of dimensionality. This
paper develops and analyzes algorithms that allow efficient learning and
decision-making while avoiding the curse of dimensionality. We formalize the
information available to the learner/decision-maker at a particular time as a
context vector which the learner should consider when taking actions. In
general the context vector is very high dimensional, but in many settings, the
most relevant information is embedded into only a few relevant dimensions. If
these relevant dimensions were known in advance, the problem would be simple --
but they are not. Moreover, the relevant dimensions may be different for
different actions. Our algorithm learns the relevant dimensions for each
action, and makes decisions based on what it has learned. Formally, we build on
the structure of a contextual multi-armed bandit by adding and exploiting a
relevance relation. We prove a general regret bound for our algorithm whose
time order depends only on the maximum number of relevant dimensions among all
the actions, which in the special case where the relevance relation is
single-valued (a function), reduces to $\tilde{O}(T^{2/3})$; in the
absence of a relevance relation, the best known contextual bandit algorithms
achieve regret $\tilde{O}(T^{(D+1)/(D+2)})$, where $D$ is the full dimension of
the context vector.
Comment: to appear in IEEE Journal of Selected Topics in Signal Processing, 2015.
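To see why exploiting relevance matters, the following toy computation compares the regret exponents quoted above for the full context dimension versus a single relevant dimension; the dimensionalities chosen are hypothetical.

```python
def regret_exponent(d):
    # Regret of order T^{(d+1)/(d+2)} when learning over d dimensions.
    return (d + 1) / (d + 2)

D, d_relevant = 50, 1  # hypothetical full vs. relevant dimensionality
print(f"exponent with full context:    {regret_exponent(D):.3f}")          # ~0.981
print(f"exponent with relevance (d=1): {regret_exponent(d_relevant):.3f}") # ~0.667
```

With a single relevant dimension, regret grows like $\tilde{O}(T^{2/3})$; over the full 50-dimensional context it is nearly linear in $T$.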
Counterfactual Risk Minimization: Learning from Logged Bandit Feedback
We develop a learning principle and an efficient algorithm for batch learning
from logged bandit feedback. This learning setting is ubiquitous in online
systems (e.g., ad placement, web search, recommendation), where an algorithm
makes a prediction (e.g., ad ranking) for a given input (e.g., query) and
observes bandit feedback (e.g., user clicks on presented ads). We first address
the counterfactual nature of the learning problem through propensity scoring.
Next, we prove generalization error bounds that account for the variance of the
propensity-weighted empirical risk estimator. These constructive bounds give
rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM
can be used to derive a new learning method -- called Policy Optimizer for
Exponential Models (POEM) -- for learning stochastic linear rules for
structured output prediction. We present a decomposition of the POEM objective
that enables efficient stochastic gradient optimization. POEM is evaluated on
several multi-label classification problems showing substantially improved
robustness and generalization performance compared to the state-of-the-art.
Comment: 10 pages.
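As a sketch of the estimator underlying CRM, the following shows the propensity-weighted (IPS) risk estimate with an empirical-variance penalty. This is a minimal illustration, not the paper's method: `lambda_` is a hypothetical trade-off constant, and POEM optimizes a specific majorization of such an objective rather than this literal form.

```python
import numpy as np

def crm_objective(losses, new_probs, logged_probs, lambda_=0.1):
    """losses: bandit losses observed for the logged (input, prediction) pairs;
    new_probs: probabilities of those predictions under the candidate policy;
    logged_probs: propensities under the logging policy. All numpy arrays."""
    weights = new_probs / logged_probs        # importance (propensity) weights
    terms = losses * weights                  # per-example IPS contributions
    risk = terms.mean()                       # counterfactual risk estimate
    variance_penalty = np.sqrt(terms.var(ddof=1) / len(terms))
    return risk + lambda_ * variance_penalty  # CRM-style penalized objective
```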