Safe Exploration for Optimizing Contextual Bandits
Contextual bandit problems are a natural fit for many information retrieval
tasks, such as learning to rank, text classification, recommendation, etc.
However, existing learning methods for contextual bandit problems have one of
two drawbacks: they either do not explore the space of all possible document
rankings (i.e., actions) and, thus, may miss the optimal ranking, or they
present suboptimal rankings to a user and, thus, may harm the user experience.
We introduce a new learning method for contextual bandit problems, Safe
Exploration Algorithm (SEA), which overcomes the above drawbacks. SEA starts by
using a baseline (or production) ranking system (i.e., policy), which does not
harm the user experience and, thus, is safe to execute, but has suboptimal
performance and, thus, needs to be improved. Then SEA uses counterfactual
learning to learn a new policy based on the behavior of the baseline policy.
SEA also uses high-confidence off-policy evaluation to estimate the performance
of the newly learned policy. Once the performance of the newly learned policy
is at least as good as the performance of the baseline policy, SEA starts using
the new policy to execute new actions, allowing it to actively explore
favorable regions of the action space. This way, SEA never performs worse than
the baseline policy and, thus, does not harm the user experience, while still
exploring the action space and, thus, being able to find an optimal policy. Our
experiments using text classification and document retrieval confirm the above
by comparing SEA (and a boundless variant called BSEA) to online and offline
learning methods for contextual bandit problems.
Comment: 23 pages, 3 figures
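As a rough illustration of the safe-exploration loop the abstract describes, the following sketch pairs an inverse-propensity-scored value estimate with a Hoeffding-style lower confidence bound; the policy interface, the reward_fn callback, and the choice of bound are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def sea_round(x, baseline, learned, log, reward_fn, delta=0.05):
        # One simplified round of safe exploration.
        # baseline / learned: policies with .prob(x, a) and .sample(x) -> (a, p).
        # log: list of (context, action, propensity, reward) from past rounds.
        if log:
            # Off-policy value estimate of the learned policy via inverse
            # propensity scoring (IPS) over the logged interactions.
            ips = np.array([r * learned.prob(c, a) / p for c, a, p, r in log])
            n = len(ips)
            # Hoeffding-style lower confidence bound; the paper allows any
            # high-confidence off-policy evaluator (this choice is an assumption).
            lcb = ips.mean() - np.sqrt(np.log(1.0 / delta) / (2.0 * n))
            baseline_value = np.mean([r for _, _, _, r in log])
        else:
            lcb, baseline_value = -np.inf, 0.0
        # Act with the learned policy only once its pessimistic estimate is at
        # least the baseline's value; otherwise stay safe with the baseline.
        actor = learned if lcb >= baseline_value else baseline
        a, p = actor.sample(x)
        r = reward_fn(x, a)
        log.append((x, a, p, r))
        return a, r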
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants.
Comment: In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
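For concreteness, a contextual-bandit policy-gradient update of the general kind described above might look like the following sketch; the policy network, optimizer, and shaped_reward_fn (task reward plus shaping terms) are illustrative assumptions rather than the paper's model.

    import torch

    def bandit_update(policy, optimizer, obs, shaped_reward_fn):
        # Contextual bandit setting: sample a single action for the current
        # observation, observe an immediate shaped reward, and apply a
        # REINFORCE-style gradient step (no multi-step credit assignment).
        logits = policy(obs)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        reward = shaped_reward_fn(obs, action.item())
        loss = -dist.log_prob(action) * reward
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return action.item(), reward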
Counterfactual Risk Minimization: Learning from Logged Bandit Feedback
We develop a learning principle and an efficient algorithm for batch learning
from logged bandit feedback. This learning setting is ubiquitous in online
systems (e.g., ad placement, web search, recommendation), where an algorithm
makes a prediction (e.g., ad ranking) for a given input (e.g., query) and
observes bandit feedback (e.g., user clicks on presented ads). We first address
the counterfactual nature of the learning problem through propensity scoring.
Next, we prove generalization error bounds that account for the variance of the
propensity-weighted empirical risk estimator. These constructive bounds give
rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM
can be used to derive a new learning method -- called Policy Optimizer for
Exponential Models (POEM) -- for learning stochastic linear rules for
structured output prediction. We present a decomposition of the POEM objective
that enables efficient stochastic gradient optimization. POEM is evaluated on
several multi-label classification problems showing substantially improved
robustness and generalization performance compared to the state-of-the-art.
Comment: 10 pages
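A minimal sketch of the variance-regularized counterfactual objective the abstract outlines, assuming clipped importance weights; the regularization strength lam and the clip value are illustrative hyperparameters, not those of POEM.

    import numpy as np

    def crm_objective(losses, new_probs, logged_probs, lam=1.0, clip=100.0):
        # Counterfactual Risk Minimization-style objective:
        #   empirical risk + lam * sqrt(empirical variance / n)
        # losses: delta(x_i, y_i) observed for the logged predictions y_i
        # new_probs / logged_probs: pi(y_i | x_i) and pi_0(y_i | x_i)
        w = np.minimum(new_probs / logged_probs, clip)   # clipped IPS weights
        u = losses * w                                   # per-example weighted loss
        risk = u.mean()                                  # propensity-weighted risk
        penalty = lam * np.sqrt(u.var(ddof=1) / len(u))  # variance regularizer
        return risk + penalty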
Better safe than sorry: Risky function exploitation through safe optimization
Exploration-exploitation of functions, that is, learning and optimizing a
mapping between inputs and expected outputs, is ubiquitous in many real-world
situations. These situations sometimes require us to avoid certain outcomes at
all costs, for example because they are poisonous, harmful, or otherwise
dangerous. We test participants' behavior in scenarios in which they have to
find the optimum of a function while avoiding outputs below a certain
threshold. In two experiments, we find that Safe-Optimization, a Gaussian
Process-based exploration-exploitation algorithm, describes participants'
behavior well, and that participants seem to first check whether a point is
safe and then try to pick the optimal point from among all such safe points.
This means that their trade-off between exploration and exploitation can be
seen as an intelligent, approximate, and homeostasis-driven strategy.
Comment: 6 pages, submitted to the Cognitive Science Conference
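The selection rule the abstract attributes to participants (first restrict to safe points, then optimize among them) can be sketched with a Gaussian Process surrogate as follows; the confidence scaling beta and the default kernel are illustrative assumptions, not the exact Safe-Optimization algorithm.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def safe_opt_step(X_seen, y_seen, candidates, threshold, beta=2.0):
        # Fit a GP to the observations, keep only candidates whose pessimistic
        # estimate (mean - beta * std) clears the safety threshold, then pick
        # the safe candidate with the highest optimistic estimate.
        gp = GaussianProcessRegressor().fit(X_seen, y_seen)
        mu, sigma = gp.predict(candidates, return_std=True)
        safe = mu - beta * sigma >= threshold
        if not safe.any():
            return None  # no point is provably safe; do not explore
        ucb = np.where(safe, mu + beta * sigma, -np.inf)
        return candidates[np.argmax(ucb)]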