Algorithms with Logarithmic or Sublinear Regret for Constrained Contextual Bandits
We study contextual bandits with budget and time constraints, referred to as
constrained contextual bandits. The time and budget constraints significantly
complicate the exploration and exploitation tradeoff because they introduce
complex coupling among contexts over time. Such coupling effects make it
difficult to obtain oracle solutions that assume known statistics of bandits.
To gain insight, we first study unit-cost systems with known context
distribution. When the expected rewards are known, we develop an approximation
of the oracle, referred to as Adaptive-Linear-Programming (ALP), which achieves
near-optimality and only requires the ordering of expected rewards. With these
highly desirable features, we then combine ALP with the upper-confidence-bound
(UCB) method in the general case where the expected rewards are unknown {\it a
priori}. We show that the proposed UCB-ALP algorithm achieves logarithmic
regret except for certain boundary cases. Further, we design algorithms and
obtain similar regret analysis results for more general systems with unknown
context distribution and heterogeneous costs. To the best of our knowledge,
this is the first work that shows how to achieve logarithmic regret in
constrained contextual bandits. Moreover, this work also sheds light on the
study of computationally efficient algorithms for general constrained
contextual bandits.
Comment: 36 pages, 4 figures; accepted by the 29th Annual Conference on Neural
Information Processing Systems (NIPS), Montréal, Canada, Dec. 2015
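The ALP-plus-UCB combination described in the abstract can be sketched as follows. This is a minimal illustration of the idea in the unit-cost case with a known context distribution and Bernoulli rewards; the function names and all implementation details are our own assumptions, not the authors' code.

```python
import math
import random

def ucb_index(mean, n, t):
    # UCB1-style optimistic estimate; an untried context gets priority.
    if n == 0:
        return float("inf")
    return mean + math.sqrt(math.log(max(t, 1)) / (2 * n))

def alp_action_prob(order, context_probs, rho):
    """ALP step for unit costs: serve contexts in (estimated) reward
    order, fully while the budget rate allows, randomizing at the
    threshold so expected spend per round equals rho = budget / time."""
    probs, cum = {}, 0.0
    for j in order:
        p = context_probs[j]
        if cum + p <= rho:
            probs[j] = 1.0
            cum += p
        else:
            probs[j] = max(0.0, (rho - cum) / p)
            cum = rho
    return probs

def run_ucb_alp(true_means, context_probs, budget, horizon, seed=0):
    """UCB-ALP sketch: plug UCB indices into ALP in place of the
    unknown expected rewards (only their ordering is needed)."""
    rng = random.Random(seed)
    num_ctx = len(true_means)
    n = [0] * num_ctx          # pulls per context
    s = [0.0] * num_ctx        # reward sums per context
    total, b = 0.0, budget
    for t in range(1, horizon + 1):
        if b <= 0:
            break
        ctx = rng.choices(range(num_ctx), weights=context_probs)[0]
        means = [s[k] / n[k] if n[k] else 0.0 for k in range(num_ctx)]
        order = sorted(range(num_ctx), key=lambda k: -ucb_index(means[k], n[k], t))
        rho = min(b / (horizon - t + 1), 1.0)
        if rng.random() < alp_action_prob(order, context_probs, rho)[ctx]:
            reward = 1.0 if rng.random() < true_means[ctx] else 0.0
            n[ctx] += 1
            s[ctx] += reward
            total += reward
            b -= 1  # unit cost per taken action
    return total
```

For instance, with two equally likely contexts and an average budget of 0.75 actions per round, `alp_action_prob([0, 1], [0.5, 0.5], 0.75)` always serves the top-ranked context and serves the other half the time.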
Matching while Learning
We consider the problem faced by a service platform that needs to match
limited supply with demand but also to learn the attributes of new users in
order to match them better in the future. We introduce a benchmark model with
heterogeneous "workers" (demand) and a limited supply of "jobs" that arrive
over time. Job types are known to the platform, but worker types are unknown
and must be learned by observing match outcomes. Workers depart after
performing a certain number of jobs. The expected payoff from a match depends
on the pair of types and the goal is to maximize the steady-state rate of
accumulation of payoff. Though we use terminology inspired by labor markets,
our framework applies more broadly to platforms where a limited supply of
heterogeneous products is matched to users over time.
Our main contribution is a complete characterization of the structure of the
optimal policy in the limit that each worker performs many jobs. The platform
faces a trade-off for each worker between myopically maximizing payoffs
(exploitation) and learning the type of the worker (exploration). This creates
a multitude of multi-armed bandit problems, one for each worker, coupled
together by the constraint on availability of jobs of different types (capacity
constraints). We find that the platform should estimate a shadow price for each
job type, and use the payoffs adjusted by these prices, first, to determine its
learning goals and then, for each worker, (i) to balance learning with payoffs
during the "exploration phase," and (ii) to myopically match after it has
achieved its learning goals during the "exploitation phase."
Comment: This paper has been accepted for publication in Operations Research
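The shadow-price structure described above can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract (known worker-type distribution, known payoffs, and a simple tatonnement-style price update), with the exploration phase omitted; it is not the paper's actual algorithm, and all names are hypothetical.

```python
def shadow_prices(payoffs, capacities, type_probs, steps=2000, lr=0.01):
    """Illustrative tatonnement: raise a job type's price when it is
    over-demanded relative to its capacity, otherwise let the price
    decay toward zero. `payoffs[w][j]` is the expected payoff of
    matching worker type w to job type j (assumed known here)."""
    num_jobs = len(capacities)
    prices = [0.0] * num_jobs
    for _ in range(steps):
        demand = [0.0] * num_jobs
        for w, pw in enumerate(type_probs):
            # Each worker type requests its best price-adjusted job.
            best = max(range(num_jobs), key=lambda j: payoffs[w][j] - prices[j])
            demand[best] += pw
        for j in range(num_jobs):
            prices[j] = max(0.0, prices[j] + lr * (demand[j] - capacities[j]))
    return prices

def match_worker(est_payoffs, prices):
    """Exploitation-phase rule: myopically pick the job type with the
    highest shadow-price-adjusted estimated payoff for this worker."""
    return max(range(len(prices)), key=lambda j: est_payoffs[j] - prices[j])
```

When demand already matches capacity for every job type, the prices stay at zero and matching is purely myopic; when a job type is scarce, its price rises until lower-value workers are diverted to other job types.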