Stochastic Bandits for Multi-platform Budget Optimization in Online Advertising
We study the problem of an online advertising system that wants to optimally
spend an advertiser's given budget for a campaign across multiple platforms,
without knowing the value of showing an ad to the users on those platforms. We
model this challenging practical application as a Stochastic Bandits with
Knapsacks problem over a sequence of bidding rounds, with the set of arms given
by the set of distinct bidding tuples (one bid per platform). We modify the
algorithm proposed by Badanidiyuru et al. to extend it to the case of multiple
platforms, obtaining algorithms for both discrete and continuous bid spaces.
Namely, for discrete bid spaces we give an algorithm whose regret is bounded in
terms of OPT, the performance of the optimal algorithm that knows the
distributions. For continuous bid spaces we give a corresponding regret bound;
when restricted to this special case, the bound improves over that of
Sankararaman and Slivkins in the parameter regime that arises in the
application at hand. Second, we prove lower bounds for both the discrete and
the continuous settings that almost match the upper bounds.
Finally, we use a real-world data set from a large internet
advertising company with multiple ad platforms and show that our algorithms
outperform common benchmarks and satisfy the properties required in the
real-world application.
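The budget-spending loop described above can be sketched as a simple bandits-with-knapsacks policy in the spirit of Badanidiyuru et al.: keep an optimistic (upper-confidence) estimate of each bid tuple's value and a pessimistic (lower-confidence) estimate of its cost, and repeatedly play the arm with the best optimistic value-to-cost ratio until the budget or horizon is exhausted. This is only an illustrative sketch, not the paper's algorithm; the Bernoulli value/cost model, the confidence radius, and the function names are all assumptions.

```python
import math
import random

def bwk_ucb(values, costs, budget, horizon, seed=0):
    """Sketch of a UCB-style Bandits-with-Knapsacks policy.
    Each arm stands for one bid tuple; `values[a]` and `costs[a]` are the
    true Bernoulli means, used only to simulate feedback -- the policy
    itself sees only the sampled rewards and costs."""
    rng = random.Random(seed)
    k = len(values)
    pulls = [0] * k
    val_sum = [0.0] * k
    cost_sum = [0.0] * k
    spent, t = 0.0, 0
    while t < horizon and spent < budget:
        t += 1
        untried = [a for a in range(k) if pulls[a] == 0]
        if untried:                      # play every arm once first
            a = untried[0]
        else:
            def ratio(a):
                rad = math.sqrt(2.0 * math.log(t) / pulls[a])
                v = min(1.0, val_sum[a] / pulls[a] + rad)    # optimistic value
                c = max(1e-6, cost_sum[a] / pulls[a] - rad)  # pessimistic cost
                return v / c
            a = max(range(k), key=ratio)
        v = 1.0 if rng.random() < values[a] else 0.0  # simulated value sample
        c = 1.0 if rng.random() < costs[a] else 0.0   # simulated cost sample
        if spent + c > budget:           # stop rather than overspend
            break
        pulls[a] += 1
        val_sum[a] += v
        cost_sum[a] += c
        spent += c
    return pulls, spent
```

By construction the policy never exceeds the budget, and the high value-per-cost arm accumulates most of the pulls.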
MNL-Bandit: A Dynamic Learning Approach to Assortment Selection
We consider a dynamic assortment selection problem where, in every round, the
retailer offers a subset (assortment) of substitutable products to a
consumer, who selects one of these products according to a multinomial logit
(MNL) choice model. The retailer observes this choice, and the objective is to
dynamically learn the model parameters while optimizing cumulative revenues
over the selling horizon. We refer to this exploration-exploitation
formulation as the MNL-Bandit problem. Existing methods for this problem follow
an "explore-then-exploit" approach: they estimate the parameters to a desired
accuracy and then, treating these estimates as if they were the correct
parameter values, offer the optimal assortment based on them. These
approaches require certain a priori knowledge of "separability", determined by
the true parameters of the underlying MNL model, and this in turn is critical
in determining the length of the exploration period. (Separability refers to
the distinguishability of the true optimal assortment from the other
sub-optimal alternatives.) In this paper, we give an efficient algorithm that
simultaneously explores and exploits, achieving performance independent of the
underlying parameters. The algorithm can be implemented in a fully online
manner, without knowledge of the horizon length. Furthermore, the algorithm
is adaptive in the sense that its performance is near-optimal both in the
"well separated" case and in the general parameter setting where this
separation need not hold.
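The MNL choice model underlying the problem is easy to state concretely: if each product i in the offered assortment S has an attraction parameter v[i] > 0 and the no-purchase option has attraction 1, the consumer picks product i with probability v[i] / (1 + sum over j in S of v[j]). A minimal sketch follows; the attraction parameters, revenues, and function names are illustrative assumptions, and the brute-force subset search stands in for the paper's efficient optimization only for exposition.

```python
from itertools import combinations

def mnl_choice_probs(assortment, v):
    """MNL choice probabilities for products in `assortment`, given
    attraction parameters v[i] > 0; the no-purchase option has
    attraction 1 and absorbs the remaining probability mass."""
    denom = 1.0 + sum(v[i] for i in assortment)
    return {i: v[i] / denom for i in assortment}

def expected_revenue(assortment, v, r):
    """Expected per-round revenue of offering `assortment`,
    where r[i] is the revenue of selling product i."""
    probs = mnl_choice_probs(assortment, v)
    return sum(probs[i] * r[i] for i in assortment)

def best_assortment(v, r, max_size):
    """Brute-force search over all assortments up to `max_size`
    (illustrative only; real instances need a smarter optimizer)."""
    n = len(v)
    best, best_rev = (), 0.0
    for k in range(1, max_size + 1):
        for s in combinations(range(n), k):
            rev = expected_revenue(s, v, r)
            if rev > best_rev:
                best, best_rev = s, rev
    return best, best_rev
```

For example, with v = [1.0, 0.5] and unit revenues, offering product 0 alone sells with probability 1.0 / (1 + 1.0) = 0.5, while offering both products raises the total purchase probability to 1.5 / 2.5 = 0.6, so the two-product assortment is optimal here.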