2 research outputs found

    Stochastic Bandits for Multi-platform Budget Optimization in Online Advertising

    We study the problem of an online advertising system that wants to optimally spend an advertiser's given budget for a campaign across multiple platforms, without knowing the value of showing an ad to the users on those platforms. We model this challenging practical application as a Stochastic Bandits with Knapsacks problem over $T$ rounds of bidding, with the set of arms given by the set of distinct bidding $m$-tuples, where $m$ is the number of platforms. We modify the algorithm proposed in Badanidiyuru et al. to extend it to the case of multiple platforms, obtaining algorithms for both discrete and continuous bid spaces. For discrete bid spaces we give an algorithm with regret $O\left(\mathrm{OPT}\sqrt{\frac{mn}{B}} + \sqrt{mn \cdot \mathrm{OPT}}\right)$, where $\mathrm{OPT}$ is the performance of the optimal algorithm that knows the distributions, $n$ is the number of bids in the discrete bid space, and $B$ is the budget. For continuous bid spaces the regret of our algorithm is $\tilde{O}\left(m^{1/3} \cdot \min\left\{B^{2/3}, (mT)^{2/3}\right\}\right)$. When restricted to this special case, this bound improves over Sankararaman and Slivkins in the regime $\mathrm{OPT} \ll T$, as is the case in the particular application at hand. We also show an $\Omega\left(\sqrt{m \cdot \mathrm{OPT}}\right)$ lower bound for the discrete case and an $\Omega\left(m^{1/3} B^{2/3}\right)$ lower bound for the continuous setting, almost matching the upper bounds. Finally, we use a real-world data set from a large internet online advertising company with multiple ad platforms and show that our algorithms outperform common benchmarks and satisfy the required properties warranted in the real-world application.
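
    To make the bidding formulation concrete, below is a minimal, self-contained Python sketch of a UCB-style bandits-with-knapsacks loop over bid $m$-tuples: each arm is one bid per platform, each pull returns a stochastic reward (ad value) and a stochastic spend charged against the budget $B$, and the learner optimistically picks the arm with the best estimated value-per-dollar. This is an illustration of the problem setup only, not the paper's algorithm; the win/cost model, bid levels, and constants are all invented for the example.

```python
import numpy as np

# Illustrative UCB-style Bandits-with-Knapsacks loop for multi-platform bidding.
# Arms are bid m-tuples; each pull yields a stochastic reward and a stochastic
# cost charged against the budget B. Synthetic environment; not the paper's
# exact algorithm.
rng = np.random.default_rng(0)

m = 2                    # number of platforms
n = 3                    # discrete bid levels per platform
B = 200.0                # total campaign budget
T = 5000                 # bidding rounds

# Enumerate all distinct bid m-tuples as arms (naively, n^m of them).
levels = np.linspace(0.2, 1.0, n)
arms = np.array(np.meshgrid(*[levels] * m)).T.reshape(-1, m)
K = len(arms)

def pull(arm):
    """Hypothetical environment: higher bids win more often but spend more."""
    wins = rng.random(m) < arm                        # win prob ~ bid level
    reward = float(np.sum(wins * rng.uniform(0.5, 1.5, m)))
    cost = float(np.sum(wins * arm))                  # pay your bid on each win
    return reward, cost

counts = np.zeros(K)
reward_sum = np.zeros(K)
cost_sum = np.zeros(K)
budget, total_reward = B, 0.0

for t in range(1, T + 1):
    if budget < m * levels.max():     # stop before the budget can go negative
        break
    if t <= K:                        # pull each arm once to initialize
        k = t - 1
    else:
        bonus = np.sqrt(2 * np.log(t) / counts)
        ucb_reward = reward_sum / counts + bonus                 # optimistic value
        lcb_cost = np.maximum(cost_sum / counts - bonus, 1e-6)   # optimistic cost
        k = int(np.argmax(ucb_reward / lcb_cost))                # best value-per-dollar
    r, c = pull(arms[k])
    counts[k] += 1
    reward_sum[k] += r
    cost_sum[k] += c
    budget -= c
    total_reward += r

print(f"rounds used: {int(counts.sum())}, spend: {B - budget:.1f}, reward: {total_reward:.1f}")
```

    Note that the naive arm set above has size $n^m$, while the regret bound in the abstract scales with $mn$; closing that gap by exploiting the separable structure across platforms is the point of the paper's modification, which the sketch does not attempt.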

    MNL-Bandit: A Dynamic Learning Approach to Assortment Selection

    We consider a dynamic assortment selection problem, where in every round the retailer offers a subset (assortment) of $N$ substitutable products to a consumer, who selects one of these products according to a multinomial logit (MNL) choice model. The retailer observes this choice, and the objective is to dynamically learn the model parameters while optimizing cumulative revenues over a selling horizon of length $T$. We refer to this exploration-exploitation formulation as the MNL-Bandit problem. Existing methods for this problem follow an "explore-then-exploit" approach, which first estimates the parameters to a desired accuracy and then, treating these estimates as the correct parameter values, offers the optimal assortment based on them. These approaches require certain a priori knowledge of "separability", determined by the true parameters of the underlying MNL model, which in turn is critical in determining the length of the exploration period. (Separability refers to the distinguishability of the true optimal assortment from the other sub-optimal alternatives.) In this paper, we give an efficient algorithm that simultaneously explores and exploits, achieving performance independent of the underlying parameters. The algorithm can be implemented in a fully online manner, without knowledge of the horizon length $T$. Furthermore, the algorithm is adaptive in the sense that its performance is near-optimal both in the "well separated" case and in the general parameter setting where this separation need not hold.
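
    The epoch-based "simultaneously explore and exploit" idea can be sketched as follows: the retailer repeats the same assortment until the no-purchase option is chosen (one epoch), which makes the per-epoch pick counts unbiased estimates of the MNL weights $v_i$ (taking the no-purchase weight as 1 and assuming $v_i \le 1$); upper confidence bounds on those weights then drive an optimistic assortment choice. In the Python sketch below, assortment optimization is done by brute force for readability, the confidence-radius constants only follow the spirit of the paper's bounds, and the instance (weights, prices, capacity) is synthetic.

```python
import numpy as np
from itertools import combinations

# Epoch-based MNL-Bandit sketch on a synthetic instance. Constants and the
# brute-force assortment search are for illustration only.
rng = np.random.default_rng(1)

N = 6                               # products
v_true = rng.uniform(0.1, 1.0, N)   # MNL weights (no-purchase weight = 1)
prices = rng.uniform(1.0, 2.0, N)   # per-product revenue
max_size = 3                        # assortment capacity
T = 20000                           # selling horizon (rounds)

def choose(assortment):
    """Sample one MNL choice; returns a product index, or -1 for no-purchase."""
    w = np.concatenate(([1.0], v_true[assortment]))
    j = rng.choice(len(w), p=w / w.sum())
    return -1 if j == 0 else assortment[j - 1]

def expected_revenue(assortment, v):
    return prices[assortment] @ v[assortment] / (1.0 + v[assortment].sum())

def best_assortment(v):
    """Brute-force search over all assortments up to max_size."""
    best, best_rev = None, -1.0
    for size in range(1, max_size + 1):
        for s in combinations(range(N), size):
            s = np.array(s)
            rev = expected_revenue(s, v)
            if rev > best_rev:
                best, best_rev = s, rev
    return best

epochs = np.zeros(N)   # number of epochs in which each product appeared
picks = np.zeros(N)    # total times each product was chosen
t = 0
while t < T:
    ne = np.maximum(epochs, 1)
    v_bar = picks / ne                      # unbiased per-epoch estimate of v_i
    rad = np.sqrt(48 * np.log(t + 2) * v_bar / ne) + 48 * np.log(t + 2) / ne
    ucb = np.minimum(v_bar + rad, 1.0)      # cap at 1, assuming v_i <= 1
    S = best_assortment(ucb)                # optimistic assortment
    # One epoch: offer S until the customer picks the no-purchase option.
    while t < T:
        t += 1
        c = choose(S)
        if c == -1:
            break
        picks[c] += 1
    epochs[S] += 1

print("estimated v:", np.round(picks / np.maximum(epochs, 1), 2))
print("true v:     ", np.round(v_true, 2))
```

    The design point the sketch tries to convey is why no a priori "separability" knowledge is needed: exploration happens inside the optimistic assortments themselves, so the confidence radii, not a pre-set exploration phase, decide how long each product keeps being tried.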