97 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Borda Regret Minimization for Generalized Linear Dueling Bandits

    Full text link
    Dueling bandits are widely used to model preferential feedback prevalent in many applications such as recommendation systems and ranking. In this paper, we study the Borda regret minimization problem for dueling bandits, which aims to identify the item with the highest Borda score while minimizing the cumulative regret. We propose a rich class of generalized linear dueling bandit models, which cover many existing models. We first prove a regret lower bound of order $\Omega(d^{2/3} T^{2/3})$ for the Borda regret minimization problem, where $d$ is the dimension of contextual vectors and $T$ is the time horizon. To attain this lower bound, we propose an explore-then-commit type algorithm for the stochastic setting, which has a nearly matching regret upper bound $\tilde{O}(d^{2/3} T^{2/3})$. We also propose an EXP3-type algorithm for the adversarial linear setting, where the underlying model parameter can change at each round. Our algorithm achieves an $\tilde{O}(d^{2/3} T^{2/3})$ regret, which is also optimal. Empirical evaluations on both synthetic data and a simulated real-world environment are conducted to corroborate our theoretical analysis. Comment: 33 pages, 5 figures. This version includes new results for dueling bandits in the adversarial setting.
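    To make the explore-then-commit idea concrete, here is a minimal simulation under a logistic-link linear preference model. The uniform-pair exploration phase, the $d^{2/3}T^{2/3}$ budget, and the naive commit rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generalized linear dueling bandit: K items with d-dimensional
# features and P(i beats j) = sigmoid(theta^T (x_i - x_j)). The Borda
# score of item i is its average win probability over random opponents.
K, d, T = 20, 5, 20_000
X = rng.normal(size=(K, d))
theta = rng.normal(size=d)
util = X @ theta
P = 1.0 / (1.0 + np.exp(util[None, :] - util[:, None]))  # P[i, j] = P(i beats j)
borda = (P.sum(axis=1) - 0.5) / (K - 1)                  # drop the self-duel P[i, i] = 0.5
best = borda.max()

# Explore phase: duel uniformly random pairs and record empirical win rates.
n_explore = int((d * T) ** (2 / 3))                      # ~ d^{2/3} T^{2/3} exploration budget
wins = np.zeros((K, K))
plays = np.zeros((K, K))
regret = 0.0
for _ in range(n_explore):
    i, j = rng.choice(K, size=2, replace=False)
    if rng.random() < P[i, j]:
        wins[i, j] += 1
    else:
        wins[j, i] += 1
    plays[i, j] += 1
    plays[j, i] += 1
    regret += 2 * best - borda[i] - borda[j]             # per-round Borda regret

# Commit phase: play the empirically best item for the remaining rounds.
phat = np.where(plays > 0, wins / np.maximum(plays, 1), 0.5)
i_hat = int(phat.mean(axis=1).argmax())
regret += (T - n_explore) * 2 * (best - borda[i_hat])
print(f"committed to item {i_hat}; cumulative Borda regret ~ {regret:.1f}")
```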

    Bowdoin College Catalogue and Academic Handbook (2023-2024)

    Get PDF
    https://digitalcommons.bowdoin.edu/course-catalogues/1321/thumbnail.jp

    Offline Evaluation via Human Preference Judgments: A Dueling Bandits Problem

    Get PDF
    The dramatic improvements in core information retrieval tasks engendered by neural rankers create a need for novel evaluation methods. If every ranker returns highly relevant items in the top ranks, it becomes difficult to recognize meaningful differences between them and to build reusable test collections. Several recent papers explore pairwise preference judgments as an alternative to traditional graded relevance assessments. Rather than viewing items one at a time, assessors view items side-by-side and indicate which provides the better response to a query, allowing fine-grained distinctions. If we employ preference judgments to identify the likely best items for each query, we can measure rankers by their ability to place these items as high as possible. I frame the problem of finding the best items as a dueling bandits problem. While many papers explore dueling bandits for online ranker evaluation via interleaving, they have not been considered as a framework for offline evaluation via human preference judgments. I review the literature for possible solutions. For human preference judgments, any usable algorithm must tolerate ties, since two items may appear nearly equal to assessors. It must also minimize the number of judgments required for any specific pair, since each such comparison requires an independent assessor. Since the theoretical guarantees provided by most algorithms depend on assumptions that are not satisfied by human preference judgments, I simulate selected algorithms on representative test cases to provide insight into their practical utility. In contrast to the previous paper presented at SIGIR 2022 [87], this work includes more theoretical analysis and experimental results. Based on the simulations, two algorithms stand out for their potential. I proceed with the method of Clarke et al. [20], and the simulations suggest modifications to further improve its performance. Using the modified algorithm, over 10,000 preference judgments for pools derived from submissions to the TREC 2021 Deep Learning Track are collected, confirming its suitability. We test the idea of best-item evaluation and suggest ideas for further theoretical and practical progress.
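    The two requirements above, tolerating ties and spending at most one judgment per pair per assessor, can be illustrated with a generic tie-tolerant tournament. This sketch is not the method of Clarke et al. [20]; the judge stub is a hypothetical stand-in for a human assessor.

```python
import random
from collections import Counter

def judge(a, b):
    """Stand-in for a single human preference judgment: returns the
    preferred item, or None for a tie. In a real assessment effort each
    call goes to an independent assessor, so calls per pair are costly."""
    r = random.random()
    if r < 0.45:
        return a
    if r < 0.9:
        return b
    return None       # assessors may genuinely see the two items as equal

def best_item_tournament(items, rounds=3):
    """Repeated single-elimination tournament that tolerates ties: a tied
    duel keeps a random member, and repeating the tournament reduces the
    chance that a strong item is knocked out by one noisy judgment."""
    winners = Counter()
    for _ in range(rounds):
        pool = list(items)
        random.shuffle(pool)
        while len(pool) > 1:
            a, b = pool.pop(), pool.pop()
            w = judge(a, b)
            pool.append(w if w is not None else random.choice((a, b)))
        winners[pool[0]] += 1
    return winners.most_common(1)[0][0]

print(best_item_tournament([f"doc{i:02d}" for i in range(16)]))
```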

    One Arrow, Two Kills: A Unified Framework for Achieving Optimal Regret Guarantees in Sleeping Bandits

    Get PDF
    We address the problem of `Internal Regret' in Sleeping Bandits in the fully adversarial setup, and draw connections between different existing notions of sleeping regret in the multi-armed bandit (MAB) literature, analyzing their implications. Our first contribution is to propose the new notion of Internal Regret for sleeping MAB. We then propose an algorithm that yields sublinear regret in that measure, even for a completely adversarial sequence of losses and availabilities. We further show that low sleeping internal regret always implies low external regret, as well as low policy regret for i.i.d. sequences of losses. The main contribution of this work lies precisely in unifying the different existing notions of regret in sleeping bandits and understanding how one implies another. Finally, we extend our results to the setting of Dueling Bandits (DB), a preference-feedback variant of MAB, and propose a reduction-to-MAB idea to design a low-regret algorithm for sleeping dueling bandits with stochastic preferences and adversarial availabilities. The efficacy of our algorithms is justified through empirical evaluations.
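    A toy formalization may help fix the regret notions being related. The definitions below, per-availability external regret and swap-style internal regret, are simplified stand-ins for the paper's exact definitions and are meant purely as illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (not the paper's exact definitions): T rounds, K arms,
# adversarial losses in [0, 1], and a random availability mask with at
# least one arm awake per round. The learner here plays uniformly at random.
T, K = 1000, 5
losses = rng.random((T, K))
avail = rng.random((T, K)) < 0.7
avail[np.arange(T), rng.integers(0, K, size=T)] = True
plays = np.array([rng.choice(np.flatnonzero(avail[t])) for t in range(T)])
played = losses[np.arange(T), plays]

# External sleeping regret: against each fixed arm j, counted only on the
# rounds where j was actually available.
external = max(
    (played - losses[:, j])[avail[:, j]].sum() for j in range(K)
)

# Internal sleeping regret: the best gain from retrospectively swapping
# every play of arm i to some arm j, on rounds where j was available.
internal = max(
    (losses[:, i] - losses[:, j])[(plays == i) & avail[:, j]].sum()
    for i in range(K) for j in range(K) if i != j
)
print(f"external regret {external:.1f}, internal regret {internal:.1f}")
```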

    Bowdoin College Catalogue and Academic Handbook (2022-2023)

    Get PDF
    https://digitalcommons.bowdoin.edu/course-catalogues/1320/thumbnail.jp

    A Unified Recommendation Framework for Data-driven, People-centric Smart Home Applications

    Full text link
    With the rapid growth in the number of things that can be connected to the internet, Recommendation Systems for the IoT (RSIoT) have become more significant in helping a variety of applications meet user preferences; such applications include smart home, smart tourism, smart parking, m-health, and so on. In this thesis, we propose a unified recommendation framework for data-driven, people-centric smart home applications. The framework involves three main stages: complex activity detection, constructing recommendations in a timely manner, and ensuring data integrity. First, we review the latest state-of-the-art recommendation methods and applications of recommender systems in the IoT, so as to form an overview of the current research progress. Challenges of using IoT for recommendation systems are introduced and explained, and a reference framework to compare the existing studies and guide future research and practice is provided. To meet the requirements of complex activity detection, which helps our system understand at a relatively high level what activity or activities the user is undertaking, we provide adequate resources to fit the recommender system. Furthermore, we consider two inherent challenges of RSIoT: capturing the dynamic patterns of human activities, and updating the system without a focus on user feedback. Based on these, we design a Reminder Care System (RCS) which harnesses the advantages of deep reinforcement learning (a DQN agent) to address these challenges. We then utilize a contextual bandit approach for improving the quality of recommendations by taking the context as an input, aiming not only to address the two previous challenges of RSIoT but also to learn the best action in different scenarios and treat each state independently. Last but not least, we utilize blockchain technology to ensure safe, decentralized data storage. Finally, we discuss a few open issues and provide some insights for future directions.
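    As a sketch of the contextual bandit stage, here is a generic LinUCB-style learner. The context features, action set, and reward model are invented for illustration; this is not the thesis's actual RCS/DQN pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal LinUCB-style contextual bandit: pick one of A candidate
# recommendations given a d-dim context (e.g., time of day, sensor state).
d, A, T, alpha = 6, 4, 5000, 1.0
true_w = rng.normal(size=(A, d))        # hidden per-action payoff model (simulation only)

A_inv = np.stack([np.eye(d)] * A)       # per-action inverse design matrices
b = np.zeros((A, d))
reward_sum = 0.0
for t in range(T):
    x = rng.normal(size=d)                         # context for this round
    theta = np.einsum("aij,aj->ai", A_inv, b)      # ridge estimate per action
    ucb = theta @ x + alpha * np.sqrt(np.einsum("i,aij,j->a", x, A_inv, x))
    a = int(ucb.argmax())                          # optimistic action choice
    r = true_w[a] @ x + 0.1 * rng.normal()         # simulated user feedback
    # Sherman-Morrison rank-1 update of the chosen action's inverse matrix.
    Ax = A_inv[a] @ x
    A_inv[a] -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
    b[a] += r * x
    reward_sum += r
print(f"average reward {reward_sum / T:.3f}")
```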

    Versatile Dueling Bandits: Best-of-both-World Analyses for Online Learning from Preferences

    Get PDF
    We study the problem of $K$-armed dueling bandits for both stochastic and adversarial environments, where the goal of the learner is to aggregate information through the relative preferences of pairs of decision points queried in an online sequential manner. We first propose a novel reduction from any (general) dueling bandit problem to multi-armed bandits; despite its simplicity, it allows us to improve many existing results in dueling bandits. In particular, we give the first best-of-both-worlds result for the dueling bandits regret minimization problem: a unified framework that is guaranteed to perform optimally for both stochastic and adversarial preferences simultaneously. Moreover, our algorithm is also the first to achieve an optimal $O(\sum_{i=1}^{K} \frac{\log T}{\Delta_i})$ regret bound against the Condorcet-winner benchmark, which scales optimally both in terms of the arm size $K$ and the instance-specific suboptimality gaps $\{\Delta_i\}_{i=1}^{K}$. This resolves the long-standing problem of designing an instance-wise, gap-dependent, order-optimal regret algorithm for dueling bandits (with matching lower bounds up to small constant factors). We further justify the robustness of our proposed algorithm by proving its optimal regret rate under adversarially corrupted preferences; this outperforms the existing state-of-the-art corrupted dueling results by a large margin. In summary, we believe our reduction idea will find broader scope in solving a diverse class of dueling bandit settings, which are otherwise studied separately from multi-armed bandits, often with more complex solutions and worse guarantees. The efficacy of our proposed algorithms is empirically corroborated against existing dueling bandit methods.
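    The reduction can be sketched as two independent adversarial MAB learners, one per side of the duel, each fed its own side of the binary outcome. This toy version with bare-bones EXP3 learners only illustrates the mechanism, not the paper's refined reduction or its analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stochastic instance: K totally ordered arms under a logistic
# preference model, so arm K-1 is the Condorcet winner.
K, T = 8, 20_000
util = np.linspace(0.0, 1.0, K)
P = 1.0 / (1.0 + np.exp(util[None, :] - util[:, None]))  # P[i, j] = P(i beats j)

class EXP3:
    """Bare-bones loss-based EXP3 (importance-weighted losses, no mixing)."""
    def __init__(self, K, T):
        self.w = np.zeros(K)
        self.eta = np.sqrt(np.log(K) / (K * T))
    def draw(self):
        p = np.exp(self.w - self.w.max())
        self.p = p / p.sum()
        return int(rng.choice(len(self.w), p=self.p))
    def update(self, arm, reward):
        self.w[arm] -= self.eta * (1.0 - reward) / self.p[arm]

left, right = EXP3(K, T), EXP3(K, T)
wins = np.zeros(K)
for _ in range(T):
    i, j = left.draw(), right.draw()
    o = float(rng.random() < P[i, j])   # 1 if the left arm wins the duel
    left.update(i, o)                   # left learner sees its win as reward
    right.update(j, 1.0 - o)            # right learner sees the flip side
    wins[i] += o
    wins[j] += 1.0 - o
print("empirical top arm:", int(wins.argmax()), "| true Condorcet winner:", K - 1)
```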

    Dueling Bandits with Adversarial Sleeping

    Get PDF
    We introduce the problem of sleeping dueling bandits with stochastic preferences and adversarial availabilities (DB-SPAA). In almost all dueling bandit applications, the decision space changes over time; e.g., retail store management, online shopping, restaurant recommendation, search engine optimization, etc. Surprisingly, this `sleeping aspect' of dueling bandits has never been studied in the literature. As in dueling bandits, the goal is to compete with the best arm by sequentially querying preference feedback on item pairs. The non-triviality, however, arises from the non-stationary item spaces that allow arbitrary subsets of items to become unavailable in any round. The goal is to find an optimal `no-regret' policy that can identify the best available item at each round, as opposed to the standard `fixed best-arm regret objective' of dueling bandits. We first derive an instance-specific lower bound for DB-SPAA of $\Omega(\sum_{i=1}^{K-1}\sum_{j=i+1}^{K} \frac{\log T}{\Delta(i,j)})$, where $K$ is the number of items and $\Delta(i,j)$ is the gap between items $i$ and $j$. This indicates that the sleeping problem with preference feedback is inherently more difficult than its counterpart for classical multi-armed bandits (MAB). We then propose two algorithms with near-optimal regret guarantees. Our results are corroborated empirically.
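    The lower bound can be instantiated numerically on a toy total order. The logistic preference matrix and the gap definition $\Delta(i,j) = |P(i,j) - 1/2|$ are assumptions made for illustration, and constant factors are omitted.

```python
import numpy as np

# Worked instance of the abstract's lower bound: K totally ordered items,
# P(i beats j) from a logistic model, and gap Delta(i, j) = |P(i, j) - 1/2|.
# Only an order-of-magnitude illustration of how the bound scales with K, T.
K, T = 5, 100_000
util = np.linspace(0.0, 1.0, K)
P = 1.0 / (1.0 + np.exp(util[None, :] - util[:, None]))
Delta = np.abs(P - 0.5)
lower_bound = sum(
    np.log(T) / Delta[i, j]
    for i in range(K - 1)
    for j in range(i + 1, K)
)
print(f"Omega(sum_ij log T / Delta(i,j)) ~ {lower_bound:,.0f}")
```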