
    A Graph-based Bandit Algorithm for Maximum User Coverage in Online Recommendation Systems

    We study a type of recommendation systems problem in which the system must cover as many users' tastes as possible while those tastes change over time. This problem can be viewed as a variation of the maximum coverage problem in which the number of sets, and the elements within any set, can change dynamically. When the number of distinct elements is large, an exhaustive search for even a fixed number of elements is known to be computationally expensive, and many known algorithms grow exponentially in complexity. We propose a novel graph-based UCB1 algorithm that effectively minimizes the number of elements to consider, thereby greatly reducing the search space. The algorithm uses a new rewarding scheme to choose items that satisfy more user types as it constructs a relational graph between candidate items. Experiments show that the new algorithm outperforms existing techniques such as Ranked Bandits [17] and Independent Bandits [12] in satisfying diverse types of users while minimizing computational complexity.
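The abstract above combines UCB1 index-based arm selection with a coverage-style reward. As a minimal sketch (not the paper's algorithm: the relational graph and its rewarding scheme are omitted, and the function names and the toy taste model are hypothetical), a UCB1 loop with a simple coverage reward might look like:

```python
import math
import random

def ucb1_select(means, counts, t):
    """Pick the arm maximizing the UCB1 index: mean + sqrt(2 ln t / n).
    Arms that have never been played are tried first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(means)),
               key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))

def coverage_reward(item, user_tastes):
    """Toy coverage reward: fraction of user taste sets containing the item."""
    return sum(item in tastes for tastes in user_tastes) / len(user_tastes)

# Toy run: 3 items, 5 users whose tastes change every round.
random.seed(0)
items = ["a", "b", "c"]
means = [0.0] * len(items)
counts = [0] * len(items)
for t in range(1, 201):
    user_tastes = [set(random.sample(items, 2)) for _ in range(5)]
    i = ucb1_select(means, counts, t)
    r = coverage_reward(items[i], user_tastes)
    counts[i] += 1
    means[i] += (r - means[i]) / counts[i]  # incremental mean update
```

The incremental mean update keeps the per-arm statistics O(1) per round, which matters when tastes drift and the loop runs indefinitely.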

    ONLINE LEARNING WITH BANDITS FOR COVERAGE

    With the rapid growth in velocity and volume, streaming data compels decision support systems to predict, in due time, a small number of unique data points that can represent a massive amount of correlated data without much loss of precision. In this work, we formulate this problem as the online set coverage problem and propose solutions for recommendation systems and the patrol assignment problem. We propose a novel online reinforcement learning algorithm, inspired by the Multi-Armed Bandit problem, to solve the online recommendation system problem. We introduce a graph-based mechanism that improves the coverage of users by recommended items and show that it can facilitate coordination between bandits and therefore reduce the overall complexity. Our graph-based bandit algorithm can select a much smaller set of items to cover a wide variety of users' choices for recommendation systems. We present experimental results in a partially observable real-world environment. We also study patrol assignment as an online set coverage problem, which presents an additional level of difficulty: along with covering the susceptible routes by learning the diversity of attacks, our technique must, unlike in recommendation systems, make choices against actively engaged adversarial opponents. We assume that attacks over those routes are posed by intelligent entities capable of reacting with their best responses; to model such attacks, we use the Stackelberg Security Game. We augment our graph-based bandit defenders with adaptive adjustment of the reward coming from this game to perplex the attackers and gradually prevail over them. We found that our graph bandits can outperform other Multi-Armed Bandit algorithms when simulated annealing-based scheduling is incorporated to adjust the balance between exploration and exploitation.
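The online set coverage formulation above generalizes classic maximum coverage. As a point of reference (an assumed offline subroutine, not the dissertation's online algorithm; the item-to-user mapping is invented for illustration), the standard greedy (1 − 1/e)-approximation for picking k covering items looks like:

```python
def greedy_max_coverage(item_covers, k):
    """Greedy (1 - 1/e)-approximation to maximum coverage: at each step,
    pick the item that covers the most not-yet-covered users."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(item_covers, key=lambda i: len(item_covers[i] - covered))
        chosen.append(best)
        covered |= item_covers[best]
    return chosen, covered

# Toy data (hypothetical): which user types each item satisfies.
item_covers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
chosen, covered = greedy_max_coverage(item_covers, 2)
```

The online variants discussed above must achieve a similar effect while the sets themselves change over time and are only revealed through bandit feedback.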

    Result Diversification in Search and Recommendation: A Survey

    Diversifying returned results is an important research topic in retrieval systems: it serves both the varied interests of customers and equal market exposure for providers. Diversity-aware research has attracted growing attention in recent years, accompanied by a proliferation of literature on methods to promote diversity in search and recommendation. However, diversity-aware studies in retrieval systems lack a systematic organization and are rather fragmented. In this survey, we are the first to propose a unified taxonomy for classifying the metrics and approaches of diversification in both search and recommendation, two of the most extensively researched fields of retrieval systems. We begin the survey with a brief discussion of why diversity is important in retrieval systems, followed by a summary of the various diversity concerns in search and recommendation, highlighting their relationship and differences. In the survey's main body, we present a unified taxonomy of diversification metrics and approaches in retrieval systems, from both the search and recommendation perspectives. In the later part of the survey, we discuss open research questions of diversity-aware research in search and recommendation in an effort to inspire future innovations and encourage the implementation of diversity in real-world systems.
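One common family of diversification metrics covered by surveys in this area measures the average pairwise dissimilarity of a result list (intra-list diversity). A minimal sketch, assuming items are represented as tag sets and using Jaccard distance (both are assumptions of this example, not something the survey prescribes):

```python
from itertools import combinations

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| over two tag sets; 0 for two empty sets."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def intra_list_diversity(rec_list, dissim):
    """Average pairwise dissimilarity of a recommendation list;
    higher values mean a more diverse list."""
    pairs = list(combinations(rec_list, 2))
    if not pairs:
        return 0.0
    return sum(dissim(a, b) for a, b in pairs) / len(pairs)

# Toy recommendation list: each item described by a set of genre tags.
recs = [{"rock"}, {"rock", "pop"}, {"jazz"}]
ild = intra_list_diversity(recs, jaccard_distance)
```

Relevance-aware variants trade such a diversity term off against a relevance score, which is one axis along which surveyed approaches differ.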

    What Should We Teach in Information Retrieval?
