Contextual-Bandit Based Personalized Recommendation with Time-Varying User Interests
A contextual bandit problem is studied in a highly non-stationary
environment, which is ubiquitous in various recommender systems due to the
time-varying interests of users. Two models with disjoint and hybrid payoffs
are considered to characterize the phenomenon that users' preferences towards
different items vary differently over time. In the disjoint payoff model, the
reward of playing an arm is determined by an arm-specific preference vector,
which is piecewise-stationary with asynchronous and distinct changes across
different arms. An efficient learning algorithm that is adaptive to abrupt
reward changes is proposed and theoretical regret analysis is provided to show
that a sublinear scaling of regret in the time length is achieved. The
algorithm is further extended to a more general setting with hybrid payoffs
where the reward of playing an arm is determined by both an arm-specific
preference vector and a joint coefficient vector shared by all arms. Empirical
experiments are conducted on real-world datasets to verify the advantages of
the proposed learning algorithms against baseline ones in both settings.
Comment: Accepted by AAAI 2
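In the disjoint payoff model above, each arm has its own piecewise-stationary preference vector, so per-arm estimates must adapt after abrupt, arm-specific changes. The sketch below shows one simple way to get that behavior — a sliding-window variant of LinUCB that fits each arm's vector from only its most recent observations. This is a hypothetical, simplified stand-in, not the paper's actual change-adaptive algorithm; the class name, window size, and parameters are illustrative assumptions.

```python
import numpy as np

class SWLinUCB:
    """Sliding-window LinUCB sketch: each arm keeps only its most recent
    observations, so per-arm preference estimates recover quickly after an
    abrupt, arm-specific reward change. A simplified stand-in for the
    paper's algorithm, not its exact method."""

    def __init__(self, n_arms, dim, alpha=1.0, window=200, lam=1.0):
        self.alpha, self.window, self.lam, self.dim = alpha, window, lam, dim
        self.history = [[] for _ in range(n_arms)]  # per-arm (context, reward)

    def _theta_A(self, arm):
        # Ridge regression on the last `window` samples of this arm only.
        A = self.lam * np.eye(self.dim)
        b = np.zeros(self.dim)
        for x, r in self.history[arm][-self.window:]:
            A += np.outer(x, x)
            b += r * x
        return np.linalg.solve(A, b), A

    def select(self, contexts):
        # contexts: one feature vector per arm (disjoint-payoff model).
        scores = []
        for arm, x in enumerate(contexts):
            theta, A = self._theta_A(arm)
            bonus = self.alpha * np.sqrt(x @ np.linalg.solve(A, x))
            scores.append(x @ theta + bonus)
        return int(np.argmax(scores))

    def update(self, arm, x, r):
        self.history[arm].append((np.asarray(x, dtype=float), float(r)))
```

Dropping samples outside the window is the crudest form of adaptivity; change-detection-based resets achieve the same goal with less estimation variance during stationary stretches.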
A Gang of Adversarial Bandits
We consider running multiple instances of multi-armed bandit (MAB) problems in parallel. A main motivation for this study is online recommendation systems, in which each of N users is associated with a MAB problem and the goal is to exploit users' similarity in order to learn users' preferences over K items more efficiently. We consider the adversarial MAB setting, whereby an adversary is free to choose which user and which loss to present to the learner during the learning process. Users are in a social network and the learner is aided by a priori knowledge of the strengths of the social links between all pairs of users. It is assumed that if the social link between two users is strong then they tend to share the same action. The regret is measured relative to an arbitrary function which maps users to actions. The smoothness of the function is captured by a resistance-based dispersion measure Ψ. We present two learning algorithms, GABA-I and GABA-II, which exploit the network structure to bias towards functions of low Ψ values. We show that GABA-I has an expected regret bound of O(√(ln(NK/Ψ)ΨKT)) and per-trial time complexity of O(K ln(N)), whilst GABA-II has a weaker O(√(ln(N/Ψ)ln(NK/Ψ)ΨKT)) regret bound, but a better O(ln(K) ln(N)) per-trial time complexity. We highlight improvements of both algorithms over running independent standard MABs across users.
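The classical baseline in this adversarial setting is EXP3, which the GABA algorithms improve on by sharing information across linked users. For concreteness, here is a minimal sketch of plain EXP3 — the graph-based sharing that distinguishes GABA-I and GABA-II is not reproduced, and the helper `loss_fn` interface is an assumption for illustration.

```python
import math
import random

def exp3(n_arms, n_rounds, loss_fn, gamma=0.1, seed=0):
    """Standard EXP3 for adversarial bandits: exponential weights over
    arms mixed with uniform exploration. Only the played arm's loss is
    observed, so losses are importance-weighted before the update."""
    rng = random.Random(seed)
    weights = [1.0] * n_arms
    total_loss = 0.0
    for t in range(n_rounds):
        w_sum = sum(weights)
        probs = [(1 - gamma) * w / w_sum + gamma / n_arms for w in weights]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        loss = loss_fn(t, arm)            # adversary reveals played arm's loss only
        total_loss += loss
        est = loss / probs[arm]           # importance-weighted loss estimate
        weights[arm] *= math.exp(-gamma * est / n_arms)
    return total_loss
```

Run against a fixed adversary where one arm always has zero loss, the learner's cumulative loss stays far below playing uniformly at random.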
Graph Neural Bandits
Contextual bandits algorithms aim to choose the optimal arm with the highest
reward out of a set of candidates based on the contextual information. Various
bandit algorithms have been applied to real-world applications due to their
ability of tackling the exploitation-exploration dilemma. Motivated by online
recommendation scenarios, in this paper, we propose a framework named Graph
Neural Bandits (GNB) to leverage the collaborative nature among users empowered
by graph neural networks (GNNs). Instead of estimating rigid user clusters as
in existing works, we model the "fine-grained" collaborative effects through
estimated user graphs in terms of exploitation and exploration respectively.
Then, to refine the recommendation strategy, we utilize separate GNN-based
models on estimated user graphs for exploitation and adaptive exploration.
Theoretical analysis and experimental results on multiple real data sets in
comparison with state-of-the-art baselines are provided to demonstrate the
effectiveness of our proposed framework.
Comment: Accepted to SIGKDD 202
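The collaborative effect GNB captures with trained GNNs can be previewed in its simplest form: smoothing each user's per-arm reward estimates with those of linked users on an estimated user graph. The one-step mean-neighbor aggregation below is a toy illustration of that idea only; the function and its interface are not from the paper.

```python
import numpy as np

def propagate_estimates(reward_est, adj):
    """One step of mean-neighbor aggregation over a user graph: each
    user's per-arm reward estimates are averaged with those of linked
    users (self-loops included in `adj`). A toy stand-in for the
    GNN-based aggregation the GNB framework actually learns."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1, keepdims=True)        # row-normalize the adjacency
    return (adj / np.maximum(deg, 1e-12)) @ np.asarray(reward_est, dtype=float)
```

A learned GNN replaces this fixed averaging with trainable, multi-layer aggregation, and GNB further uses separate models for exploitation and exploration.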
The art of clustering bandits
Multi-armed bandit problems are receiving a great deal of attention because they adequately formalize the exploration-exploitation trade-offs arising in several industrially relevant applications, such as online advertisement and, more generally, recommendation systems. In many cases, however, these applications have a strong social component, whose integration in the bandit algorithms could lead to a dramatic performance increase. For instance, we may want to serve content to a group of users by taking advantage of an underlying network of social relationships among them. The purpose of this thesis is to introduce novel and principled algorithmic approaches to the solution of such networked bandit problems. Starting from a global (Laplacian-based) strategy which allocates a bandit algorithm to each network node (user), and allows it to "share" signals (contexts and payoffs) with the neighboring nodes, our goal is to derive and experimentally test more scalable approaches based on different ways of clustering the graph nodes. More importantly, we shall investigate the case when the graph structure is not given ahead of time, and has to be inferred based on past user behavior. A general difficulty arising in such practical scenarios is that data sequences are typically nonstationary, implying that traditional statistical inference methods should be used cautiously, possibly replacing them with more robust nonstochastic (e.g., game-theoretic) inference methods.
In this thesis, we will first introduce the centralized clustering bandits. Then, we propose the corresponding solution in the decentralized scenario. After that, we explain the generic collaborative clustering bandits. Finally, we extend and showcase the state-of-the-art clustering bandits that we developed in the quantification problem.
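A core primitive shared by the clustering-bandit algorithms the thesis builds on is grouping users whose estimated preference vectors are close, then sharing bandit statistics within each group. The greedy sketch below illustrates only that grouping step, in the spirit of CLUB-style methods; the function and its threshold are illustrative assumptions, and the actual algorithms maintain and split clusters adaptively as estimates sharpen.

```python
import numpy as np

def cluster_users(thetas, threshold=0.5):
    """Greedily group users whose estimated preference vectors lie within
    `threshold` of a cluster representative, so statistics can be pooled
    within each group. Illustrative sketch of the clustering primitive,
    not the thesis's adaptive algorithms."""
    clusters = []
    for user, theta in enumerate(thetas):
        for cluster in clusters:
            rep = thetas[cluster[0]]          # first member as representative
            if np.linalg.norm(np.asarray(theta) - np.asarray(rep)) < threshold:
                cluster.append(user)
                break
        else:
            clusters.append([user])
    return clusters
```

Pooling observations within a cluster effectively multiplies each user's sample size by the cluster size, which is where the efficiency gain over independent per-user bandits comes from.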
Result Diversification in Search and Recommendation: A Survey
Diversifying return results is an important research topic in retrieval
systems in order to satisfy both the various interests of customers and the
equal market exposure of providers. There has been growing attention on
diversity-aware research during recent years, accompanied by a proliferation of
literature on methods to promote diversity in search and recommendation.
However, diversity-aware studies in retrieval systems lack a systematic
organization and are rather fragmented. In this survey, we are the first to
propose a unified taxonomy for classifying the metrics and approaches of
diversification in both search and recommendation, which are two of the most
extensively researched fields of retrieval systems. We begin the survey with a
brief discussion of why diversity is important in retrieval systems, followed
by a summary of the various diversity concerns in search and recommendation,
highlighting their relationship and differences. For the survey's main body, we
present a unified taxonomy of diversification metrics and approaches in
retrieval systems, from both the search and recommendation perspectives. In the
later part of the survey, we discuss the open research questions of
diversity-aware research in search and recommendation in an effort to inspire
future innovations and encourage the implementation of diversity in real-world
systems.
Comment: 20 page
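Among the diversification approaches such surveys cover, one of the oldest and most widely used is Maximal Marginal Relevance (MMR), which greedily selects items by trading off relevance to the query against maximum similarity to items already chosen. A minimal sketch, with the function name and parameters as illustrative assumptions:

```python
def mmr(relevance, item_sims, k, lam=0.7):
    """Maximal Marginal Relevance: greedily pick k items, scoring each
    candidate as lam * relevance minus (1 - lam) * its maximum
    similarity to already-selected items. `relevance` is a per-item
    score; `item_sims` an item-item similarity matrix."""
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((item_sims[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lam` near 1 the ranking is purely relevance-driven; lowering it penalizes near-duplicates, so a slightly less relevant but dissimilar item can displace a redundant one.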