Learning Contextual Bandits in a Non-stationary Environment
Multi-armed bandit algorithms have become a reference solution for handling
the explore/exploit dilemma in recommender systems, and many other important
real-world problems, such as display advertisement. However, such algorithms
usually assume a stationary reward distribution, which hardly holds in practice,
as users' preferences are dynamic. This inevitably leads to consistently
suboptimal recommendation performance. In this paper, we consider the situation
where the underlying distribution of reward remains unchanged over (possibly
short) epochs and shifts at unknown time instants. We accordingly propose a
contextual bandit algorithm that detects possible changes of the environment
from its reward-estimation confidence and updates its arm-selection strategy
in response. A rigorous upper regret bound analysis of the proposed algorithm
demonstrates its learning effectiveness in such a non-trivial environment.
Extensive empirical evaluations on both synthetic and real-world datasets for
recommendation confirm its practical utility in a changing environment.
Comment: 10 pages, 13 figures; to appear in ACM Special Interest Group on
Information Retrieval (SIGIR) 201
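The idea of detecting environment changes from reward-estimation confidence can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: a LinUCB-style linear estimator flags an "error" whenever an observed reward falls outside its confidence band, and resets itself when errors become too frequent within a sliding window. The class name, window size, and threshold are all assumptions for illustration.

```python
import numpy as np

class ChangeDetectingLinUCB:
    """Hypothetical sketch: LinUCB with a confidence-based change test.
    Names and thresholds are illustrative, not the paper's algorithm."""

    def __init__(self, dim, alpha=1.0, window=50, error_threshold=0.5):
        self.dim = dim
        self.alpha = alpha                 # confidence-width scaling
        self.window = window               # sliding window for the change test
        self.error_threshold = error_threshold
        self.reset()

    def reset(self):
        self.A = np.eye(self.dim)          # ridge-regression Gram matrix
        self.b = np.zeros(self.dim)        # reward-weighted context sum
        self.recent_errors = []

    def select(self, contexts):
        # Pick the arm maximizing estimated reward plus a confidence bonus.
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
                  for x in contexts]
        return int(np.argmax(scores))

    def update(self, x, reward):
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        width = self.alpha * np.sqrt(x @ A_inv @ x)
        # Flag an "error" when the observed reward leaves the confidence band.
        self.recent_errors.append(abs(reward - x @ theta) > width)
        self.recent_errors = self.recent_errors[-self.window:]
        if (len(self.recent_errors) == self.window
                and np.mean(self.recent_errors) > self.error_threshold):
            self.reset()                   # suspected environment change
        else:
            self.A += np.outer(x, x)
            self.b += reward * x
```

A frequent-error window signals that the current linear model no longer explains observed rewards, which is one simple way to turn estimation confidence into a change detector.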
Contextual-Bandit Based Personalized Recommendation with Time-Varying User Interests
A contextual bandit problem is studied in a highly non-stationary
environment, which is ubiquitous in various recommender systems due to the
time-varying interests of users. Two models with disjoint and hybrid payoffs
are considered to characterize the phenomenon that users' preferences towards
different items vary differently over time. In the disjoint payoff model, the
reward of playing an arm is determined by an arm-specific preference vector,
which is piecewise-stationary with asynchronous and distinct changes across
different arms. An efficient learning algorithm that is adaptive to abrupt
reward changes is proposed, and a theoretical regret analysis shows that the
regret scales sublinearly in the time horizon. The
algorithm is further extended to a more general setting with hybrid payoffs
where the reward of playing an arm is determined by both an arm-specific
preference vector and a joint coefficient vector shared by all arms. Empirical
experiments are conducted on real-world datasets to verify the advantages of
the proposed learning algorithms against baseline ones in both settings.
Comment: Accepted by AAAI 2
Non-stationary Delayed Combinatorial Semi-Bandit with Causally Related Rewards
Sequential decision-making under uncertainty is often associated with long
feedback delays. Such delays degrade the performance of the learning agent in
identifying a subset of arms with the optimal collective reward in the long
run. This problem becomes significantly challenging in a non-stationary
environment with structural dependencies amongst the reward distributions
associated with the arms. Therefore, in addition to adapting to delays and
environmental changes, learning the causal relations helps alleviate the
adverse effects of feedback delay on decision-making. We formalize the
described setting as a non-stationary and delayed combinatorial semi-bandit
problem with causally related rewards. We model the causal relations by a
directed graph in a stationary structural equation model. The agent maximizes
the long-term average payoff, defined as a linear function of the base arms'
rewards. We develop a policy that learns the structural dependencies from
delayed feedback and uses them to optimize decision-making while
adapting to drifts. We prove a regret bound for the performance of the proposed
algorithm. In addition, we evaluate our method via numerical analysis on
synthetic and real-world datasets to detect the regions that contribute the
most to the spread of COVID-19 in Italy.
Comment: 33 pages, 9 figures. arXiv admin note: text overlap with
arXiv:2212.1292
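The causally related rewards described above can be illustrated with a small linear structural equation model over a directed graph: each arm's observed reward is its own instantaneous reward plus a weighted sum of its parents' observed rewards, and the collective payoff of a chosen subset is linear in the base arms' rewards. The adjacency matrix and reward values below are made-up numbers for illustration, not the paper's model or data.

```python
import numpy as np

# Hypothetical 3-arm DAG: B[i, j] is the causal weight of arm i on arm j.
B = np.array([[0.0, 0.5, 0.0],    # arm 0 influences arm 1
              [0.0, 0.0, 0.3],    # arm 1 influences arm 2
              [0.0, 0.0, 0.0]])   # arm 2 is a sink

z = np.array([1.0, 0.5, 0.2])     # instantaneous (base) rewards

# Linear SEM: observed rewards satisfy r = B.T @ r + z. Because the graph
# is a DAG, (I - B.T) is invertible and r is well defined.
r = np.linalg.solve(np.eye(3) - B.T, z)

# The payoff of a chosen subset of base arms (a "super-arm") is a linear
# function of the observed rewards, here simply their sum.
S = [0, 1]                        # example super-arm
payoff = r[S].sum()
```

Here arm 0's reward propagates to arm 1 (r[1] = 0.5 * 1.0 + 0.5 = 1.0) and on to arm 2, so a policy that has learned B can anticipate these spillovers when choosing which subset of arms to play, even before delayed feedback arrives.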