A Gang of Adversarial Bandits
We consider running multiple instances of multi-armed bandit (MAB) problems in parallel. A main motivation for this study is online recommendation systems, in which each of N users is associated with a MAB problem and the goal is to exploit users' similarity in order to learn users' preferences over K items more efficiently. We consider the adversarial MAB setting, whereby an adversary is free to choose which user and which loss to present to the learner during the learning process. Users are in a social network and the learner is aided by a-priori knowledge of the strengths of the social links between all pairs of users. It is assumed that if the social link between two users is strong then they tend to share the same action. The regret is measured relative to an arbitrary function which maps users to actions. The smoothness of the function is captured by a resistance-based dispersion measure Ψ. We present two learning algorithms, GABA-I and GABA-II, which exploit the network structure to bias towards functions of low Ψ values. We show that GABA-I has an expected regret bound of O(√(ln(NK/Ψ)ΨKT)) and per-trial time complexity of O(K ln(N)), whilst GABA-II has a weaker O(√(ln(N/Ψ)ln(NK/Ψ)ΨKT)) regret, but a better O(ln(K) ln(N)) per-trial time complexity. We highlight improvements of both algorithms over running independent standard MABs across users.
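As a point of reference for the baseline the abstract compares against, here is a minimal sketch of the standard Exp3 learner for a single adversarial K-armed bandit — the per-user policy one would run independently without network information. This is an illustrative sketch (parameter choices and function names are mine), not the GABA-I/II algorithms themselves:

```python
import math
import random

def exp3(K, T, loss_fn, eta):
    """Standard Exp3 for a single adversarial K-armed bandit.

    loss_fn(t, arm) returns the loss in [0, 1] of the pulled arm at trial t.
    Running one such learner per user, with no sharing, is the independent
    baseline the paper compares against.
    """
    weights = [1.0] * K
    total_loss = 0.0
    for t in range(T):
        z = sum(weights)
        probs = [w / z for w in weights]
        arm = random.choices(range(K), weights=probs)[0]
        loss = loss_fn(t, arm)
        total_loss += loss
        # Importance-weighted loss estimate: only the pulled arm is updated,
        # scaled by the inverse of its selection probability.
        est = loss / probs[arm]
        weights[arm] *= math.exp(-eta * est)
    return total_loss
```

Run against an adversary that always gives arm 0 zero loss, Exp3 concentrates its weight on arm 0 and its cumulative loss grows sublinearly in T.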
Collaborative Learning of Stochastic Bandits over a Social Network
We consider a collaborative online learning paradigm, wherein a group of
agents connected through a social network are engaged in playing a stochastic
multi-armed bandit game. Each time an agent takes an action, the corresponding
reward is instantaneously observed by the agent, as well as its neighbours in
the social network. We perform a regret analysis of various policies in this
collaborative learning setting. A key finding of this paper is that natural
extensions of widely-studied single agent learning policies to the network
setting need not perform well in terms of regret. In particular, we identify a
class of non-altruistic and individually consistent policies, and argue by
deriving regret lower bounds that they are liable to suffer a large regret in
the networked setting. We also show that the learning performance can be
substantially improved if the agents exploit the structure of the network, and
develop a simple learning algorithm based on dominating sets of the network.
Specifically, we first consider a star network, which is a common motif in
hierarchical social networks, and show analytically that the hub agent can be
used as an information sink to expedite learning and improve the overall
regret. We also derive networkwide regret bounds for the algorithm applied to
general networks. We conduct numerical experiments on a variety of networks to
corroborate our analytical results.
Comment: 14 pages, 6 figures
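The side-observation model described above — each reward seen instantaneously by the acting agent and its neighbours — can be sketched with a plain UCB1 learner per agent. This is an illustrative toy under my own simplifications (Bernoulli arms, synchronous rounds), not the paper's dominating-set policy:

```python
import math
import random

def coop_ucb(means, adjacency, horizon, seed=0):
    """Toy network bandit: every agent runs UCB1, but updates its statistics
    with every reward observed by itself *or* a neighbour in the social
    network (the side-observation model from the abstract).

    means      -- true Bernoulli means of the arms (unknown to the agents)
    adjacency  -- 0/1 matrix of the social network
    Returns the fraction of pulls that went to the best arm.
    """
    rng = random.Random(seed)
    n_agents, n_arms = len(adjacency), len(means)
    best = max(range(n_arms), key=means.__getitem__)
    counts = [[0] * n_arms for _ in range(n_agents)]
    sums = [[0.0] * n_arms for _ in range(n_agents)]
    best_pulls = 0
    for t in range(1, horizon + 1):
        for agent in range(n_agents):
            untried = [a for a in range(n_arms) if counts[agent][a] == 0]
            if untried:
                arm = untried[0]  # play each arm once before using the index
            else:
                arm = max(range(n_arms),
                          key=lambda a: sums[agent][a] / counts[agent][a]
                          + math.sqrt(2 * math.log(t) / counts[agent][a]))
            reward = 1.0 if rng.random() < means[arm] else 0.0
            if arm == best:
                best_pulls += 1
            # The reward is observed by the agent and all its neighbours.
            observers = [agent] + [j for j in range(n_agents)
                                   if adjacency[agent][j]]
            for obs in observers:
                counts[obs][arm] += 1
                sums[obs][arm] += reward
    return best_pulls / (horizon * n_agents)
```

On a star network the hub sees every spoke's rewards, which illustrates why the paper treats the hub as an information sink.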
Distributed Consensus Algorithm for Decision-Making in Multi-agent Multi-armed Bandit
We study a structured multi-agent multi-armed bandit (MAMAB) problem in a
dynamic environment. A graph reflects the information-sharing structure among
agents, and the arms' reward distributions are piecewise-stationary with
several unknown change points. The agents face the identical
piecewise-stationary MAB problem. The goal is to develop a decision-making
policy for the agents that minimizes the regret, which is the expected total
loss of not playing the optimal arm at each time step. Our proposed solution,
Restarted Bayesian Online Change Point Detection in Cooperative Upper
Confidence Bound Algorithm (RBO-Coop-UCB), involves an efficient multi-agent
UCB algorithm as its core enhanced with a Bayesian change point detector. We
also develop a simple restart decision cooperation that improves
decision-making. Theoretically, we establish an upper bound on the expected
group regret of RBO-Coop-UCB in terms of K, M, and T, where K is the number of
agents, M is the number of arms, and T is the number of time steps. Numerical
experiments on synthetic and real-world datasets
demonstrate that our proposed method outperforms the state-of-the-art
algorithms.
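The core restart idea — discard stale statistics once a change point is detected — can be sketched as follows, with a crude mean-shift test standing in for the paper's Bayesian online change point detector, and a single agent in place of the cooperating group. All names and thresholds here are illustrative:

```python
import math
import random

def restart_ucb(reward_fn, n_arms, horizon, window=50, threshold=0.3, seed=0):
    """UCB1 on a piecewise-stationary bandit, restarted on detected changes.

    reward_fn(t, arm, rng) returns the reward at time t.
    A detected mean shift on the pulled arm wipes all statistics, so the
    learner re-adapts to the new reward distributions.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    recent = {a: [] for a in range(n_arms)}  # sliding window per arm
    t_since_restart = 0
    chosen = []
    for t in range(horizon):
        t_since_restart += 1
        untried = [a for a in range(n_arms) if counts[a] == 0]
        if untried:
            arm = untried[0]
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t_since_restart) / counts[a]))
        r = reward_fn(t, arm, rng)
        chosen.append(arm)
        counts[arm] += 1
        sums[arm] += r
        recent[arm] = (recent[arm] + [r])[-window:]
        # Crude change test: recent window mean drifts far from long-run mean.
        if counts[arm] >= 2 * window:
            long_mean = sums[arm] / counts[arm]
            win_mean = sum(recent[arm]) / len(recent[arm])
            if abs(win_mean - long_mean) > threshold:
                counts = [0] * n_arms  # restart: forget stale statistics
                sums = [0.0] * n_arms
                recent = {a: [] for a in range(n_arms)}
                t_since_restart = 0
    return chosen
```

After a change point flips which arm is best, the restart lets the learner recover, whereas vanilla UCB1 would keep trusting its outdated estimates.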
Non-stationary Delayed Combinatorial Semi-Bandit with Causally Related Rewards
Sequential decision-making under uncertainty is often associated with long
feedback delays. Such delays degrade the performance of the learning agent in
identifying a subset of arms with the optimal collective reward in the long
run. This problem becomes significantly challenging in a non-stationary
environment with structural dependencies amongst the reward distributions
associated with the arms. Therefore, besides adapting to delays and
environmental changes, learning the causal relations alleviates the adverse
effects of feedback delay on the decision-making process. We formalize the
described setting as a non-stationary and delayed combinatorial semi-bandit
problem with causally related rewards. We model the causal relations by a
directed graph in a stationary structural equation model. The agent maximizes
the long-term average payoff, defined as a linear function of the base arms'
rewards. We develop a policy that learns the structural dependencies from
delayed feedback and utilizes that to optimize the decision-making while
adapting to drifts. We prove a regret bound for the performance of the proposed
algorithm. Besides, we evaluate our method via numerical analysis using
synthetic and real-world datasets to detect the regions that contribute the
most to the spread of Covid-19 in Italy.Comment: 33 pages, 9 figures. arXiv admin note: text overlap with
arXiv:2212.1292
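The stationary structural equation model over a directed graph can be illustrated with a toy linear SEM in which each base arm's reward is its own signal plus weighted contributions from its parents in the graph. Weights, dimensions, and names here are made up for illustration, not taken from the paper:

```python
import numpy as np

def sem_rewards(adj, base, n_steps, noise=0.05, seed=0):
    """Toy stationary linear SEM for causally related arm rewards.

    adj[i][j] is the edge weight i -> j of the (acyclic) causal graph.
    Each round solves r = adj.T @ r + base + eps, i.e. a child's reward
    accumulates weighted rewards of its parents plus its own base signal.
    """
    rng = np.random.default_rng(seed)
    n = len(base)
    # r = A^T r + b + eps  =>  r = (I - A^T)^{-1} (b + eps);
    # invertibility holds because the graph is a DAG.
    inv = np.linalg.inv(np.eye(n) - adj.T)
    rewards = np.empty((n_steps, n))
    for t in range(n_steps):
        eps = rng.normal(0.0, noise, size=n)
        rewards[t] = inv @ (np.asarray(base) + eps)
    return rewards
```

With noise set to zero the rewards are exactly the propagated base signals, which makes the causal amplification along chains of the graph easy to inspect.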