Scalable Multiagent Coordination with Distributed Online Open Loop Planning
We propose distributed online open loop planning (DOOLP), a general framework
for online multiagent coordination and decision making under uncertainty. DOOLP
is based on online heuristic search in the space defined by a generative model
of the domain dynamics, which is exploited by agents to simulate and evaluate
the consequences of their potential choices.
We also propose distributed online Thompson sampling (DOTS) as an effective
instantiation of the DOOLP framework. DOTS models sequences of agent choices by
concatenating a number of multiarmed bandits for each agent and uses Thompson
sampling for dealing with action value uncertainty. The Bayesian approach
underlying Thompson sampling makes it possible to effectively model and estimate
uncertainty about (a) an agent's own action values and (b) other agents' behavior. This
approach yields a principled and statistically sound solution to the
exploration-exploitation dilemma when exploring large search spaces with
limited resources.
We implemented DOTS in a smart factory case study with positive empirical
results. We observed effective, robust and scalable planning and coordination
capabilities even when searching only a fraction of the potential search space.
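The Thompson-sampling core that DOTS applies per decision step can be illustrated with a minimal single-agent sketch. This assumes Bernoulli rewards with conjugate Beta posteriors; the class and variable names are illustrative, not from the paper:

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson sampling for one multiarmed bandit."""
    def __init__(self, n_arms):
        # Beta(1, 1) priors: one (alpha, beta) posterior per arm
        self.alpha = [1.0] * n_arms
        self.beta = [1.0] * n_arms

    def select_arm(self):
        # Sample an action value from each arm's posterior, play the best sample
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # A Bernoulli reward in {0, 1} updates the conjugate Beta posterior
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward

# A choice sequence can be modeled by concatenating one bandit per decision
# step, roughly as DOTS does (simplified here to one agent, horizon 3)
horizon, n_actions = 3, 4
sequence = [ThompsonBandit(n_actions) for _ in range(horizon)]
plan = [bandit.select_arm() for bandit in sequence]
```

In the full framework each simulated rollout through the generative model would call `update` on the step bandits with the evaluated outcome, so posterior uncertainty shrinks exactly where search effort has been spent.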
Heterogeneous Stochastic Interactions for Multiple Agents in a Multi-armed Bandit Problem
We define and analyze a multi-agent multi-armed bandit problem in which
decision-making agents can observe the choices and rewards of their neighbors.
Neighbors are defined by a network graph with heterogeneous and stochastic
interconnections. These interactions are determined by the sociability of each
agent, which corresponds to the probability that the agent observes its
neighbors. We design an algorithm for each agent to maximize its own expected
cumulative reward and prove performance bounds that depend on the sociability
of the agents and the network structure. We use the bounds to predict the rank
ordering of agents according to their performance, and verify the accuracy of
these predictions analytically and computationally.
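The observation model can be sketched as a standard UCB1 learner that, with probability equal to its sociability, also folds its neighbors' pulls and rewards into its own statistics. This is a simplified illustration under Bernoulli rewards; the function name and parameters are ours, not the paper's:

```python
import math
import random

def ucb_with_social_observations(means, adjacency, sociability, horizon, seed=0):
    """Run UCB1 for each agent; with probability sociability[i], agent i also
    observes the (arm, reward) pairs of its network neighbors this round."""
    rng = random.Random(seed)
    n_agents, n_arms = len(adjacency), len(means)
    counts = [[0] * n_arms for _ in range(n_agents)]
    totals = [[0.0] * n_arms for _ in range(n_agents)]

    for t in range(1, horizon + 1):
        pulls = []
        for i in range(n_agents):
            if min(counts[i]) == 0:                 # play each arm once first
                arm = counts[i].index(0)
            else:                                    # UCB1 index
                arm = max(range(n_arms),
                          key=lambda k: totals[i][k] / counts[i][k]
                          + math.sqrt(2 * math.log(t) / counts[i][k]))
            reward = 1.0 if rng.random() < means[arm] else 0.0
            pulls.append((arm, reward))
        for i in range(n_agents):
            arm, reward = pulls[i]
            counts[i][arm] += 1
            totals[i][arm] += reward
            if rng.random() < sociability[i]:        # stochastic interaction
                for j in range(n_agents):
                    if adjacency[i][j]:
                        a_j, r_j = pulls[j]
                        counts[i][a_j] += 1
                        totals[i][a_j] += r_j
    return counts
```

A more sociable agent accumulates observations faster, which is the mechanism behind the sociability-dependent regret bounds described in the abstract.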
Decentralized Cooperative Stochastic Bandits
We study a decentralized cooperative stochastic multi-armed bandit problem
with arms on a network of agents. In our model, the reward distribution
of each arm is the same for each agent and rewards are drawn independently
across agents and time steps. In each round, each agent chooses an arm to play
and subsequently sends a message to her neighbors. The goal is to minimize the
overall regret of the entire network. We design a fully decentralized algorithm
that uses an accelerated consensus procedure to compute (delayed) estimates of
the average of rewards obtained by all the agents for each arm, and then uses
an upper confidence bound (UCB) algorithm that accounts for the delay and error
of the estimates. We analyze the regret of our algorithm and also provide a
lower bound. The regret is bounded by the optimal centralized regret plus a
natural and simple term depending on the spectral gap of the communication
matrix. Our algorithm is simpler to analyze than those proposed in prior work
and it achieves better regret bounds, while requiring less information about
the underlying network. It also performs better empirically.
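The consensus step at the heart of such an algorithm can be sketched in a few lines: each agent repeatedly averages its running reward estimate with its neighbors' via a doubly stochastic communication matrix, converging to the network-wide mean at a rate set by the spectral gap. The matrix and values below are illustrative only:

```python
def consensus_step(values, W):
    """One consensus round: each agent replaces its value with a weighted
    average of its neighbors' values, using a doubly stochastic matrix W."""
    n = len(values)
    return [sum(W[i][j] * values[j] for j in range(n)) for i in range(n)]

# A 3-agent fully connected example with a symmetric, doubly stochastic W
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
x = [3.0, 0.0, 6.0]   # per-agent running reward estimates for one arm
for _ in range(50):
    x = consensus_step(x, W)
# all agents converge to the network-wide average, 3.0; the rate is governed
# by the spectral gap of W (here 1 - 0.25 = 0.75)
```

In the full algorithm these (delayed, slightly inaccurate) averaged estimates feed a UCB index whose confidence term is widened to account for the consensus delay and error.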