QD-Learning: A Collaborative Distributed Strategy for Multi-Agent Reinforcement Learning Through Consensus + Innovations
The paper considers a class of multi-agent Markov decision processes (MDPs),
in which the network agents respond differently (as manifested by the
instantaneous one-stage random costs) to a global controlled state and the
control actions of a remote controller. The paper investigates a distributed
reinforcement learning setup with no prior information on the global state
transition and local agent cost statistics. Specifically, with the agents'
objective consisting of minimizing a network-averaged infinite horizon
discounted cost, the paper proposes a distributed version of Q-learning,
QD-learning, in which the network agents collaborate by means of
local processing and mutual information exchange over a sparse (possibly
stochastic) communication network to achieve the network goal. Under the
assumption that each agent is only aware of its local online cost data and the
inter-agent communication network is \emph{weakly} connected, the proposed
distributed scheme is shown to yield, almost surely (a.s.), the desired value
function and the optimal stationary control policy asymptotically at each
network agent. The analytical techniques developed in the paper to address the
mixed time-scale stochastic dynamics of the \emph{consensus + innovations}
form, which arise as a result of the proposed interactive distributed scheme,
are of independent interest. Comment: Submitted to the IEEE Transactions on Signal Processing, 33 pages.
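The consensus + innovations recursion described above is concrete enough to sketch. Below is a minimal illustrative simulation (not the authors' code): each agent updates the visited (state, action) entry of its Q-table with a consensus term pulling it toward its neighbors' estimates and a local innovation (temporal-difference) term built from its own observed cost, on two time scales. The ring topology, toy MDP, and step-size exponents are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative QD-learning-style simulation: consensus + innovations
# Q-updates at each agent over a sparse (ring) communication graph.
# The environment below is a toy stand-in, unknown to the agents.
rng = np.random.default_rng(0)
N_AGENTS, N_STATES, N_ACTIONS, GAMMA = 4, 5, 3, 0.9

neighbors = {n: [(n - 1) % N_AGENTS, (n + 1) % N_AGENTS]
             for n in range(N_AGENTS)}
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
cost = rng.random((N_AGENTS, N_STATES, N_ACTIONS))  # local one-stage costs

Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))
state = 0
for t in range(1, 100_001):
    alpha = 1.0 / t          # innovation step size
    beta = 1.0 / t ** 0.6    # consensus step size (decays more slowly)
    action = int(rng.integers(N_ACTIONS))             # exploratory policy
    nxt = int(rng.choice(N_STATES, p=P[state, action]))
    Q_new = Q.copy()
    for n in range(N_AGENTS):
        consensus = sum(Q[n, state, action] - Q[l, state, action]
                        for l in neighbors[n])
        innovation = (cost[n, state, action]
                      + GAMMA * Q[n, nxt].min()       # min-cost Bellman term
                      - Q[n, state, action])
        Q_new[n, state, action] += -beta * consensus + alpha * innovation
    Q, state = Q_new, nxt

# Agents' Q-tables should agree asymptotically; print worst disagreement.
print(np.ptp(Q, axis=0).max())
```

With the consensus weight decaying more slowly than the innovation weight, disagreement between agents is driven out faster than the value estimates settle, which is the mixed time-scale behavior the analysis addresses.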
Multi-armed bandit problem with precedence relations
Consider a multi-phase project management problem where the decision maker
needs to deal with two issues: (a) how to allocate resources to projects within
each phase, and (b) when to enter the next phase, so that the total expected
reward is as large as possible. We formulate the problem as a multi-armed
bandit problem with precedence relations. In Chan, Fuh and Hu (2005), a class
of asymptotically optimal arm-pulling strategies is constructed to minimize the
shortfall from perfect information payoff. Here we further explore optimality
properties of the proposed strategies. First, we show that the efficiency
benchmark, which is given by the regret lower bound, reduces to those in Lai
and Robbins (1985), Hu and Wei (1989), and Fuh and Hu (2000). This implies that
the proposed strategy is also optimal under the settings of the aforementioned
papers. Secondly, we establish the super-efficiency of the proposed strategies when
the bad set is empty. Thirdly, we show that they are still optimal with
constant switching cost between arms. In addition, we prove that Wald's
equation holds for Markov chains under a Harris recurrence condition, which is an
important tool in studying the efficiency of the proposed strategies. Comment: Published at http://dx.doi.org/10.1214/074921706000001067 in the IMS
Lecture Notes Monograph Series
(http://www.imstat.org/publications/lecnotes.htm) by the Institute of
Mathematical Statistics (http://www.imstat.org).
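To make the two coupled decisions concrete, here is a deliberately simple hypothetical sketch, not the strategy of Chan, Fuh and Hu (2005): UCB1 indices handle the within-phase allocation (a), and a naive exploration quota m stands in for the phase-entry rule (b).

```python
import numpy as np

# Toy multi-phase bandit: arms of phase k+1 unlock only after every
# arm of phase k has been pulled at least m times. All means, m, and
# the horizon are hypothetical; UCB1 replaces the (more refined)
# asymptotically optimal allocation of Chan, Fuh and Hu (2005).
rng = np.random.default_rng(1)
phase_means = [[0.3, 0.5], [0.4, 0.7, 0.6]]   # per-phase mean rewards
m, horizon = 30, 5_000

counts = [np.zeros(len(p)) for p in phase_means]
sums = [np.zeros(len(p)) for p in phase_means]
phase, total = 0, 0.0
for t in range(1, horizon + 1):
    # (b) precedence rule: advance once the phase is explored enough.
    if phase + 1 < len(phase_means) and counts[phase].min() >= m:
        phase += 1
    # (a) allocation within the active phase via UCB1 indices.
    if counts[phase].min() == 0:
        arm = int(counts[phase].argmin())
    else:
        ucb = (sums[phase] / counts[phase]
               + np.sqrt(2 * np.log(t) / counts[phase]))
        arm = int(ucb.argmax())
    reward = rng.normal(phase_means[phase][arm], 1.0)
    counts[phase][arm] += 1
    sums[phase][arm] += reward
    total += reward

print("pulls per phase:", [int(c.sum()) for c in counts],
      "total reward:", round(total, 1))
```

A fixed quota like m is a crude stand-in for the entry decision; the paper's efficiency benchmark (the regret lower bound) characterizes what a genuinely optimal entry rule must attain.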
Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling
A new amortized variance-reduced gradient (AVRG) algorithm was developed in
\cite{ying2017convergence}, which has a constant storage requirement in
comparison to SAGA and balanced gradient computations in comparison to SVRG.
One key advantage of the AVRG strategy is its amenability to decentralized
implementations. In this work, we show how AVRG can be extended to the network
case where multiple learning agents are assumed to be connected by a graph
topology. In this scenario, each agent observes data that is spatially
distributed and all agents are only allowed to communicate with direct
neighbors. Moreover, the amount of data observed by the individual agents may
differ drastically. For such situations, the balanced gradient computation
property of AVRG becomes a real advantage in reducing idle time caused by
unbalanced local data storage requirements, which is characteristic of other
reduced-variance gradient algorithms. The resulting diffusion-AVRG algorithm is
shown to converge linearly to the exact solution and is much more
memory-efficient than alternative algorithms. In addition, we propose a
mini-batch strategy to balance the communication and computation efficiency for
diffusion-AVRG. When a proper batch size is employed, it is observed in
simulations that diffusion-AVRG is more computationally efficient than exact
diffusion or EXTRA while maintaining almost the same communication efficiency. Comment: 23 pages, 12 figures, submitted for publication.
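As a rough sketch under stated assumptions (ring topology, toy least-squares data, hypothetical step size), the following illustrates the diffusion-AVRG pattern: a local AVRG gradient step at each agent followed by a combination step with neighbors. The AVRG estimate corrects the current stochastic gradient with an epoch-start anchor gradient plus a running average accumulated during the previous epoch; the simple cycling rule for shorter datasets below is a simplification for illustration, not the paper's scheduling scheme for unbalanced data.

```python
import numpy as np

# Illustrative diffusion-AVRG-style loop on a toy least-squares problem
# with drastically unbalanced local data sizes. Topology, sizes, and
# the step size MU are assumptions, not the paper's setup.
rng = np.random.default_rng(2)
N_AGENTS, DIM, MU, EPOCHS = 4, 3, 0.05, 300
sizes = [40, 5, 25, 10]                 # unbalanced local datasets

w_star = rng.normal(size=DIM)
data = []
for n in sizes:
    X = rng.normal(size=(n, DIM))
    data.append((X, X @ w_star + 0.01 * rng.normal(size=n)))

def grad(X, y, w, i):
    # Gradient of the i-th sample's squared loss at w.
    return (X[i] @ w - y[i]) * X[i]

# Doubly stochastic combination matrix for a ring graph.
A = np.zeros((N_AGENTS, N_AGENTS))
for k in range(N_AGENTS):
    A[k, k] = 0.5
    A[k, (k - 1) % N_AGENTS] += 0.25
    A[k, (k + 1) % N_AGENTS] += 0.25

T = max(sizes)                          # synchronous inner steps per epoch
w = np.zeros((N_AGENTS, DIM))
anchor = w.copy()                       # epoch-start points
g_bar = np.zeros((N_AGENTS, DIM))       # amortized average gradients

for epoch in range(EPOCHS):
    perms = [rng.permutation(n) for n in sizes]    # random reshuffling
    g_next = np.zeros_like(g_bar)
    for t in range(T):
        psi = np.empty_like(w)
        for k, (X, y) in enumerate(data):
            i = perms[k][t % sizes[k]]             # shorter datasets cycle
            # AVRG-style estimate: current stochastic gradient corrected
            # by the anchor gradient and last epoch's running average.
            g_hat = grad(X, y, w[k], i) - grad(X, y, anchor[k], i) + g_bar[k]
            psi[k] = w[k] - MU * g_hat             # adapt: local step
            g_next[k] += grad(X, y, w[k], i) / T   # amortized accumulation
        w = A @ psi                                # combine with neighbors
    anchor, g_bar = w.copy(), g_next

print("max deviation from w_star:", float(np.abs(w - w_star).max()))
```

Because every agent performs the same number of gradient evaluations per inner step regardless of its local data size, no agent sits idle waiting for a full-gradient pass, which is the balanced-computation property the abstract highlights.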