Distributed Learning with Infinitely Many Hypotheses
We consider a distributed learning setup where a network of agents
sequentially access realizations of a set of random variables with unknown
distributions. The network objective is to find a parametrized distribution
that best describes their joint observations in the sense of the
Kullback-Leibler divergence. Going beyond recent efforts in the literature, we
analyze the case of countably many hypotheses and the case of a continuum of
hypotheses. We provide non-asymptotic bounds for the concentration rate of the
agents' beliefs around the correct hypothesis in terms of the number of agents,
the network parameters, and the learning abilities of the agents. Additionally,
we provide a novel motivation for a general set of distributed Non-Bayesian
update rules as instances of the distributed stochastic mirror descent
algorithm. Comment: Submitted to CDC201
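For a finite hypothesis set, the non-Bayesian update the abstract refers to can be illustrated by the standard log-linear rule: geometrically average neighbors' beliefs, then apply a local likelihood correction. This is a minimal sketch; the mixing matrix, Bernoulli likelihood model, and all parameter choices below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def belief_update(beliefs, A, log_likelihoods):
    """One round of a distributed non-Bayesian belief update.

    beliefs:          (n_agents, n_hyp) array, each row sums to 1.
    A:                (n_agents, n_agents) doubly stochastic mixing matrix
                      (assumed here to have strictly positive entries).
    log_likelihoods:  (n_agents, n_hyp) log-likelihood of each agent's
                      fresh observation under each hypothesis.
    """
    # Geometric averaging of neighbors' beliefs (done in the log domain),
    # followed by a local Bayesian-style likelihood correction.
    log_b = A @ np.log(beliefs) + log_likelihoods
    log_b -= log_b.max(axis=1, keepdims=True)  # numerical stabilization
    b = np.exp(log_b)
    return b / b.sum(axis=1, keepdims=True)
```

Iterating this update on i.i.d. observations drawn under the true parameter drives every agent's belief to concentrate on the hypothesis closest, in KL divergence, to the true distribution.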
Improved Convergence Rates for Distributed Resource Allocation
In this paper, we develop a class of decentralized algorithms for solving a
convex resource allocation problem in a network of agents, where the agent
objectives are decoupled while the resource constraints are coupled. The agents
communicate over a connected undirected graph, and they want to collaboratively
determine a solution to the overall network problem, while each agent only
communicates with its neighbors. We first study the connection between the
decentralized resource allocation problem and the decentralized consensus
optimization problem. Then, using a class of algorithms for solving consensus
optimization problems, we propose a novel class of decentralized schemes for
solving resource allocation problems in a distributed manner. Specifically, we
first propose an algorithm for solving the resource allocation problem with a
convergence rate guarantee when the agents' objective functions are
generally convex (possibly nondifferentiable) and per-agent local convex
constraints are allowed. We then propose a gradient-based algorithm for solving
the resource allocation problem when per agent local constraints are absent and
show that such a scheme can achieve a geometric rate when the objective
functions are strongly convex and have Lipschitz continuous gradients. We also
provide a scalability/network-dependency analysis. Based on these two
algorithms, we further propose a gradient projection-based algorithm
which can handle smooth objectives and simple constraints more efficiently.
Numerical experiments demonstrate the viability and performance of all the
proposed algorithms.
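The coupled-constraint structure can be illustrated with a classical weighted-gradient sketch (not the paper's specific algorithms): with symmetric weights, the update x_i ← x_i − α Σ_j w_ij (∇f_i(x_i) − ∇f_j(x_j)) preserves the resource budget Σ_i x_i at every iteration, and its fixed points equalize the local gradients, which is the optimality condition under the coupling constraint.

```python
import numpy as np

def resource_allocation_step(x, grads, W, alpha):
    """One iteration of a weighted-gradient resource-allocation scheme.

    x:     (n,) current allocations, with sum(x) equal to the total budget.
    grads: (n,) local gradients grad f_i(x_i).
    W:     (n, n) symmetric nonnegative weight matrix of the graph.

    Because W is symmetric, the per-agent corrections sum to zero, so the
    coupling constraint sum(x) == budget is preserved exactly.
    """
    # x_i^+ = x_i - alpha * sum_j W_ij * (grads_i - grads_j)
    return x - alpha * (W.sum(axis=1) * grads - W @ grads)
```

For quadratic costs f_i(x) = (x − c_i)^2 / 2, the fixed point equalizes x_i − c_i across agents, so the budget is split so that all marginal costs agree.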
Constrained Consensus
We present distributed algorithms that can be used by multiple agents to
align their estimates with a particular value over a network with time-varying
connectivity. Our framework is general in that this value can represent a
consensus value among multiple agents or an optimal solution of an optimization
problem, where the global objective function is a combination of local agent
objective functions. Our main focus is on constrained problems where the
estimate of each agent is restricted to lie in a different constraint set.
To highlight the effects of constraints, we first consider a constrained
consensus problem and present a distributed ``projected consensus algorithm''
in which agents combine their local averaging operation with projection on
their individual constraint sets. This algorithm can be viewed as a version of
an alternating projection method with weights that are varying over time and
across agents. We establish convergence and convergence rate results for the
projected consensus algorithm. We next study a constrained optimization problem
for optimizing the sum of local objective functions of the agents subject to
the intersection of their local constraint sets. We present a distributed
``projected subgradient algorithm'' which involves each agent performing a
local averaging operation, taking a subgradient step to minimize its own
objective function, and projecting on its constraint set. We show that, with an
appropriately selected stepsize rule, the agent estimates generated by this
algorithm converge to the same optimal solution for the cases when the weights
are constant and equal, and when the weights are time-varying but all agents
have the same constraint set. Comment: 35 pages. Included additional results, removed two subsections, added
references, fixed typo
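The two updates described above can be sketched directly; this is a minimal illustration, and the uniform weights, interval constraint sets, and stepsizes used below are placeholder assumptions.

```python
import numpy as np

def projected_consensus_step(X, A, projections):
    """Projected consensus: x_i^+ = P_{X_i}( sum_j a_ij x_j )."""
    mixed = A @ X                       # local weighted averaging
    return np.array([P(v) for P, v in zip(projections, mixed)])

def projected_subgradient_step(X, A, projections, subgrads, alpha):
    """Projected subgradient: x_i^+ = P_{X_i}( sum_j a_ij x_j - alpha g_i )."""
    mixed = A @ X - alpha * subgrads    # averaging plus local subgradient step
    return np.array([P(v) for P, v in zip(projections, mixed)])
```

With intervals as the constraint sets, each projection is a simple clip; iterating the first update drives the agents to a common point in the intersection of their sets, while the second, run with a diminishing stepsize and a common constraint set, drives them to a minimizer of the sum of local objectives.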