
    Distributed Stochastic Optimization over Time-Varying Noisy Network

    This paper is concerned with the distributed stochastic multi-agent optimization problem over a class of time-varying networks with slowly decreasing communication noise effects. The problem is considered in the composite optimization setting, which is more general for noisy network optimization. Notably, existing methods for noisy network optimization are Euclidean-projection based. We present two related classes of non-Euclidean methods and investigate their convergence behavior. One is a distributed stochastic composite mirror descent type method (DSCMD-N), which provides a more general algorithmic framework than earlier works in this literature. As a counterpart, we also consider a composite dual averaging type method (DSCDA-N) for noisy network optimization. Main error bounds for DSCMD-N and DSCDA-N are obtained, and the trade-off among stepsizes, noise decay rates, and convergence rates of the algorithms is analyzed in detail. To the best of our knowledge, this is the first work to analyze and derive convergence rates of optimization algorithms in noisy network optimization. We show that the proposed methods attain the optimal rate of O(1/√T) for nonsmooth convex optimization under appropriate communication noise conditions. Moreover, convergence rates of different orders are comprehensively derived, both in expectation and with high probability.
    Comment: 27 pages
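
    To make the method's flavor concrete, the following is a minimal sketch of one mirror-descent-style iteration over a noisy network, written in Python with the entropy mirror map on the probability simplex. The function name dscmd_n_step, the mixing matrix W, the toy noise model, and the omission of the composite (regularizer) term are all illustrative assumptions, not the paper's exact algorithm.

        import numpy as np

        def dscmd_n_step(X, grads, W, step, noise_scale, rng):
            # X           : (n_agents, d) iterates, one simplex point per agent
            # grads       : (n_agents, d) stochastic subgradients of the local losses
            # W           : (n_agents, n_agents) doubly stochastic mixing matrix
            # step        : stepsize, e.g. proportional to 1/sqrt(t)
            # noise_scale : communication-noise level, decaying over time
            # Noisy consensus: every received iterate is corrupted by link noise.
            noise = noise_scale * rng.standard_normal(X.shape)
            Z = W @ (X + noise)
            Z = np.clip(Z, 1e-12, None)  # stay in the domain of the entropy map
            # Non-Euclidean mirror step: exponentiated gradient, then normalize
            # (the Bregman projection back onto the simplex).
            Y = Z * np.exp(-step * grads)
            return Y / Y.sum(axis=1, keepdims=True)

    Driving this with a stepsize like c/sqrt(t) and a noise level that decays with t mirrors the trade-off the abstract analyzes: faster noise decay permits larger stepsizes and recovers the O(1/√T) rate.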

    Distributed Learning with Infinitely Many Hypotheses

    We consider a distributed learning setup in which a network of agents sequentially accesses realizations of a set of random variables with unknown distributions. The network objective is to find a parametrized distribution that best describes the joint observations in the sense of the Kullback-Leibler divergence. In contrast to recent efforts in the literature, we analyze the case of countably many hypotheses and the case of a continuum of hypotheses. We provide non-asymptotic bounds on the concentration rate of the agents' beliefs around the correct hypothesis in terms of the number of agents, the network parameters, and the learning abilities of the agents. Additionally, we provide a novel motivation for a general class of distributed non-Bayesian update rules as instances of the distributed stochastic mirror descent algorithm.
    Comment: Submitted to CDC201
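
    A standard form of such distributed non-Bayesian update rules in this literature combines geometric averaging of neighbors' beliefs with a local Bayesian update; the abstract's point is that this can be read as distributed stochastic mirror descent with the entropy mirror map. The Python sketch below is illustrative only: the function name, the finite hypothesis grid, and the row-stochastic matrix A are assumptions made to keep the example self-contained.

        import numpy as np

        def non_bayesian_update(beliefs, likelihoods, A):
            # beliefs     : (n_agents, n_hyp) current beliefs over a finite
            #               hypothesis grid (each row sums to 1)
            # likelihoods : (n_agents, n_hyp) likelihood of each agent's new
            #               observation under every hypothesis
            # A           : (n_agents, n_agents) row-stochastic mixing matrix
            # Geometric averaging of neighbors' beliefs (a mirror step taken
            # in log space) ...
            log_mix = A @ np.log(np.clip(beliefs, 1e-300, None))
            # ... followed by a Bayesian update using the local observation.
            unnorm = np.exp(log_mix) * likelihoods
            return unnorm / unnorm.sum(axis=1, keepdims=True)

    Iterating this rule, each agent's belief concentrates around the hypothesis closest in Kullback-Leibler divergence to the true distribution, at a rate governed by the number of agents and the network's mixing properties, which is the kind of non-asymptotic bound the abstract describes.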