6 research outputs found

    A Multitask Diffusion Strategy with Optimized Inter-Cluster Cooperation

    Full text link
    We consider a multitask estimation problem where nodes in a network are divided into several connected clusters, with each cluster performing a least-mean-squares estimation of a different random parameter vector. Inspired by the adapt-then-combine diffusion strategy, we propose a multitask diffusion strategy whose mean stability can be ensured whenever the individual nodes are stable in the mean, regardless of the inter-cluster cooperation weights. In addition, the proposed strategy achieves asymptotically unbiased estimation when the parameters have the same mean. We also develop an inter-cluster cooperation weight selection scheme that allows each node in the network to locally optimize its inter-cluster cooperation weights. Numerical results demonstrate that our approach leads to a lower average steady-state network mean-square deviation compared with weights selected by various other methods commonly adopted in the literature.
    Comment: 30 pages, 8 figures, submitted to IEEE Journal of Selected Topics in Signal Processing
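
    To make the adapt-then-combine structure referenced above concrete, the following is a minimal sketch of a generic single-task ATC diffusion LMS iteration (not the paper's multitask variant; the array names, the combination matrix A, and the step-size vector mu are illustrative assumptions):

```python
import numpy as np

def atc_diffusion_lms_step(W, D, U, A, mu):
    """One generic adapt-then-combine (ATC) diffusion LMS iteration.

    W  : (N, M) current estimates, one row per node
    D  : (N,)   scalar measurements d_k at each node
    U  : (N, M) regression vectors u_k at each node
    A  : (N, N) left-stochastic combination matrix; A[l, k] is the weight
                node k assigns to neighbor l's intermediate estimate
    mu : (N,)   per-node step-sizes
    """
    # Adaptation step: each node runs a local LMS update on its own data.
    errors = D - np.sum(U * W, axis=1)          # e_k = d_k - u_k^T w_k
    Psi = W + (mu * errors)[:, None] * U        # psi_k = w_k + mu_k * e_k * u_k

    # Combination step: each node averages its neighbors' intermediate estimates.
    return A.T @ Psi                            # w_k = sum_l A[l, k] * psi_l
```

    In the multitask setting of the paper, the combination weights additionally distinguish intra-cluster from inter-cluster neighbors, which is the part the proposed weight selection scheme optimizes.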

    An Event-based Diffusion LMS Strategy

    Full text link
    We consider a wireless sensor network consisting of cooperative nodes, each of which adapts to streaming data to perform a least-mean-squares estimation and exchanges information with neighboring nodes in order to improve performance. To reduce communication overhead and prolong battery life while preserving the benefits of diffusion cooperation, we propose an energy-efficient diffusion strategy that adopts an event-based communication mechanism, which allows nodes to cooperate with their neighbors only when necessary. We also study the performance of the proposed algorithm and show that its network mean error and mean-square deviation (MSD) are bounded in steady state. Numerical results demonstrate that the proposed method can effectively reduce the network energy consumption without significantly sacrificing steady-state network MSD performance.
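
    As a rough sketch of what an event-based combination step can look like (the trigger rule and names below are assumptions for illustration, not the paper's exact mechanism), each node might broadcast its intermediate estimate only when it has drifted sufficiently from the last transmitted value:

```python
import numpy as np

def event_based_combine(Psi, last_sent, A, threshold):
    """Diffusion combination step with an event-based communication rule
    (illustrative sketch, not the paper's exact trigger condition).

    Psi       : (N, M) intermediate estimates after local adaptation
    last_sent : (N, M) estimates each node most recently transmitted
    A         : (N, N) left-stochastic combination matrix
    threshold : scalar event-triggering threshold
    """
    # A node transmits only if its estimate moved enough since its last broadcast.
    send = np.linalg.norm(Psi - last_sent, axis=1) > threshold
    shared = np.where(send[:, None], Psi, last_sent)   # neighbors reuse stale copies otherwise
    W_next = A.T @ shared                              # standard diffusion combination
    return W_next, shared.copy(), send
```

    Skipped transmissions are what save communication energy, while the combination step itself is unchanged.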

    Proximal Multitask Learning over Networks with Sparsity-inducing Coregularization

    Full text link
    In this work, we consider multitask learning problems where clusters of nodes are interested in estimating their own parameter vector. Cooperation among clusters is beneficial when the optimal models of adjacent clusters share a large number of similar entries. We propose a fully distributed algorithm for solving this problem. The approach relies on minimizing a global mean-square error criterion regularized by non-differentiable terms to promote cooperation among neighboring clusters. A general diffusion forward-backward splitting strategy is introduced and then specialized to the case of sparsity-promoting regularizers. A closed-form expression for the proximal operator of a weighted sum of $\ell_1$-norms is derived to achieve higher efficiency. We also provide conditions on the step-sizes that ensure convergence of the algorithm in the mean and mean-square error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
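
    The closed-form proximal operator derived in the paper targets a weighted sum of $\ell_1$-norms; as a simpler, hedged illustration of the forward-backward (proximal gradient) building block, here is the standard soft-thresholding prox of a single weighted $\ell_1$-norm (variable names are assumptions):

```python
import numpy as np

def prox_weighted_l1(v, lam, c):
    """Proximal operator of f(x) = lam * sum_i c_i * |x_i| (soft thresholding).

    Solves argmin_x 0.5 * ||x - v||^2 + lam * sum_i c_i * |x_i| elementwise.
    """
    return np.sign(v) * np.maximum(np.abs(v) - lam * np.asarray(c), 0.0)

def forward_backward_step(w, grad_w, mu, lam, c):
    """One forward-backward iteration: a gradient step on the smooth
    mean-square-error term followed by the prox of the regularizer."""
    return prox_weighted_l1(w - mu * grad_w, mu * lam, c)
```

    The diffusion variant in the paper distributes this computation across the network, with the regularizer coupling each node's iterate to those of its neighbors.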

    Distributed Learning for Stochastic Generalized Nash Equilibrium Problems

    Full text link
    This work examines a stochastic formulation of the generalized Nash equilibrium problem (GNEP) where agents are subject to randomness in the environment of unknown statistical distribution. We focus on fully distributed online learning by the agents and employ penalized individual cost functions to deal with coupled constraints. Three stochastic gradient strategies are developed with constant step-sizes. We allow the agents to use heterogeneous step-sizes and show that the penalty solution is able to approach the Nash equilibrium within $O(\mu_{\max})$ in a stable manner, for small step-sizes $\mu_{\max}$ and sufficiently large penalty parameters. The operation of the algorithm is illustrated by considering the network Cournot competition problem.
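
    For a flavor of how a penalized stochastic-gradient update might look in the Cournot setting (an illustrative sketch under assumed model names, not one of the paper's three strategies), each agent could adjust its production quantity using a noisy price observation and a penalty on the shared capacity constraint:

```python
import numpy as np

def penalized_sg_step(q, mu, rho, cost, cap, price_intercept, noise_std=0.1):
    """One penalized stochastic-gradient step for an illustrative Cournot game.

    Each agent k descends its penalized cost
        J_k(q) = -q_k * (p - cost_k) + (rho / 2) * max(sum(q) - cap, 0)**2,
    where the market price p = price_intercept - sum(q) is observed with noise.

    q    : (N,) current production quantities
    mu   : (N,) heterogeneous constant step-sizes
    rho  : penalty parameter enforcing the coupled constraint sum(q) <= cap
    cost : (N,) marginal production costs
    """
    price = price_intercept - np.sum(q) + noise_std * np.random.randn()
    violation = max(np.sum(q) - cap, 0.0)

    # Stochastic gradient of each agent's penalized cost w.r.t. its own quantity.
    grad = -(price - q - cost) + rho * violation

    # Constant-step-size update; quantities are kept nonnegative.
    return np.maximum(q - mu * grad, 0.0)
```

    With small step-sizes and a large enough penalty parameter, iterates of schemes of this type hover near the penalized equilibrium, which is the regime the abstract's $O(\mu_{\max})$ statement describes.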

    Multitask Diffusion Adaptation Over Asynchronous Networks

    No full text