
    Distributed Diffusion-based LMS for Node-Specific Parameter Estimation over Adaptive Networks

    A distributed adaptive algorithm is proposed to solve a node-specific parameter estimation problem where nodes are interested in estimating parameters of local interest and parameters of global interest to the whole network. To address the different node-specific parameter estimation problems, this novel algorithm relies on a diffusion-based implementation of different Least Mean Squares (LMS) algorithms, each associated with the estimation of a specific set of local or global parameters. Although all the different LMS algorithms are coupled, the diffusion-based implementation of each LMS algorithm is undertaken exclusively by the nodes of the network interested in a specific set of local or global parameters. To illustrate the effectiveness of the proposed technique, we provide simulation results in the context of cooperative spectrum sensing in cognitive radio networks.
    Comment: 5 pages, 2 figures, Published in Proc. IEEE ICASSP, Florence, Italy, May 2014
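    To make the diffusion mechanism concrete, here is a minimal Python sketch of the adapt-then-combine diffusion LMS building block that this algorithm extends. The fully connected network, uniform combination weights, step size, and dimensions are all illustrative assumptions; the paper's algorithm additionally partitions the parameter vector into node-specific local and global blocks, which is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, ITERS = 10, 4, 2000          # nodes, parameter length, iterations (assumed)
        w_true = rng.standard_normal(M)    # global parameter all nodes estimate
        mu = 0.01                          # LMS step size (assumed)
        A = np.full((N, N), 1.0 / N)       # uniform combination weights (fully connected)

        w = np.zeros((N, M))               # each row: one node's current estimate
        for _ in range(ITERS):
            psi = np.empty_like(w)
            for k in range(N):
                # adapt: one LMS step on node k's streaming data
                u = rng.standard_normal(M)                    # regressor at node k
                d = u @ w_true + 0.1 * rng.standard_normal()  # noisy measurement
                psi[k] = w[k] + mu * u * (d - u @ w[k])
            # combine: average the neighbors' intermediate estimates
            w = A @ psi

        print(np.linalg.norm(w.mean(axis=0) - w_true))  # small after convergence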

    Distributed Diffusion-Based LMS for Node-Specific Adaptive Parameter Estimation

    A distributed adaptive algorithm is proposed to solve a node-specific parameter estimation problem where nodes are interested in estimating parameters of local interest, parameters of common interest to a subset of nodes, and parameters of global interest to the whole network. To address the different node-specific parameter estimation problems, this novel algorithm relies on a diffusion-based implementation of different Least Mean Squares (LMS) algorithms, each associated with the estimation of a specific set of local, common, or global parameters. Although the different LMS algorithms are coupled, the implementation of each one is undertaken only by the nodes of the network interested in a specific set of local, common, or global parameters. The study of convergence in the mean sense reveals that the proposed algorithm is asymptotically unbiased. Moreover, a spatial-temporal energy conservation relation is provided to evaluate the steady-state performance at each node in the mean-square sense. Finally, the theoretical results and the effectiveness of the proposed technique are validated through computer simulations in the context of cooperative spectrum sensing in cognitive radio networks.
    Comment: 13 pages, 6 figures
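    For reference, the classical adapt-then-combine diffusion LMS recursion that these node-specific variants generalize can be written as follows; the notation is the standard one rather than quoted from the paper, with node $k$ first adapting on its local data $\{d_k(i), u_k(i)\}$ and then combining the intermediate estimates of its neighborhood $\mathcal{N}_k$ with weights $a_{lk}$:

        \psi_k(i) = w_k(i-1) + \mu_k \, u_k^{\mathsf{T}}(i) \left[ d_k(i) - u_k(i) \, w_k(i-1) \right]
        w_k(i) = \sum_{l \in \mathcal{N}_k} a_{lk} \, \psi_l(i)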

    Proximal Multitask Learning over Networks with Sparsity-inducing Coregularization

    In this work, we consider multitask learning problems where clusters of nodes are interested in estimating their own parameter vector. Cooperation among clusters is beneficial when the optimal models of adjacent clusters share a significant number of similar entries. We propose a fully distributed algorithm for solving this problem. The approach relies on minimizing a global mean-square error criterion regularized by non-differentiable terms to promote cooperation among neighboring clusters. A general diffusion forward-backward splitting strategy is introduced. Then, it is specialized to the case of sparsity-promoting regularizers. A closed-form expression for the proximal operator of a weighted sum of $\ell_1$-norms is derived to achieve higher efficiency. We also provide conditions on the step sizes that ensure convergence of the algorithm in the mean and mean-square error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
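    The computational primitive behind forward-backward splitting is the proximal operator; for a single weighted $\ell_1$-norm it reduces to elementwise soft-thresholding. Below is a minimal Python sketch of that building block together with one proximal gradient step on a least-squares cost. The function name, data, and step sizes are illustrative assumptions; the paper's coregularizer couples the estimates of neighboring clusters, so its closed-form proximal operator is more involved than this.

        import numpy as np

        def prox_weighted_l1(x, gamma, lam):
            # prox of f(x) = sum_i lam_i * |x_i| with step gamma:
            # elementwise soft-thresholding
            return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

        # one forward-backward (proximal gradient) step on 0.5 * ||U w - d||^2
        rng = np.random.default_rng(0)
        U, d = rng.standard_normal((20, 5)), rng.standard_normal(20)
        w, gamma, lam = np.zeros(5), 0.01, 0.1 * np.ones(5)
        grad = U.T @ (U @ w - d)                            # forward (gradient) step
        w = prox_weighted_l1(w - gamma * grad, gamma, lam)  # backward (proximal) step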

    A Multitask Diffusion Strategy with Optimized Inter-Cluster Cooperation

    We consider a multitask estimation problem where nodes in a network are divided into several connected clusters, with each cluster performing a least-mean-squares estimation of a different random parameter vector. Inspired by the adapt-then-combine diffusion strategy, we propose a multitask diffusion strategy whose mean stability can be ensured whenever individual nodes are stable in the mean, regardless of the inter-cluster cooperation weights. In addition, the proposed strategy achieves asymptotically unbiased estimation when the parameters have the same mean. We also develop an inter-cluster cooperation weight selection scheme that allows each node in the network to locally optimize its inter-cluster cooperation weights. Numerical results demonstrate that our approach leads to a lower average steady-state network mean-square deviation than weights selected by various other commonly adopted methods in the literature.
    Comment: 30 pages, 8 figures, submitted to IEEE Journal of Selected Topics in Signal Processing
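    As a rough illustration of the kind of recursion involved, the Python sketch below implements one common multitask diffusion form: adapt-then-combine with an inter-cluster regularization term that pulls neighboring clusters' estimates together. The topology, step sizes, and uniform inter-cluster weights are assumptions for illustration; in particular, the paper's local weight optimization scheme is not reproduced here, and the inter-cluster weights are simply left uniform.

        import numpy as np

        rng = np.random.default_rng(1)
        N, M = 6, 3
        cluster = np.array([0, 0, 0, 1, 1, 1])   # two clusters of three nodes (assumed)
        w_opt = {0: rng.standard_normal(M), 1: rng.standard_normal(M)}  # per-cluster targets
        mu, eta = 0.02, 0.1                       # step size, inter-cluster strength (assumed)

        w = np.zeros((N, M))
        for _ in range(3000):
            psi = np.empty_like(w)
            for k in range(N):
                u = rng.standard_normal(M)
                d = u @ w_opt[cluster[k]] + 0.05 * rng.standard_normal()
                # adapt, nudged toward nodes in other clusters (inter-cluster term)
                others = [l for l in range(N) if cluster[l] != cluster[k]]
                reg = sum(w[l] - w[k] for l in others) / len(others)
                psi[k] = w[k] + mu * u * (d - u @ w[k]) + mu * eta * reg
            for k in range(N):
                # combine only within the node's own cluster (intra-cluster weights)
                mates = np.flatnonzero(cluster == cluster[k])
                w[k] = psi[mates].mean(axis=0)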