
    Sparse Distributed Learning Based on Diffusion Adaptation

    This article proposes diffusion LMS strategies for distributed estimation over adaptive networks that are able to exploit sparsity in the underlying system model. The approach relies on convex regularization, common in compressive sensing, to enhance the detection of sparsity via a diffusive process over the network. The resulting algorithms endow networks with learning abilities, allowing them to learn the sparse structure from the incoming data in real time and to track variations in the sparsity of the model. We provide a convergence and mean-square performance analysis of the proposed method and show under what conditions it outperforms the unregularized diffusion version. We also show how to adaptively select the regularization parameter. Simulation results illustrate the advantage of the proposed filters for sparse data recovery. Comment: to appear in IEEE Trans. on Signal Processing, 201
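    As a rough illustration of this class of algorithms, the sketch below implements a zero-attracting diffusion LMS of the adapt-then-combine type, where the convex (here ℓ1) regularizer enters the adapt step as a sign-based shrinkage term. The data layout, the function name, and the fixed regularization weight rho are illustrative assumptions; the paper's adaptive selection of the regularization parameter is not reproduced.

```python
import numpy as np

def sparse_diffusion_lms(U, d, A, mu=0.01, rho=1e-3):
    """Zero-attracting diffusion LMS, adapt-then-combine form (sketch).

    U : regressors, shape (N, T, M) -- N nodes, T time steps, M taps
    d : measurements, shape (N, T)
    A : (N, N) left-stochastic combination matrix; A[l, k] weighs node l's
        intermediate estimate in node k's combine step
    """
    N, T, M = U.shape
    W = np.zeros((N, M))               # one estimate per node
    for t in range(T):
        Psi = np.empty_like(W)
        for k in range(N):
            u = U[k, t]
            e = d[k, t] - u @ W[k]     # local prediction error
            # LMS step plus a sign-based shrinkage term: the subgradient
            # of an l1 penalty that attracts small taps toward zero
            Psi[k] = W[k] + mu * e * u - mu * rho * np.sign(W[k])
        W = A.T @ Psi                  # combine neighbors' intermediates
    return W
```

    The combine step mixes the neighbors' intermediate estimates through the left-stochastic matrix A, which is what diffuses the sparsity information across the network as the abstract describes.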

    A Multitask Diffusion Strategy with Optimized Inter-Cluster Cooperation

    We consider a multitask estimation problem where nodes in a network are divided into several connected clusters, each performing a least-mean-squares estimation of a different random parameter vector. Inspired by the adapt-then-combine diffusion strategy, we propose a multitask diffusion strategy whose mean stability can be ensured whenever individual nodes are stable in the mean, regardless of the inter-cluster cooperation weights. In addition, the proposed strategy achieves asymptotically unbiased estimation when the parameters have the same mean. We also develop an inter-cluster cooperation weight-selection scheme that allows each node in the network to locally optimize its inter-cluster cooperation weights. Numerical results demonstrate that our approach leads to a lower average steady-state network mean-square deviation than weights selected by various other commonly adopted methods in the literature. Comment: 30 pages, 8 figures, submitted to IEEE Journal of Selected Topics in Signal Processing
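    A minimal sketch of one adapt-then-combine iteration in this multitask setting follows, with the inter-cluster cooperation expressed as a smoothing term in the adapt step and the combine step restricted to same-cluster neighbors. The weight matrices and the scaling eta are assumptions for illustration; the paper's optimized selection of the inter-cluster weights is not reproduced here.

```python
import numpy as np

def multitask_atc_step(W, U_t, d_t, A_intra, P_inter, mu=0.01, eta=0.1):
    """One adapt-then-combine iteration of a multitask diffusion sketch.

    W       : (N, M) current node estimates
    U_t     : (N, M) regressor at each node for this time instant
    d_t     : (N,) scalar measurements
    A_intra : (N, N) combination weights, nonzero only within a cluster
    P_inter : (N, N) inter-cluster cooperation weights
    """
    N, M = W.shape
    Psi = np.empty_like(W)
    for k in range(N):
        e = d_t[k] - U_t[k] @ W[k]
        # LMS adapt step plus a smoothing term that pulls node k toward
        # the estimates of its neighbors in *other* clusters
        smooth = sum(P_inter[k, l] * (W[l] - W[k]) for l in range(N))
        Psi[k] = W[k] + mu * e * U_t[k] + mu * eta * smooth
    return A_intra.T @ Psi             # combine same-cluster neighbors only
```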

    Distributed Coupled Multi-Agent Stochastic Optimization

    This work develops effective distributed strategies for the solution of constrained multi-agent stochastic optimization problems with coupled parameters across the agents. In this formulation, each agent is influenced by only a subset of the entries of a global parameter vector or model, and is subject to convex constraints that are only known locally. Problems of this type arise in several applications, most notably in disease propagation models, minimum-cost flow problems, distributed control formulations, and distributed power system monitoring. This work focuses on stochastic settings, where a stochastic risk function is associated with each agent and the objective is to seek the minimizer of the aggregate sum of all risks subject to a set of constraints. Agents are not aware of the statistical distribution of the data and, therefore, can only rely on stochastic approximations in their learning strategies. We derive an effective distributed learning strategy that is able to track drifts in the underlying parameter model. A detailed performance and stability analysis is carried out showing that the resulting coupled diffusion strategy converges at a linear rate to an O(μ)-neighborhood of the true penalized optimizer.
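    A highly simplified sketch of the coupled structure is given below: each agent takes a projected stochastic-gradient step on its own block of the global parameter, and entries shared between agents are then reconciled by averaging. The dictionary layout, the pairwise sharing list, and the function names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def coupled_step(blocks, shares, mu=0.01):
    """One iteration of a coupled projected-stochastic-gradient sketch.

    blocks : {agent_id: {'w': np.ndarray sub-vector of the global model,
                         'grad': stochastic-gradient callable,
                         'proj': projection onto the local constraint set}}
    shares : list of (agent_i, pos_i, agent_j, pos_j) tuples marking
             entries of the global model that two agents have in common
    """
    # Local projected stochastic-gradient step on each agent's own block
    for b in blocks.values():
        b['w'] = b['proj'](b['w'] - mu * b['grad'](b['w']))
    # Reconcile coupled entries by averaging across the sharing agents
    # (in a sketch like this, averaging can step slightly outside a
    # local constraint set; the paper's strategy handles the coupling
    # within the diffusion recursion itself)
    for ai, pi, aj, pj in shares:
        avg = 0.5 * (blocks[ai]['w'][pi] + blocks[aj]['w'][pj])
        blocks[ai]['w'][pi] = avg
        blocks[aj]['w'][pj] = avg
    return blocks
```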

    Multitask Diffusion Adaptation over Networks

    Adaptive networks are suitable for decentralized inference tasks, e.g., to monitor complex natural phenomena. Recent research has intensively studied distributed optimization problems in the case where the nodes have to estimate a single optimum parameter vector collaboratively. However, there are many important applications that are multitask-oriented in the sense that there are multiple optimum parameter vectors to be inferred simultaneously, in a collaborative manner, over the area covered by the network. In this paper, we employ diffusion strategies to develop distributed algorithms that address multitask problems by minimizing an appropriate mean-square error criterion with ℓ2-regularization. The stability and convergence of the algorithm in the mean and in the mean-square sense are analyzed. Simulations are conducted to verify the theoretical findings, and to illustrate how the distributed strategy can be used in several useful applications related to spectral sensing, target localization, and hyperspectral data unmixing. Comment: 29 pages, 11 figures, submitted for publication
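    A plausible form of the regularized criterion at node k, in notation assumed here for illustration (η a regularization strength, ρkl nonnegative co-regularization weights over the neighborhood N_k), is:

```latex
% Assumed notation: d_k(i) and u_{k,i} are node k's data at time i;
% eta is a regularization strength, rho_{kl} co-regularization weights
J_k(w_k) \;=\; \mathbb{E}\,\big|\, d_k(i) - \mathbf{u}_{k,i}\, w_k \,\big|^2
\;+\; \eta \sum_{l \in \mathcal{N}_k \setminus \{k\}} \rho_{kl}\, \lVert w_k - w_l \rVert^2
```

    Minimizing the first term fits each node's local data, while the squared-ℓ2 coupling term promotes smoothness between the tasks estimated by neighboring nodes.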

    Adjustable Dynamic Range for PAPR Reduction Schemes in Large-Scale MIMO-OFDM Systems

    In a multiple-input multiple-output (MIMO) communication system, the power that the output antenna amplifiers can deliver must be limited. Their signal is a combination of many independent channels, so the demanded amplitude can peak at many times the average value. Orthogonal frequency division multiplexing (OFDM) produces high signal peaks because many subcarrier components are added by the inverse discrete Fourier transform at the base station; when these peaks drive the amplifiers into their nonlinear region, out-of-band spectral regrowth results. If simple clipping of the input signal is used, there will be in-band distortion in the transmitted signals and the bit error rate will increase substantially. This work presents a novel technique that reduces the peak-to-average power ratio (PAPR). It combines two main stages: a variable clipping level and an Adaptive Optimizer that takes advantage of the channel state information sent from all users in the cell. Simulation results show that the proposed method achieves better overall system performance than conventional peak-reduction systems in terms of the symbol error rate. As a result, the required linear range of the power amplifiers can be reduced, yielding a considerable cost saving.
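    To make the PAPR and clipping mechanics concrete, here is a small self-contained sketch that measures the PAPR of a random OFDM symbol and applies plain amplitude clipping; it illustrates the baseline distortion trade-off the abstract refers to, not the paper's variable-clipping-plus-optimizer scheme. The function names and the 6 dB clipping level are assumptions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def hard_clip(x, clip_db=6.0):
    """Amplitude-clip x at clip_db above its RMS level, preserving phase.
    Hard clipping lowers PAPR but introduces exactly the in-band
    distortion and spectral regrowth discussed in the abstract."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    limit = rms * 10 ** (clip_db / 20)
    mag = np.minimum(np.abs(x), limit)
    return mag * np.exp(1j * np.angle(x))

# A random QPSK OFDM symbol: 1024 subcarriers through an inverse FFT
rng = np.random.default_rng(0)
sym = np.exp(1j * (np.pi / 2) * rng.integers(4, size=1024))
x = np.fft.ifft(sym)
print(f"PAPR before: {papr_db(x):.1f} dB, "
      f"after clipping: {papr_db(hard_clip(x)):.1f} dB")
```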

    Diffusion Adaptation Strategies for Distributed Optimization and Learning over Networks

    We propose an adaptive diffusion mechanism to optimize a global cost function in a distributed manner over a network of nodes. The cost function is assumed to consist of a collection of individual components. Diffusion adaptation allows the nodes to cooperate and diffuse information in real time; it also helps alleviate the effects of stochastic gradient noise and measurement noise through a continuous learning process. We analyze the mean-square-error performance of the algorithm in some detail, including its transient and steady-state behavior. We also apply the diffusion algorithm to two problems: distributed estimation with sparse parameters and distributed localization. Compared to well-studied incremental methods, diffusion methods do not require the use of a cyclic path over the nodes and are robust to node and link failure. Diffusion methods also endow networks with adaptation abilities that enable the individual nodes to continue learning even when the cost function changes with time. Examples involving such dynamic cost functions with moving targets are common in the context of biological networks. Comment: 34 pages, 6 figures, to appear in IEEE Transactions on Signal Processing, 201
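    The adapt-then-combine (ATC) recursion underlying this line of work can be summarized as follows, with the hatted gradient a stochastic approximation of node k's local cost gradient, μk a step size, and a_lk nonnegative combination weights over the neighborhood N_k of node k:

```latex
% Adapt: each node moves against a stochastic approximation of its
% local gradient; Combine: convex mixing over the neighborhood N_k
\psi_{k,i} \;=\; w_{k,i-1} \;-\; \mu_k\, \widehat{\nabla J_k}\big(w_{k,i-1}\big)
w_{k,i}   \;=\; \sum_{l \in \mathcal{N}_k} a_{lk}\, \psi_{l,i}
```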