
    Diffusion Adaptation Strategies for Distributed Estimation over Gaussian Markov Random Fields

    The aim of this paper is to propose diffusion strategies for distributed estimation over adaptive networks, assuming the presence of spatially correlated measurements distributed according to a Gaussian Markov random field (GMRF) model. The proposed methods incorporate prior information about the statistical dependency among observations while processing data in real time and in a fully decentralized manner. A detailed mean-square analysis is carried out to prove stability and evaluate the steady-state performance of the proposed strategies. Finally, we also illustrate how the proposed techniques can easily be extended to incorporate thresholding operators for sparsity recovery applications. Numerical results show the potential advantages of using such techniques for distributed learning in adaptive networks deployed over GMRFs.
    Comment: Submitted to IEEE Transactions on Signal Processing. arXiv admin note: text overlap with arXiv:1206.309
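    A generic adapt-then-combine (ATC) diffusion LMS recursion, the backbone that such strategies extend, can be sketched as follows. This is a minimal illustration only, assuming a ring topology, uniform combination weights, and a toy data model; it does not reproduce the paper's GMRF-aware construction.

```python
import numpy as np

# Minimal adapt-then-combine (ATC) diffusion LMS sketch for a network of
# agents estimating a common parameter vector w_true. The topology, step
# size mu, and combination matrix A are illustrative assumptions.

rng = np.random.default_rng(0)
N, M, T = 10, 5, 2000          # agents, parameter dimension, iterations
w_true = rng.standard_normal(M)

# Doubly stochastic combination matrix for a simple ring topology (assumed).
A = np.eye(N) * 0.5
for k in range(N):
    A[k, (k + 1) % N] += 0.25
    A[k, (k - 1) % N] += 0.25

mu = 0.01
W = np.zeros((N, M))            # each row is agent k's current estimate

for t in range(T):
    # Adaptation step: each agent runs one LMS update on its own data.
    psi = np.empty_like(W)
    for k in range(N):
        u = rng.standard_normal(M)                     # regression vector
        d = u @ w_true + 0.1 * rng.standard_normal()   # noisy measurement
        psi[k] = W[k] + mu * (d - u @ W[k]) * u
    # Combination step: each agent averages its neighbors' intermediates.
    W = A @ psi

print("mean-square deviation:", np.mean(np.sum((W - w_true) ** 2, axis=1)))
```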

    Sparse Distributed Learning Based on Diffusion Adaptation

    This article proposes diffusion LMS strategies for distributed estimation over adaptive networks that are able to exploit sparsity in the underlying system model. The approach relies on convex regularization, common in compressive sensing, to enhance the detection of sparsity via a diffusive process over the network. The resulting algorithms endow networks with learning abilities, allowing them to learn the sparse structure from the incoming data in real time and to track variations in the sparsity of the model. We provide convergence and mean-square performance analysis of the proposed method and show under what conditions it outperforms the unregularized diffusion version. We also show how to adaptively select the regularization parameter. Simulation results illustrate the advantage of the proposed filters for sparse data recovery.
    Comment: to appear in IEEE Trans. on Signal Processing, 201
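    One common way to realize the convex regularization described here is to add an l1 proximal (soft-thresholding) correction to each ATC update. The sketch below is a hedged illustration: the placement of the thresholding step and the parameters mu and rho are assumptions, and the paper's adaptive rule for selecting the regularization parameter is not reproduced.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (element-wise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sparse_atc_step(W, A, data, mu=0.01, rho=1e-3):
    """One adapt-then-combine iteration with an l1 proximal correction.

    W    : (N, M) current estimates, one row per agent
    A    : (N, N) combination matrix (rows sum to 1)
    data : list of (u, d) pairs, one regressor/measurement per agent
    """
    psi = np.empty_like(W)
    for k, (u, d) in enumerate(data):
        grad = (d - u @ W[k]) * u                    # instantaneous LMS gradient
        psi[k] = soft_threshold(W[k] + mu * grad, mu * rho)  # shrink toward sparsity
    return A @ psi                                   # neighborhood combination
```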

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. It has long been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring into play the primal and the dual problems is, however, a more recent idea that has generated many important contributions in recent years. These novel developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization, with an emphasis on sparsity issues. In this paper, we aim to present the principles of primal-dual approaches while giving an overview of the numerical methods that have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms both for solving large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
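    As a concrete instance of the surveyed family, the sketch below applies one widely used primal-dual scheme, the Chambolle-Pock algorithm, to a LASSO problem min_x 0.5*||Kx - b||^2 + lam*||x||_1. The problem instance, step sizes, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def chambolle_pock_lasso(K, b, lam, n_iter=500):
    """Chambolle-Pock iterations for min_x 0.5*||Kx - b||^2 + lam*||x||_1."""
    # Step sizes must satisfy tau * sigma * ||K||^2 <= 1 for convergence.
    L = np.linalg.norm(K, 2)
    tau = sigma = 1.0 / L
    x = np.zeros(K.shape[1])
    x_bar = x.copy()
    p = np.zeros(K.shape[0])                   # dual variable
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of 0.5*||. - b||^2.
        p = (p + sigma * (K @ x_bar - b)) / (1.0 + sigma)
        # Primal step: prox of lam*||.||_1 is soft-thresholding.
        x_new = soft_threshold(x - tau * (K.T @ p), tau * lam)
        # Over-relaxation with theta = 1.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```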

    Robust Linear Regression Analysis - A Greedy Approach

    The task of robust linear estimation in the presence of outliers is of particular importance in signal processing, statistics, and machine learning. Although the problem was posed a few decades ago and solved using methods that are nowadays considered classical, it has recently attracted renewed attention in the context of sparse modeling, where several notable contributions have been made. In the present manuscript, a new approach is considered in the framework of greedy algorithms. The noise is split into two components: (a) the bounded inlier noise and (b) the outliers, which are explicitly modeled by employing sparsity arguments. Based on this scheme, a novel efficient algorithm, the Greedy Algorithm for Robust Denoising (GARD), is derived. GARD alternates between a least-squares optimization criterion and an Orthogonal Matching Pursuit (OMP) selection step that identifies the outliers. The case where only outliers are present is studied separately, where bounds on the Restricted Isometry Property guarantee that the recovery of the signal via GARD is exact. Moreover, theoretical results concerning convergence, as well as the derivation of error bounds in the case of additional bounded noise, are discussed. Finally, we provide extensive simulations, which demonstrate the comparative advantages of the new technique.
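    The alternation described in the abstract can be sketched as follows: a least-squares fit on an augmented matrix, then an OMP-style greedy step that flags the sample with the largest residual as an outlier. This is an illustrative reconstruction from the abstract; the stopping tolerance eps, the cap on the number of outliers, and the variable names are assumptions.

```python
import numpy as np

def gard(X, y, eps=1e-3, max_outliers=None):
    """Greedy robust regression sketch: model y = X @ theta + sparse outliers."""
    n, m = X.shape
    if max_outliers is None:
        max_outliers = n - m - 1
    support = []                            # sample indices flagged as outliers
    B = X.copy()                            # [X | selected identity columns]
    sol, *_ = np.linalg.lstsq(B, y, rcond=None)
    r = y - B @ sol
    while np.linalg.norm(r) > eps and len(support) < max_outliers:
        # Greedy (OMP-style) step: largest residual entry is the next outlier.
        j = int(np.argmax(np.abs(r)))
        support.append(j)
        e_j = np.zeros((n, 1))
        e_j[j] = 1.0
        B = np.hstack([B, e_j])
        # Least-squares step on the augmented matrix; residual at j becomes 0.
        sol, *_ = np.linalg.lstsq(B, y, rcond=None)
        r = y - B @ sol
    theta = sol[:m]                         # regression coefficients
    return theta, support
```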