
    Application of Compressive Sensing Techniques in Distributed Sensor Networks: A Survey

    In this survey paper, our goal is to discuss recent advances in compressive sensing (CS) based solutions in wireless sensor networks (WSNs), including the main ongoing/recent research efforts, challenges, and research trends in this area. In WSNs, CS-based techniques are well motivated not only by the sparsity prior observed in different forms but also by the requirement of efficient in-network processing in terms of transmit power and communication bandwidth, even with nonsparse signals. In order to apply CS efficiently in a variety of WSN applications, several factors beyond the standard CS framework have to be considered. We start the discussion with a brief introduction to the theory of CS and then describe the motivational factors behind the potential use of CS in WSN applications. We then identify three main areas along which the standard CS framework is extended so that CS can be efficiently applied to solve a variety of problems specific to WSNs. In particular, we emphasize the significance of extending the CS framework to (i) take communication constraints into account while designing projection matrices and reconstruction algorithms for signal reconstruction in centralized as well as decentralized settings, (ii) solve a variety of inference problems, such as detection, classification, and parameter estimation, with compressed data without signal reconstruction, and (iii) take practical communication aspects such as measurement quantization, physical-layer secrecy constraints, and imperfect channel conditions into account. Finally, open research issues and challenges are discussed in order to provide perspectives for future research directions.
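
    As background for the standard CS framework that the survey builds on, the following is a minimal NumPy sketch of the measurement model y = Phi x + w with a sparse x, recovered here by plain iterative soft thresholding (ISTA). The dimensions, sparsity level, regularization weight, and step size are illustrative assumptions, not values taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 100, 10                               # signal length, measurements, nonzeros (illustrative)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)       # random projection matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = Phi @ x_true + 0.01 * rng.standard_normal(m)

    # ISTA for the LASSO: min_x 0.5 * ||y - Phi x||^2 + lam * ||x||_1
    lam = 0.05
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(n)
    for _ in range(500):
        r = x + step * (Phi.T @ (y - Phi @ x))                    # gradient step on the data fit
        x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft-thresholding step

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))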

    Bernoulli-Gaussian Approximate Message-Passing Algorithm for Compressed Sensing with 1D-Finite-Difference Sparsity

    This paper proposes a fast approximate message-passing (AMP) algorithm for solving compressed sensing (CS) recovery problems with 1D-finite-difference sparsity in terms of MMSE estimation. The proposed algorithm, named ssAMP-BGFD, is computationally cheap, with fast convergence and a low per-iteration cost, and provides a phase transition nearly approaching the state of the art. The algorithm originates from a sum-product message-passing rule, applying a Bernoulli-Gaussian (BG) prior and seeking an MMSE solution. Its construction includes not only the conventional AMP technique for the measurement fidelity but also a simplified message-passing method to promote signal sparsity in the finite-difference domain. Furthermore, we provide an EM-tuning methodology to learn the BG prior parameters and suggest how to use some practical measurement matrices satisfying the RIP requirement under ssAMP-BGFD recovery. Extensive empirical results confirm the performance of the proposed algorithm in phase transition, convergence speed, and CPU runtime, compared to recent algorithms. Comment: 17 pages, 13 figures, submitted to the IEEE Transactions on Signal Processing.
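
    To make the Bernoulli-Gaussian ingredient concrete, the following is a minimal sketch of the scalar MMSE (posterior-mean) denoiser implied by a BG prior, of the kind used inside BG-based AMP iterations. The prior parameters are placeholders, and this is not the authors' ssAMP-BGFD implementation, which applies the sparsity model in the finite-difference domain rather than on the coefficients directly.

    import numpy as np

    def bg_mmse_denoiser(r, tau, lam=0.1, sigma2=1.0):
        """Posterior mean E[x | r] under the prior x ~ (1-lam)*delta_0 + lam*N(0, sigma2),
        observed through r = x + N(0, tau). lam and sigma2 are illustrative placeholders."""
        # Likelihood of r under the "active" and "inactive" hypotheses
        g_active = np.exp(-r**2 / (2 * (sigma2 + tau))) / np.sqrt(2 * np.pi * (sigma2 + tau))
        g_zero = np.exp(-r**2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)
        pi = lam * g_active / (lam * g_active + (1 - lam) * g_zero)  # posterior activity probability
        return pi * (sigma2 / (sigma2 + tau)) * r                    # shrunken posterior mean

    In a finite-difference-sparse setting such as the one targeted by ssAMP-BGFD, a denoiser of this type would act on the differences x_{i+1} - x_i rather than on the coefficients themselves.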

    An Approach to Complex Bayesian-optimal Approximate Message Passing

    In this work we aim to solve the compressed sensing problem for the case of a complex unknown vector by utilizing the Bayesian-optimal structured signal approximate message passing (BOSSAMP) algorithm on the jointly sparse real and imaginary parts of the unknown. By introducing a latent activity variable, BOSSAMP separates the tasks of activity detection and value estimation, overcoming the problem of detecting different supports in the real and imaginary parts. We complement the recovery algorithm with two novel support detection schemes that utilize the updated auxiliary variables of BOSSAMP. Simulations show the superiority of our proposed method over approximate message passing (AMP) and its Bayesian-optimal sibling (BAMP), in both mean squared error and support detection performance.
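
    The joint real/imaginary sparsity that the method exploits comes from the standard rewriting of a complex measurement model as a real one, sketched below; the stacking identity is standard, but the variable names are illustrative and this is not the BOSSAMP code itself.

    import numpy as np

    def complex_to_real_stacked(A, y):
        """Map the complex model y = A x to a real one y_r = A_r x_r, where the real and
        imaginary parts of x occupy the two halves of x_r and share a common support."""
        A_r = np.block([[A.real, -A.imag],
                        [A.imag,  A.real]])
        y_r = np.concatenate([y.real, y.imag])
        return A_r, y_r

    A group-sparse or latent-activity prior can then tie entry i to entry i + n, which is the role played by the latent activity variable mentioned above.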

    An Approximate Message Passing Framework for Side Information

    Approximate message passing (AMP) methods have gained recent traction in sparse signal recovery. Additional information about the signal, or side information (SI), is commonly available and can aid in efficient signal recovery. This work presents an AMP-based framework that exploits SI and can be readily implemented in various settings for which the SI results in separable distributions. To illustrate the simplicity and applicability of our approach, the framework is applied to a Bernoulli-Gaussian (BG) model and a time-varying birth-death-drift (BDD) signal model, motivated by applications in channel estimation. We develop a suite of algorithms, called AMP-SI, and derive denoisers for the BDD and BG models. Numerical evidence demonstrating the advantages of our approach is presented alongside empirical evidence of the accuracy of a proposed state evolution.
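
    One simple way SI can enter an AMP denoiser, assumed here purely for illustration (it is not necessarily the paper's BDD model), is to treat the SI as an independent Gaussian-noise view of the same signal and fuse it with the AMP pseudo-observation before applying a scalar MMSE denoiser such as the Bernoulli-Gaussian one sketched earlier.

    def fuse_with_side_info(r, tau, s, tau_si):
        """Combine the AMP pseudo-observation r = x + N(0, tau) with side information
        s = x + N(0, tau_si) into a single equivalent Gaussian observation of x. The fused
        pair (r_eff, tau_eff) can then be passed to any separable MMSE denoiser."""
        tau_eff = 1.0 / (1.0 / tau + 1.0 / tau_si)   # combined noise variance
        r_eff = tau_eff * (r / tau + s / tau_si)     # precision-weighted combination
        return r_eff, tau_eff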

    From Denoising to Compressed Sensing

    A denoising algorithm seeks to remove noise, errors, or perturbations from a signal. Extensive research has been devoted to this arena over the last several decades, and as a result, today's denoisers can effectively remove large amounts of additive white Gaussian noise. A compressed sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop an extension of the approximate message passing (AMP) framework, called Denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations. We demonstrate that, when used with a high-performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
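
    As a rough sketch of how a generic denoiser plugs into the iteration, the following D-AMP-style loop estimates the Onsager correction with a Monte Carlo divergence of the denoiser, as is commonly done when the denoiser has no closed form. The black-box denoise(v, sigma) argument, the iteration count, and the noise-level estimate are illustrative assumptions rather than the paper's exact implementation.

    import numpy as np

    def damp(y, A, denoise, n_iters=30, rng=np.random.default_rng(0)):
        """D-AMP-style recovery of x from y = A x + w with a plug-in denoiser.
        denoise(v, sigma) must return an estimate of x from v = x + N(0, sigma^2 I)."""
        m, n = A.shape
        x, z = np.zeros(n), y.copy()
        for _ in range(n_iters):
            sigma = np.linalg.norm(z) / np.sqrt(m)        # effective noise level estimate
            pseudo = x + A.T @ z                          # pseudo-data: approximately x + noise
            x_new = denoise(pseudo, sigma)
            # Monte Carlo estimate of the denoiser's divergence for the Onsager term
            eps = sigma / 100 + 1e-12
            probe = rng.standard_normal(n)
            div = probe @ (denoise(pseudo + eps * probe, sigma) - x_new) / eps
            z = y - A @ x_new + (div / m) * z             # residual with Onsager correction
            x = x_new
        return x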

    Graphical Models Concepts in Compressed Sensing

    This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on `Compressed Sensing' edited by Yonina Eldar and Gitta Kutyniok. Comment: 43 pages, 22 eps figures, typos corrected.
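
    For reference, the AMP iteration for the LASSO discussed in the chapter has the following shape, with soft thresholding as the scalar denoiser and the characteristic Onsager term proportional to the number of active coordinates. The threshold schedule below is a simplified placeholder rather than the tuned schedule analyzed in the text.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def amp_lasso(y, A, lam=0.1, n_iters=50):
        """AMP-style iteration for min_x 0.5*||y - A x||^2 + lam*||x||_1 (illustrative)."""
        m, n = A.shape
        x, z = np.zeros(n), y.copy()
        for _ in range(n_iters):
            theta = lam + np.linalg.norm(z) / np.sqrt(m)            # crude threshold schedule (placeholder)
            x_new = soft_threshold(x + A.T @ z, theta)
            z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z   # Onsager correction term
            x = x_new
        return x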

    Spatio-temporal Spike and Slab Priors for Multiple Measurement Vector Problems

    We are interested in solving the multiple measurement vector (MMV) problem for instances where the underlying sparsity pattern exhibits spatio-temporal structure, motivated by the electroencephalogram (EEG) source localization problem. We propose a probabilistic model that takes this structure into account by generalizing the structured spike and slab prior and the associated Expectation Propagation inference scheme. Based on numerical experiments, we demonstrate the viability of the model and the approximate inference scheme. Comment: 6 pages, 6 figures, accepted for presentation at SPARS 201
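
    As a generative illustration of a structured spike and slab prior, the sketch below lets a Gaussian process over the spatio-temporal coordinates drive the support probabilities, so that active coefficients cluster in space and time. The squared-exponential kernel and logistic link are assumptions for illustration; the paper's exact parameterization and its Expectation Propagation inference are not reproduced here.

    import numpy as np

    def sample_structured_spike_slab(coords, slab_var=1.0, length_scale=1.0,
                                     rng=np.random.default_rng(0)):
        """Draw one signal from a structured spike-and-slab prior: coords is a (d, q) array
        of spatio-temporal locations, and a smooth latent field over these locations makes
        the support pattern spatially/temporally correlated."""
        d = coords.shape[0]
        diff = coords[:, None, :] - coords[None, :, :]
        K = np.exp(-0.5 * np.sum(diff**2, axis=-1) / length_scale**2) + 1e-8 * np.eye(d)
        g = rng.multivariate_normal(np.zeros(d), K)            # latent smooth field
        support = rng.random(d) < 1.0 / (1.0 + np.exp(-g))     # logistic link -> clustered support
        x = np.where(support, rng.normal(0.0, np.sqrt(slab_var), d), 0.0)
        return x, support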

    Hyperspectral Unmixing via Turbo Bilinear Approximate Message Passing

    The goal of hyperspectral unmixing is to decompose an electromagnetic spectral dataset measured over M spectral bands and T pixels into N constituent material spectra (or "endmembers") with corresponding spatial abundances. In this paper, we propose a novel approach to hyperspectral unmixing based on loopy belief propagation (BP) that enables the exploitation of spectral coherence in the endmembers and spatial coherence in the abundances. In particular, we partition the factor graph into spectral coherence, spatial coherence, and bilinear subgraphs, and pass messages between them using a "turbo" approach. To perform message passing within the bilinear subgraph, we employ the bilinear generalized approximate message passing algorithm (BiG-AMP), a recently proposed belief-propagation-based approach to matrix factorization. Furthermore, we propose an expectation-maximization (EM) strategy to tune the prior parameters and a model-order selection strategy to select the number of materials N. Numerical experiments conducted with both synthetic and real-world data show favorable unmixing performance relative to existing methods.
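
    The underlying bilinear model is Y ≈ S A, with Y the M x T data, S the M x N endmember spectra, and A the N x T abundances constrained to be nonnegative and sum to one per pixel. Purely as a reference point for that model (not the turbo BiG-AMP method of the paper), here is a projected alternating least squares baseline with illustrative initialization and iteration count.

    import numpy as np

    def unmix_als(Y, N, n_iters=100, rng=np.random.default_rng(0)):
        """Baseline unmixing of Y (M x T) into endmembers S (M x N) and abundances A (N x T)
        via projected alternating least squares; a crude stand-in for the bilinear subgraph."""
        M, T = Y.shape
        S = np.abs(rng.standard_normal((M, N)))
        A = np.full((N, T), 1.0 / N)
        for _ in range(n_iters):
            S = np.clip(Y @ np.linalg.pinv(A), 0.0, None)      # nonnegative endmember update
            A = np.clip(np.linalg.pinv(S) @ Y, 1e-9, None)     # nonnegative abundance update
            A /= A.sum(axis=0, keepdims=True)                  # enforce sum-to-one per pixel
        return S, A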

    A GAMP Based Low Complexity Sparse Bayesian Learning Algorithm

    In this paper, we present an algorithm for the sparse signal recovery problem that incorporates damped Gaussian generalized approximate message passing (GGAMP) into Expectation-Maximization (EM)-based sparse Bayesian learning (SBL). In particular, GGAMP is used to implement the E-step in SBL in place of matrix inversion, leveraging the fact that GGAMP is guaranteed to converge with appropriate damping. The resulting GGAMP-SBL algorithm is much more robust to arbitrary measurement matrices A than the standard damped GAMP algorithm, while having much lower complexity than the standard SBL algorithm. We then extend the approach from the single measurement vector (SMV) case to the temporally correlated multiple measurement vector (MMV) case, leading to the GGAMP-TSBL algorithm. We verify the robustness and computational advantages of the proposed algorithms through numerical experiments.
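
    For orientation, the sketch below shows EM-based SBL with the exact (matrix-inversion) E-step; this inversion is precisely the step that GGAMP-SBL replaces with damped Gaussian GAMP message passing. The noise variance, iteration count, and lack of pruning are simplifications for illustration.

    import numpy as np

    def sbl_em(y, A, noise_var=1e-2, n_iters=50):
        """EM-based sparse Bayesian learning: prior x_i ~ N(0, gamma_i); the gammas are
        learned by EM. The E-step below costs O(n^3) per iteration, which motivates
        replacing it with message passing as in GGAMP-SBL."""
        m, n = A.shape
        gamma = np.ones(n)
        for _ in range(n_iters):
            # E-step: Gaussian posterior of x given y and the current gamma
            Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))
            mu = Sigma @ A.T @ y / noise_var
            # M-step: update the per-coefficient prior variances
            gamma = mu**2 + np.diag(Sigma)
        return mu, gamma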

    Low-Complexity Message Passing Based Massive MIMO Channel Estimation by Exploiting Unknown Sparse Common Support with Dirichlet Process

    This paper investigates the problem of estimating sparse channels in massive MIMO systems. Most wireless channels are sparse with large delay spread, while some channels can be observed to have sparse common support (SCS) within a certain area of the antenna array, i.e., the antenna array can be grouped into several clusters according to the sparse supports of the channels. The SCS property is attractive when it comes to the estimation of a large number of channels in massive MIMO systems. Using the SCS of channels, one expects better performance, but the number of clusters and the elements of each cluster are always unknown at the receiver. In this paper, the Dirichlet process is exploited to model such sparse channels, where those in each cluster have SCS. We propose a low-complexity message-passing-based sparse Bayesian learning algorithm to perform channel estimation in massive MIMO systems by combining BP with MF on a factor graph. Simulation results demonstrate that the proposed massive MIMO sparse channel estimation outperforms the state-of-the-art algorithms. In particular, it even shows better performance than the variational Bayesian method applied to massive MIMO channel estimation. Comment: arXiv admin note: text overlap with arXiv:1409.4671 by other authors
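
    To illustrate the nonparametric clustering idea only (not the paper's factor-graph inference), the sketch below samples antenna-cluster assignments from a Chinese restaurant process, the sequential view of the Dirichlet process, so that the number of clusters sharing a sparse common support need not be fixed in advance. The concentration parameter is an illustrative placeholder.

    import numpy as np

    def crp_assignments(n_antennas, alpha=1.0, rng=np.random.default_rng(0)):
        """Sample cluster labels for antennas from a Chinese restaurant process: each antenna
        joins an existing cluster with probability proportional to its size, or opens a new
        cluster with probability proportional to alpha."""
        labels = [0]
        for _ in range(1, n_antennas):
            counts = np.bincount(labels)
            probs = np.append(counts, alpha).astype(float)
            probs /= probs.sum()
            labels.append(int(rng.choice(len(probs), p=probs)))
        return np.array(labels)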