10,053 research outputs found
Simultaneous Bayesian Sparse Approximation with Structured Sparse Models
Sparse approximation is key to many signal processing, image processing and machine learning applications. If multiple signals maintain some degree of dependency, for example when their support sets are statistically related, then it is generally advantageous to jointly estimate the sparse representation vectors from the measurement vectors rather than solving for each signal individually. In this paper, we propose simultaneous sparse Bayesian learning (SBL) for joint sparse approximation with two structured sparse models (SSMs): one is row-sparse with embedded element-sparse, and the other is row-sparse plus element-sparse. While SBL has attracted much attention as a means of dealing with a single sparse approximation problem, it is not obvious how to extend SBL to SSMs. By capitalizing on a dual-space view of existing convex methods for SSMs, we present the precision component model and the covariance component model for SSMs, where both models involve a common hyperparameter and an innovation hyperparameter that together control the prior variance of each coefficient. The statistical perspective of precision component vs. covariance component models reveals the intrinsic mechanism of SSMs, and also leads to our development of SBL-inspired cost functions for SSMs. Centralized algorithms, including ℓ1 and ℓ2 reweighting algorithms, and consensus-based decentralized algorithms are developed for simultaneous sparse approximation with SSMs. In addition, theoretical analysis is conducted to provide valuable insights into the proposed approach, including global minima analysis of the SBL-inspired nonconvex cost functions and convergence analysis of the proposed ℓ1 reweighting algorithms for SSMs. Superior performance of the proposed algorithms is demonstrated by numerical experiments.
This is the author accepted manuscript. The final version is available from IEEE at http://dx.doi.org/10.1109/TSP.2016.2605067
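For orientation, the single-task sparse Bayesian learning that this paper generalizes can be sketched with Tipping-style EM hyperparameter updates. This is a generic, minimal illustration of plain SBL, not the paper's structured-model (SSM) algorithm; all names, sizes, and parameter values are illustrative.

```python
import numpy as np

def sbl(A, y, sigma2=1e-3, n_iter=100):
    """Minimal single-task SBL: each coefficient has prior variance gamma_i;
    EM alternates a Gaussian posterior with a hyperparameter update."""
    m, n = A.shape
    gamma = np.ones(n)  # prior variances (the hyperparameters)
    for _ in range(n_iter):
        # Posterior covariance and mean given current hyperparameters.
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / sigma2
        # EM update: prior variance matches the posterior second moment.
        gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)
    return mu

# Illustrative example: a 2-sparse vector observed through a random dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
A /= np.linalg.norm(A, axis=0)  # unit-norm columns
x_true = np.zeros(50)
x_true[[7, 31]] = [1.5, -2.0]
y = A @ x_true
mu = sbl(A, y)
top2 = set(np.argsort(np.abs(mu))[-2:])
print(top2)
```

Many of the zeroed-out gamma_i drive the corresponding posterior means to zero, which is the mechanism the paper's common-plus-innovation hyperparameters extend to structured supports.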
Compressive Measurement Designs for Estimating Structured Signals in Structured Clutter: A Bayesian Experimental Design Approach
This work considers an estimation task in compressive sensing, where the goal is to estimate an unknown signal from compressive measurements that are corrupted by additive pre-measurement noise (interference, or clutter) as well as post-measurement noise, in the specific setting where some (perhaps limited) prior knowledge on the signal, interference, and noise is available. The specific aim here is to devise a strategy for incorporating this prior information into the design of an appropriate compressive measurement strategy. Here, the prior information is interpreted as statistics of a prior distribution on the relevant quantities, and an approach based on Bayesian Experimental Design is proposed. Experimental results on synthetic data demonstrate that the proposed approach outperforms traditional random compressive measurement designs, which are agnostic to the prior information, as well as several other knowledge-enhanced sensing matrix designs based on more heuristic notions.
Comment: 5 pages, 4 figures. Accepted for publication at The Asilomar Conference on Signals, Systems, and Computers 201
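The core idea of prior-informed measurement design can be illustrated in the simplest Gaussian case: align the sensing matrix with the directions where the prior is most uncertain, then compare the resulting posterior uncertainty against a random design. This is a toy sketch of the general principle, not the paper's method; the function name and all parameters are invented for illustration.

```python
import numpy as np

def prior_informed_design(Sigma_prior, m):
    """Pick m measurement directions along the prior's top eigenvectors."""
    vals, vecs = np.linalg.eigh(Sigma_prior)
    order = np.argsort(vals)[::-1]  # largest prior variance first
    return vecs[:, order[:m]].T     # m x n sensing matrix

# A prior that concentrates variance in a few directions.
rng = np.random.default_rng(1)
n, m, sigma2 = 10, 3, 0.01
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
Sigma0 = U @ np.diag([10.0, 5.0, 2.0] + [0.1] * (n - 3)) @ U.T

# Posterior covariance under a linear-Gaussian model; its trace is the Bayes MSE.
Phi = prior_informed_design(Sigma0, m)
post = np.linalg.inv(np.linalg.inv(Sigma0) + Phi.T @ Phi / sigma2)

# Prior-agnostic baseline: random rows of comparable norm.
Phi_rand = rng.standard_normal((m, n)) / np.sqrt(n)
post_rand = np.linalg.inv(np.linalg.inv(Sigma0) + Phi_rand.T @ Phi_rand / sigma2)

print(np.trace(post) < np.trace(post_rand))
```

The designed matrix spends its measurement budget on the three high-variance prior directions, so it shrinks the posterior trace far more than a random design of the same size; this is the intuition behind the comparison in the abstract.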
Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
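One classic greedy method in the family this survey covers is Orthogonal Matching Pursuit (OMP), which can be sketched in a few lines. This is a generic textbook version for illustration; the dictionary size, sparsity level, and example data are all made up.

```python
import numpy as np

def omp(A, y, k):
    """Greedily select k columns of A to approximate y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Illustrative example: recover a 2-sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
A /= np.linalg.norm(A, axis=0)  # unit-norm columns
x_true = np.zeros(50)
x_true[[7, 31]] = [1.5, -2.0]
y = A @ x_true
x_hat = omp(A, y, k=2)
print(set(np.nonzero(x_hat)[0]))
```

Each iteration trades a correlation scan and a small least-squares solve for one new atom, which is why greedy pursuits scale well relative to convex-programming alternatives discussed in the survey.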