
    CS Decomposition Based Bayesian Subspace Estimation

    In numerous applications, it is required to estimate the principal subspace of the data, possibly from a very limited number of samples. Additionally, it often occurs that some rough knowledge about this subspace is available and could be used to improve subspace estimation accuracy in this case. This is the problem we address herein and, in order to solve it, a Bayesian approach is proposed. The main idea consists of using the CS decomposition of the semi-orthogonal matrix whose columns span the subspace of interest. This parametrization is intuitively appealing and allows for noninformative prior distributions of the matrices involved in the CS decomposition and very mild assumptions about the angles between the actual subspace and the prior subspace. The posterior distributions are derived and a Gibbs sampling scheme is presented to obtain the minimum mean-square distance estimator of the subspace of interest. Numerical simulations and an application to real hyperspectral data assess the validity and the performance of the estimator.
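
    As a hedged illustration of the quantity this parametrization revolves around, the sketch below uses NumPy/SciPy to compute the principal angles between a rough prior subspace and a classical SVD-based subspace estimate; the dimensions and variable names (U_prior, U_hat) are illustrative assumptions, not the authors' estimator.

```python
# Minimal sketch: principal angles between a prior subspace and an estimated
# principal subspace. Names (U_prior, U_hat) and sizes are illustrative only.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
n, r, n_samples = 20, 3, 10          # ambient dim, subspace dim, sample count

# Rough prior subspace: an arbitrary semi-orthogonal n x r matrix.
U_prior, _ = np.linalg.qr(rng.standard_normal((n, r)))

# "True" subspace = prior subspace mildly perturbed, then noisy samples from it.
U_true, _ = np.linalg.qr(U_prior + 0.3 * rng.standard_normal((n, r)))
samples = U_true @ rng.standard_normal((r, n_samples)) \
          + 0.1 * rng.standard_normal((n, n_samples))

# Classical estimate: r leading left singular vectors of the data matrix.
U_hat = np.linalg.svd(samples, full_matrices=False)[0][:, :r]

# Principal angles between prior and estimate -- the angles the CS
# decomposition parametrizes (all zero means the subspaces coincide).
print(np.rad2deg(subspace_angles(U_prior, U_hat)))
```

    The CS decomposition exposes the cosines and sines of precisely these principal angles, which is what allows the mild prior assumptions on the angles mentioned in the abstract.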

    Sparse-Based Estimation Performance for Partially Known Overcomplete Large-Systems

    We assume the direct sum o for the signal subspace. As a result of post- measurement, a number of operational contexts presuppose the a priori knowledge of the LB -dimensional "interfering" subspace and the goal is to estimate the LA am- plitudes corresponding to subspace . Taking into account the knowledge of the orthogonal "interfering" subspace \perp, the Bayesian estimation lower bound is de- rivedfortheLA-sparsevectorinthedoublyasymptoticscenario,i.e. N,LA,LB -> \infty with a finite asymptotic ratio. By jointly exploiting the Compressed Sensing (CS) and the Random Matrix Theory (RMT) frameworks, closed-form expressions for the lower bound on the estimation of the non-zero entries of a sparse vector of interest are derived and studied. The derived closed-form expressions enjoy several interesting features: (i) a simple interpretable expression, (ii) a very low computational cost especially in the doubly asymptotic scenario, (iii) an accurate prediction of the mean-square-error (MSE) of popular sparse-based estimators and (iv) the lower bound remains true for any amplitudes vector priors. Finally, several idealized scenarios are compared to the derived bound for a common output signal-to-noise-ratio (SNR) which shows the in- terest of the joint estimation/rejection methodology derived herein.Comment: 10 pages, 5 figures, Journal of Signal Processin
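
    The joint estimation/rejection idea can be sketched with a small least-squares example (not the Bayesian bound itself, which is not reproduced here): the measurement is projected onto the orthogonal complement of the known interfering subspace before the amplitudes on the subspace of interest are estimated. The matrix names (H_A, H_B) and dimensions below are illustrative assumptions.

```python
# Minimal sketch of amplitude estimation with rejection of a known
# interfering subspace. Names/sizes (H_A, H_B, N, L_A, L_B) are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, L_A, L_B = 64, 4, 8

H_A = rng.standard_normal((N, L_A))      # basis of the subspace of interest
H_B = rng.standard_normal((N, L_B))      # known "interfering" subspace
x_A = rng.standard_normal(L_A)           # amplitudes to estimate
x_B = rng.standard_normal(L_B)           # nuisance amplitudes
y = H_A @ x_A + H_B @ x_B + 0.05 * rng.standard_normal(N)

# Orthogonal projector onto the complement of span(H_B): rejects the interference.
Q_B, _ = np.linalg.qr(H_B)
P_perp = np.eye(N) - Q_B @ Q_B.T

# Least-squares estimate of the amplitudes of interest after rejection.
x_A_hat = np.linalg.lstsq(P_perp @ H_A, P_perp @ y, rcond=None)[0]
print(np.max(np.abs(x_A_hat - x_A)))     # small estimation error
```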

    Likelihood-informed dimension reduction for nonlinear inverse problems

    The intrinsic dimensionality of an inverse problem is affected by prior information, the accuracy and number of observations, and the smoothing properties of the forward operator. From a Bayesian perspective, changes from the prior to the posterior may, in many problems, be confined to a relatively low-dimensional subspace of the parameter space. We present a dimension reduction approach that defines and identifies such a subspace, called the "likelihood-informed subspace" (LIS), by characterizing the relative influences of the prior and the likelihood over the support of the posterior distribution. This identification enables new and more efficient computational methods for Bayesian inference with nonlinear forward models and Gaussian priors. In particular, we approximate the posterior distribution as the product of a lower-dimensional posterior defined on the LIS and the prior distribution marginalized onto the complementary subspace. Markov chain Monte Carlo sampling can then proceed in lower dimensions, with significant gains in computational efficiency. We also introduce a Rao-Blackwellization strategy that de-randomizes Monte Carlo estimates of posterior expectations for additional variance reduction. We demonstrate the efficiency of our methods using two numerical examples: inference of permeability in a groundwater system governed by an elliptic PDE, and an atmospheric remote sensing problem based on Global Ozone Monitoring System (GOMOS) observations.
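
    For a linearized Gaussian surrogate of such a problem, the likelihood-informed subspace can be sketched as the dominant eigenspace of the prior-preconditioned Gauss-Newton Hessian of the log-likelihood. The forward-model Jacobian, covariances, and truncation threshold below are illustrative stand-ins, not the paper's test cases.

```python
# Minimal sketch of constructing a likelihood-informed subspace (LIS) for a
# linearized, Gaussian problem. J, the covariances and tau are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_param, n_obs = 50, 30

# A "smoothing" forward-map Jacobian: later parameter directions are damped.
J = rng.standard_normal((n_obs, n_param)) * np.linspace(1.0, 0.01, n_param)
Gamma_obs = 0.1 * np.eye(n_obs)       # observation-noise covariance
Gamma_pr = np.eye(n_param)            # prior covariance (identity for simplicity)

# Prior-preconditioned Gauss-Newton Hessian of the log-likelihood.
L_pr = np.linalg.cholesky(Gamma_pr)
H = L_pr.T @ J.T @ np.linalg.solve(Gamma_obs, J) @ L_pr

# LIS basis: eigenvectors whose eigenvalues exceed a threshold, i.e. the
# directions where the likelihood dominates the prior.
eigvals, eigvecs = np.linalg.eigh(H)
tau = 1.0
keep = eigvals > tau
Phi = L_pr @ eigvecs[:, keep]         # likelihood-informed directions
print(f"LIS dimension: {keep.sum()} of {n_param}")
```

    MCMC sampling would then be carried out over coefficients in the span of Phi, with the complementary directions handled by the (marginalized) prior, as described in the abstract.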

    Likelihood informed dimension reduction for inverse problems in remote sensing of atmospheric constituent profiles

    We use likelihood informed dimension reduction (LIS) (T. Cui et al. 2014) for inverting vertical profile information of atmospheric methane from ground based Fourier transform infrared (FTIR) measurements at Sodankylä, Northern Finland. The measurements belong to the worldwide TCCON network for greenhouse gas measurements and, in addition to providing accurate greenhouse gas measurements, they are important for validating satellite observations. LIS allows construction of an efficient Markov chain Monte Carlo sampling algorithm that explores only a reduced dimensional space but still produces a good approximation of the original full dimensional Bayesian posterior distribution. This in effect makes the statistical estimation problem independent of the discretization of the inverse problem. In addition, we compare LIS to a dimension reduction method based on prior covariance matrix truncation used earlier (S. Tukiainen et al. 2016).
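
    The baseline mentioned at the end, truncation of the prior covariance matrix, can be sketched as a plain eigendecomposition (Karhunen-Loève) of the prior covariance; roughly speaking, it keeps the directions of largest prior variance regardless of how informative the measurements are about them, which is the contrast with LIS drawn above. The squared-exponential covariance and the rank k below are illustrative assumptions.

```python
# Minimal sketch of the prior-covariance-truncation baseline: keep the leading
# eigenvectors of the prior covariance, independently of the likelihood.
# The covariance model and rank k are illustrative assumptions.
import numpy as np

n_levels = 80                                     # discretized profile levels
z = np.linspace(0.0, 1.0, n_levels)
Gamma_pr = np.exp(-0.5 * (z[:, None] - z[None, :]) ** 2 / 0.1**2)  # smooth prior

eigvals, eigvecs = np.linalg.eigh(Gamma_pr)
order = np.argsort(eigvals)[::-1]                 # descending eigenvalues
k = 10
P = eigvecs[:, order[:k]]                         # truncated prior basis (n_levels x k)

# Fraction of prior variance captured by the k retained directions.
print(eigvals[order[:k]].sum() / eigvals.sum())
```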

    Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing

    In this work the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While there has been a handful of previously proposed Bayesian dynamic CS algorithms in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the result of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems. Comment: 32 pages, 7 figures
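
    DCS-AMP itself is not reproduced here; as a hedged illustration of the approximate message passing machinery it builds on, the sketch below runs a textbook static-CS AMP iteration with soft thresholding and the Onsager correction term. The threshold rule and problem sizes are assumptions, and the temporal correlation structure that DCS-AMP adds is omitted.

```python
# Minimal sketch of a plain (static) AMP iteration with soft thresholding --
# the message-passing core that DCS-AMP extends with temporal structure.
# Threshold schedule and problem sizes are illustrative assumptions.
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
n, m, k = 400, 160, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)     # normalized measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x, z = np.zeros(n), y.copy()
delta = m / n
for _ in range(30):
    tau = 2.0 * np.sqrt(np.mean(z ** 2))         # heuristic noise-tracking threshold
    r = x + A.T @ z                              # pseudo-data
    x_new = soft(r, tau)
    onsager = (z / delta) * np.mean(np.abs(x_new) > 0)   # Onsager correction
    z = y - A @ x_new + onsager
    x = x_new

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```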

    Off-grid Direction of Arrival Estimation Using Sparse Bayesian Inference

    Direction of arrival (DOA) estimation is a classical problem in signal processing with many practical applications. Its research has recently been advanced owing to the development of methods based on sparse signal reconstruction. While these methods have shown advantages over conventional ones, there are still difficulties in practical situations where true DOAs are not on the discretized sampling grid. To deal with such an off-grid DOA estimation problem, this paper studies an off-grid model that takes into account effects of the off-grid DOAs and has a smaller modeling error. An iterative algorithm is developed based on the off-grid model from a Bayesian perspective while joint sparsity among different snapshots is exploited by assuming a Laplace prior for signals at all snapshots. The new approach applies to both single snapshot and multi-snapshot cases. Numerical simulations show that the proposed algorithm has improved accuracy in terms of mean squared estimation error. The algorithm can maintain high estimation accuracy even under a very coarse sampling grid. Comment: To appear in the IEEE Trans. Signal Processing. This is a revised, shortened version of version
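
    One common way to build such an off-grid model, consistent with the description above, is a first-order linearization of the array manifold around the grid points, so that a DOA offset enters the model linearly. The sketch below constructs the grid steering matrix and its derivative for a uniform linear array; the array size, spacing, grid, and the helper function off_grid_steering are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a first-order off-grid model for a uniform linear array:
# a(theta_grid + beta) ~ a(theta_grid) + beta * d a/d theta (theta_grid).
# Array size, spacing, grid and helper names are illustrative assumptions.
import numpy as np

M = 16                                            # number of sensors
d = 0.5                                           # element spacing in wavelengths
m = np.arange(M)[:, None]
theta_grid = np.deg2rad(np.arange(-90.0, 90.0, 2.0))[None, :]   # coarse grid

# Grid steering matrix and its derivative with respect to the DOA.
phase = 2j * np.pi * d * m * np.sin(theta_grid)
A = np.exp(phase)                                 # M x N_grid steering matrix
B = (2j * np.pi * d * m * np.cos(theta_grid)) * A # column-wise d a(theta)/d theta

def off_grid_steering(A, B, beta):
    """First-order approximation A + B * diag(beta) for off-grid offsets beta."""
    return A + B * beta[None, :]

beta = np.zeros(A.shape[1])
beta[45] = np.deg2rad(0.7)                        # a source 0.7 degrees off grid point 45
A_offgrid = off_grid_steering(A, B, beta)
print(A_offgrid.shape)
```

    A sparse Bayesian scheme of the kind described above then infers the sparse snapshot amplitudes and the offsets beta jointly, which is what keeps the accuracy high even on a coarse grid.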
