
    Monte Carlo methods for compressed sensing

    No full text
    In this paper we study Monte Carlo-type approaches to Bayesian sparse inference under a squared error loss. This problem arises in compressed sensing, where sparse signals are to be estimated and where recovery performance is measured by the expected sum of squared errors. In this setting, it is well known that the posterior mean is the optimal estimator. The difficulty, however, is that the posterior distribution itself must be estimated, which is extremely hard. Here we contrast approaches that use a Monte Carlo estimate of the posterior mean: the randomised Iterative Hard Thresholding algorithm is compared to a new approach that is inspired by sequential importance sampling and uses a bootstrap resampling step based on importance weights.
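
    To make the importance-sampling idea concrete, below is a minimal NumPy sketch: draw candidate sparse signals from the prior, weight them by the Gaussian likelihood, form a Monte Carlo estimate of the posterior mean, and apply a weight-based bootstrap resampling step. The problem sizes, the spike-and-slab prior, and the prior-as-proposal choice are all illustrative assumptions, not the paper's actual algorithm.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy setup: y = A x + noise, with x k-sparse (sizes are illustrative).
        n, m, k = 20, 50, 3
        sigma = 0.05
        A = rng.standard_normal((n, m)) / np.sqrt(n)
        x_true = np.zeros(m)
        x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
        y = A @ x_true + sigma * rng.standard_normal(n)

        def sample_prior():
            # Draw a k-sparse signal from a simple spike-and-slab prior.
            x = np.zeros(m)
            x[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
            return x

        def log_likelihood(x):
            r = y - A @ x
            return -0.5 * (r @ r) / sigma**2

        # Importance sampling with the prior as proposal, so the
        # (unnormalized) importance weights are the likelihoods.
        N = 5000
        samples = np.array([sample_prior() for _ in range(N)])
        logw = np.array([log_likelihood(x) for x in samples])
        w = np.exp(logw - logw.max())
        w /= w.sum()

        # Monte Carlo estimate of the posterior mean (the MMSE-optimal estimator).
        x_mmse = w @ samples

        # Bootstrap resampling step: redraw particles in proportion to their
        # importance weights, concentrating the set on high-likelihood supports.
        resampled = samples[rng.choice(N, size=N, p=w)]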

    Compressed sensing of data with a known distribution

    Full text link
    Compressed sensing is a technique for recovering an unknown sparse signal from a small number of linear measurements. When the measurement matrix is random, the number of measurements required for perfect recovery exhibits a phase transition: there is a threshold on the number of measurements beyond which the probability of exact recovery quickly goes from very small to very large. In this work we reduce this threshold by incorporating statistical information about the data we wish to recover. Our algorithm works by minimizing a suitably weighted $\ell_1$-norm, where the weights are chosen so that the expected statistical dimension of the corresponding descent cone is minimized. We also provide new discrete-geometry-based Monte Carlo algorithms for computing intrinsic volumes of such descent cones, allowing us to bound the failure probability of our methods.
    Comment: 22 pages, 7 figures. New colorblind-safe figures. Sections 3 and 4 completely rewritten. Minor typos fixed.
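
    As a concrete illustration of weighted $\ell_1$ recovery, the sketch below solves the minimization with CVXPY on a toy instance. The weight rule used here (smaller weights on coordinates assumed more likely to be active) is a stand-in heuristic; the paper instead chooses weights that minimize the expected statistical dimension of the descent cone.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)

        # Toy instance: noiseless measurements y = A x of a sparse signal whose
        # support statistically favors the first half of the coordinates.
        n, m, k = 30, 80, 5
        A = rng.standard_normal((n, m))
        x_true = np.zeros(m)
        x_true[rng.choice(m // 2, k, replace=False)] = rng.standard_normal(k)
        y = A @ x_true

        # Assumed prior activity probabilities; this heuristic down-weights
        # coordinates that are a priori more likely to be nonzero.  (The paper
        # optimizes the weights to minimize the expected statistical dimension
        # of the descent cone instead.)
        p_active = np.where(np.arange(m) < m // 2, 0.10, 0.02)
        weights = -np.log(p_active)

        # Weighted l1 minimization subject to the measurement constraints.
        x = cp.Variable(m)
        problem = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(weights, x))),
                             [A @ x == y])
        problem.solve()
        x_hat = x.value
        print("recovery error:", np.linalg.norm(x_hat - x_true))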

    Model-Based Calibration of Filter Imperfections in the Random Demodulator for Compressive Sensing

    Full text link
    The random demodulator is a recent compressive sensing architecture providing efficient sub-Nyquist sampling of sparse band-limited signals. The compressive sensing paradigm requires an accurate model of the analog front end to enable correct signal reconstruction in the digital domain. In practice, hardware devices such as filters deviate from their desired design behavior due to component variations. Existing reconstruction algorithms are sensitive to such deviations, which fall into the more general category of measurement matrix perturbations. This paper proposes a model-based technique that calibrates filter model mismatches to improve signal reconstruction quality. The mismatch is modeled as an additive error in the discretized impulse response. We identify the error by sampling a known calibrating signal, enabling least-squares estimation of the impulse response error. The error estimate and the known system model are then used to calibrate the measurement matrix. Numerical analysis demonstrates the effectiveness of the calibration method even for highly deviating low-pass filter responses. The performance of the proposed method is also compared to that of a state-of-the-art method based on discrete Fourier transform trigonometric interpolation.
    Comment: 10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing.
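
    The least-squares identification step lends itself to a short sketch. The toy model below is an assumption for illustration (the actual random demodulator also integrates and downsamples the filter output, which is omitted here): excite the filter with a known calibration sequence, estimate the additive impulse-response error by least squares, and form the calibrated response used to rebuild the measurement matrix.

        import numpy as np
        from scipy.linalg import toeplitz

        rng = np.random.default_rng(0)

        # True filter = nominal design + unknown additive impulse-response error.
        L = 16                                   # impulse response length
        h_nominal = np.hanning(L)
        h_nominal /= h_nominal.sum()
        e_true = 0.05 * rng.standard_normal(L)   # deviation from component variations
        h_true = h_nominal + e_true

        # Known calibration signal (assumed here: a pseudo-random +/-1 sequence,
        # as seen at the filter input of the random demodulator).
        N = 256
        c = rng.choice([-1.0, 1.0], size=N)

        # Filtering is linear in h: the output is y = C h, with C the Toeplitz
        # convolution matrix built from the calibration signal.
        C = toeplitz(c, np.r_[c[0], np.zeros(L - 1)])
        y = C @ h_true + 1e-3 * rng.standard_normal(N)   # noisy filter output

        # Least-squares estimate of the impulse-response error:
        #   y - C h_nominal = C e  =>  e_hat = argmin_e || C e - (y - C h_nominal) ||_2
        e_hat, *_ = np.linalg.lstsq(C, y - C @ h_nominal, rcond=None)

        # Calibrated impulse response, used to rebuild the measurement matrix.
        h_cal = h_nominal + e_hat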