3 research outputs found

    On MMSE and MAP Denoising Under Sparse Representation Modeling Over a Unitary Dictionary

    Among the many ways to model signals, a recent approach that draws considerable attention is sparse representation modeling. In this model, the signal is assumed to be generated as a random linear combination of a few atoms from a pre-specified dictionary. In this work we analyze two Bayesian denoising algorithms, the Maximum A-Posteriori Probability (MAP) and the Minimum Mean-Squared-Error (MMSE) estimators, under the assumption that the dictionary is unitary. It is well known that both estimators lead to a scalar shrinkage of the transformed coefficients, albeit with different response curves. We start by deriving closed-form expressions for these shrinkage curves and then analyze their performance. Upper bounds on the MAP and MMSE estimation errors are derived. We tie these to the error obtained by a so-called oracle estimator, in which the support is given, establishing a worst-case gain factor between the MAP/MMSE estimation errors and the oracle's performance. These denoising algorithms are demonstrated on synthetic signals and on true data (images). Comment: 29 pages, 10 figures.
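
    The paper's closed-form shrinkage curves depend on its specific signal prior; as a rough illustration only, the sketch below implements MAP- and MMSE-style scalar shrinkage under an assumed i.i.d. Bernoulli-Gaussian coefficient prior (activity probability p, active variance s_a2) in Gaussian noise (variance s_n2). Under a unitary dictionary, denoising then amounts to applying such a rule coefficient-by-coefficient in the transform domain. The parameter names and the prior choice are assumptions, not taken from the paper.

```python
# Illustrative sketch only: per-coefficient shrinkage for y = a + n, with
# a ~ p*N(0, s_a2) + (1-p)*delta_0 (assumed Bernoulli-Gaussian prior) and
# n ~ N(0, s_n2). Not the paper's exact closed-form expressions.
import numpy as np

def gauss(y, var):
    # Zero-mean Gaussian density with variance `var`.
    return np.exp(-y**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def mmse_shrink(y, p, s_a2, s_n2):
    # MMSE rule: Wiener gain weighted by the posterior probability that the
    # coefficient is active, giving a smooth shrinkage curve.
    on = p * gauss(y, s_a2 + s_n2)
    off = (1.0 - p) * gauss(y, s_n2)
    w = on / (on + off)
    return w * (s_a2 / (s_a2 + s_n2)) * y

def map_shrink(y, p, s_a2, s_n2):
    # MAP-style rule: pick the more probable support hypothesis per coefficient,
    # then apply the Wiener gain on the active branch (hard-threshold-like curve).
    on = p * gauss(y, s_a2 + s_n2)
    off = (1.0 - p) * gauss(y, s_n2)
    return np.where(on > off, (s_a2 / (s_a2 + s_n2)) * y, 0.0)

y = np.linspace(-5.0, 5.0, 11)
print(mmse_shrink(y, p=0.1, s_a2=4.0, s_n2=1.0))
print(map_shrink(y, p=0.1, s_a2=4.0, s_n2=1.0))
```

    The MMSE curve is smooth (a Wiener gain weighted by the posterior activity probability), while the MAP-style rule acts like a hard threshold followed by the same gain, matching the qualitative distinction the abstract describes between the two response curves.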

    Power-Constrained Sparse Gaussian Linear Dimensionality Reduction over Noisy Channels

    In this paper, we investigate power-constrained sensing matrix design in a sparse Gaussian linear dimensionality reduction framework. Our study is carried out in a single-terminal setup as well as in a multi-terminal setup consisting of orthogonal or coherent multiple access channels (MAC). We adopt the mean square error (MSE) performance criterion for sparse source reconstruction in a system where the source-to-sensor channel(s) and the sensor-to-decoder communication channel(s) are noisy. Our sensing matrix design procedure relies on minimizing a lower bound on the MSE in the single- and multiple-terminal setups. We propose a three-stage sensing matrix optimization scheme that combines semi-definite relaxation (SDR) programming, a low-rank approximation problem, and power rescaling. Under certain conditions, we derive closed-form solutions to the proposed optimization procedure. Through numerical experiments with practical sparse reconstruction algorithms, we show that the proposed scheme outperforms other relevant methods. This performance improvement comes at the price of higher computational complexity. To address this complexity burden, we present an equivalent stochastic optimization formulation of the problem that can be solved approximately while still outperforming the popular methods. Comment: Accepted for publication in IEEE Transactions on Signal Processing (16 pages).
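
    As a hedged illustration of the three-stage pattern only (SDR over the Gram matrix, rank truncation, power rescaling), the sketch below designs an m x n sensing matrix for a plain Gaussian source with a generic MMSE-type objective. The dimensions n and m, the power budget P, the variances, and the objective itself are assumptions chosen for illustration; the paper's actual sparse-source MSE lower bound, the noisy source-to-sensor channels, and the multi-terminal/MAC constraints are not reproduced here. cvxpy is used for the SDR step.

```python
# Minimal sketch of the three-stage pattern (SDR, low-rank approximation,
# power rescaling). The objective is a generic Gaussian MMSE bound, NOT the
# paper's sparse-source MSE lower bound; single-terminal case assumed.
import numpy as np
import cvxpy as cp

n, m = 8, 3                 # source dimension, number of measurements (assumed)
P = 1.0                     # total sensing power budget (assumed)
sig_x2, sig_n2 = 1.0, 0.1   # source / measurement noise variances (assumed)

# Stage 1: semidefinite relaxation over the Gram matrix G = A^T A.
# Objective: tr[(I/sig_x2 + G/sig_n2)^{-1}], convex in G via matrix_frac.
G = cp.Variable((n, n), symmetric=True)
mse_bound = cp.matrix_frac(np.eye(n), np.eye(n) / sig_x2 + G / sig_n2)
prob = cp.Problem(cp.Minimize(mse_bound), [G >> 0, cp.trace(G) <= P])
prob.solve()

# Stage 2: low-rank approximation -- keep the m largest eigen-directions of G*.
evals, evecs = np.linalg.eigh(G.value)
top = np.argsort(evals)[::-1][:m]
A = np.diag(np.sqrt(np.maximum(evals[top], 0.0))) @ evecs[:, top].T  # m x n

# Stage 3: power rescaling so the truncated matrix meets the budget exactly.
A *= np.sqrt(P / np.trace(A @ A.T))
print("power used:", np.trace(A @ A.T), " relaxed bound:", prob.value)
```

    Optimizing over the Gram matrix G = A^T A makes both the power constraint and the bound convex; the rank-m truncation and final rescaling then recover a feasible sensing matrix from the relaxed solution.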

    Wavelet Statistics of Sparse and Self-Similar Images
