
    A typical reconstruction limit of compressed sensing based on Lp-norm minimization

    We consider the problem of reconstructing an $N$-dimensional continuous vector $\boldsymbol{x}$ from $P$ constraints generated by its linear transformation, under the assumption that the number of non-zero elements of $\boldsymbol{x}$ is typically limited to $\rho N$ ($0 \le \rho \le 1$). Problems of this type can be solved by minimizing a cost function with respect to the $L_p$-norm $||\boldsymbol{x}||_p = \lim_{\epsilon \to +0} \sum_{i=1}^N |x_i|^{p+\epsilon}$, subject to the constraints, under an appropriate condition. For several values of $p$, we assess the typical-case limit $\alpha_c(\rho)$, which represents the critical relation between $\alpha = P/N$ and $\rho$ for successfully reconstructing the original vector by minimization in typical situations, in the limit $N, P \to \infty$ with $\alpha$ kept finite, utilizing the replica method. For $p=1$, $\alpha_c(\rho)$ is considerably smaller than its worst-case counterpart, which has been rigorously derived in the information-theory literature. Comment: 12 pages, 2 figures
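
    The $p=1$ case above is ordinary Basis Pursuit, which can be posed exactly as a linear program. Below is a minimal sketch (not from the paper) of $L_1$ reconstruction via scipy.optimize.linprog; the sizes N, P and the sparsity rho are illustrative choices.

    # Minimal sketch of L1-norm (Basis Pursuit) reconstruction, i.e. the
    # p = 1 case discussed above. Problem sizes are illustrative.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    N, P, rho = 128, 64, 0.1                       # alpha = P/N = 0.5

    # Sparse ground truth with ~rho*N non-zero Gaussian entries.
    x0 = np.zeros(N)
    support = rng.choice(N, size=int(rho * N), replace=False)
    x0[support] = rng.standard_normal(support.size)

    F = rng.standard_normal((P, N)) / np.sqrt(P)   # random measurement matrix
    y = F @ x0

    # LP formulation: minimize sum(t) s.t. -t <= x <= t and F x = y,
    # over the stacked decision vector z = [x, t] of length 2N.
    c = np.concatenate([np.zeros(N), np.ones(N)])
    A_ub = np.block([[np.eye(N), -np.eye(N)],
                     [-np.eye(N), -np.eye(N)]])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
                  A_eq=np.hstack([F, np.zeros((P, N))]), b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    x_hat = res.x[:N]
    print("reconstruction error:", np.linalg.norm(x_hat - x0))

    With these sizes the reconstruction should typically succeed, illustrating the point of the abstract: the typical-case threshold for $p=1$ is far more permissive than the worst-case one.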

    Compressed sensing reconstruction using Expectation Propagation

    Many interesting problems in fields ranging from telecommunications to computational biology can be formalized in terms of large underdetermined systems of linear equations with additional constraints or regularizers. One of the most studied, the Compressed Sensing (CS) problem, consists of finding the solution with the smallest number of non-zero components of a given system of linear equations $\boldsymbol{y} = \mathbf{F} \boldsymbol{w}$ for a known measurement vector $\boldsymbol{y}$ and sensing matrix $\mathbf{F}$. Here, we address the compressed sensing problem within a Bayesian inference framework where the sparsity constraint is remapped into a singular prior distribution (called spike-and-slab or Bernoulli-Gauss). A solution is attempted through the computation of marginal distributions via Expectation Propagation (EP), an iterative computational scheme originally developed in statistical physics. We show that this strategy is comparatively more accurate than the alternatives in solving instances of CS generated from statistically correlated measurement matrices. For computational strategies based on the Bayesian framework, such as variants of Belief Propagation, this is to be expected, as they implicitly rely on the hypothesis of statistical independence among the entries of the sensing matrix. Perhaps surprisingly, the method also uniformly outperforms all the other state-of-the-art methods in our tests. Comment: 20 pages, 6 figures
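
    The computational core of EP with a Bernoulli-Gauss prior is a one-dimensional moment-matching step: combine a Gaussian cavity message with the spike-and-slab prior and compute the mean and variance of the resulting "tilted" distribution. The sketch below shows that these moments exist in closed form; the function names and the parameters rho, s2, mu, v are illustrative, not the paper's notation.

    # Moment matching for a spike-and-slab (Bernoulli-Gauss) prior:
    #   p(x) prop. to [ (1-rho)*delta(x) + rho*N(x; 0, s2) ] * N(x; mu, v),
    # where N(mu, v) is the Gaussian cavity message.
    import numpy as np

    def gauss_pdf(x, mean, var):
        return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

    def tilted_moments(mu, v, rho, s2):
        """Mean and variance of the spike-and-slab tilted distribution."""
        # Evidence of each mixture component under the cavity Gaussian.
        z_spike = (1 - rho) * gauss_pdf(0.0, mu, v)
        z_slab = rho * gauss_pdf(0.0, mu, v + s2)
        w = z_slab / (z_spike + z_slab)      # posterior weight of the slab
        # Gaussian-product moments of the slab component.
        m_slab = mu * s2 / (v + s2)
        v_slab = v * s2 / (v + s2)
        mean = w * m_slab                    # the spike contributes 0
        var = w * (v_slab + m_slab ** 2) - mean ** 2
        return mean, var

    print(tilted_moments(mu=0.8, v=0.5, rho=0.1, s2=1.0))

    A full EP solver iterates this step over every component, refitting a global Gaussian approximation to the matched moments at each sweep.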

    Worst Configurations (Instantons) for Compressed Sensing over Reals: a Channel Coding Approach

    We consider the Linear Programming (LP) solution of the Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. The BasP admits an interpretation as a channel-coding problem, and it guarantees error-free reconstruction with a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how the BasP performs on a given measurement matrix and develop an algorithm to discover the sparsest vectors for which the BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. The BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that the BasP fails on the CS-instanton while the BasP recovery is successful for any modification of the CS-instanton that replaces a non-zero element by zero. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is illustrated on a randomly generated $120 \times 512$ matrix; for this example, the CS-ISA outputs the shortest instanton (error-vector) pattern, of length 11. Comment: Accepted for presentation at the IEEE International Symposium on Information Theory (ISIT 2010). 5 pages, 2 figures. Minor edits from the previous version. Added a new reference.
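
    The search idea admits a simple greedy caricature: start from a dense random error vector on which the BasP fails, then repeatedly replace non-zero entries by zero as long as the failure persists, stopping at a vector where zeroing any single remaining entry makes the BasP succeed. The sketch below illustrates this idea; it is a simplification, not the paper's exact CS-ISA, and the basp helper and problem sizes are assumptions.

    # Greedy caricature of instanton search: shrink a failing error vector
    # until every single-entry zeroing restores successful BasP recovery.
    import numpy as np
    from scipy.optimize import linprog

    def basp(F, y):
        """Basis Pursuit: min ||x||_1 subject to F x = y, as an LP."""
        P, N = F.shape
        c = np.concatenate([np.zeros(N), np.ones(N)])
        A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
                      A_eq=np.hstack([F, np.zeros((P, N))]), b_eq=y,
                      bounds=[(None, None)] * N + [(0, None)] * N)
        return res.x[:N]

    def basp_fails(F, e, tol=1e-6):
        return np.linalg.norm(basp(F, F @ e) - e) > tol

    rng = np.random.default_rng(1)
    P, N = 15, 32                        # small sizes keep the demo fast
    F = rng.standard_normal((P, N)) / np.sqrt(P)

    e = rng.standard_normal(N)           # dense start: BasP fails on it
    shrinking = True
    while shrinking:
        shrinking = False
        for i in np.flatnonzero(e):
            trial = e.copy()
            trial[i] = 0.0               # replace one non-zero by zero
            if basp_fails(F, trial):
                e = trial                # failure persists: keep it sparser
                shrinking = True
                break
    print("instanton weight:", np.count_nonzero(e))

    The vector this loop returns has the defining property quoted above: the BasP fails on it, but succeeds once any single non-zero element is replaced by zero.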

    Optimal incorporation of sparsity information by weighted $\ell_1$ optimization

    Compressed sensing of sparse sources can be improved by incorporating prior knowledge of the source. In this paper we demonstrate a method for the optimal selection of weights in weighted $L_1$-norm minimization for a noiseless reconstruction model, and show the improvements in compression that can be achieved. Comment: 5 pages, 2 figures, to appear in Proceedings of ISIT 2010
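
    Concretely, weighted $L_1$ reconstruction only changes the Basis Pursuit objective from $\sum_i |x_i|$ to $\sum_i w_i |x_i|$, with smaller weights on entries believed a priori more likely to be non-zero. The sketch below uses a hand-picked weighting to illustrate the effect; it is not the paper's optimal selection rule, and all names and sizes are illustrative.

    # Weighted L1 reconstruction: min sum_i w_i * |x_i| s.t. F x = y.
    import numpy as np
    from scipy.optimize import linprog

    def weighted_l1(F, y, w):
        P, N = F.shape
        c = np.concatenate([np.zeros(N), w])   # weights act on the |x_i| proxies
        A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
                      A_eq=np.hstack([F, np.zeros((P, N))]), b_eq=y,
                      bounds=[(None, None)] * N + [(0, None)] * N)
        return res.x[:N]

    rng = np.random.default_rng(2)
    N, P, k = 64, 20, 6
    x0 = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x0[support] = rng.standard_normal(k)
    F = rng.standard_normal((P, N)) / np.sqrt(P)
    y = F @ x0

    w = np.ones(N)
    w[support] = 0.2       # prior knowledge: these entries are likely non-zero
    print("weighted error:  ", np.linalg.norm(weighted_l1(F, y, w) - x0))
    print("unweighted error:", np.linalg.norm(weighted_l1(F, y, np.ones(N)) - x0))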