    Support Recovery with Sparsely Sampled Free Random Matrices

    Consider a Bernoulli-Gaussian complex $n$-vector whose components are $V_i = X_i B_i$, with $X_i \sim \mathcal{CN}(0, \mathcal{P}_x)$ and binary $B_i$ mutually independent and iid across $i$. This random $q$-sparse vector is multiplied by a square random matrix $\mathbf{U}$, and a randomly chosen subset, of average size $np$, $p \in [0,1]$, of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models, where $\mathbf{U}$ is typically the identity or a matrix with iid components, to allow $\mathbf{U}$ satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information-theoretic bounds, to study the input-output mutual information and the support recovery error rate in the limit of $n \to \infty$. We also extend the scope of the large-deviation approach of Rangan, Fletcher and Goyal and characterize the performance of a class of estimators encompassing thresholded linear MMSE and $\ell_1$ relaxation.
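    The measurement model described above can be sketched in a few lines of NumPy. All dimensions and noise levels below are illustrative choices, not values from the paper; the Haar matrix is drawn via QR of a complex Gaussian matrix, one standard construction of a unitarily invariant ensemble.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, q, p, sigma = 256, 0.1, 0.5, 0.05   # illustrative dimension, sparsity, sampling rate, noise

    # Bernoulli-Gaussian source: V_i = X_i * B_i with complex Gaussian X_i
    X = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    B = rng.random(n) < q                   # iid Bernoulli(q) support indicators
    V = X * B                               # q-sparse complex vector

    # Haar (unitarily invariant) mixing matrix U via QR of a complex Gaussian matrix
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    U, _ = np.linalg.qr(G)

    # Observe a random subset of average size n*p in additive complex Gaussian noise
    obs = rng.random(n) < p
    W = (rng.standard_normal(obs.sum()) + 1j * rng.standard_normal(obs.sum())) * sigma / np.sqrt(2)
    Y = (U @ V)[obs] + W
    ```

    The support recovery task is then to infer the indicator vector B from Y and knowledge of U and the observed index set.
    
    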

    On the Performance of Turbo Signal Recovery with Partial DFT Sensing Matrices

    This letter studies the performance of the turbo signal recovery (TSR) algorithm for compressed sensing based on partial discrete Fourier transform (DFT) matrices. Based on state evolution analysis, we prove that TSR with a partial DFT sensing matrix outperforms the well-known approximate message passing (AMP) algorithm with an independent identically distributed (IID) sensing matrix. Comment: to appear in IEEE Signal Processing Letters
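    A partial DFT sensing matrix, the object compared against IID matrices above, is simply a random row-selection of the unitary DFT; its rows stay orthonormal, which is the structural property TSR exploits. A minimal sketch of the measurement setup, with all sizes chosen for illustration only:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 128, 64, 8                       # illustrative ambient dim, measurements, sparsity

    # Partial DFT sensing matrix: m randomly selected rows of the unitary n x n DFT
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT matrix
    rows = rng.choice(n, size=m, replace=False)
    A = F[rows]                                # m x n partial DFT matrix, A A^H = I_m

    # k-sparse signal and its noiseless linear measurements
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    y = A @ x
    ```

    In practice one would apply A via an FFT rather than forming it explicitly; the dense form above is only for clarity.
    
    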

    Dynamical Functional Theory for Compressed Sensing

    We introduce a theoretical approach for designing generalizations of the approximate message passing (AMP) algorithm for compressed sensing, which are valid for large observation matrices that are drawn from an invariant random matrix ensemble. By design, the fixed points of the algorithm obey the Thouless-Anderson-Palmer (TAP) equations corresponding to the ensemble. Using a dynamical functional approach we are able to derive an effective stochastic process for the marginal statistics of a single component of the dynamics. This allows us to design memory terms in the algorithm in such a way that the resulting fields become Gaussian random variables, allowing for an explicit analysis. The asymptotic statistics of these fields are consistent with the replica ansatz of the compressed sensing problem. Comment: 5 pages, accepted for ISIT 201
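    The memory terms discussed above generalize the Onsager correction of standard AMP, which is what keeps the effective fields Gaussian for iid matrices. As a reference point, a minimal soft-thresholding AMP iteration (not the paper's generalized algorithm) can be sketched as follows; the threshold multiplier and iteration count are illustrative tuning choices.

    ```python
    import numpy as np

    def amp_soft_threshold(A, y, alpha=1.5, iters=50):
        """Minimal AMP with soft thresholding for an iid Gaussian A.
        The Onsager term (z times the fraction of active coordinates) is the
        memory correction that keeps the effective noise Gaussian."""
        m, n = A.shape
        x, z = np.zeros(n), y.copy()
        for _ in range(iters):
            tau = np.linalg.norm(z) / np.sqrt(m)                 # effective noise level
            r = x + A.T @ z                                      # pseudo-data
            x = np.sign(r) * np.maximum(np.abs(r) - alpha * tau, 0.0)
            z = y - A @ x + z * (np.count_nonzero(x) / m)        # Onsager correction
        return x
    ```

    For invariant (non-iid) ensembles this single memory term no longer suffices, which is precisely the gap the dynamical functional construction addresses.
    
    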

    RSB Decoupling Property of MAP Estimators

    The large-system decoupling property of a MAP estimator is studied when it estimates the i.i.d. vector $\boldsymbol{x}$ from the observation $\boldsymbol{y}=\mathbf{A}\boldsymbol{x}+\boldsymbol{z}$, with $\mathbf{A}$ being chosen from a wide range of matrix ensembles and the noise vector $\boldsymbol{z}$ being i.i.d. and Gaussian. Using the replica method, we show that the marginal joint distribution of any two corresponding input and output symbols converges to a deterministic distribution which describes the input-output distribution of a single-user system followed by a MAP estimator. Under the $b$RSB assumption, the single-user system is a scalar channel with additive noise, where the noise term is given by the sum of an independent Gaussian random variable and $b$ correlated interference terms. As the $b$RSB assumption reduces to RS, the interference terms vanish, which recovers the formerly studied RS decoupling principle. Comment: 5 pages, presented at the Information Theory Workshop 201
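    The decoupling statement says that, per symbol, the whole vector system behaves like a scalar channel followed by a scalar MAP estimator. A toy sketch of that decoupled scalar picture under the RS case ($b = 0$, pure AWGN) with an assumed BPSK prior and an illustrative effective noise level:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    sigma_eff = 0.3                              # illustrative effective noise level

    # RS decoupled channel: x -> x + Gaussian noise, then a scalar MAP estimator.
    x = rng.choice([-1.0, 1.0], size=10_000)     # iid uniform BPSK prior
    r = x + sigma_eff * rng.standard_normal(x.size)
    x_map = np.sign(r)                           # scalar MAP estimate for a uniform +-1 prior
    ser = np.mean(x_map != x)                    # symbol error rate of the decoupled channel
    ```

    Under $b$RSB the additive term would instead be a Gaussian plus $b$ correlated interference components, but the single-symbol structure is the same.
    
    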

    Replica Symmetry Breaking in Compressive Sensing

    For noisy compressive sensing systems, the asymptotic distortion with respect to an arbitrary distortion function is determined when a general class of least-squares-based reconstruction schemes is employed. The sampling matrix is considered to belong to a large ensemble of random matrices including i.i.d. and projector matrices, and the source vector is assumed to be i.i.d. with a desired distribution. We take a statistical mechanical approach by representing the asymptotic distortion as a macroscopic parameter of a spin glass and employing the replica method for the large-system analysis. In contrast to earlier studies, we evaluate the general replica ansatz, which includes the RS ansatz as well as RSB. The generality of the solution enables us to study the impact of symmetry breaking. Our numerical investigations show that for the reconstruction scheme with the "zero-norm" penalty function, RS fails to predict the asymptotic distortion for relatively large compression rates; however, the one-step RSB ansatz gives a valid prediction of the performance within a larger regime of compression rates. Comment: 7 pages, 3 figures, presented at ITA 201

    Signal Estimation with Additive Error Metrics in Compressed Sensing

    Compressed sensing typically deals with the estimation of a system input from its noise-corrupted linear measurements, where the number of measurements is smaller than the number of input components. The performance of the estimation process is usually quantified by some standard error metric such as squared error or support set error. In this correspondence, we consider a noisy compressed sensing problem with an arbitrary error metric. We propose a simple, fast, and highly general algorithm that estimates the original signal by minimizing the error metric defined by the user. We verify that our algorithm is optimal owing to the decoupling principle, and we describe a general method to compute the fundamental information-theoretic performance limit for any error metric. We provide two example metrics, minimum mean absolute error and minimum mean support error, and give the theoretical performance limits for these two cases. Experimental results show that our algorithm outperforms methods such as relaxed belief propagation (relaxed BP) and compressive sampling matching pursuit (CoSaMP), and reaches the suggested theoretical limits for our two example metrics. Comment: to appear in IEEE Trans. Inf. Theory
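    The key idea, reading off a different optimal estimator from the same posterior depending on the user's metric, is easy to illustrate on a scalar toy problem: the posterior mean minimizes squared error, while the posterior median minimizes absolute error. The spike-and-slab prior, noise level, and importance-sampling scheme below are all illustrative choices, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy scalar posterior: sparse prior (zero w.p. 0.9, else N(0,1)), observed in AWGN.
    # Posterior is represented by weighted prior samples (importance sampling).
    def posterior_samples(y, sigma=0.5, n=200_000):
        x = rng.standard_normal(n) * (rng.random(n) < 0.1)   # prior samples
        w = np.exp(-0.5 * ((y - x) / sigma) ** 2)            # likelihood weights
        return x, w / w.sum()

    x, w = posterior_samples(y=1.0)

    mmse_est = np.sum(w * x)                                 # posterior mean: minimizes squared error
    order = np.argsort(x)                                    # weighted posterior median:
    mmae_est = x[order][np.searchsorted(np.cumsum(w[order]), 0.5)]  # minimizes absolute error
    ```

    Note how the two metrics disagree here: the heavy posterior mass on zero pins the median (MMAE estimate) to exactly 0, while the mean (MMSE estimate) is pulled toward the observation.
    
    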

    Expectation Propagation for Approximate Inference: Free Probability Framework

    We study asymptotic properties of expectation propagation (EP), a method for approximate inference originally developed in the field of machine learning. Applied to generalized linear models, EP iteratively computes a multivariate Gaussian approximation to the exact posterior distribution. The computational complexity of the repeated update of covariance matrices severely limits the application of EP to large problem sizes. In this study, we present a rigorous analysis by means of free probability theory that allows us to overcome this computational bottleneck if specific data matrices in the problem fulfill certain properties of asymptotic freeness. We demonstrate the relevance of our approach on the gene selection problem of a microarray dataset. Comment: Both authors are co-first authors. The main body of this paper is accepted for publication in the proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT)
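    The covariance bottleneck mentioned above can be made concrete: in a Gaussian EP approximation for a linear model, each refinement of the site precisions forces a fresh $O(n^3)$ covariance inversion, even though only the marginal variances feed the next EP update. A minimal sketch of that single expensive step, with all sizes and initializations illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 50, 100
    A = rng.standard_normal((m, n)) / np.sqrt(n)   # illustrative data matrix

    # EP's Gaussian approximation keeps per-site precisions lambda_i; each sweep
    # recomputes the full posterior covariance -- the O(n^3) bottleneck that
    # free-probability arguments let one sidestep for suitably "free" A.
    lam = np.ones(n)                               # site precisions (illustrative init)
    noise_prec = 4.0                               # likelihood precision (illustrative)
    Sigma = np.linalg.inv(np.diag(lam) + noise_prec * A.T @ A)
    marginal_vars = np.diag(Sigma)                 # only these marginals feed the EP update
    ```

    The free-probability analysis replaces this matrix-valued recursion with scalar fixed-point quantities when A is asymptotically free of the diagonal site terms.
    
    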