    Replica Symmetry Breaking in Compressive Sensing

    For noisy compressive sensing systems, the asymptotic distortion with respect to an arbitrary distortion function is determined when a general class of least-squares based reconstruction schemes is employed. The sampling matrix is considered to belong to a large ensemble of random matrices, including i.i.d. and projector matrices, and the source vector is assumed to be i.i.d. with a desired distribution. We take a statistical mechanical approach by representing the asymptotic distortion as a macroscopic parameter of a spin glass and employing the replica method for the large-system analysis. In contrast to earlier studies, we evaluate the general replica ansatz, which includes the replica symmetric (RS) ansatz as well as replica symmetry breaking (RSB). The generality of the solution enables us to study the impact of symmetry breaking. Our numerical investigations show that for the reconstruction scheme with the "zero-norm" penalty function, the RS ansatz fails to predict the asymptotic distortion for relatively large compression rates; however, the one-step RSB ansatz gives a valid prediction of the performance within a larger regime of compression rates.
    Comment: 7 pages, 3 figures, presented at ITA 201
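    As a rough illustration of the estimator class analyzed above (regularized least squares with a "zero-norm" penalty), the sketch below solves the penalized problem by brute force over supports. The tiny problem size, penalty weight, and i.i.d. Gaussian sensing matrix are illustrative assumptions; the replica analysis itself is not reproduced here.

```python
# Hedged sketch: a least-squares based reconstruction scheme of the form
#   x_hat = argmin_v  ||y - A v||^2 / (2*lam) + ||v||_0,
# solved by exhaustive search over supports (feasible only for tiny n).
import itertools
import numpy as np

def l0_penalized_ls(A, y, lam=0.1):
    m, n = A.shape
    best_cost, best_x = np.inf, np.zeros(n)
    for k in range(n + 1):
        for support in itertools.combinations(range(n), k):
            x = np.zeros(n)
            if k:
                cols = list(support)
                x[cols] = np.linalg.lstsq(A[:, cols], y, rcond=None)[0]
            cost = np.sum((y - A @ x) ** 2) / (2 * lam) + k
            if cost < best_cost:
                best_cost, best_x = cost, x
    return best_x

rng = np.random.default_rng(0)
m, n, s = 5, 8, 2                               # toy sizes, illustrative only
A = rng.standard_normal((m, n)) / np.sqrt(m)    # i.i.d. sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true + 0.01 * rng.standard_normal(m)
print(np.round(l0_penalized_ls(A, y), 3))
```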

    Dynamical Functional Theory for Compressed Sensing

    We introduce a theoretical approach for designing generalizations of the approximate message passing (AMP) algorithm for compressed sensing which are valid for large observation matrices drawn from an invariant random matrix ensemble. By design, the fixed points of the algorithm obey the Thouless-Anderson-Palmer (TAP) equations corresponding to the ensemble. Using a dynamical functional approach, we are able to derive an effective stochastic process for the marginal statistics of a single component of the dynamics. This allows us to design memory terms in the algorithm in such a way that the resulting fields become Gaussian random variables, allowing for an explicit analysis. The asymptotic statistics of these fields are consistent with the replica ansatz of the compressed sensing problem.
    Comment: 5 pages, accepted for ISIT 201
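    For orientation, here is a minimal sketch of plain AMP with soft thresholding for i.i.d. Gaussian matrices, i.e. the baseline being generalized, not the memory-corrected algorithm the abstract proposes for invariant ensembles. The threshold rule, step count, and problem sizes are illustrative assumptions.

```python
# Minimal sketch of standard AMP for compressed sensing with an i.i.d.
# Gaussian sensing matrix: soft-thresholding denoiser plus an
# Onsager-corrected residual. Not the invariant-ensemble variant.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, n_iter=30, theta=2.0):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        tau = theta * np.linalg.norm(z) / np.sqrt(m)   # effective noise level
        x_new = soft(x + A.T @ z, tau)
        onsager = z * np.count_nonzero(x_new) / m      # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(1)
m, n, s = 120, 400, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)
print(np.linalg.norm(amp(A, y) - x_true) / np.linalg.norm(x_true))
```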

    On the Performance of Turbo Signal Recovery with Partial DFT Sensing Matrices

    This letter studies the performance of the turbo signal recovery (TSR) algorithm for compressed sensing with partial discrete Fourier transform (DFT) sensing matrices. Based on a state evolution analysis, we prove that TSR with a partial DFT sensing matrix outperforms the well-known approximate message passing (AMP) algorithm with an independent and identically distributed (IID) sensing matrix.
    Comment: to appear in IEEE Signal Processing Letters
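    A short sketch of the measurement model the letter studies: a partial DFT sensing matrix obtained by keeping a random subset of rows of the unitary DFT matrix. The TSR iterations themselves are not reproduced; the sizes and sparsity below are illustrative assumptions.

```python
# Hedged sketch of the partial DFT measurement model: keep m random rows of
# the unitary n x n DFT matrix and observe a sparse vector through them.
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 256, 96, 10
F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
rows = rng.choice(n, m, replace=False)
A = F[rows, :]                                  # partial DFT sensing matrix

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Rows of A are orthonormal (A A^H = I), the structure that TSR exploits.
print(np.allclose(A @ A.conj().T, np.eye(m), atol=1e-10))
```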

    Lorentzian Iterative Hard Thresholding: Robust Compressed Sensing with Prior Information

    Commonly employed reconstruction algorithms in compressed sensing (CS) use the L2 norm as the metric for the residual error. However, it is well known that least squares (LS) based estimators are highly sensitive to outliers in the measurement vector, leading to poor performance when the noise no longer follows the Gaussian assumption but is instead better characterized by heavier-than-Gaussian tailed distributions. In this paper, we propose a robust iterative hard thresholding (IHT) algorithm for reconstructing sparse signals in the presence of impulsive noise. To address this problem, we use a Lorentzian cost function instead of the L2 cost function employed by the traditional IHT algorithm. We also modify the algorithm to incorporate prior signal information in the recovery process. Specifically, we study the case of CS with partially known support. The proposed algorithm is a fast method with computational load comparable to the LS based IHT, while having the advantage of robustness against heavy-tailed impulsive noise. Sufficient conditions for stability are studied and a reconstruction error bound is derived. We also derive sufficient conditions for stable sparse signal recovery with partially known support. Theoretical analysis shows that including prior support information relaxes the conditions for successful reconstruction. Simulation results demonstrate that the Lorentzian-based IHT algorithm significantly outperforms commonly employed sparse reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments. Numerical results also demonstrate that the inclusion of partially known support improves the performance of the proposed algorithm, thereby requiring fewer samples to yield an approximate reconstruction.
    Comment: 28 pages, 9 figures, accepted in IEEE Transactions on Signal Processing
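    A hedged sketch of a Lorentzian-cost IHT iteration follows: a gradient step on the Lorentzian residual cost sum_i log(1 + (r_i/gamma)^2), followed by keeping the s largest-magnitude entries. The fixed step size, gamma, iteration count, and noise model are illustrative assumptions (the paper uses its own tuning rules), and the partially-known-support modification is not included.

```python
# Hedged sketch of Lorentzian-weighted iterative hard thresholding:
# gradient step on a Lorentzian residual cost, then keep the s largest entries.
import numpy as np

def lorentzian_iht(A, y, s, gamma=1.0, n_iter=200):
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # conservative fixed step size
    x = np.zeros(n)
    for _ in range(n_iter):
        r = y - A @ x
        grad = A.T @ (r / (1.0 + (r / gamma) ** 2))  # Lorentzian-weighted residual
        x = x + step * grad
        idx = np.argsort(np.abs(x))[:-s]             # zero all but the s largest
        x[idx] = 0.0
    return x

rng = np.random.default_rng(3)
m, n, s = 80, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
noise = 0.01 * rng.standard_t(df=1.5, size=m)        # heavy-tailed (impulsive) noise
y = A @ x_true + noise
print(np.linalg.norm(lorentzian_iht(A, y, s) - x_true) / np.linalg.norm(x_true))
```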

    On Sparse Vector Recovery Performance in Structurally Orthogonal Matrices via LASSO

    In this paper, we consider the compressed sensing problem of reconstructing a sparse signal from an undersampled set of noisy linear measurements. The regularized least squares, or least absolute shrinkage and selection operator (LASSO), formulation is used for signal estimation. The measurement matrix is assumed to be constructed by concatenating several random orthogonal bases, which we refer to as structurally orthogonal matrices. Such a measurement matrix is highly relevant to large-scale compressive sensing applications because it facilitates rapid computation and parallel processing. Using the replica method from statistical physics, we derive the mean-squared-error (MSE) formula of reconstruction over the structurally orthogonal matrix in the large-system regime. Extensive numerical experiments are provided to verify the analytical result. We then use the analytical result to investigate the MSE behavior of the LASSO over the structurally orthogonal matrix, with an emphasis on performance comparisons with matrices having independent and identically distributed (i.i.d.) Gaussian entries. We find that structurally orthogonal matrices are at least as good as their i.i.d. Gaussian counterparts; thus, the use of structurally orthogonal matrices is attractive in practical applications.
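    A minimal sketch of this setup: build a structurally orthogonal measurement matrix by concatenating random orthogonal bases and keeping a random subset of rows, then solve the LASSO with plain iterative soft thresholding (ISTA). The block count, sizes, and regularization weight are illustrative assumptions, and the replica MSE formula itself is not reproduced.

```python
# Hedged sketch: structurally orthogonal sensing matrix (concatenated random
# orthogonal bases, subsampled rows) and LASSO reconstruction via ISTA.
import numpy as np

rng = np.random.default_rng(4)
n0, blocks = 128, 3                      # each block is an n0 x n0 orthogonal basis
n, m = n0 * blocks, 96

bases = [np.linalg.qr(rng.standard_normal((n0, n0)))[0] for _ in range(blocks)]
A_full = np.hstack(bases)                # n0 x n concatenation of orthogonal bases
A = A_full[rng.choice(n0, m, replace=False), :]

x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative soft thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```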

    Random forests with random projections of the output space for high dimensional multi-label classification

    We adapt the idea of random projections applied to the output space so as to enhance tree-based ensemble methods in the context of multi-label classification. We show how learning time complexity can be reduced without affecting computational complexity or the accuracy of predictions. We also show that random output space projections may be used to reach different bias-variance tradeoffs over a broad panel of benchmark problems, and that this may lead to improved accuracy while significantly reducing the computational burden of the learning stage.
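    A hedged sketch of the underlying idea: compress the multi-label output space with a random Gaussian projection, grow a multi-output random forest on the projected targets, and decode predictions back with the pseudo-inverse of the projection. The dataset, projection size, and forest settings below are illustrative assumptions; the paper's per-ensemble projection scheme and bias-variance analysis are not reproduced.

```python
# Hedged sketch: random forest trained on a randomly projected label space,
# with predictions decoded back to the original labels via the pseudo-inverse.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, Y = make_multilabel_classification(n_samples=500, n_features=30,
                                      n_classes=40, n_labels=3, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

rng = np.random.default_rng(0)
q = 10                                            # projected output dimension << 40 labels
P = rng.standard_normal((Y.shape[1], q)) / np.sqrt(q)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_tr, Y_tr @ P)                        # learn in the projected output space

Y_hat = (forest.predict(X_te) @ np.linalg.pinv(P)) > 0.5   # decode and threshold
print("Hamming accuracy:", (Y_hat == Y_te.astype(bool)).mean())
```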