
    A Study on Image Reconfiguration Algorithm of Compressed Sensing

    Compressed sensing theory overturns the traditional sampling theory: it acquires sampled data and achieves data compression at the same time. The main topic of this thesis is the reconstruction algorithm, which is the key component of compressed sensing theory and directly determines the quality of the reconstructed signal, the reconstruction speed, and the effectiveness of applications. In this paper, we study compressed sensing theory and the existing reconstruction algorithms, and select three algorithms (OMP, CoSaMP, StOMP) for investigation. Building on a summary of the existing algorithms and models, we compare their results on image signals in terms of PSNR, relative error, matching ratio, and running time. Among the three reconstruction algorithms, OMP achieves the best accuracy for image reconstruction. CoSaMP converges faster than OMP, but depends heavily on the sparsity K. StOMP gives the best overall image reconstruction effect and also converges the fastest.
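
    The abstract names OMP, CoSaMP, and StOMP but does not spell out the procedures. As a point of reference only, the following is a minimal Python sketch of plain Orthogonal Matching Pursuit (greedy atom selection followed by a least-squares fit on the current support), not the thesis's own implementation; the function name, tolerance, and toy problem sizes are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k, tol=1e-6):
    """Orthogonal Matching Pursuit: recover an (approximately) k-sparse x from y = A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(k):
        # Greedy step: pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit of y on the columns selected so far.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coeffs
    return x_hat

# Toy usage: recover a 5-sparse vector from 40 Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 128, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(omp(A, A @ x, k) - x))
```

    CoSaMP and StOMP differ mainly in the selection step (CoSaMP picks 2K atoms per iteration and prunes back to K, while StOMP keeps every atom whose correlation exceeds a threshold), which is what drives the speed and sparsity-dependence trade-offs reported above.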

    Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions

    In this paper, we study the problem of compressed sensing using binary measurement matrices and $\ell_1$-norm minimization (basis pursuit) as the recovery algorithm. We derive new upper and lower bounds on the number of measurements needed to achieve robust sparse recovery with binary matrices. We establish sufficient conditions for a column-regular binary matrix to satisfy the robust null space property (RNSP) and show that the associated sparsity bounds for robust sparse recovery are better by a factor of $(3\sqrt{3})/2 \approx 2.6$ than the sufficient conditions obtained using the restricted isometry property (RIP). Next, we derive universal lower bounds on the number of measurements that any binary matrix needs in order to satisfy the weaker sufficient condition based on the RNSP, and show that bipartite graphs of girth six are optimal. We then display two classes of binary matrices, namely parity check matrices of array codes and Euler squares, which have girth six and are nearly optimal in the sense of almost meeting the lower bound. In principle, randomly generated Gaussian measurement matrices are "order-optimal". We therefore compare the phase transition behavior of the basis pursuit formulation using binary array-code matrices and Gaussian matrices and show that (i) there is essentially no difference between the phase transition boundaries in the two cases, and (ii) basis pursuit with binary matrices is hundreds of times faster in CPU time than with Gaussian matrices and has lower storage requirements. We therefore suggest that binary matrices are a viable alternative to Gaussian matrices for compressed sensing using basis pursuit.
    Comment: 28 pages, 3 figures, 5 tables
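
    Basis pursuit, the recovery algorithm studied above, is the convex program min ||x||_1 subject to Ax = y, which becomes a linear program after splitting x = u - v with u, v >= 0. The sketch below, assuming SciPy's linprog with the HiGHS backend is available, solves that LP and compares a dense Gaussian matrix with an ad-hoc column-regular 0/1 matrix; the binary matrix here is only an illustrative stand-in, not one of the array-code parity-check or Euler-square matrices constructed in the paper, and the problem sizes are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as an LP with x = u - v, u >= 0, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)             # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])      # equality constraint: A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(1)
n, m, k, d = 256, 100, 8, 8        # d = number of ones per column of the binary matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A_gauss = rng.standard_normal((m, n)) / np.sqrt(m)
A_bin = np.zeros((m, n))           # random column-regular 0/1 matrix (illustration only)
for j in range(n):
    A_bin[rng.choice(m, d, replace=False), j] = 1.0

for name, A in [("gaussian", A_gauss), ("binary", A_bin)]:
    x_hat = basis_pursuit(A, A @ x)
    print(name, np.linalg.norm(x_hat - x))
```

    The speed and storage advantage reported in the abstract comes from the structure of the measurement matrix: a column-regular binary matrix has only d nonzero (0/1) entries per column, so matrix-vector products and storage scale with the number of ones rather than with m times n.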

    Statistical Compressed Sensing of Gaussian Mixture Models

    A novel framework of compressed sensing, namely statistical compressed sensing (SCS), is introduced; it aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, Gaussian or Bernoulli sensing matrices with O(k) measurements suffice, considerably fewer than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension, and the optimal decoder is implemented via linear filtering, which is significantly faster than the pursuit decoders used in conventional CS. In this setting, the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error with overwhelming probability, and the failure probability is significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the best k-term approximation error with probability one, and the bound constant can be computed efficiently. For Gaussian mixture models (GMMs), which assume multiple Gaussian distributions with each signal following one of them with an unknown index, a piecewise linear estimator is introduced to decode SCS. The accuracy of model selection, at the heart of the piecewise linear decoder, is analyzed in terms of the properties of the Gaussian distributions and the number of sensing measurements. A maximum a posteriori expectation-maximization algorithm that iteratively estimates the Gaussian model parameters, selects the model for each signal, and decodes the signals is presented for GMM-based SCS. In real image sensing applications, GMM-based SCS is shown to yield improved results compared to conventional CS at a considerably lower computational cost.
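
    For a single Gaussian signal model, the "optimal decoder implemented via linear filtering" mentioned above is the closed-form conditional-mean (Wiener-type) estimate, and the piecewise linear GMM decoder applies one such filter per component after selecting a model. The sketch below, assuming hypothetical inputs (component means mus, covariances Sigmas, and a small noise variance), illustrates that structure; the model selection here uses the Gaussian marginal likelihood of the measurements and is a simplified stand-in for the paper's MAP-EM procedure, not a reproduction of it.

```python
import numpy as np

def gaussian_scs_decode(y, A, mu, Sigma, noise_var=1e-6):
    """Linear (conditional-mean) decoder for x ~ N(mu, Sigma), y = A x + noise."""
    S = A @ Sigma @ A.T + noise_var * np.eye(A.shape[0])
    return mu + Sigma @ A.T @ np.linalg.solve(S, y - A @ mu)

def gmm_scs_decode(y, A, mus, Sigmas, noise_var=1e-6):
    """Piecewise linear decoder sketch: pick the component with the largest
    Gaussian log-likelihood of y, then apply that component's linear filter."""
    best_ll, best_xhat = -np.inf, None
    for mu, Sigma in zip(mus, Sigmas):
        S = A @ Sigma @ A.T + noise_var * np.eye(A.shape[0])
        r = y - A @ mu
        _, logdet = np.linalg.slogdet(S)
        ll = -0.5 * (r @ np.linalg.solve(S, r) + logdet)
        if ll > best_ll:
            best_ll = ll
            best_xhat = mu + Sigma @ A.T @ np.linalg.solve(S, r)
    return best_xhat

# Toy usage with a hypothetical 2-component GMM prior in dimension 16.
rng = np.random.default_rng(2)
N, m = 16, 6
mus = [np.zeros(N), np.ones(N)]
Sigmas = [np.diag(1.0 / (np.arange(N) + 1.0) ** 2), 0.5 * np.eye(N)]
A = rng.standard_normal((m, N)) / np.sqrt(m)
x = rng.multivariate_normal(mus[0], Sigmas[0])
print(np.linalg.norm(gmm_scs_decode(A @ x, A, mus, Sigmas) - x))
```

    Decoding here is a single linear solve per component, which is the source of the speed advantage over iterative pursuit decoders noted in the abstract.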