11,330 research outputs found

    Recovery of signals under the high order RIP condition via prior support information

    Full text link
    In this paper we study the recovery conditions of weighted $\ell_1$ minimization for signal reconstruction from incomplete linear measurements when partial prior support information is available. We show that a high order RIP condition can guarantee stable and robust recovery of signals in bounded $\ell_2$ and Dantzig selector noise settings. Moreover, we prove not only that the sufficient recovery condition of the weighted $\ell_1$ minimization method is weaker than that of the standard $\ell_1$ minimization method, but also that the weighted $\ell_1$ minimization method provides better upper bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal, provided that the accuracy of the prior support estimate is at least $50\%$. Furthermore, the condition is proved to be sharp.
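
    As a concrete illustration of the weighted $\ell_1$ program discussed above, the sketch below solves it with a generic convex solver, down-weighting the indices in the prior support estimate. It is a minimal sketch assuming a bounded-$\ell_2$-noise formulation; the names (A, y, support_est, omega, eps) and the solver choice (cvxpy) are illustrative, not taken from the paper.

        # Minimal sketch: weighted l1 minimization with a prior support estimate.
        # Weight omega < 1 is placed on indices believed to be in the support.
        import numpy as np
        import cvxpy as cp

        def weighted_l1_recover(A, y, support_est, omega=0.5, eps=1e-3):
            """Solve min ||W x||_1  s.t. ||A x - y||_2 <= eps."""
            n = A.shape[1]
            w = np.ones(n)
            w[list(support_est)] = omega      # trust the prior support estimate
            x = cp.Variable(n)
            objective = cp.Minimize(cp.norm1(cp.multiply(w, x)))
            constraints = [cp.norm2(A @ x - y) <= eps]
            cp.Problem(objective, constraints).solve()
            return x.value

    Setting omega = 1 recovers standard $\ell_1$ minimization, which is the baseline the recovery conditions are compared against.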

    Recovery of signals by a weighted $\ell_2/\ell_1$ minimization under arbitrary prior support information

    Full text link
    In this paper, we introduce a weighted $\ell_2/\ell_1$ minimization to recover block sparse signals with arbitrary prior support information. When partial prior support information is available, a sufficient condition based on the high order block RIP is derived to guarantee stable and robust recovery of block sparse signals via the weighted $\ell_2/\ell_1$ minimization. We then show that if the accuracy of an arbitrary prior block support estimate is at least $50\%$, the sufficient recovery condition of the weighted $\ell_2/\ell_1$ minimization is weaker than that of the $\ell_2/\ell_1$ minimization, and the weighted $\ell_2/\ell_1$ minimization provides better upper bounds on the recovery error in terms of the measurement noise and the compressibility of the signal. Moreover, we illustrate the advantages of the weighted $\ell_2/\ell_1$ minimization approach for the recovery of block sparse signals under uniform and non-uniform prior information through extensive numerical experiments. The significance of the results lies in the fact that making explicit use of block sparsity and partial support information of block sparse signals achieves better recovery performance than treating the signals as sparse in the conventional sense, which ignores the additional structure and prior support information in the problem.
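
    The weighted $\ell_2/\ell_1$ objective replaces per-coefficient weights with per-block weights on block $\ell_2$ norms. The sketch below is an illustrative formulation assuming equal-size blocks and a single weight omega on blocks in the prior block-support estimate; it is not the paper's code.

        # Sketch: weighted l2/l1 (block sparse) recovery with a prior block support.
        import numpy as np
        import cvxpy as cp

        def weighted_block_l1_recover(A, y, block_size, block_support_est,
                                      omega=0.5, eps=1e-3):
            n = A.shape[1]
            n_blocks = n // block_size
            w = np.ones(n_blocks)
            w[list(block_support_est)] = omega
            x = cp.Variable(n)
            # weighted sum of block l2 norms: sum_i w_i * ||x[block i]||_2
            block_norms = [w[i] * cp.norm2(x[i * block_size:(i + 1) * block_size])
                           for i in range(n_blocks)]
            prob = cp.Problem(cp.Minimize(sum(block_norms)),
                              [cp.norm2(A @ x - y) <= eps])
            prob.solve()
            return x.value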

    A sharp recovery condition for sparse signals with partial support information via orthogonal matching pursuit

    Full text link
    This paper considers the exact recovery of $k$-sparse signals in the noiseless setting and support recovery in the noisy case when some prior information on the support of the signals is available. This prior support consists of two parts: one part is a subset of the true support and the other part lies outside of the true support. For $k$-sparse signals $\mathbf{x}$ with a prior support composed of $g$ true indices and $b$ wrong indices, we show that if the restricted isometry constant (RIC) $\delta_{k+b+1}$ of the sensing matrix $\mathbf{A}$ satisfies $$\delta_{k+b+1} < \frac{1}{\sqrt{k-g+1}},$$ then the orthogonal matching pursuit (OMP) algorithm can perfectly recover the signal $\mathbf{x}$ from $\mathbf{y}=\mathbf{A}\mathbf{x}$ in $k-g$ iterations. Moreover, we show that the above sufficient condition on the RIC is sharp. In the noisy case, we achieve exact recovery of the remainder support (the part of the true support outside of the prior support) of the $k$-sparse signal $\mathbf{x}$ from $\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{v}$ under appropriate conditions. For the remainder support recovery, we also obtain a necessary condition based on the minimum magnitude of the partial nonzero elements of the signal $\mathbf{x}$.
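
    A minimal sketch of the recovery procedure analyzed above: OMP warm-started with the (possibly partly wrong) prior support and run for $k-g$ further greedy iterations. The helper below is an illustration under that reading; variable names are assumptions.

        # Sketch: OMP initialized with a prior support (g correct + b wrong indices),
        # then run for n_iter = k - g greedy iterations.
        import numpy as np

        def omp_with_prior_support(A, y, prior_support, n_iter):
            support = list(prior_support)
            for _ in range(n_iter):
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                r = y - A[:, support] @ coef          # residual on current support
                corr = np.abs(A.T @ r)
                corr[support] = -np.inf               # do not reselect chosen columns
                support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x_hat = np.zeros(A.shape[1])
            x_hat[support] = coef
            return x_hat, sorted(support)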

    Compressive Sensing with Prior Support Quality Information and Application to Massive MIMO Channel Estimation with Temporal Correlation

    Full text link
    In this paper, we consider the problem of compressive sensing (CS) recovery when a prior support and information about its quality are available. Unlike classical works, which exploit the prior support blindly, we propose novel CS recovery algorithms that exploit the prior support adaptively based on the quality information. We analyze the distortion bound of the signal recovered by the proposed algorithm and show that a higher-quality prior support leads to better CS recovery performance. We also show that the proposed algorithm converges in $\mathcal{O}(\log \mathrm{SNR})$ steps. To tolerate possible model mismatch, we further propose robustness designs to combat incorrect prior support quality information. Finally, we apply the proposed framework to sparse channel estimation in massive MIMO systems with temporal correlation to further reduce the required pilot training overhead. Comment: 14 double-column pages, accepted for publication in IEEE Transactions on Signal Processing in May, 201
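
    The adaptive recovery algorithm itself is not spelled out in the abstract, so the following is only a hypothetical illustration of the general idea: let a quality score q in [0, 1] for the prior support control how strongly that support is trusted in a weighted-$\ell_1$ (LASSO-type) problem. None of the names or the weighting rule below are taken from the paper.

        # Hypothetical illustration (not the paper's algorithm): quality-adaptive
        # weighting of a prior support in a weighted-l1 recovery problem.
        import numpy as np
        import cvxpy as cp

        def quality_weighted_recover(A, y, prior_support, q, lam=0.1):
            n = A.shape[1]
            w = np.ones(n)
            w[list(prior_support)] = 1.0 - q   # higher quality -> weaker penalty there
            x = cp.Variable(n)
            obj = cp.Minimize(0.5 * cp.sum_squares(A @ x - y)
                              + lam * cp.norm1(cp.multiply(w, x)))
            cp.Problem(obj).solve()
            return x.value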

    Cross Validation in Compressive Sensing and its Application of OMP-CV Algorithm

    Full text link
    Compressive sensing (CS) is a data acquisition technique that measures sparse or compressible signals at a sampling rate lower than their Nyquist rate. Results show that sparse signals can be reconstructed using greedy algorithms, which often require prior knowledge such as the signal sparsity or the noise level. As a substitute for such prior knowledge, cross validation (CV), a statistical method that examines whether a model overfits its data, has been proposed to determine the stopping condition of greedy algorithms. This paper first analyzes cross validation in a general compressive sensing framework and develops general cross validation techniques that can be used to understand CV-based sparse recovery algorithms. Furthermore, we provide a theoretical analysis of OMP-CV, a cross validation modification of orthogonal matching pursuit, which has very good sparse recovery performance. Finally, numerical experiments are given to validate our theoretical results and to investigate the behavior of OMP-CV.
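
    The core idea of OMP-CV is easy to sketch: hold out a fraction of the measurements, grow the support greedily on the remaining ones, and stop once the held-out (cross validation) error stops decreasing. The code below is a minimal sketch of that idea, not the paper's implementation; the split ratio and stopping rule are assumptions.

        # Sketch: OMP with a cross-validation stopping rule.
        import numpy as np

        def omp_cv(A, y, cv_fraction=0.2, max_iter=None):
            m, n = A.shape
            m_cv = int(cv_fraction * m)
            A_cv, y_cv = A[:m_cv], y[:m_cv]      # held-out rows for validation
            A_tr, y_tr = A[m_cv:], y[m_cv:]      # rows used for greedy recovery
            max_iter = max_iter or min(A_tr.shape)
            support, best_x, best_cv_err = [], np.zeros(n), np.inf
            r = y_tr.copy()
            for _ in range(max_iter):
                corr = np.abs(A_tr.T @ r)
                corr[support] = -np.inf
                support.append(int(np.argmax(corr)))
                coef, *_ = np.linalg.lstsq(A_tr[:, support], y_tr, rcond=None)
                r = y_tr - A_tr[:, support] @ coef
                x = np.zeros(n)
                x[support] = coef
                cv_err = np.linalg.norm(y_cv - A_cv @ x)
                if cv_err < best_cv_err:
                    best_cv_err, best_x = cv_err, x
                else:
                    break                        # CV error rose: likely overfitting
            return best_x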

    Dynamic Filtering of Time-Varying Sparse Signals via l1 Minimization

    Full text link
    Despite the importance of sparse signal models and the increasing prevalence of high-dimensional streaming data, there are relatively few algorithms for dynamic filtering of time-varying sparse signals. Of the existing algorithms, fewer still provide strong performance guarantees. This paper examines two algorithms for dynamic filtering of sparse signals that are based on efficient $\ell_1$ optimization methods. We first present an analysis of a simple algorithm (BPDN-DF) that works well when the system dynamics are known exactly. We then introduce a novel second algorithm (RWL1-DF) that is more computationally complex than BPDN-DF but performs better in practice, especially when the system dynamics model is inaccurate. Robustness to model inaccuracy is achieved by using a hierarchical probabilistic data model and propagating higher-order statistics from the previous estimate (akin to Kalman filtering) in the sparse inference process. We demonstrate the properties of these algorithms on both simulated data and natural video sequences. Taken together, the algorithms presented in this paper represent the first strong performance analysis of dynamic filtering algorithms for time-varying sparse signals as well as state-of-the-art performance in this emerging application. Comment: 26 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:1208.032
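
    BPDN-DF admits a compact one-step description: at each time step, trade off sparsity, data fidelity, and closeness to the state predicted from the previous estimate. The sketch below assumes the common quadratic-dynamics form of that objective; the exact objective and weights used in the paper may differ.

        # Sketch: one BPDN-style dynamic-filtering step (assumed objective form:
        # data fit + l1 sparsity + quadratic pull toward the predicted state).
        import cvxpy as cp

        def bpdn_df_step(A, y_t, x_prev, D, lam=0.1, gamma=0.5):
            """x_prev: previous estimate; D: (assumed known) dynamics matrix."""
            n = A.shape[1]
            x = cp.Variable(n)
            obj = cp.Minimize(cp.sum_squares(A @ x - y_t)
                              + lam * cp.norm1(x)
                              + gamma * cp.sum_squares(x - D @ x_prev))
            cp.Problem(obj).solve()
            return x.value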

    Blind Recovery of Sparse Signals from Subsampled Convolution

    Full text link
    Subsampled blind deconvolution is the recovery of two unknown signals from samples of their convolution. To overcome the ill-posedness of this problem, solutions based on priors tailored to specific applications have been developed; in particular, sparsity models have provided promising priors. However, despite the empirical success of these methods in many applications, existing analyses are limited in two main ways: by a disparity between the theoretical assumptions on the signal and/or measurement model and practical setups, or by the failure to provide a performance guarantee for parameter values within the optimal regime defined by the information-theoretic limits. In particular, it has been shown that a naive sparsity model is not a strong enough prior for identifiability in the blind deconvolution problem. Instead, in addition to sparsity, we adopt a conic constraint that enforces spectral flatness of the signals. Under this prior, we provide an iterative algorithm that achieves guaranteed performance in blind deconvolution at near-optimal sample complexity. Numerical results show that the empirical performance of the iterative algorithm agrees with the performance guarantee.
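
    For reference, the measurement model itself (samples of the convolution of two unknown signals) can be written down directly; the sketch below assumes circular convolution and uniformly random subsampling, which are modeling choices and not necessarily those of the paper. The recovery algorithm is not sketched here.

        # Forward model only: observe m random samples of the circular convolution x * h.
        import numpy as np

        def subsampled_convolution(x, h, m, rng=None):
            rng = rng or np.random.default_rng()
            n = len(x)
            full = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular conv.
            idx = rng.choice(n, size=m, replace=False)                  # subsampling
            return full[idx], idx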

    Support Recovery with Orthogonal Matching Pursuit in the Presence of Noise: A New Analysis

    Full text link
    Support recovery of sparse signals from compressed linear measurements is a fundamental problem in compressed sensing (CS). In this paper, we study the orthogonal matching pursuit (OMP) algorithm for support recovery under noise. We consider two signal-to-noise ratio (SNR) settings: i) the SNR depends on the sparsity level $K$ of the input signals, and ii) the SNR is an absolute constant independent of $K$. For the first setting, we establish necessary and sufficient conditions for exact support recovery with OMP, expressed as lower bounds on the SNR. Our results indicate that, in order to ensure exact support recovery of all $K$-sparse signals with the OMP algorithm, the SNR must scale at least linearly with the sparsity level $K$. In the second setting, since the necessary condition on the SNR is not fulfilled, exact support recovery with OMP is impossible. However, our analysis shows that recovery with an arbitrarily small but constant fraction of errors is possible with the OMP algorithm. This result may be useful for practical applications where obtaining a large fraction of the support positions is adequate. Comment: 13 pages
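
    A compact way to state the first result, under an assumed (but common) SNR definition; the constant $c$ below and the precise definition are not taken from the abstract:

        \mathrm{SNR} \;=\; \frac{\|\mathbf{A}\mathbf{x}\|_2^2}{\|\mathbf{v}\|_2^2},
        \qquad
        \text{exact support recovery of all } K\text{-sparse signals by OMP}
        \;\Longrightarrow\; \mathrm{SNR} \,\gtrsim\, c\,K \ \text{ for some constant } c>0.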

    Recovery analysis for weighted mixed $\ell_2/\ell_p$ minimization with $0<p\leq 1$

    Full text link
    We study the recovery conditions of weighted mixed $\ell_2/\ell_p$ $(0<p\leq 1)$ minimization for block sparse signal reconstruction from compressed measurements when partial block support information is available. We show that the block $p$-restricted isometry property (RIP) can ensure robust recovery. Moreover, we present a sufficient and necessary condition for recovery based on the weighted block $p$-null space property. The relationship between the block $p$-RIP and the weighted block $p$-null space property is established. Finally, we illustrate our results with a series of numerical experiments.
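
    For concreteness, the weighted mixed-norm program can be written as follows, under assumed notation ($\mathbf{x}[i]$ for the $i$-th block, $\widetilde{T}$ for the estimated block support, weight $0\le\omega\le 1$, noise level $\epsilon$); the exact weighting scheme is an assumption:

        \min_{\mathbf{x}} \ \sum_{i} w_i\,\|\mathbf{x}[i]\|_2^{p}
        \quad \text{s.t.} \quad \|\mathbf{A}\mathbf{x}-\mathbf{y}\|_2 \le \epsilon,
        \qquad
        w_i = \begin{cases} \omega, & i \in \widetilde{T},\\ 1, & i \notin \widetilde{T}. \end{cases}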

    Non-Convex Compressed Sensing Using Partial Support Information

    Full text link
    In this paper we address the recovery conditions of weighted $\ell_p$ minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that weighted $\ell_p$ minimization with $0<p<1$ is stable and robust under weaker sufficient conditions than weighted $\ell_1$ minimization. Moreover, the sufficient recovery conditions of weighted $\ell_p$ minimization are weaker than those of regular $\ell_p$ minimization if at least $50\%$ of the support estimate is accurate. We also review some existing algorithms for solving the non-convex $\ell_p$ problem and illustrate our results with numerical experiments. Comment: 22 pages, 10 figures
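
    One standard heuristic for the non-convex weighted $\ell_p$ problem is iteratively reweighted $\ell_1$; the sketch below combines it with the partial-support weights as an illustration only, and is not claimed to be among the algorithms reviewed in the paper.

        # Illustration: iteratively reweighted l1 as a surrogate for weighted lp
        # minimization, with extra down-weighting of the prior support estimate.
        import numpy as np
        import cvxpy as cp

        def weighted_lp_irl1(A, y, support_est, p=0.5, omega=0.5, eps=1e-3,
                             n_outer=5, delta=1e-2):
            n = A.shape[1]
            w_prior = np.ones(n)
            w_prior[list(support_est)] = omega
            x_val = np.zeros(n)
            for _ in range(n_outer):
                w_lp = (np.abs(x_val) + delta) ** (p - 1.0)   # lp-style reweighting
                x = cp.Variable(n)
                prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w_prior * w_lp, x))),
                                  [cp.norm2(A @ x - y) <= eps])
                prob.solve()
                x_val = x.value
            return x_val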