5 research outputs found

    Truncated Sparse Approximation Property and Truncated $q$-Norm Minimization

    This paper considers the recovery of approximately sparse signals and low-rank matrices from noisy measurements via truncated norm minimization $\min_{x}\|x_T\|_q$ and $\min_{X}\|X_T\|_{S_q}$. We first introduce the truncated sparse approximation property, a more general robust null space property, and establish the stable recovery of signals and matrices under this property. We also explore the relationship between the restricted isometry property and the truncated sparse approximation property, and prove that if a measurement matrix $A$ or linear map $\mathcal{A}$ satisfies the truncated sparse approximation property of order $k$, then the first inequality in the restricted isometry property of order $k$ and of order $2k$ holds for certain different constants $\delta_{k}$ and $\delta_{2k}$, respectively. Last, we show that if $\delta_{t(k+|T^c|)}<\sqrt{(t-1)/t}$ for some $t\geq 4/3$, then the measurement matrix $A$ and the linear map $\mathcal{A}$ satisfy the truncated sparse approximation property of order $k$. It should be pointed out that when $T^c=\emptyset$, this conclusion implies that the sparse approximation property of order $k$ is weaker than the restricted isometry property of order $tk$.
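    A minimal sketch of the $q=1$ case of the truncated norm model above, solved with cvxpy: only the entries outside a presumed-known partial support are penalized. The names A, b, T and the noise level are illustrative assumptions, not taken from the paper.

```python
# Truncated l1 minimization sketch (q = 1 case): minimize ||x_T||_1 subject to
# an l2-bounded data-fit constraint. Sizes, support knowledge, and noise level
# are illustrative assumptions only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)       # Gaussian measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
sigma = 1e-3
b = A @ x_true + sigma * rng.standard_normal(m)    # noisy measurements

# Suppose two of the true support indices are (approximately) known;
# T collects the remaining indices, whose l1 norm is minimized.
known = support[:2]
T = np.setdiff1d(np.arange(n), known)

x = cp.Variable(n)
objective = cp.Minimize(cp.norm1(x[T]))            # truncated l1 norm ||x_T||_1
constraints = [cp.norm2(A @ x - b) <= sigma * np.sqrt(m)]
cp.Problem(objective, constraints).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```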

    Deterministic Analysis of Weighted BPDN With Partially Known Support Information

    In this paper, with the aid of the powerful Restricted Isometry Constant (RIC), a deterministic (non-stochastic) analysis, which includes a series of sufficient conditions (related to the RIC order) and their resultant error estimates, is established for the weighted Basis Pursuit De-Noising (BPDN) to guarantee robust signal recovery when Partially Known Support Information (PKSI) of the signal is available. Specifically, the obtained conditions nontrivially extend those induced recently for the traditional constrained weighted $\ell_{1}$-minimization model to its unconstrained counterpart, i.e., the weighted BPDN. The obtained error estimates are also comparable to the analogous ones induced previously for the robust recovery of signals with PKSI from some constrained models. Moreover, these results to some degree complement the recent investigation of the weighted BPDN that is based on stochastic analysis.
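    As a hedged illustration of the unconstrained weighted $\ell_1$ model discussed above, the sketch below solves a weighted BPDN problem in which entries on an estimated partial support receive weights smaller than 1. The names (A, b, lam, weights, support_estimate) and all sizes are illustrative assumptions.

```python
# Weighted BPDN sketch: minimize 0.5*||Ax - b||_2^2 + lam * ||W x||_1, where
# the weight matrix W down-weights entries believed (from PKSI) to be nonzero.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, k = 50, 120, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Partially known support: assume 4 of the 6 true indices are known.
support_estimate = support[:4]
weights = np.ones(n)
weights[support_estimate] = 0.1          # smaller weight on the known part

lam = 0.05
x = cp.Variable(n)
loss = 0.5 * cp.sum_squares(A @ x - b) + lam * cp.norm1(cp.multiply(weights, x))
cp.Problem(cp.Minimize(loss)).solve()
print("weighted BPDN error:", np.linalg.norm(x.value - x_true))
```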

    Matrix Recovery from Rank-One Projection Measurements via Nonconvex Minimization

    In this paper, we consider matrix recovery from the rank-one projection measurements proposed in [Cai and Zhang, Ann. Statist., 43 (2015), 102-138] via nonconvex minimization. We establish a sufficient identifiability condition that guarantees the exact recovery of low-rank matrices via Schatten-$p$ minimization $\min_{X}\|X\|_{S_p}^p$ for $0<p<1$ under an affine constraint, and the stable recovery of low-rank matrices under an $\ell_q$ constraint and a Dantzig selector constraint. Our condition is also sufficient to guarantee low-rank matrix recovery via least $q$ minimization $\min_{X}\|\mathcal{A}(X)-b\|_{q}^q$ for $0<q\leq 1$. We further extend our result to the Gaussian design distribution and show that, with high probability, any matrix can be stably recovered from rank-one projections drawn from Gaussian distributions via least $1$ minimization.
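    The sketch below generates rank-one projection measurements $y_i=\beta_i^{T}X\gamma_i$ and recovers the matrix with nuclear-norm minimization, i.e., the convex $p=1$ endpoint of the Schatten-$p$ model above (the nonconvex $0<p<1$ case would require an iterative scheme instead). All names, sizes, and the noiseless setting are illustrative assumptions.

```python
# Rank-one projection measurements with nuclear-norm recovery (convex p = 1
# stand-in for the Schatten-p model); sizes are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n1, n2, r, m = 15, 15, 2, 200
U = rng.standard_normal((n1, r))
V = rng.standard_normal((n2, r))
X_true = U @ V.T                                        # rank-r ground truth

B = rng.standard_normal((m, n1))                        # left vectors beta_i
G = rng.standard_normal((m, n2))                        # right vectors gamma_i
y = np.einsum('mi,ij,mj->m', B, X_true, G)              # y_i = beta_i^T X gamma_i

X = cp.Variable((n1, n2))
measurements = cp.sum(cp.multiply(B @ X, G), axis=1)    # beta_i^T X gamma_i
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [measurements == y])                  # noiseless affine constraint
prob.solve()
print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```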

    Coherence-Based Performance Guarantee of Regularized $\ell_{1}$-Norm Minimization and Beyond

    In this paper, we consider recovering the signal $\bm{x}\in\mathbb{R}^{n}$ from its few noisy measurements $\bm{b}=A\bm{x}+\bm{z}$, where $A\in\mathbb{R}^{m\times n}$ with $m\ll n$ is the measurement matrix and $\bm{z}\in\mathbb{R}^{m}$ is the measurement noise/error. We first establish a coherence-based performance guarantee for a regularized $\ell_{1}$-norm minimization model to recover such signals $\bm{x}$ in the presence of $\ell_{2}$-norm bounded noise, i.e., $\|\bm{z}\|_{2}\leq\epsilon$, and then extend these theoretical results to guarantee the robust recovery of signals corrupted with Dantzig Selector (DS) type noise, i.e., $\|A^{T}\bm{z}\|_{\infty}\leq\epsilon$, as well as structured block-sparse signal recovery in the presence of bounded noise. To the best of our knowledge, we are the first to nontrivially extend the sharp uniform recovery condition derived by Cai, Wang and Xu (2010) for the constrained $\ell_{1}$-norm minimization model, which takes the form $\mu<\frac{1}{2k-1}$, where $\mu$ is the (mutual) coherence of $A$, to two unconstrained regularized $\ell_{1}$-norm minimization models that guarantee the robust recovery of any signal (not necessarily $k$-sparse) under the $\ell_{2}$-norm bounded noise and DS type noise settings, respectively. Besides, a uniform recovery condition and its two resulting error estimates are also established, to our knowledge for the first time, for robust block-sparse signal recovery using a regularized mixed $\ell_{2}/\ell_{1}$-norm minimization model; these results complement the existing theoretical investigation of this model, which focuses on non-uniform recovery conditions and/or robust signal recovery in the presence of random noise.
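    A minimal sketch of the quantity at the heart of the condition quoted above: the mutual coherence $\mu$ of a column-normalized matrix, checked against the threshold $1/(2k-1)$. The matrix size and sparsity level $k$ are illustrative choices only.

```python
# Compute the mutual coherence mu = max_{i != j} |<a_i, a_j>| of a matrix with
# unit-norm columns and test the uniform recovery condition mu < 1/(2k - 1).
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 64, 128, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)          # normalize columns

gram = np.abs(A.T @ A)                  # pairwise column correlations
np.fill_diagonal(gram, 0.0)
mu = gram.max()                         # mutual coherence of A

print(f"mu = {mu:.3f}, threshold 1/(2k-1) = {1/(2*k - 1):.3f}")
print("uniform recovery condition holds:", mu < 1 / (2 * k - 1))
```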

    The high-order block RIP for non-convex block-sparse compressed sensing

    This paper concentrates on the recovery, from linear measurements, of block-sparse signals, which are not only sparse but whose nonzero elements are also arranged in blocks (clusters) rather than being arbitrarily distributed over the vector. We establish high-order sufficient conditions based on the block RIP that ensure the exact recovery of every block $s$-sparse signal in the noiseless case via the mixed $l_2/l_p$ minimization method, as well as stable and robust recovery when the signals are not exactly block-sparse and the measurements are noisy. Additionally, a lower bound on the number of random Gaussian measurements needed for the condition to hold with overwhelming probability is obtained. Furthermore, numerical experiments demonstrate the performance of the proposed algorithm.
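    For illustration, the sketch below recovers a block-sparse signal with the mixed $l_2/l_1$ norm, i.e., the convex $p=1$ instance of the mixed $l_2/l_p$ model discussed above (the nonconvex $0<p<1$ case would typically need an iterative reweighting scheme). Block layout, sizes, and noise level are illustrative assumptions.

```python
# Block-sparse recovery via mixed l2/l1 minimization: minimize the sum of the
# l2 norms of the blocks subject to an l2-bounded data-fit constraint.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
m, n, d = 60, 120, 4                     # d = block length, n // d = 30 blocks
num_blocks, s = n // d, 3                # s = number of active blocks
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
active = rng.choice(num_blocks, s, replace=False)
for blk in active:                       # fill s whole blocks with nonzeros
    x_true[blk * d:(blk + 1) * d] = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x = cp.Variable(n)
block_norms = cp.hstack([cp.norm2(x[i * d:(i + 1) * d]) for i in range(num_blocks)])
prob = cp.Problem(cp.Minimize(cp.sum(block_norms)),        # mixed l2/l1 norm
                  [cp.norm2(A @ x - b) <= 0.01 * np.sqrt(m)])
prob.solve()
print("block-sparse recovery error:", np.linalg.norm(x.value - x_true))
```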