
    Hidden cliques and the certification of the restricted isometry property

    Compressed sensing is a technique for finding sparse solutions to underdetermined linear systems. This technique relies on properties of the sensing matrix such as the restricted isometry property. Sensing matrices that satisfy this property with optimal parameters are mainly obtained via probabilistic arguments. Deciding whether a given matrix satisfies the restricted isometry property is a non-trivial computational problem. Indeed, we show in this paper that restricted isometry parameters cannot be approximated in polynomial time within any constant factor under the assumption that the hidden clique problem is hard. Moreover, on the positive side, we propose an improvement on the brute-force enumeration algorithm for checking the restricted isometry property.
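    As a point of reference for the brute-force enumeration baseline mentioned above, the sketch below computes the exact restricted isometry constant of order k by enumerating all column supports. It assumes the common squared-norm form of the RIP; the function name and the random test matrix are illustrative, not taken from the paper.

```python
# Minimal brute-force check of the restricted isometry constant, assuming the
# common convention (1 - delta) ||x||^2 <= ||Ax||^2 <= (1 + delta) ||x||^2
# for all k-sparse x.  Runtime grows as (n choose k), so this is a baseline only.
from itertools import combinations

import numpy as np


def rip_constant_bruteforce(A, k):
    """Exact delta_k of A obtained by enumerating every column support of size k."""
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), k):
        gram = A[:, S].T @ A[:, S]      # k x k Gram matrix of the submatrix A_S
        eig = np.linalg.eigvalsh(gram)  # ascending eigenvalues = squared singular values of A_S
        delta = max(delta, 1.0 - eig[0], eig[-1] - 1.0)
    return delta


# Illustrative usage on a small random Gaussian sensing matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)  # columns have unit norm in expectation
print(rip_constant_bruteforce(A, k=2))
```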

    Average-case Hardness of RIP Certification

    The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models. It is of high interest in compressed sensing and statistical learning, and it is particularly important for computationally efficient recovery methods. As a consequence, even though checking that RIP holds is NP-hard in general, there have been substantial efforts to find tractable proxies for it. These would allow the construction of RIP matrices and the polynomial-time verification of RIP for an arbitrary matrix. We consider the framework of average-case certifiers, which never wrongly declare that a matrix is RIP and are correct for most random instances. While such certifiers exist and are tractable in a suboptimal parameter regime, we show that certification in any better regime is a computationally hard task. Our results are based on a new, weaker assumption on the problem of detecting dense subgraphs.
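    The abstract above does not spell out a concrete certifier, but one classical example of a tractable proxy in the suboptimal regime is the mutual-coherence bound: for a matrix with unit-norm columns, delta_k <= (k - 1) * mu by Gershgorin's theorem, so answering "RIP" only when this bound is below the target delta never produces a false positive. The sketch below illustrates that idea; the function name is hypothetical and the approach is not the construction studied in the paper.

```python
# Hedged sketch of a coherence-based average-case certifier: it never wrongly
# declares a matrix to be (k, delta)-RIP, but it only succeeds when k is roughly
# O(sqrt(m)), i.e. in the suboptimal parameter regime mentioned in the abstract.
import numpy as np


def certify_rip_via_coherence(A, k, delta):
    """Return True only if (k - 1) * mu <= delta, which implies (k, delta)-RIP."""
    A = A / np.linalg.norm(A, axis=0)  # normalize columns to unit norm
    gram = np.abs(A.T @ A)
    np.fill_diagonal(gram, 0.0)
    mu = gram.max()                    # mutual coherence of A
    return (k - 1) * mu <= delta       # Gershgorin bound on the RIP constant


# Illustrative usage on a random Gaussian design.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 400))
print(certify_rip_via_coherence(A, k=2, delta=0.5))
```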

    Computational Complexity of Certifying Restricted Isometry Property

    Given a matrix $A$ with $n$ columns, a number $k<n$, and $0<\delta<1$, $A$ is $(k,\delta)$-RIP (Restricted Isometry Property) if, for any vector $x \in \mathbb{R}^n$ with at most $k$ non-zero coordinates, $(1-\delta)\|x\|_2 \leq \|Ax\|_2 \leq (1+\delta)\|x\|_2$. In many applications, such as compressed sensing and sparse recovery, it is desirable to construct RIP matrices with a large $k$ and a small $\delta$. Given the efficacy of random constructions in generating useful RIP matrices, the problem of certifying the RIP parameters of a matrix has become important. In this paper, we prove that it is hard to approximate the RIP parameters of a matrix assuming the Small Set Expansion Hypothesis. Specifically, we prove that for any arbitrarily large constant $C>0$ and any arbitrarily small constant $0<\delta<1$, there exists some $k$ such that, given a matrix $M$, it is SSE-hard to distinguish the following two cases: (Highly RIP) $M$ is $(k,\delta)$-RIP; (Far away from RIP) $M$ is not $(k/C, 1-\delta)$-RIP. Most previous results on the hardness of RIP certification hold only when $\delta=o(1)$. In practice, it is of interest to understand the complexity of certifying a matrix with $\delta$ close to $\sqrt{2}-1$, since matrices with $\delta = \sqrt{2}-1$ suffice for many real applications. Our hardness result holds for any constant $\delta$. Specifically, it shows that even if $\delta$ is very small, i.e. the matrix is in fact "strongly RIP", certifying that the matrix exhibits "weak RIP" is itself SSE-hard. In order to prove the hardness result, we prove a variant of Cheeger's inequality for sparse vectors.
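    For readability, the definition and the distinguishing problem above can be restated as a promise problem; this is only a restatement of the abstract, using its non-squared-norm convention.

```latex
% (k, delta)-RIP as stated in the abstract above
\[
  (1-\delta)\,\|x\|_2 \;\le\; \|Ax\|_2 \;\le\; (1+\delta)\,\|x\|_2
  \quad \text{for every } x \in \mathbb{R}^n \text{ with } \|x\|_0 \le k .
\]
% SSE-hard promise problem: given a matrix M, a constant C > 0 and 0 < delta < 1,
% distinguish
%   YES ("highly RIP"):        M is (k, \delta)-RIP;
%   NO  ("far away from RIP"): M is not (k/C, 1-\delta)-RIP.
```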

    The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing

    This paper deals with the computational complexity of conditions which guarantee that the NP-hard problem of finding the sparsest solution to an underdetermined linear system can be solved by efficient algorithms. In the literature, several such conditions have been introduced. The most well-known ones are the mutual coherence, the restricted isometry property (RIP), and the nullspace property (NSP). While evaluating the mutual coherence of a given matrix is easy, it has been suspected for some time that evaluating RIP and NSP is computationally intractable in general. We confirm these conjectures by showing that, for a given matrix A and positive integer k, computing the best constants for which the RIP or NSP hold is, in general, NP-hard. These results are based on the fact that determining the spark of a matrix is NP-hard, which is also established in this paper. Furthermore, we give several complexity statements about problems related to the above concepts. (13 pages; accepted for publication in IEEE Trans. Inf. Theory.)
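    The spark result in this abstract is the crux of the reductions, and the quantity itself is simple to state: the spark of A is the size of the smallest linearly dependent subset of its columns. The exhaustive sketch below makes the definition concrete; precisely because computing the spark is NP-hard, it is only usable on tiny matrices, and the function name is illustrative.

```python
# Hedged sketch: compute spark(A), the size of the smallest set of linearly
# dependent columns of A.  Exhaustive search over supports; exponential in general,
# which is consistent with the NP-hardness result discussed above.
from itertools import combinations

import numpy as np


def spark_bruteforce(A, tol=1e-10):
    """Smallest s such that some s columns of A are linearly dependent (n + 1 if none)."""
    n = A.shape[1]
    for s in range(1, n + 1):
        for S in combinations(range(n), s):
            if np.linalg.matrix_rank(A[:, S], tol=tol) < s:
                return s
    return n + 1                       # A has full column rank


# Illustrative usage: the third column is the sum of the first two, so spark = 3.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(spark_bruteforce(A))
```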

    Optimal detection of sparse principal components in high dimension

    We perform a finite sample analysis of the detection levels for sparse principal components of a high-dimensional covariance matrix. Our minimax optimal test is based on a sparse eigenvalue statistic. Alas, computing this test is known to be NP-complete in general, and we describe a computationally efficient alternative test using convex relaxations. Our relaxation is also proved to detect sparse principal components at near optimal detection levels, and it performs well on simulated datasets. Moreover, using polynomial-time reductions from theoretical computer science, we bring significant evidence that our results cannot be improved, thus revealing an inherent trade-off between statistical and computational performance. (Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/13-AOS1127.)
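    The statistic referred to above is a k-sparse largest eigenvalue of the empirical covariance matrix. The exhaustive sketch below computes that quantity directly, which is exactly the NP-hard step the paper replaces with a convex relaxation; names and the null-data example are illustrative, and details such as centering and calibration differ from the paper's exact test.

```python
# Hedged sketch of the k-sparse largest eigenvalue statistic
#   lambda_max^k(Sigma_hat) = max_{|S| = k} lambda_max(Sigma_hat[S, S]),
# computed by brute force over all supports of size k (exponential in k).
from itertools import combinations

import numpy as np


def k_sparse_largest_eigenvalue(X, k):
    """Brute-force sparse eigenvalue of the empirical covariance of an n x p sample X."""
    sigma_hat = np.cov(X, rowvar=False)  # p x p empirical covariance matrix
    p = sigma_hat.shape[0]
    best = -np.inf
    for S in combinations(range(p), k):
        sub = sigma_hat[np.ix_(S, S)]    # k x k principal submatrix
        best = max(best, np.linalg.eigvalsh(sub)[-1])
    return best


# Illustrative usage on null data (no sparse principal component present).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 12))
print(k_sparse_largest_eigenvalue(X, k=3))
```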