
    High dimensional linear regression using lattice basis reduction

    © 2018 Curran Associates Inc. All rights reserved. We consider a high-dimensional linear regression problem where the goal is to efficiently recover an unknown vector β∗ from n noisy linear observations Y = Xβ∗ + W ∈ R^n, for known X ∈ R^{n×p} and unknown W ∈ R^n. Unlike most of the literature on this model, we make no sparsity assumption on β∗. Instead, we adopt a regularization based on assuming that the underlying vectors β∗ have rational entries with the same denominator Q ∈ Z_{>0}. We call this the Q-rationality assumption. We propose a new polynomial-time algorithm for this task based on the seminal Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. We establish that under the Q-rationality assumption, our algorithm recovers the vector β∗ exactly, for a large class of distributions of the i.i.d. entries of X and non-zero noise W. We prove that it succeeds under small noise, even when the learner has access to only one observation (n = 1). Furthermore, we prove that in the case of Gaussian white noise W, n = o(p/log p), and sufficiently large Q, our algorithm tolerates a nearly optimal information-theoretic level of noise.
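
    The key point of the abstract is that exact recovery of β∗ reduces to finding a short vector in an integer lattice built from the observations, which LLL can do in polynomial time. The sketch below is only a toy illustration of that idea, not the paper's construction: it uses a standard integer-relation-style lattice and the fpylll bindings (assumed installed via pip install fpylll) to recover an integer coefficient vector, i.e. the Q = 1 case, from a single noiseless observation (n = 1). The weight M, the entry ranges, and the recovery check are all illustrative assumptions.

    import random
    from fpylll import IntegerMatrix, LLL

    p = 8                                                    # ambient dimension
    beta = [random.randint(-3, 3) for _ in range(p)]         # unknown integer vector (Q = 1 toy case)
    x = [random.randint(10**8, 10**9) for _ in range(p)]     # single row of the design matrix X
    y = sum(xj * bj for xj, bj in zip(x, beta))              # one noiseless observation (n = 1, W = 0)

    M = 10**12                                               # large weight forcing the last coordinate to vanish

    # Basis rows: e_j with M*x_j in the last column, plus a final row carrying M*y.
    # The integer combination (beta, -1) then maps to the short lattice vector (beta, 0).
    B = IntegerMatrix(p + 1, p + 1)
    for j in range(p):
        B[j, j] = 1
        B[j, p] = M * x[j]
    B[p, p] = M * y

    LLL.reduction(B)                                         # LLL-reduce the basis in place

    # Look for a reduced row of the form (+-beta, 0).
    for i in range(p + 1):
        row = [B[i, j] for j in range(p + 1)]
        if row[p] == 0 and any(row[:p]):
            if row[:p] == beta or [-c for c in row[:p]] == beta:
                print("recovered beta exactly:", beta)
                break

    In this noiseless Q = 1 toy, the vector (β∗, 0) lies in the lattice by construction and is much shorter than any unrelated integer relation among the x_j, so LLL typically returns it; handling rational β∗, non-zero noise W, and general n is what the paper's actual algorithm and analysis address.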
