Recent research has studied the role of sparsity in high dimensional
regression and signal reconstruction, establishing theoretical limits for
recovering sparse models from sparse data. This line of work shows that
ℓ1-regularized least squares regression can accurately estimate a sparse
linear model from n noisy examples in p dimensions, even if p is much
larger than n. In this paper we study a variant of this problem where the
original n input variables are compressed by a random linear transformation
to m ≪ n examples in p dimensions, and establish conditions under which a
sparse linear model can be successfully recovered from the compressed data. A
primary motivation for this compression procedure is to anonymize the data and
preserve privacy by revealing little information about the original data. We
characterize the number of random projections that are required for
ℓ1-regularized compressed regression to identify the nonzero coefficients
in the true model with probability approaching one, a property called
``sparsistence.'' In addition, we show that ℓ1-regularized compressed
regression asymptotically predicts as well as an oracle linear model, a
property called ``persistence.'' Finally, we characterize the privacy
properties of the compression procedure in information-theoretic terms,
establishing upper bounds on the mutual information between the compressed and
uncompressed data that decay to zero.
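
As a concrete illustration of the procedure described above, the following Python sketch compresses the data matrix and response by a common random Gaussian projection and then runs ℓ1-regularized least squares on the compressed pairs. All dimensions (n, p, m), the noise level, the choice of a Gaussian projection, and the regularization weight are illustrative assumptions, not values prescribed by the paper; scikit-learn's Lasso merely stands in for any ℓ1-regularized solver.

```python
# Minimal sketch of compressed regression: project (X, y) with a random
# Gaussian matrix Phi, then fit an l1-regularized linear model on the
# m << n compressed examples. All parameter values are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, m = 500, 1000, 120            # n examples, p dimensions, m << n

beta = np.zeros(p)
beta[:5] = 2.0                      # sparse true model: 5 nonzero coefficients
X = rng.standard_normal((n, p))
y = X @ beta + 0.1 * rng.standard_normal(n)

# Each of the m rows of Phi is a random linear combination of the
# original n examples; only the compressed data leave the data holder.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
X_tilde, y_tilde = Phi @ X, Phi @ y

# l1-regularized least squares on the compressed data.
beta_hat = Lasso(alpha=0.05, fit_intercept=False).fit(X_tilde, y_tilde).coef_
print("recovered support:", np.nonzero(np.abs(beta_hat) > 1e-3)[0])
```

Under the conditions characterized in the paper, with enough random projections the estimated support matches the true nonzero coefficients with probability approaching one (sparsistence), even though the solver never sees the uncompressed data.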