
### Sharp RIP Bound for Sparse Signal and Low-Rank Matrix Recovery

This paper establishes a sharp condition on the restricted isometry property
(RIP) for both the sparse signal recovery and low-rank matrix recovery. It is
shown that if the measurement matrix $A$ satisfies the RIP condition
$\delta_k^A<1/3$, then all $k$-sparse signals $\beta$ can be recovered exactly
via the constrained $\ell_1$ minimization based on $y=A\beta$. Similarly, if
the linear map $\cal M$ satisfies the RIP condition $\delta_r^{\cal M}<1/3$,
then all matrices $X$ of rank at most $r$ can be recovered exactly via the
constrained nuclear norm minimization based on $b={\cal M}(X)$. Furthermore, in
both cases it is not possible to do so in general when the condition does not
hold. In addition, noisy cases are considered and oracle inequalities are given
under the sharp RIP condition.
Comment: to appear in Applied and Computational Harmonic Analysis (2012).
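The recovery procedure in this abstract, constrained $\ell_1$ minimization, can be sketched numerically. The snippet below is a minimal illustration (not the paper's code) that assumes NumPy and SciPy are available: it casts $\min \|\beta\|_1$ subject to $A\beta = y$ as a linear program, using the fact that Gaussian measurement matrices satisfy RIP conditions with high probability when the number of measurements is sufficient.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||beta||_1 subject to A @ beta = y as a linear program.

    Splitting beta = u - v with u, v >= 0 turns the l1 objective into
    the linear objective sum(u) + sum(v).
    """
    n = A.shape[1]
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])            # A @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# A k-sparse signal and Gaussian measurements (RIP holds w.h.p.)
rng = np.random.default_rng(0)
m, p, k = 40, 100, 3
A = rng.standard_normal((m, p)) / np.sqrt(m)
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = rng.standard_normal(k)
y = A @ beta
beta_hat = basis_pursuit(A, y)           # recovers beta exactly (up to solver tolerance)
```

The dimensions and the LP reformulation are standard choices for illustration; the paper's contribution is the sharp constant $1/3$ in the RIP condition, not the solver.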

### Sparse Representation of a Polytope and Recovery of Sparse Signals and Low-rank Matrices

This paper considers compressed sensing and affine rank minimization in both
noiseless and noisy cases and establishes sharp restricted isometry conditions
for sparse signal and low-rank matrix recovery. The analysis relies on a key
technical tool which represents points in a polytope by convex combinations of
sparse vectors. The technique is elementary while leads to sharp results.
It is shown that for any given constant $t\ge {4/3}$, in compressed sensing
$\delta_{tk}^A < \sqrt{(t-1)/t}$ guarantees the exact recovery of all $k$
sparse signals in the noiseless case through the constrained $\ell_1$
minimization, and similarly in affine rank minimization
$\delta_{tr}^\mathcal{M}< \sqrt{(t-1)/t}$ ensures the exact reconstruction of
all matrices with rank at most $r$ in the noiseless case via the constrained
nuclear norm minimization. Moreover, for any $\epsilon>0$,
$\delta_{tk}^A<\sqrt{\frac{t-1}{t}}+\epsilon$ is not sufficient to guarantee
the exact recovery of all $k$-sparse signals for large $k$. A similar result
holds for matrix recovery. In addition, the conditions $\delta_{tk}^A <
\sqrt{(t-1)/t}$ and $\delta_{tr}^\mathcal{M}< \sqrt{(t-1)/t}$ are also shown to
be sufficient respectively for stable recovery of approximately sparse signals
and low-rank matrices in the noisy case.
Comment: to appear in IEEE Transactions on Information Theory.
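The matrix counterpart, constrained nuclear norm minimization, can also be sketched. The snippet below is an assumption-laden illustration rather than the paper's method: it solves the Lagrangian relaxation $\tfrac12\|\mathcal{M}(Z)-b\|^2 + \lambda\|Z\|_*$ (not the constrained program in the abstract) by FISTA with singular value soft-thresholding, with an illustrative $\lambda$ and iteration count; for small $\lambda$ and enough Gaussian measurements the solution is close to the true low-rank matrix.

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
n, r, m = 10, 1, 80                        # matrix size, rank, measurements
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
Ms = rng.standard_normal((m, n, n)) / np.sqrt(m)   # random linear map M
b = np.tensordot(Ms, X, axes=([1, 2], [0, 1]))     # b = M(X)

# FISTA on 0.5 * ||M(Z) - b||^2 + lam * ||Z||_*
A = Ms.reshape(m, -1)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of gradient
lam = 1e-3                                 # illustrative regularization level
Z = Y = np.zeros((n, n))
t = 1.0
for _ in range(1500):
    resid = np.tensordot(Ms, Y, axes=([1, 2], [0, 1])) - b
    grad = np.tensordot(resid, Ms, axes=(0, 0))
    Z_new = svt(Y - step * grad, step * lam)
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    Y = Z_new + ((t - 1) / t_new) * (Z_new - Z)
    Z, t = Z_new, t_new
# Z is now a close approximation of the rank-r matrix X
```

The regularized form is a common computational surrogate; the abstract's sharp condition $\delta_{tr}^\mathcal{M} < \sqrt{(t-1)/t}$ concerns when such nuclear norm programs succeed, independently of the solver used.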

### Inference for High-dimensional Differential Correlation Matrices

Motivated by differential co-expression analysis in genomics, we consider in
this paper estimation and testing of high-dimensional differential correlation
matrices. An adaptive thresholding procedure is introduced and theoretical
guarantees are given. Minimax rate of convergence is established and the
proposed estimator is shown to be adaptively rate-optimal over collections of
paired correlation matrices with approximately sparse differences. Simulation
results show that the procedure significantly outperforms two other natural
methods that are based on separate estimation of the individual correlation
matrices. The procedure is also illustrated through an analysis of a breast
cancer dataset, which provides evidence at the gene co-expression level that
several genes, a subset of which has been previously verified, are associated
with breast cancer. Hypothesis testing on the differential correlation
matrices is also considered. A test, which is particularly well suited for
testing against sparse alternatives, is introduced. In addition, other related
problems, including estimation of a single sparse correlation matrix,
estimation of the differential covariance matrices, and estimation of the
differential cross-correlation matrices, are also discussed.
Comment: Accepted for publication in Journal of Multivariate Analysis.
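The thresholding idea can be illustrated with a small simulation. The sketch below uses a single universal threshold as a simplification of the paper's entrywise-adaptive thresholds, and the constant 2 is illustrative; it shows why thresholding the entrywise difference of sample correlation matrices beats the raw difference when the true differential matrix is sparse.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n1, n2 = 30, 500, 500

# Two populations whose correlation matrices differ on a sparse set of entries
Sigma1 = np.eye(p)
Sigma2 = np.eye(p)
for i, j in [(0, 1), (2, 3), (4, 5)]:
    Sigma2[i, j] = Sigma2[j, i] = 0.5

X1 = rng.multivariate_normal(np.zeros(p), Sigma1, size=n1)
X2 = rng.multivariate_normal(np.zeros(p), Sigma2, size=n2)
R1 = np.corrcoef(X1, rowvar=False)
R2 = np.corrcoef(X2, rowvar=False)

D_raw = R1 - R2
# Hard-threshold the difference at tau ~ sqrt(log p * (1/n1 + 1/n2))
tau = 2.0 * np.sqrt(np.log(p) * (1 / n1 + 1 / n2))
D_hat = np.where(np.abs(D_raw) > tau, D_raw, 0.0)

D_true = Sigma1 - Sigma2       # here covariances are already correlations
err_raw = np.linalg.norm(D_raw - D_true)   # Frobenius error, no thresholding
err_thr = np.linalg.norm(D_hat - D_true)   # Frobenius error, thresholded
```

Thresholding zeroes out the many noisy near-zero entries while keeping the few large true differences, which is exactly the regime where the abstract's adaptive procedure attains the minimax rate.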

### High-dimensional Statistical Inference: from Vector to Matrix

Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted much recent attention in fields including statistics, applied mathematics, and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under the restricted isometry property) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion).
The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, $\delta_k^A<1/3$, $\delta_k^A+\theta_{k,k}^A <1$, or $\delta_{tk}^A < \sqrt{(t-1)/t}$ for any given constant $t\ge {4/3}$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through the constrained $\ell_1$ minimization, and similarly in affine rank minimization $\delta_r^\mathcal{M}<1/3$, $\delta_r^{\mathcal{M}}+\theta_{r, r}^{\mathcal{M}}<1$, or $\delta_{tr}^\mathcal{M}< \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via the constrained nuclear norm minimization. Moreover, for any $\epsilon>0$, $\delta_{k}^A < 1/3+\epsilon$, $\delta_k^A+\theta_{k,k}^A<1+\epsilon$, or $\delta_{tk}^A<\sqrt{\frac{t-1}{t}}+\epsilon$ is not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$. A similar result holds for matrix recovery. In addition, the conditions $\delta_k^A<1/3$, $\delta_k^A+\theta_{k,k}^A<1$, $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_r^\mathcal{M}<1/3$, $\delta_r^\mathcal{M}+\theta_{r,r}^\mathcal{M}<1$, $\delta_{tr}^\mathcal{M}< \sqrt{(t-1)/t}$ are also shown to be sufficient respectively for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections.
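The covariance application rests on a simple identity: if $y_i = \langle \xi_i, x_i\rangle$ with $x_i \sim N(0, \Sigma)$, then $E[y_i^2 \mid \xi_i] = \xi_i^\top \Sigma \, \xi_i = \langle \xi_i \xi_i^\top, \Sigma\rangle$, so each squared one-dimensional projection is a rank-one projection measurement of $\Sigma$. The sketch below illustrates only this identity, using plain least squares in a low dimension rather than the chapter's nuclear norm minimization; the spike strength, dimensions, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 50000

# Spiked covariance: identity plus a rank-one spike
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
Sigma = np.eye(n) + 2.0 * np.outer(u, u)

# One-dimensional projections y_i = <xi_i, x_i> with x_i ~ N(0, Sigma)
Xi = rng.standard_normal((m, n))
x = rng.multivariate_normal(np.zeros(n), Sigma, size=m)
y = np.einsum("ij,ij->i", Xi, x)

# E[y_i^2 | xi_i] = <xi_i xi_i^T, Sigma>: regress y^2 on vec(xi xi^T)
D = np.einsum("ij,ik->ijk", Xi, Xi).reshape(m, n * n)
vec_hat, *_ = np.linalg.lstsq(D, y ** 2, rcond=None)
Sigma_hat = vec_hat.reshape(n, n)
Sigma_hat = (Sigma_hat + Sigma_hat.T) / 2   # symmetrize the estimate
```

In high dimensions the least squares step breaks down and the low-rank (spiked) structure must be exploited, which is where the chapter's nuclear norm approach comes in.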
For the third part of the thesis, we consider another setting of low-rank matrix completion. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with differing extents of genomic measurement, which enables us to construct more accurate prediction rules for ovarian cancer survival.
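The structured missingness pattern has a clean exactly-low-rank special case: when a block of rows and a block of columns are observed and the observed corner block has full rank $r$, the missing block is determined in closed form. The sketch below illustrates that identity with hypothetical dimensions; the SMC estimator itself extends this to approximately low-rank matrices via spectral truncation, which the identity alone does not capture.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, r = 8, 8, 2
m1, m2 = 4, 4                        # numbers of observed rows and columns

X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank r
X11, X12 = X[:m1, :m2], X[:m1, m2:]  # observed blocks
X21, X22 = X[m1:, :m2], X[m1:, m2:]  # X22 is the missing block

# Exactly low-rank case: if rank(X11) = r, then X22 = X21 @ pinv(X11) @ X12
X22_hat = X21 @ np.linalg.pinv(X11) @ X12
```

Writing $X = UV$ with $U = [U_1; U_2]$ and $V = [V_1, V_2]$ makes the identity transparent: $X_{21} X_{11}^{+} X_{12} = U_2 V_1 (U_1 V_1)^{+} U_1 V_2 = U_2 V_2 = X_{22}$ whenever $U_1$ and $V_1$ have full rank $r$.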
