Low rank tensor recovery via iterative hard thresholding
We study extensions of compressive sensing and low rank matrix recovery
(matrix completion) to the recovery of low rank tensors of higher order from a
small number of linear measurements. While the theoretical understanding of low
rank matrix recovery is already well-developed, only a few contributions on the
low rank tensor recovery problem are available so far. In this paper, we
introduce versions of the iterative hard thresholding algorithm for several
tensor decompositions, namely the higher order singular value decomposition
(HOSVD), the tensor train format (TT), and the general hierarchical Tucker
decomposition (HT). We provide a partial convergence result for these
algorithms, based on a variant of the restricted isometry property of the
measurement operator that is adapted to the tensor decomposition at hand and
to the notion of tensor rank it induces. We show that subgaussian
measurement ensembles satisfy the tensor restricted isometry property with high
probability under a certain almost optimal bound on the number of measurements
which depends on the corresponding tensor format. These bounds are extended to
partial Fourier maps combined with random sign flips of the tensor entries.
Finally, we illustrate the performance of iterative hard thresholding methods
for tensor recovery via numerical experiments where we consider recovery from
Gaussian random measurements, tensor completion (recovery of missing entries),
and Fourier measurements for third order tensors.
Comment: 34 pages
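The iterative hard thresholding template behind all of these variants alternates a gradient step on the data-fit term with a projection onto the low-rank set. A minimal sketch for the matrix case, with truncated SVD standing in for the paper's HOSVD/TT/HT truncations (step size and iteration count are illustrative choices, not the paper's):

```python
import numpy as np

def hard_threshold_rank(X, r):
    """Project X onto matrices of rank at most r (truncated SVD, Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def iht_low_rank(A, y, shape, r, iters=100):
    """Generic IHT sketch: X <- H_r(X + step * A^T (y - A vec(X))).
    A is an m x (shape[0]*shape[1]) measurement matrix acting on the
    vectorized signal; the step 1/||A||^2 keeps the residual non-increasing."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    X = np.zeros(shape)
    for _ in range(iters):
        G = (A.T @ (y - A @ X.ravel())).reshape(shape)
        X = hard_threshold_rank(X + step * G, r)
    return X
```

For a tensor format, `hard_threshold_rank` would be replaced by the corresponding (generally only quasi-optimal) low-rank truncation, which is the point where the tensor variants differ from the matrix case.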
On Iterative Hard Thresholding Methods for High-dimensional M-Estimation
The use of M-estimators in generalized linear regression models in high
dimensional settings requires risk minimization with hard L0 constraints. Of
the known methods, the class of projected gradient descent (also known as
iterative hard thresholding (IHT)) methods is known to offer the fastest and
most scalable solutions. However, the current state-of-the-art is only able to
analyze these methods in extremely restrictive settings which do not hold in
high dimensional statistical models. In this work we bridge this gap by
providing the first analysis for IHT-style methods in the high dimensional
statistical setting. Our bounds are tight and match known minimax lower bounds.
Our results rely on a general analysis framework that enables us to analyze
several popular hard thresholding style algorithms (such as HTP, CoSaMP, SP) in
the high dimensional regression setting. We also extend our analysis to a large
family of "fully corrective methods" that includes two-stage and partial
hard-thresholding algorithms. We show that our results hold for the problem of
sparse regression, as well as low-rank matrix recovery.
Comment: 20 pages, 3 figures. To appear in the proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS 2014).
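Specialized to sparse linear regression, the IHT-style iteration the abstract refers to fits in a few lines. The sketch below uses a conservative illustrative step size (1/||A||^2, which makes the objective non-increasing); the paper's analyzed variants may use different step-size rules:

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht_sparse(A, y, k, iters=300):
    """IHT sketch for sparse least squares:
    x <- H_k(x + step * A^T (y - A x))."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x
```

HTP, CoSaMP, and SP mentioned in the abstract differ mainly in how they refine the support chosen by the thresholding step (e.g. by adding a least-squares fit on the selected coordinates).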
Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms
This paper treats the problem of minimizing a general continuously
differentiable function subject to sparsity constraints. We present and analyze
several different optimality criteria which are based on the notions of
stationarity and coordinate-wise optimality. These conditions are then used to
derive three numerical algorithms aimed at finding points satisfying the
resulting optimality criteria: the iterative hard thresholding method and the
greedy and partial sparse-simplex methods. The first algorithm is essentially a
gradient projection method while the remaining two algorithms are of coordinate
descent type. The theoretical convergence of these methods and their relations
to the derived optimality conditions are studied. The algorithms and results
are illustrated by several numerical examples.
Comment: submitted to SIAM Optimization
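A simplified coordinate-descent sketch in the spirit of the greedy sparse-simplex idea, specialized to f(x) = 0.5*||Ax - b||^2 (the swap moves of the full method are omitted, and function names are illustrative):

```python
import numpy as np

def greedy_coordinate_sparse(A, b, s, iters=50):
    """At each step take the single-coordinate update (within the sparsity
    budget s) that decreases f(x) = 0.5*||Ax - b||^2 the most."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = np.sum(A * A, axis=0)  # per-coordinate curvature of f
    for _ in range(iters):
        r = b - A @ x
        best_gain, best = 1e-12, None
        n_support = np.count_nonzero(x)
        for i in range(n):
            if col_sq[i] == 0.0:
                continue  # zero column: no 1-D minimizer
            if x[i] == 0.0 and n_support >= s:
                continue  # adding coordinate i would exceed the budget
            t = x[i] + A[:, i] @ r / col_sq[i]   # exact 1-D minimizer
            gain = 0.5 * col_sq[i] * (t - x[i]) ** 2  # decrease in f
            if gain > best_gain:
                best_gain, best = gain, (i, t)
        if best is None:
            break  # no improving coordinate move: a CW-type stationary point
        i, t = best
        x[i] = t
    return x
```

The stopping condition illustrates the coordinate-wise optimality notion: the method halts exactly when no single-coordinate change within the sparsity budget can decrease the objective.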
Reliable recovery of hierarchically sparse signals for Gaussian and Kronecker product measurements
We propose and analyze a solution to the problem of recovering a block sparse
signal with sparse blocks from linear measurements. Such problems naturally
emerge inter alia in the context of mobile communication, in order to meet the
scalability and low complexity requirements of massive antenna systems and
massive machine-type communication. We introduce a new variant of the Hard
Thresholding Pursuit (HTP) algorithm referred to as HiHTP. We provide both a
proof of convergence and a recovery guarantee for noisy Gaussian measurements
that exhibit an improved asymptotic scaling in terms of the sampling complexity
in comparison with the usual HTP algorithm. Furthermore, hierarchically sparse
signals and Kronecker product structured measurements naturally arise together
in a variety of applications. We establish the efficient reconstruction of
hierarchically sparse signals from Kronecker product measurements using the
HiHTP algorithm. Additionally, we provide analytical results that connect our
recovery conditions to generalized coherence measures. Again, our recovery
results exhibit substantial improvement in the asymptotic sampling complexity
scaling over the standard setting. Finally, we validate in numerical
experiments that for hierarchically sparse signals, HiHTP performs
significantly better than HTP.
Comment: 11+4 pages, 5 figures. V3: Incomplete funding information corrected
and minor typos corrected. V4: Change of title and additional author Axel
Flinth. Included new results on Kronecker product measurements and relations
of HiRIP to hierarchical coherence measures. Improved presentation of general
hierarchically sparse signals and correction of minor typos.
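The hierarchical thresholding at the heart of HiHTP keeps, per block, the sigma largest entries, and then the s blocks of largest norm; combined with a least-squares fit on the selected support this gives an HTP-style loop. The following is a simplified reading of the algorithm, not the authors' reference code:

```python
import numpy as np

def hi_hard_threshold(X, s, sigma):
    """Hierarchical thresholding sketch: X has shape (num_blocks, block_len).
    Keep the sigma largest-magnitude entries inside every block, then keep
    the s blocks whose retained part has the largest l2 norm."""
    T = np.zeros_like(X)
    for j, row in enumerate(X):
        idx = np.argsort(np.abs(row))[-sigma:]
        T[j, idx] = row[idx]
    keep = np.argsort(np.linalg.norm(T, axis=1))[-s:]
    out = np.zeros_like(X)
    out[keep] = T[keep]
    return out

def hihtp(A, y, shape, s, sigma, iters=10):
    """HTP-style loop with the hierarchical operator: threshold a gradient
    step to pick a support, then least-squares fit on that support."""
    x = np.zeros(shape)
    for _ in range(iters):
        g = x + (A.T @ (y - A @ x.ravel())).reshape(shape)
        cols = np.flatnonzero(hi_hard_threshold(g, s, sigma).ravel())
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        x = np.zeros(shape)
        x.flat[cols] = coef
    return x
```

The two-level structure is why the sampling complexity improves over plain HTP: the search is over (s, sigma)-sparse supports rather than all supports of size s*sigma.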
Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion
A spectrally sparse signal of order r is a mixture of r damped or
undamped complex sinusoids. This paper investigates the problem of
reconstructing spectrally sparse signals from a random subset of n regular
time domain samples, which can be reformulated as a low rank Hankel matrix
completion problem. We introduce an iterative hard thresholding (IHT) algorithm
and a fast iterative hard thresholding (FIHT) algorithm for efficient
reconstruction of spectrally sparse signals via low rank Hankel matrix
completion. Theoretical recovery guarantees have been established for FIHT,
showing that O(r^2 log^2(n)) samples are sufficient for exact
recovery with high probability. Empirical performance comparisons establish
significant computational advantages for IHT and FIHT. In particular, numerical
simulations on 3-D arrays demonstrate the capability of FIHT in handling large
and high-dimensional real data.
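The low rank Hankel reformulation rests on the fact that a mixture of r complex sinusoids yields a Hankel matrix of (numerical) rank r, which is easy to check directly (all signal parameters below are made up for illustration):

```python
import numpy as np

def hankel_from_signal(x, rows):
    """Hankel matrix H[i, j] = x[i + j] with the given number of rows."""
    cols = len(x) - rows + 1
    return np.array([[x[i + j] for j in range(cols)] for i in range(rows)])

# A spectrally sparse signal of order r = 2: two damped complex sinusoids.
n = np.arange(64)
x = 1.5 * np.exp((-0.01 + 2j * np.pi * 0.11) * n) \
    + 0.7 * np.exp((-0.02 + 2j * np.pi * 0.34) * n)
H = hankel_from_signal(x, 32)  # 32 x 33 Hankel matrix, rank 2
```

Observing a random subset of the samples of x is therefore equivalent to observing a subset of the anti-diagonals of H, which is what turns spectrally sparse recovery into low rank Hankel matrix completion.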
Post-selection point and interval estimation of signal sizes in Gaussian samples
We tackle the problem of estimating a vector of means from a single
vector-valued observation. Whereas previous work reduces the size of the
estimates for the largest (absolute) sample elements via shrinkage (like
James-Stein) or biases estimated via empirical Bayes methodology, we take a
novel approach. We adapt recent developments by Lee et al. (2013) in
post-selection inference for the Lasso to the orthogonal setting, where sample
elements have different underlying signal sizes. This is exactly the setup
encountered when estimating many means. It is shown that other selection
procedures, like selecting the largest (absolute) sample elements and the
Benjamini-Hochberg procedure, can be cast into their framework, allowing us to
leverage their results. Point and interval estimates for signal sizes are
proposed. These seem to perform quite well against competitors, both recent and
more tenured.
Furthermore, we prove an upper bound to the worst case risk of our estimator,
when combined with the Benjamini-Hochberg procedure, and show that it is within
a constant multiple of the minimax risk over a rich set of parameter spaces
meant to evoke sparsity.
Comment: 27 pages, 13 figures
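Since both the selection procedure and the worst-case risk bound involve the Benjamini-Hochberg procedure, a compact sketch of the standard step-up rule (standard BH, not the authors' specific implementation):

```python
import numpy as np

def benjamini_hochberg(pvals, q):
    """Benjamini-Hochberg step-up rule: reject the k smallest p-values,
    where k is the largest index i with p_(i) <= q * i / m."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0]) + 1
        rejected[order[:k]] = True
    return rejected
```

In the estimation setting above, the rejected coordinates play the role of the selected sample elements whose signal sizes are then estimated post-selection.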