1,010 research outputs found
Guarantees of Riemannian Optimization for Low Rank Matrix Completion
We study the Riemannian optimization methods on the embedded manifold of low
rank matrices for the problem of matrix completion, which is about recovering a
low rank matrix from its partial entries. Assume $m$ entries of an $n\times n$
rank $r$ matrix are sampled independently and uniformly with replacement. We
first prove that with high probability the Riemannian gradient descent and
conjugate gradient descent algorithms initialized by one step hard thresholding
are guaranteed to converge linearly to the measured matrix provided
\begin{align*} m\geq C_\kappa n^{1.5}r\log^{1.5}(n), \end{align*} where
$C_\kappa$ is a numerical constant depending on the condition number of the
underlying matrix. The sampling complexity has been further improved to
\begin{align*} m\geq C_\kappa nr^2\log^{2}(n) \end{align*} via the resampled
Riemannian gradient descent initialization. The analysis of the new
initialization procedure relies on an asymmetric restricted isometry property
of the sampling operator and the curvature of the low rank matrix manifold.
Numerical simulations show that the algorithms are able to recover a low rank
matrix from nearly the minimum number of measurements.
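To make the recovery setup concrete, here is a minimal NumPy sketch of the one-step hard-thresholding initialization followed by iterative hard-thresholding-style gradient steps on the observed entries. It is a simplified stand-in for the paper's Riemannian gradient descent, not the authors' implementation; the dimensions, sampling rate, and step size are illustrative assumptions.

```python
import numpy as np

def hard_threshold(X, r):
    """Project X onto the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def complete(M_obs, mask, r, steps=500, eta=1.0):
    """IHT-style matrix completion: gradient step on the observed entries,
    then hard thresholding back to rank r.  `mask` is 1 on sampled entries."""
    p = mask.mean()                       # empirical sampling rate
    X = hard_threshold(M_obs / p, r)      # one-step hard-thresholding init
    for _ in range(steps):
        grad = mask * (X - M_obs)         # gradient of 0.5*||P_Omega(X) - M_obs||^2
        X = hard_threshold(X - eta / p * grad, r)
    return X

# usage: recover a random rank-2 matrix from roughly 60% of its entries
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = (rng.random((30, 30)) < 0.6).astype(float)
X = complete(mask * M, mask, r=2)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Rescaling the observed entries by $1/p$ makes the sampled gradient an (approximately) unbiased surrogate for the full gradient, which is the role the restricted isometry property plays in the analysis.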
Blind Demixing for Low-Latency Communication
In next-generation wireless networks, low-latency communication is
critical to support emerging diversified applications, e.g., Tactile Internet
and Virtual Reality. In this paper, a novel blind demixing approach is
developed to reduce the channel signaling overhead, thereby supporting
low-latency communication. Specifically, we develop a low-rank approach to
recover the original information only based on a single observed vector without
any channel estimation. Unfortunately, this problem turns out to be a highly
intractable non-convex optimization problem due to the multiple non-convex
rank-one constraints. To address these unique challenges, the quotient manifold
geometry of the product of complex asymmetric rank-one matrices is exploited by
equivalently reformulating the original complex asymmetric matrices as
Hermitian positive semidefinite matrices. We further generalize the geometric
concepts of the complex product manifolds via element-wise extension of the
geometric concepts of the individual manifolds. A scalable Riemannian
trust-region algorithm is then developed to solve the blind demixing problem
efficiently with fast convergence rates and low iteration cost. Numerical
results demonstrate the algorithmic advantages and favorable performance
of the proposed algorithm compared with state-of-the-art methods.
Comment: 14 pages, accepted by IEEE Transactions on Wireless Communications
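The reformulation step described above can be illustrated concretely: stacking the two factors of a complex asymmetric rank-one matrix produces a Hermitian positive semidefinite rank-one lift whose off-diagonal block reproduces the original matrix. A small NumPy sketch of this lifting (not the paper's algorithm; the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 6
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # e.g. an unknown channel
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # e.g. an unknown signal

# Asymmetric rank-one matrix carrying the unknowns of one demixing component.
M = np.outer(h, x.conj())

# Lift: stack the factors and form a Hermitian rank-one matrix.
w = np.concatenate([h, x])
W = np.outer(w, w.conj())

# The off-diagonal block of the lifted matrix reproduces M exactly ...
assert np.allclose(W[:K, K:], M)
# ... and W is Hermitian positive semidefinite of rank one, so Riemannian
# machinery for fixed-rank Hermitian PSD matrices applies.
assert np.allclose(W, W.conj().T)
assert np.linalg.eigvalsh(W).min() > -1e-10
```

The quotient structure arises because $w$ and $e^{i\theta}w$ yield the same $W$; optimizing over $W$ rather than $(h, x)$ removes that ambiguity.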
A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints
We propose a new algorithm to solve optimization problems of the form
$\min_X f(X)$ for a smooth function $f$, under the constraints that $X$ is positive
semidefinite and the diagonal blocks of $X$ are small identity matrices. Such
problems often arise as the result of relaxing a rank constraint (lifting). In
particular, many estimation tasks involving phases, rotations, orthonormal
bases or permutations fit in this framework, and so do certain relaxations of
combinatorial problems such as Max-Cut. The proposed algorithm exploits the
facts that (1) such formulations admit low-rank solutions, and (2) their
rank-restricted versions are smooth optimization problems on a Riemannian
manifold. Combining insights from both the Riemannian and the convex geometries
of the problem, we characterize when second-order critical points of the smooth
problem reveal KKT points of the semidefinite problem. We compare against
state-of-the-art, mature software and find that, on certain interesting problem
instances, what we call the staircase method is orders of magnitude faster, is
more accurate and scales better. Code is available.
Comment: 37 pages, 3 figures
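The low-rank idea can be made concrete for Max-Cut, where the diagonal blocks are 1x1 and the constraint reduces to unit-norm rows of a factor $Y$ with $X = YY^T$. Below is a hedged NumPy sketch of Riemannian gradient ascent on that rank-restricted problem; the fixed rank p, step size, and random-hyperplane rounding are illustrative choices, not the paper's exact staircase schedule of increasing ranks.

```python
import numpy as np

def maxcut_lowrank(L, p=3, steps=300, eta=0.1):
    """Rank-p factorization of the Max-Cut SDP relaxation:
    maximize <L, Y Y^T> over Y with unit-norm rows (an oblique manifold).
    Riemannian gradient ascent with row normalization as the retraction."""
    n = L.shape[0]
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((n, p))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(steps):
        G = 2 * L @ Y                                  # Euclidean gradient of tr(Y^T L Y)
        rad = (G * Y).sum(axis=1, keepdims=True) * Y   # radial (normal) component per row
        Y = Y + eta * (G - rad)                        # step along the tangent space
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # retract back to unit-norm rows
    # round to a cut: sign pattern along a random direction
    return np.sign(Y @ rng.standard_normal(p))

# usage: Max-Cut on a 4-cycle, whose optimal cut value is 4
A = np.zeros((4, 4))
for i in range(4):
    A[i, (i + 1) % 4] = A[(i + 1) % 4, i] = 1.0
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
x = maxcut_lowrank(L)
cut = x @ L @ x / 4                  # number of edges crossing the cut
```

Because any rank-p solution with p(p+1)/2 > n cannot be ruled out a priori, the staircase strategy of the paper increases p only when a second-order critical point fails the optimality check; the sketch above keeps p fixed for brevity.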
Riemannian Optimization for Skip-Gram Negative Sampling
The Skip-Gram Negative Sampling (SGNS) word embedding model, well known for its
implementation in "word2vec" software, is usually optimized by stochastic
gradient descent. However, the optimization of the SGNS objective can be viewed
as a problem of searching for a good matrix under a low-rank constraint. A
standard way to solve this type of problem is to apply a Riemannian optimization
framework to optimize the SGNS objective over the manifold of required low-rank
matrices. In this paper, we propose an algorithm that optimizes the SGNS
objective using Riemannian optimization and demonstrate its superiority over
popular competitors, such as the original method to train SGNS and SVD over the
SPPMI matrix.
Comment: 9 pages, 4 figures, ACL 201
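The manifold machinery such an approach relies on, tangent-space projection and a truncated-SVD retraction on the fixed-rank matrix manifold, can be sketched in NumPy. The SGNS objective itself requires corpus co-occurrence statistics, so the usage example below substitutes a toy quadratic objective; everything here is an illustrative sketch under that assumption, not the paper's implementation.

```python
import numpy as np

def tangent_project(U, V, Z):
    """Project Z onto the tangent space of the rank-r manifold at X = U S V^T,
    where U, V have orthonormal columns: P(Z) = UU^T Z + Z VV^T - UU^T Z VV^T."""
    PU, PV = U @ U.T, V @ V.T
    return PU @ Z + Z @ PV - PU @ Z @ PV

def retract(X, r):
    """Retraction onto the rank-r manifold: truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r, :]

def riemannian_step(X, r, euclid_grad, eta):
    """One Riemannian gradient step: project the Euclidean gradient onto the
    tangent space at X, take a step, and retract back to the manifold."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    xi = tangent_project(U[:, :r], Vt[:r, :].T, euclid_grad)
    return retract(X - eta * xi, r)

# usage: minimize 0.5 * ||X - M||_F^2 over rank-2 matrices (a toy stand-in
# for the SGNS objective); M itself is rank 2, so the error should vanish
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
X = retract(rng.standard_normal((20, 20)), 2)
for _ in range(100):
    X = riemannian_step(X, 2, X - M, eta=1.0)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Projecting the gradient before retracting is what distinguishes this from plain "SVD after every step": the tangent-space step is cheap to retract exactly and respects the manifold geometry.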
Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion
A spectrally sparse signal of order $r$ is a mixture of $r$ damped or
undamped complex sinusoids. This paper investigates the problem of
reconstructing spectrally sparse signals from a random subset of regular
time domain samples, which can be reformulated as a low rank Hankel matrix
completion problem. We introduce an iterative hard thresholding (IHT) algorithm
and a fast iterative hard thresholding (FIHT) algorithm for efficient
reconstruction of spectrally sparse signals via low rank Hankel matrix
completion. Theoretical recovery guarantees have been established for FIHT,
showing the number of samples that is sufficient for exact
recovery with high probability. Empirical performance comparisons establish
significant computational advantages for IHT and FIHT. In particular, numerical
simulations on 3D arrays demonstrate the capability of FIHT in handling large
and high-dimensional real data.
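A minimal NumPy sketch of the IHT idea: lift the partially observed signal to a Hankel matrix, truncate it to rank $r$, map it back to a signal by averaging anti-diagonals, and re-impose the observed samples. The signal length, frequencies, sampling rate, and iteration count are illustrative assumptions, and this simplified loop omits the subspace-based acceleration that distinguishes FIHT.

```python
import numpy as np

def hankel(z, n1):
    """Form the n1 x (len(z)-n1+1) Hankel matrix H[i, j] = z[i + j]."""
    n2 = len(z) - n1 + 1
    return np.array([[z[i + j] for j in range(n2)] for i in range(n1)])

def dehankel(X):
    """Map a matrix back to a signal by averaging each anti-diagonal."""
    n1, n2 = X.shape
    z = np.zeros(n1 + n2 - 1, dtype=X.dtype)
    cnt = np.zeros(n1 + n2 - 1)
    for i in range(n1):
        for j in range(n2):
            z[i + j] += X[i, j]
            cnt[i + j] += 1
    return z / cnt

def iht_hankel(x_obs, mask, r, n1, steps=200):
    """IHT sketch for spectrally sparse recovery: alternate rank-r truncation
    of the Hankel lift with re-imposing the observed time-domain samples."""
    z = x_obs.copy()
    for _ in range(steps):
        U, s, Vt = np.linalg.svd(hankel(z, n1), full_matrices=False)
        z = dehankel(U[:, :r] * s[:r] @ Vt[:r, :])   # rank-r Hankel truncation
        z[mask] = x_obs[mask]                        # keep the observed samples
    return z

# usage: length-32 signal with r = 2 undamped sinusoids, ~60% of samples observed
rng = np.random.default_rng(0)
t = np.arange(32)
x = np.exp(2j * np.pi * 0.11 * t) + np.exp(2j * np.pi * 0.34 * t)
mask = rng.random(32) < 0.6
z = iht_hankel(x * mask, mask, r=2, n1=16)
err = np.linalg.norm(z - x) / np.linalg.norm(x)
```

The key structural fact exploited here is that the Hankel lift of a mixture of $r$ complex sinusoids has rank at most $r$, so missing samples show up as excess rank that the truncation step removes.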