Convergence analysis of Riemannian Gauss-Newton methods and its connection with the geometric condition number
We obtain estimates of the multiplicative constants appearing in local
convergence results of the Riemannian Gauss-Newton method for least squares
problems on manifolds and relate them to the geometric condition number of [P.
Bürgisser and F. Cucker, Condition: The Geometry of Numerical Algorithms,
2013].
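As a point of reference for the iteration being analyzed, here is a minimal Euclidean Gauss-Newton sketch for a nonlinear least squares problem; the Riemannian variant studied in the paper additionally works in tangent spaces and applies a retraction. Function names and the toy model are illustrative, not taken from the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Plain Euclidean Gauss-Newton for min_x 0.5 * ||residual(x)||^2.
    The Riemannian variant replaces the additive update with a tangent
    step followed by a retraction onto the manifold."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)                  # residual vector F(x)
        J = jacobian(x)                  # Jacobian of F at x
        # Gauss-Newton step: least squares solution of J * step = -r.
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy usage: fit y = exp(a * t) with unknown a (true value 0.7).
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
res = lambda a: np.exp(a[0] * t) - y
jac = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
print(gauss_newton(res, jac, np.array([0.0])))   # approx [0.7]
```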
Tensor Completion via Tensor Train Based Low-Rank Quotient Geometry under a Preconditioned Metric
This paper investigates the low-rank tensor completion problem, that is,
the recovery of a tensor from partially observed entries. We consider this
problem in the tensor train format and extend the preconditioned metric from
the matrix case to the tensor case. The first-order and second-order quotient
geometry of the manifold of fixed tensor train rank tensors under this metric
is studied in detail. Based on this quotient geometry, we propose Riemannian
gradient descent, Riemannian conjugate gradient, and Riemannian Gauss-Newton
algorithms for the tensor completion problem. We also show that the Riemannian
Gauss-Newton method on the quotient geometry is equivalent to the Riemannian
Gauss-Newton method on the embedded geometry with a specific retraction.
Empirical evaluations on random instances as well as on function-related
tensors show that the proposed algorithms are competitive with existing
algorithms in terms of recovery ability, convergence performance, and
reconstruction quality.
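For readers unfamiliar with the tensor train format, the following sketch (plain NumPy, with illustrative helper names) shows how a TT tensor is stored as a chain of three-way cores and why single entries, and hence residuals on observed entries, are cheap to evaluate. It does not implement the paper's Riemannian algorithms.

```python
import numpy as np

def tt_full(cores):
    """Contract TT cores G_k of shape (r_{k-1}, n_k, r_k), with
    r_0 = r_d = 1, into the full tensor."""
    out = cores[0]                                 # (1, n_1, r_1)
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

def tt_entry(cores, idx):
    """Evaluate one entry X[i_1, ..., i_d] in O(d * r^2) time, which is
    what makes residuals on observed entries cheap in completion."""
    v = cores[0][:, idx[0], :]                     # row vector (1, r_1)
    for G, i in zip(cores[1:], idx[1:]):
        v = v @ G[:, i, :]
    return v.item()

# Random TT tensor with mode sizes (4, 5, 6) and TT ranks (1, 2, 3, 1).
rng = np.random.default_rng(0)
cores = [rng.standard_normal((1, 4, 2)),
         rng.standard_normal((2, 5, 3)),
         rng.standard_normal((3, 6, 1))]
X = tt_full(cores)
assert np.isclose(X[1, 2, 3], tt_entry(cores, (1, 2, 3)))
```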
A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem
The canonical tensor rank approximation problem (TAP) consists of
approximating a real-valued tensor by one of low canonical rank, which is a
challenging non-linear, non-convex, constrained optimization problem, where the
constraint set forms a non-smooth semi-algebraic set. We introduce a Riemannian
Gauss-Newton method with trust region for solving small-scale, dense TAPs. The
novelty of our approach is threefold. First, we parametrize the constraint set
as the Cartesian product of Segre manifolds, thereby formulating the TAP as a
Riemannian optimization problem, and we argue why this parametrization is among
the theoretically best possible. Second, an original ST-HOSVD-based retraction
operator is proposed. Third, we introduce a hot restart mechanism that
efficiently detects when the optimization process is tending to an
ill-conditioned tensor rank decomposition and which often yields a quick escape
path from such spurious decompositions. Numerical experiments show improvements
of up to three orders of magnitude in terms of the expected time to compute a
successful solution over existing state-of-the-art methods.
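The parametrization in the first contribution can be made concrete with a small sketch: a rank-r tensor is written as a sum of r rank-one (Segre) terms built from factor matrices. The code below is illustrative NumPy, not the paper's trust-region solver.

```python
import numpy as np

def cp_tensor(A, B, C):
    """Assemble sum_{j=1}^r a_j (outer) b_j (outer) c_j from factor
    matrices with r columns: the product-of-Segre-manifolds
    parametrization of the rank-r constraint set."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# The TAP objective: squared distance from a target tensor T to the
# rank-2 tensor parametrized by the factors (A, B, C).
rng = np.random.default_rng(1)
T = rng.standard_normal((3, 4, 5))
A, B, C = (rng.standard_normal((n, 2)) for n in T.shape)
print(0.5 * np.linalg.norm(T - cp_tensor(A, B, C)) ** 2)
```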
The geometric mean of two matrices from a computational viewpoint
The geometric mean of two matrices is considered and analyzed from a
computational viewpoint. Some useful theoretical properties are derived and an
analysis of the conditioning is performed. Several numerical algorithms, based
on different properties and representations of the geometric mean, are
discussed and analyzed, and it is shown that most of them can be classified in
terms of rational approximations of the inverse square root function. A review
of the relevant applications is given.
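As background, the geometric mean of two symmetric positive definite matrices admits the classical closed form A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2). The sketch below evaluates this textbook formula directly with SciPy; the paper's point is that algorithms based on rational approximations of the inverse square root can be preferable numerically.

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def geometric_mean(A, B):
    """Matrix geometric mean of SPD matrices via the textbook formula
    A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    Ah = sqrtm(A)
    Ahi = inv(Ah)
    return Ah @ sqrtm(Ahi @ B @ Ahi) @ Ah

# Sanity check on random SPD matrices: G = A # B solves G A^{-1} G = B.
rng = np.random.default_rng(2)
M1 = rng.standard_normal((4, 4))
M2 = rng.standard_normal((4, 4))
A = M1 @ M1.T + 4 * np.eye(4)
B = M2 @ M2.T + 4 * np.eye(4)
G = geometric_mean(A, B)
print(np.allclose(G @ inv(A) @ G, B))   # True (up to rounding)
```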
Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure
The numerical solution of partial differential equations on high-dimensional
domains gives rise to computationally challenging linear systems. When using
standard discretization techniques, the size of the linear system grows
exponentially with the number of dimensions, making the use of classic
iterative solvers infeasible. During the last few years, low-rank tensor
approaches have been developed that make it possible to mitigate this curse of
dimensionality by exploiting the underlying structure of the linear operator.
In this work, we focus on tensors represented in the Tucker and tensor train
formats. We propose two preconditioned gradient methods on the corresponding
low-rank tensor manifolds: A Riemannian version of the preconditioned
Richardson method as well as an approximate Newton scheme based on the
Riemannian Hessian. For the latter, considerable attention is given to the
efficient solution of the resulting Newton equation. In numerical experiments,
we compare the efficiency of our Riemannian algorithms with other established
tensor-based approaches such as a truncated preconditioned Richardson method
and the alternating linear scheme. The results show that our approximate
Riemannian Newton scheme is significantly faster in cases when the application
of the linear operator is expensive.
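To make the setting concrete, the following sketch runs a truncated (unpreconditioned) Richardson iteration on a Kronecker-sum Laplacian system with a low-rank right-hand side, written in matrix form. Convergence without a good preconditioner is slow, which is precisely what motivates the paper's preconditioned Riemannian methods. Dimensions, ranks, and helper names are illustrative.

```python
import numpy as np

def lap1d(n):
    """Scaled 1D Laplacian L; the system operator is the Kronecker sum
    L (x) I + I (x) L, applied in matrix form as L @ X + X @ L.T."""
    return ((n + 1) ** 2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def truncate(X, rank):
    """Truncated SVD: map X back to the rank-`rank` matrix manifold."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

n, rank = 20, 5
L = lap1d(n)
lam = np.linalg.eigvalsh(L)                  # spectrum of L
omega = 2.0 / (2 * lam[0] + 2 * lam[-1])     # optimal Richardson step
B = truncate(np.random.default_rng(3).standard_normal((n, n)), rank)
X = np.zeros((n, n))
for _ in range(500):
    R = B - (L @ X + X @ L.T)                # residual in matrix form
    X = truncate(X + omega * R, rank)        # step, then truncate
print(np.linalg.norm(B - (L @ X + X @ L.T)) / np.linalg.norm(B))
```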
Recursive Importance Sketching for Rank Constrained Least Squares: Algorithms and High-order Convergence
In this paper, we propose a new Recursive Importance Sketching algorithm for
Rank constrained least squares Optimization (RISRO). As its name suggests,
the algorithm is based on a new sketching
framework, recursive importance sketching. Several existing algorithms in the
literature can be reinterpreted under the new sketching framework and RISRO
offers clear advantages over them. RISRO is easy to implement and
computationally efficient: the core procedure in each iteration is solving a
dimension-reduced least squares problem. Unlike numerous existing algorithms,
which attain only a local geometric convergence rate, we establish local
quadratic-linear and quadratic convergence rates for RISRO under mild
conditions. In addition, we discover a deep connection between RISRO and
Riemannian manifold optimization on fixed rank matrices. The effectiveness of
RISRO is demonstrated in two applications in machine learning and statistics:
low-rank matrix trace regression and phase retrieval. Simulation studies
demonstrate the superior numerical performance of RISRO.
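Since the abstract does not spell out the sketching construction, the sketch below illustrates only the problem class, rank-constrained least squares in trace-regression form, using a simple alternating least squares baseline in which each half-step is itself a dimension-reduced least squares problem. This is explicitly not RISRO; all names and sizes are illustrative.

```python
import numpy as np

def als_trace_regression(y, As, r, iters=30, seed=0):
    """Alternating least squares baseline for
    min_{rank(X) <= r} ||y - A(X)||^2 with (A(X))_i = <A_i, X>.
    Each half-step solves a small least squares problem in p * r
    unknowns. This is a generic baseline, not the RISRO algorithm."""
    p1, p2 = As[0].shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((p1, r))
    V = rng.standard_normal((p2, r))
    for _ in range(iters):
        # Fix V: <A_i, U V^T> = vec(U) . vec(A_i V); solve for U.
        D = np.stack([(A @ V).ravel() for A in As])
        U = np.linalg.lstsq(D, y, rcond=None)[0].reshape(p1, r)
        # Fix U: <A_i, U V^T> = vec(V) . vec(A_i^T U); solve for V.
        D = np.stack([(A.T @ U).ravel() for A in As])
        V = np.linalg.lstsq(D, y, rcond=None)[0].reshape(p2, r)
    return U @ V.T

# Noiseless rank-2 trace regression with Gaussian measurement matrices.
rng = np.random.default_rng(4)
p1, p2, r, m = 8, 6, 2, 200
X_true = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))
As = [rng.standard_normal((p1, p2)) for _ in range(m)]
y = np.array([np.sum(A * X_true) for A in As])
X_hat = als_trace_regression(y, As, r)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```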
Low-rank Tensor Estimation via Riemannian Gauss-Newton: Statistical Optimality and Second-Order Convergence
In this paper, we consider the estimation of a low Tucker rank tensor from a
number of noisy linear measurements. The general problem covers many specific
examples arising from applications, including tensor regression, tensor
completion, and tensor PCA/SVD. We consider an efficient Riemannian
Gauss-Newton (RGN) method for low Tucker rank tensor estimation. In contrast
to the generic (super)linear convergence guarantees for RGN in the literature, we
prove the first local quadratic convergence guarantee of RGN for low-rank
tensor estimation in the noisy setting under some regularity conditions and
provide the corresponding estimation error upper bounds. We also establish a
matching deterministic estimation error lower bound, which demonstrates the
statistical optimality of RGN. The merit of RGN is illustrated
through two machine learning applications: tensor regression and tensor SVD.
Finally, we provide simulation results that corroborate our theoretical
findings.
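As background for the low Tucker rank constraint, here is a minimal truncated HOSVD sketch in NumPy, a standard quasi-optimal projection onto low Tucker rank that RGN-type methods can use as a retraction. It is illustrative and not the paper's estimator.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move mode k to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: a quasi-optimal low Tucker rank
    approximation, usable as a retraction in RGN-type methods."""
    Us = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
          for k, r in enumerate(ranks)]
    C = T
    for U in Us:
        # Contract the current leading mode with U; the contracted mode
        # moves to the last axis, so after d steps the order is restored.
        C = np.tensordot(C, U, axes=([0], [0]))
    return C, Us

rng = np.random.default_rng(5)
T = rng.standard_normal((6, 7, 8))
C, Us = hosvd(T, (2, 3, 4))
X = C
for U in Us:
    X = np.tensordot(X, U.T, axes=([0], [0]))   # expand each mode back
print(C.shape, np.linalg.norm(T - X) / np.linalg.norm(T))
```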