On the matrix square root via geometric optimization
This paper is triggered by the preprint "\emph{Computing Matrix Squareroot
via Non Convex Local Search}" by Jain et al.
(\textit{\textcolor{blue}{arXiv:1507.05854}}), which analyzes gradient-descent
for computing the square root of a positive definite matrix. Contrary to claims
of~\citet{jain2015}, our experiments reveal that Newton-like methods compute
matrix square roots rapidly and reliably, even for highly ill-conditioned
matrices and without requiring commutativity. We observe that gradient-descent
converges very slowly primarily due to tiny step-sizes and ill-conditioning. We
derive an alternative first-order method based on geodesic convexity: our
method admits a transparent convergence analysis (less than a page), attains a
linear rate, and displays reliable convergence even for rank-deficient problems.
Though superior to gradient-descent, ultimately our method is also outperformed
by a well-known scaled Newton method. Nevertheless, the primary value of our
work is its conceptual value: it shows that for deriving gradient based methods
for the matrix square root, \emph{the manifold geometric view of positive
definite matrices can be much more advantageous than the Euclidean view}.
Comment: 8 pages, 12 plots, this version contains several more references and
more words about the rank-deficient case
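For illustration, a classical Newton-type method of the kind the abstract finds fast and reliable is the Denman–Beavers coupled iteration (shown here unscaled, in NumPy; the test matrix and tolerances are illustrative, and this is a sketch rather than the authors' scaled Newton method):

```python
import numpy as np

def denman_beavers_sqrt(A, max_iters=50, tol=1e-12):
    """Denman-Beavers coupled iteration: Y_k -> A^{1/2} and Z_k -> A^{-1/2}
    for a symmetric positive definite A."""
    Y = A.copy()
    Z = np.eye(A.shape[0])
    for _ in range(max_iters):
        Y_new = 0.5 * (Y + np.linalg.inv(Z))
        Z_new = 0.5 * (Z + np.linalg.inv(Y))
        done = np.linalg.norm(Y_new - Y, "fro") <= tol * np.linalg.norm(Y_new, "fro")
        Y, Z = Y_new, Z_new
        if done:
            break
    return Y

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)          # a well-conditioned SPD test matrix
X = denman_beavers_sqrt(A)
print(np.linalg.norm(X @ X - A) / np.linalg.norm(A))   # relative residual
```

The coupling with Z (an approximation of the inverse square root) is what gives this iteration its numerical stability, in contrast to the plain Newton recurrence on Y alone.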
A Backward Stable Algorithm for Computing the CS Decomposition via the Polar Decomposition
We introduce a backward stable algorithm for computing the CS decomposition
of a partitioned matrix with orthonormal columns, or a
rank-deficient partial isometry. The algorithm computes two polar
decompositions (which can be carried out in parallel) followed by an
eigendecomposition of a judiciously crafted Hermitian matrix. We
prove that the algorithm is backward stable whenever the aforementioned
decompositions are computed in a backward stable way. Since the polar
decomposition and the symmetric eigendecomposition are highly amenable to
parallelization, the algorithm inherits this feature. We illustrate this fact
by invoking recently developed algorithms for the polar decomposition and
symmetric eigendecomposition that leverage Zolotarev's best rational
approximations of the sign function. Numerical examples demonstrate that the
resulting algorithm for computing the CS decomposition enjoys excellent
numerical stability.
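The first building block above, the polar decomposition, can be sketched via the SVD (a simple reference computation, not the Zolotarev-based parallel algorithms the paper invokes):

```python
import numpy as np

def polar(A):
    """Polar decomposition A = U @ H via the SVD: U has orthonormal
    columns and H is Hermitian positive semidefinite."""
    W, s, Vh = np.linalg.svd(A, full_matrices=False)
    U = W @ Vh
    H = Vh.conj().T @ (s[:, None] * Vh)    # V diag(s) V^H
    return U, H

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))            # tall matrix with orthonormal-column polar factor
U, H = polar(A)
```

Since W diag(s) Vh reproduces A, the factorization A = U H follows directly from regrouping the SVD factors.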
The geometric mean of two matrices from a computational viewpoint
The geometric mean of two matrices is considered and analyzed from a
computational viewpoint. Some useful theoretical properties are derived and an
analysis of the conditioning is performed. Several numerical algorithms based
on different properties and representations of the geometric mean are discussed
and analyzed, and it is shown that most of them can be classified in terms of
rational approximations of the inverse square root function. A review of
the relevant applications is given.
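One standard representation of the geometric mean, A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, can be sketched with an eigendecomposition-based square root (a minimal illustration; variable names and test matrices are our own):

```python
import numpy as np

def spd_sqrt(A):
    """Square root of a symmetric positive definite matrix via eigh."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = spd_sqrt(A)
    Asi = np.linalg.inv(As)
    return As @ spd_sqrt(Asi @ B @ Asi) @ As

rng = np.random.default_rng(2)
M, N = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)
B = N @ N.T + np.eye(4)
G = geometric_mean(A, B)
# G is the unique SPD solution of the Riccati equation X A^{-1} X = B
```

The Riccati characterization in the final comment gives a convenient correctness check that is independent of the particular representation used to compute G.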
Backward stability of iterations for computing the polar decomposition
Among the many iterations available for computing the polar decomposition, the most practically useful are the scaled Newton iteration and the recently proposed dynamically weighted Halley iteration. Effective ways to scale these and other iterations are known, but their numerical stability is much less well understood. In this work we show that a general iteration for computing the unitary polar factor is backward stable under two conditions. The first condition requires that the iteration is implemented in a mixed backward-forward stable manner, and the second requires that the mapping does not significantly decrease the size of any singular value relative to the largest singular value.
Using this result we show that the dynamically weighted Halley iteration is backward stable when it is implemented using Householder QR factorization with column pivoting and either row pivoting or row sorting. We also prove the backward stability of the scaled Newton iteration under
the assumption that matrix inverses are computed in a mixed
backward-forward stable fashion; our proof is much shorter than a previous one of Kielbasinski and Zietak.
We also use our analysis to explain the instability of
the inverse Newton iteration and to show that the Newton-Schulz iteration is only conditionally stable.
This work shows that by carefully blending perturbation analysis with rounding error analysis, it is possible to produce a general result that can prove the backward stability or predict or explain the instability (as the case may be) of a wide range of practically interesting iterations for the polar decomposition.
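For reference, the basic (unscaled) Newton iteration for the unitary polar factor, X_{k+1} = (X_k + X_k^{-*})/2, can be sketched as follows; the scaled and dynamically weighted variants analyzed above accelerate exactly this kind of recurrence (iteration counts and tolerances are illustrative):

```python
import numpy as np

def newton_polar(A, max_iters=100, tol=1e-12):
    """Unscaled Newton iteration X <- (X + X^{-*})/2 for the unitary
    polar factor of a nonsingular square matrix A."""
    X = A.copy()
    for _ in range(max_iters):
        X_new = 0.5 * (X + np.linalg.inv(X).conj().T)
        if np.linalg.norm(X_new - X, "fro") <= tol * np.linalg.norm(X_new, "fro"):
            X = X_new
            break
        X = X_new
    return X

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))            # nonsingular with probability one
U = newton_polar(A)
H = U.conj().T @ A                          # Hermitian factor, so A = U @ H
```

Note that each step inverts the current iterate, which is why the stability analysis above hinges on the inverses being computed in a mixed backward-forward stable fashion.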