Finding a low-rank basis in a matrix subspace
For a given matrix subspace, how can we find a basis that consists of
low-rank matrices? This is a generalization of the sparse vector problem. It
turns out that when the subspace is spanned by rank-1 matrices, such a basis
can be obtained via the tensor CP decomposition. For the higher-rank case, the
situation is not as straightforward. In this work we present an algorithm based
on a greedy process applicable to higher rank problems. Our algorithm first
estimates the minimum rank by applying soft singular value thresholding to a
nuclear norm relaxation, and then computes a matrix with that rank using the
method of alternating projections. We provide local convergence results, and
compare our algorithm with several alternative approaches. Applications include
data compression beyond the classical truncated SVD, computing accurate
eigenvectors of a near-multiple eigenvalue, image separation and graph
Laplacian eigenproblems.
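The abstract's first step, soft singular value thresholding, has a simple closed form as the proximal operator of the nuclear norm. A minimal sketch follows; the function name and threshold value are illustrative choices, not taken from the paper:

```python
import numpy as np

def soft_svt(A, tau):
    """Soft singular value thresholding: the proximal operator of the
    nuclear norm tau * ||.||_*. Every singular value is shrunk by tau,
    and those below tau are zeroed, so the number of surviving singular
    values serves as a rank estimate."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Applied to a nearly low-rank matrix, the count of nonzero singular values that survive the shrinkage is the rank estimate that an alternating-projections refinement can then target.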
A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem
The canonical tensor rank approximation problem (TAP) consists of
approximating a real-valued tensor by one of low canonical rank, which is a
challenging non-linear, non-convex, constrained optimization problem, where the
constraint set forms a non-smooth semi-algebraic set. We introduce a Riemannian
Gauss-Newton method with trust region for solving small-scale, dense TAPs. The
novelty of our approach is threefold. First, we parametrize the constraint set
as the Cartesian product of Segre manifolds, thereby formulating the TAP as a
Riemannian optimization problem, and we argue why this parametrization is among
the theoretically best possible. Second, an original ST-HOSVD-based retraction
operator is proposed. Third, we introduce a hot restart mechanism that
efficiently detects when the optimization process is tending to an
ill-conditioned tensor rank decomposition and which often yields a quick escape
path from such spurious decompositions. Numerical experiments show improvements
of up to three orders of magnitude in terms of the expected time to compute a
successful solution over existing state-of-the-art methods.
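For context on the TAP itself, the classical baseline that Riemannian methods are typically compared against is alternating least squares (ALS). The sketch below implements ALS for a third-order tensor, not the paper's Riemannian Gauss-Newton trust-region method; initialization and iteration count are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import khatri_rao

def cp_als(T, r, iters=50, seed=0):
    """Rank-r CP approximation of a 3rd-order tensor by alternating
    least squares (ALS) -- the standard TAP baseline, not the paper's
    Riemannian Gauss-Newton trust-region method."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, r))
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))
    # Mode-n unfoldings of T, matching the Khatri-Rao column ordering.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(iters):
        # Each update is a linear least-squares solve whose normal
        # equations involve the Hadamard product of the other factors'
        # Gram matrices.
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

ALS is prone to "swamps" near ill-conditioned decompositions, which is precisely the failure mode the paper's hot restart mechanism is designed to detect and escape.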
Characterizing Distances of Networks on the Tensor Manifold
At the core of understanding dynamical systems is the ability to maintain and
control the system's behavior, including notions of robustness,
heterogeneity, and regime-shift detection. Recently, to explore such functional
properties, a convenient representation has been to model such dynamical
systems as a weighted graph consisting of a finite but very large number of
interacting agents. That said, there exists very limited relevant statistical
theory able to cope with real-life data, i.e., how does one perform analysis
and/or statistics over a family of networks, as opposed to a specific network or
network-to-network variation? Here, we are interested in the analysis of
network families whereby each network represents a point on an underlying
statistical manifold. To do so, we apply the Riemannian structure of the
tensor manifold developed by Pennec, previously used in Diffusion Tensor
Imaging (DTI), to the problem of network analysis. In particular, while
this note focuses on Pennec's definition of geodesics amongst a family of
networks, we show how it lays the foundation for future work on developing
measures of network robustness for regime-shift detection. We conclude with
experiments highlighting the proposed distance on synthetic networks and an
application towards biological (stem-cell) systems.

Comment: This paper is accepted at the 8th International Conference on Complex Networks 201
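Pennec's affine-invariant metric on the manifold of symmetric positive-definite (SPD) matrices admits a closed-form geodesic distance. The sketch below computes it; representing a network by a regularized graph Laplacian L + εI is our illustrative assumption, not necessarily the paper's exact construction:

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def pennec_distance(A, B):
    """Affine-invariant Riemannian (geodesic) distance between SPD
    matrices, as in Pennec's tensor-manifold framework:
        d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F
    """
    A_inv_sqrt = np.linalg.inv(sqrtm(A))
    return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), 'fro')

def network_point(W, eps=1e-3):
    """Illustrative assumption: map a weighted adjacency matrix W to an
    SPD point on the manifold via the regularized graph Laplacian
    L + eps * I (the Laplacian alone is only positive semi-definite)."""
    L = np.diag(W.sum(axis=1)) - W
    return L + eps * np.eye(W.shape[0])
```

The distance is symmetric and invariant under congruence transformations A → G A Gᵀ, which is what makes it a natural candidate for comparing members of a network family independently of a common rescaling.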