Tensor completion in hierarchical tensor representations
Compressed sensing extends from the recovery of sparse vectors from
undersampled measurements via efficient algorithms to the recovery of matrices
of low rank from incomplete information. Here we consider a further extension
to the reconstruction of tensors of low multi-linear rank in recently
introduced hierarchical tensor formats from a small number of measurements.
Hierarchical tensors are a flexible generalization of the well-known Tucker
representation, with the advantage that the number of degrees of freedom
of a low rank tensor does not scale exponentially with the order of the tensor.
While corresponding tensor decompositions can be computed efficiently via
successive applications of (matrix) singular value decompositions, some
important properties of the singular value decomposition do not extend from the
matrix to the tensor case. This results in major computational and theoretical
difficulties in designing and analyzing algorithms for low rank tensor
recovery. For instance, a canonical analogue of the tensor nuclear norm is
NP-hard to compute in general, which is in stark contrast to the matrix case.
In this book chapter we consider versions of iterative hard thresholding
schemes adapted to hierarchical tensor formats. A variant builds on methods
from Riemannian optimization and uses a retraction mapping from the tangent
space of the manifold of low rank tensors back to this manifold. We provide
first partial convergence results based on a tensor version of the restricted
isometry property (TRIP) of the measurement map. Moreover, an estimate of the
number of measurements is provided that ensures the TRIP of a given tensor rank
with high probability for Gaussian measurement maps. (Revised version, to be
published in Compressed Sensing and Its Applications, edited by H. Boche,
R. Calderbank, G. Kutyniok, and J. Vybiral.)
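As a concrete illustration of the thresholding schemes discussed above, here is a minimal sketch of iterative hard thresholding for low multi-linear rank recovery, using truncated HOSVD in the Tucker format as the hard thresholding operator; the hierarchical formats of the chapter would use a hierarchical SVD truncation instead. The measurement map, ranks, and step size below are illustrative assumptions, not the chapter's setup.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T along the given mode by the matrix M."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_truncate(T, ranks):
    """Hard thresholding operator: project T onto the set of tensors of
    multilinear rank <= ranks via the truncated HOSVD."""
    X = T
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Ur = U[:, :r]
        X = mode_multiply(X, Ur @ Ur.T, mode)  # project the mode-`mode` span
    return X

def tensor_iht(A, y, shape, ranks, step=1.0, n_iter=200):
    """Tensor IHT: X <- H_ranks(X + step * A^T (y - A vec(X))).
    Convergence is only guaranteed under a TRIP-type condition on A."""
    X = np.zeros(shape)
    for _ in range(n_iter):
        residual = y - A @ X.ravel()
        X = hosvd_truncate(X + step * (A.T @ residual).reshape(shape), ranks)
    return X

# Tiny demo: Gaussian measurements of a random multilinear rank-(2,2,2) tensor.
rng = np.random.default_rng(0)
shape, ranks = (8, 8, 8), (2, 2, 2)
X_true = rng.standard_normal((2, 2, 2))
for mode in range(3):
    X_true = mode_multiply(X_true, rng.standard_normal((8, 2)), mode)
m = 250
A = rng.standard_normal((m, X_true.size)) / np.sqrt(m)
X_hat = tensor_iht(A, A @ X_true.ravel(), shape, ranks)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```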
Hermite interpolation with retractions on manifolds
Interpolation of data on non-Euclidean spaces is an active research area
fostered by its numerous applications. This work considers the Hermite
interpolation problem: finding a sufficiently smooth manifold curve that
interpolates a collection of data points on a Riemannian manifold while
matching a prescribed derivative at each point. We propose a novel procedure
relying on the general concept of retractions to solve this problem on a large
class of manifolds, including those for which computing the Riemannian
exponential or logarithmic maps is not straightforward, such as the manifold of
fixed-rank matrices. We analyze the well-posedness of the method by introducing
and showing the existence of retraction-convex sets, a generalization of
geodesically convex sets. We extend to the manifold setting a classical result
on the asymptotic interpolation error of Hermite interpolation. We finally
illustrate these results and the effectiveness of the method with numerical
experiments on the manifold of fixed-rank matrices and the Stiefel manifold of
matrices with orthonormal columns.
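Since retractions are the central tool here, a standard concrete example may help: the QR-based retraction on the Stiefel manifold, together with the projection onto its tangent space. This is a minimal background sketch, not the paper's interpolation procedure.

```python
import numpy as np

def stiefel_tangent_projection(X, Z):
    """Project an ambient matrix Z onto the tangent space of the Stiefel
    manifold {X : X^T X = I} at the point X."""
    S = X.T @ Z
    return Z - X @ (S + S.T) / 2

def stiefel_qr_retraction(X, V):
    """QR-based retraction R_X(V) = qf(X + V): the orthonormal factor of the
    thin QR decomposition, with signs fixed so R has a positive diagonal."""
    Q, R = np.linalg.qr(X + V)
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0
    return Q * d

# Usage: retract a small tangent step taken from a random Stiefel point.
rng = np.random.default_rng(1)
X, _ = np.linalg.qr(rng.standard_normal((6, 3)))
V = stiefel_tangent_projection(X, 0.1 * rng.standard_normal((6, 3)))
Y = stiefel_qr_retraction(X, V)
print(np.linalg.norm(Y.T @ Y - np.eye(3)))  # ~1e-16: Y stays on the manifold
```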
Singularities and stable homotopy groups of spheres II
We establish a connection between Morin singularities and stable homotopy
groups of spheres. This connection allows us to describe how the images of
singularity strata behave around the image of a more complicated stratum. (31
pages, submitted to the Journal of Singularities.)
An introduction to Lie group integrators – basics, new developments and applications
We give a short and elementary introduction to Lie group methods. A selection
of applications of Lie group integrators is discussed. Finally, a family of
symplectic integrators on cotangent bundles of Lie groups is presented, and the
notion of discrete gradient methods is generalised to Lie groups.
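As a minimal worked example of a Lie group method, the sketch below implements the Lie-Euler integrator on a matrix Lie group; the rigid-body-style test problem on SO(3) is a hypothetical illustration, not one of the applications surveyed in the paper.

```python
import numpy as np
from scipy.linalg import expm

def lie_euler(f, Y0, t0, t1, n_steps):
    """Lie-Euler method for Y' = f(t, Y) Y on a matrix Lie group:
    Y_{n+1} = expm(h f(t_n, Y_n)) @ Y_n. The iterate stays on the group
    whenever f maps into the corresponding Lie algebra."""
    h = (t1 - t0) / n_steps
    Y, t = Y0.copy(), t0
    for _ in range(n_steps):
        Y = expm(h * f(t, Y)) @ Y
        t += h
    return Y

def hat(w):
    """Hat map: embed a 3-vector into so(3) so that hat(w) @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# Usage: constant angular velocity on SO(3); orthogonality is preserved to
# machine precision because hat(w) is skew-symmetric.
Y = lie_euler(lambda t, Y: hat([0.3, 0.1, 0.9]), np.eye(3), 0.0, 1.0, 100)
print(np.linalg.norm(Y.T @ Y - np.eye(3)))  # ~1e-15
```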
Generative Modelling with Tensor Train approximations of Hamilton-Jacobi-Bellman equations
Sampling from probability densities is a common challenge in fields such as
Uncertainty Quantification (UQ) and Generative Modelling (GM). In GM in
particular, reverse-time diffusion processes depending on the log-densities of
Ornstein-Uhlenbeck forward processes are a popular sampling tool. In Berner et
al. [2022] the authors point out that these log-densities can be obtained by
solving a Hamilton-Jacobi-Bellman (HJB)
equation known from stochastic optimal control. While this HJB equation is
usually treated with indirect methods such as policy iteration and unsupervised
training of black-box architectures like Neural Networks, we propose instead to
solve the HJB equation by direct time integration, using compressed polynomials
represented in the Tensor Train (TT) format for spatial discretization.
Crucially, this method is sample-free, agnostic to normalization constants and
can avoid the curse of dimensionality due to the TT compression. We provide a
complete derivation of the HJB equation's action on Tensor Train polynomials
and demonstrate the performance of the proposed time-step-, rank- and
degree-adaptive integration method on a nonlinear sampling task in 20
dimensions.
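To make the role of those log-densities concrete, here is a plain Euler-Maruyama sketch of the reverse-time sampler for an Ornstein-Uhlenbeck forward process. The `score` callable (the gradient of the log-density, which the HJB solve would supply) is a hypothetical placeholder; no Tensor Train machinery is shown.

```python
import numpy as np

def reverse_ou_sampler(score, dim, T=1.0, n_steps=500, n_samples=1000, seed=0):
    """Euler-Maruyama discretization of the reverse-time SDE associated with
    the OU forward process dX = -X dt + sqrt(2) dW:
        dY = (Y + 2 * score(t, Y)) dt + sqrt(2) dW_bar,
    integrated from t = T down to t = 0, with score(t, x) = grad_x log p_t(x)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Y = rng.standard_normal((n_samples, dim))  # start near the OU equilibrium N(0, I)
    for k in range(n_steps):
        t = T - k * dt
        Y = (Y + (Y + 2.0 * score(t, Y)) * dt
             + np.sqrt(2.0 * dt) * rng.standard_normal(Y.shape))
    return Y

# Sanity check with a known score: for a standard normal target, p_t = N(0, I)
# for all t, so score(t, x) = -x and the output should again be ~N(0, I).
samples = reverse_ou_sampler(lambda t, x: -x, dim=2)
print(samples.mean(axis=0), samples.std(axis=0))
```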
A Feasible Method for Optimization with Orthogonality Constraints
Minimization with orthogonality constraints (e.g., X'X = I) and/or spherical constraints (e.g., ||x||_2 = 1) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but also numerically expensive to preserve during iterations. To deal with these difficulties, we propose a Crank-Nicolson-like update scheme that preserves the constraints and, based on it, develop curvilinear search algorithms with lower per-iteration cost than those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from state-of-the-art algorithms. For the quadratic assignment problem, a gap of 0.842% to the best-known solution on the largest problem "256c" in QAPLIB can be reached in 5 minutes on a typical laptop.
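The constraint-preserving update named in the abstract fits in a few lines; below is a dense sketch of one Cayley (Crank-Nicolson-like) curvilinear-search point. The paper additionally exploits the low-rank structure of W to cut the per-iteration cost, which is omitted here, and the test problem is an illustrative assumption.

```python
import numpy as np

def cayley_step(X, G, tau):
    """One curvilinear-search point Y(tau) preserving X^T X = I.
    W = G X^T - X G^T is skew-symmetric, so the Cayley transform
    (I + tau/2 W)^{-1} (I - tau/2 W) is orthogonal and Y(tau) stays feasible."""
    n = X.shape[0]
    W = G @ X.T - X @ G.T
    A = np.eye(n) + (tau / 2.0) * W
    B = np.eye(n) - (tau / 2.0) * W
    return np.linalg.solve(A, B @ X)

# Usage: one step for min trace(X^T M X) over matrices with orthonormal
# columns (an extreme-eigenvalue problem); the Euclidean gradient is 2 M X.
rng = np.random.default_rng(2)
M = rng.standard_normal((8, 8))
M = (M + M.T) / 2
X, _ = np.linalg.qr(rng.standard_normal((8, 2)))
Y = cayley_step(X, 2 * M @ X, tau=0.1)
print(np.linalg.norm(Y.T @ Y - np.eye(2)))  # ~1e-15: feasibility preserved
```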