Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats
This paper is concerned with the approximation of tensors using tree-based
tensor formats, which are tensor networks whose graphs are dimension partition
trees. We consider Hilbert tensor spaces of multivariate functions defined on a
product set equipped with a probability measure. This includes the case of
multidimensional arrays corresponding to finite product sets. We propose and
analyse an algorithm for the construction of an approximation using only point
evaluations of a multivariate function, or evaluations of some entries of a
multidimensional array. The algorithm is a variant of higher-order singular
value decomposition which constructs a hierarchy of subspaces associated with
the different nodes of the tree and a corresponding hierarchy of interpolation
operators. Optimal subspaces are estimated using empirical principal component
analysis of interpolations of partial random evaluations of the function. The
algorithm is able to provide an approximation in any tree-based format with
either a prescribed rank or a prescribed relative error, with a number of
evaluations of the order of the storage complexity of the approximation format.
Under some assumptions on the estimation of principal components, we prove that
the algorithm provides either a quasi-optimal approximation with a given rank,
or an approximation satisfying the prescribed relative error, up to constants
depending on the tree and the properties of interpolation operators. The
analysis takes into account the discretization errors for the approximation of
infinite-dimensional tensors. Several numerical examples illustrate the main
results and the behavior of the algorithm for the approximation of
high-dimensional functions using hierarchical Tucker or tensor train tensor
formats, and the approximation of univariate functions using tensorization.
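The prescribed-rank idea above can be illustrated in the simplest tensor train case by the classical TT-SVD, which truncates successive unfoldings by SVD. This is only a hedged dense sketch: it requires the full tensor in memory, unlike the algorithm of the paper, which builds its subspaces from partial point evaluations; the tolerance and rank cap below are illustrative choices.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Sequential-SVD decomposition of a full tensor into tensor-train
    cores with TT-ranks capped at max_rank (dense illustration only;
    the paper's algorithm uses point evaluations instead)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k, n_k in enumerate(dims[:-1]):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, int(np.sum(s > 1e-12)))   # numerical rank cut
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        # carry the remainder to the next unfolding
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

A tensor that is a sum of two rank-one terms is recovered exactly with all TT-ranks equal to two, and the number of stored entries in the cores matches the storage complexity of the format.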
Low-rank approximate inverse for preconditioning tensor-structured linear systems
In this paper, we propose an algorithm for the construction of low-rank
approximations of the inverse of an operator given in low-rank tensor format.
The construction relies on an updated greedy algorithm for the minimization of
a suitable distance to the inverse operator. It provides a sequence of
approximations that are defined as the projections of the inverse operator in
an increasing sequence of linear subspaces of operators. These subspaces are
obtained by the tensorization of bases of operators that are constructed from
successive rank-one corrections. In order to handle high-order tensors,
approximate projections are computed in low-rank Hierarchical Tucker subsets of
the successive subspaces of operators. Some desired properties such as symmetry
or sparsity can be imposed on the approximate inverse operator during the
correction step, where an optimal rank-one correction is searched as the tensor
product of operators with the desired properties. Numerical examples illustrate
the ability of this algorithm to provide efficient preconditioners for linear
systems in tensor format that improve the convergence of iterative solvers and
also the quality of the resulting low-rank approximations of the solution.
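A toy stand-in for the distance minimization above (not the paper's updated greedy algorithm, and assuming, for illustration only, an operator that is exactly a Kronecker product): an approximate inverse constrained to the same Kronecker structure can be found by alternating minimization of the Frobenius distance to the identity, each update being a small closed-form solve.

```python
import numpy as np

def kron_approx_inverse(A1, A2, sweeps=3):
    """Minimize ||I - (M1 (x) M2)(A1 (x) A2)||_F over Kronecker
    factors M1, M2 by alternating updates. Each update is a small
    n x n or m x m problem, while the full operator is nm x nm.
    With one factor fixed, the quadratic objective is minimized
    when the product of the other factor with its operator block
    is a multiple of the identity."""
    n, m = A1.shape[0], A2.shape[0]
    M2 = np.eye(m)
    for _ in range(sweeps):
        B = M2 @ A2                                   # fixed second factor
        M1 = (np.trace(B) / np.sum(B * B)) * np.linalg.inv(A1)
        C = M1 @ A1                                   # fixed first factor
        M2 = (np.trace(C) / np.sum(C * C)) * np.linalg.inv(A2)
    return M1, M2
```

For a genuinely Kronecker operator this recovers the exact inverse in one sweep; the interest of the paper's construction is precisely the harder case where the operator and its inverse are not of this simple form and successive rank-one corrections are needed.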
Geometric Structures in Tensor Representations (Final Release)
The main goal of this paper is to study the geometric structures associated
with the representation of tensors in subspace based formats. To do this we use
a property of the so-called minimal subspaces which allows us to describe the
tensor representation by means of a rooted tree. By using the tree structure
and the dimensions of the associated minimal subspaces, we introduce, in the
underlying algebraic tensor space, the set of tensors in a tree-based format
with either bounded or fixed tree-based rank. This class contains the Tucker
format and the Hierarchical Tucker format (including the Tensor Train format).
In particular, we show that the set of tensors in the tree-based format with
bounded (respectively, fixed) tree-based rank of an algebraic tensor product of
normed vector spaces is an analytic Banach manifold. Indeed, the manifold
geometry for the set of tensors with fixed tree-based rank is induced by a
fibre bundle structure and the manifold geometry for the set of tensors with
bounded tree-based rank is given by a finite union of connected components. In
order to describe the relationship between these manifolds and the natural
ambient space, we introduce the definition of topological tensor spaces in the
tree-based format. We prove under natural conditions that any tensor of the
topological tensor space under consideration admits best approximations in the
manifold of tensors in the tree-based format with bounded tree-based rank. In
this framework, we also show that the tangent (Banach) space at a given tensor
is a complemented subspace in the natural ambient tensor Banach space and hence
the set of tensors in the tree-based format with bounded (respectively, fixed)
tree-based rank is an immersed submanifold. This fact allows us to extend the
Dirac-Frenkel variational principle in the framework of topological tensor
spaces.
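The tree-based rank used throughout can be made concrete: the rank associated with a node of the dimension tree is the rank of the matricization that groups the modes of that node against the remaining modes. A minimal sketch, assuming a full tensor in memory:

```python
import numpy as np

def node_rank(tensor, alpha):
    """Rank of the matricization grouping the modes in alpha against
    the remaining modes -- the tree-based (alpha-)rank of the tensor."""
    d = tensor.ndim
    rest = [k for k in range(d) if k not in alpha]
    mat = np.transpose(tensor, list(alpha) + rest)
    rows = int(np.prod([tensor.shape[k] for k in alpha]))
    return np.linalg.matrix_rank(mat.reshape(rows, -1))
```

For a dimension partition tree on {0, 1, 2, 3}, evaluating node_rank at each node, e.g. (0, 1), (2, 3), (0,), ..., yields the tuple of tree-based ranks; bounding or fixing these values is what defines the sets studied in the paper.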
A tensor approximation method based on ideal minimal residual formulations for the solution of high-dimensional problems
In this paper, we propose a method for the approximation of the solution of
high-dimensional weakly coercive problems formulated in tensor spaces using
low-rank approximation formats. The method can be seen as a perturbation of a
minimal residual method with residual norm corresponding to the error in a
specified solution norm. We introduce and analyze an iterative algorithm that
is able to provide a controlled approximation of the optimal approximation of
the solution in a given low-rank subset, without any a priori information on
this solution. We also introduce a weak greedy algorithm which uses this
perturbed minimal residual method for the computation of successive greedy
corrections in small tensor subsets. We prove its convergence under some
conditions on the parameters of the algorithm. The residual norm can be
designed such that the resulting low-rank approximations are quasi-optimal with
respect to particular norms of interest, thus yielding goal-oriented order
reduction strategies for the approximation of high-dimensional problems. The
proposed numerical method is applied to the solution of a stochastic partial
differential equation which is discretized using standard Galerkin methods in
tensor product spaces.
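The flavour of the greedy construction can be sketched on a Kronecker-sum system (a plain Galerkin energy-minimization variant, not the ideal minimal residual formulation of the paper): each greedy step computes a rank-one correction by an inner alternating iteration and adds it to the current approximation. The Laplacian-like operator and right-hand side below are illustrative assumptions.

```python
import numpy as np

def greedy_rank_one_solve(A1, A2, b1, b2, n_corr=20, inner=5):
    """Solve (A1 (x) I + I (x) A2) u = b1 (x) b2 for SPD A1, A2 by
    successive rank-one (greedy) corrections. The iterate is stored as
    the matrix U, with A1 @ U + U @ A2.T playing the role of the
    operator action on u."""
    n, m = A1.shape[0], A2.shape[0]
    U = np.zeros((n, m))
    for _ in range(n_corr):
        R = np.outer(b1, b2) - (A1 @ U + U @ A2.T)   # current residual
        x, y = np.ones(n), np.ones(m)
        for _ in range(inner):
            # alternating minimization of the energy over x and y:
            # each stationarity condition is a small linear system
            x = np.linalg.solve((y @ y) * A1 + (y @ A2 @ y) * np.eye(n),
                                R @ y)
            y = np.linalg.solve((x @ x) * A2 + (x @ A1 @ x) * np.eye(m),
                                R.T @ x)
        U += np.outer(x, y)
    return U
```

Because each correction decreases the energy, the residual decays as rank-one terms accumulate; the paper's contribution is to control such greedy corrections with respect to an ideal residual norm tied to the error in a chosen solution norm.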
Principal bundle structure of matrix manifolds
In this paper, we introduce a new geometric description of the manifolds of
matrices of fixed rank. The starting point is a geometric description of the
Grassmann manifold of linear subspaces of fixed dimension which avoids the
use of equivalence classes. The Grassmann manifold is equipped with an atlas
which provides it with the structure of an analytic manifold modelled on a
space of matrices. Then we define an atlas for the set of full rank matrices
and prove that the resulting manifold is an analytic principal bundle with
base the Grassmann manifold and typical fibre the general linear group of
invertible matrices. Finally, we define an atlas for the set of non-full
rank matrices of fixed rank and prove that the resulting manifold is an
analytic principal bundle with base a product of Grassmann manifolds and
typical fibre the general linear group. The atlas of the set of non-full
rank matrices is indexed on the manifold itself, which allows a natural
definition of a neighbourhood for a given matrix, this neighbourhood being
proved to possess the structure of a Lie group. Moreover, this set equipped
with the topology induced by the atlas is proven to be an embedded
submanifold of the matrix space equipped with the subspace topology. The
proposed geometric description then results in a description of the matrix
space, seen as the union of the manifolds of matrices of fixed rank, as an
analytic manifold equipped with a topology for which the matrix rank is a
continuous map.
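A concrete instance of such a local description is the standard chart for rank-r matrices whose leading r x r block is invertible (one classical chart, not necessarily the atlas construction of the paper): the three blocks A, B, C are free parameters, and the remaining block is forced to be the Schur-type completion C A^{-1} B.

```python
import numpy as np

def rank_r_from_chart(A, B, C):
    """Build the rank-r matrix [[A, B], [C, C A^{-1} B]] from free
    blocks A (r x r, invertible), B (r x m), C (n x r). The matrix
    factors as [[I], [C A^{-1}]] @ [A, B], so its rank is exactly r,
    and the parameter count r*r + r*m + n*r equals the dimension of
    the manifold of rank-r matrices of size (r+n) x (r+m)."""
    D = C @ np.linalg.solve(A, B)        # forced bottom-right block
    return np.block([[A, B], [C, D]])
```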
Tensor-based multiscale method for diffusion problems in quasi-periodic heterogeneous media
This paper proposes to address the issue of complexity reduction for the
numerical simulation of multiscale media in a quasi-periodic setting. We
consider a stationary elliptic diffusion equation defined on a domain whose
closure is the union of a collection of cells, and we introduce a two-scale
representation by identifying any function defined on the domain with a
bi-variate function, where the first variable relates to the index of the
cell containing the point and the second variable relates to a local
coordinate in a reference cell. We introduce a weak formulation of the
problem in a broken Sobolev space using a discontinuous Galerkin framework.
The problem is then interpreted as a tensor-structured equation by
identifying the broken space with a tensor product space of functions
defined over the product of the set of cell indices and the reference cell.
Tensor numerical methods are then used in order to exploit approximability
properties of quasi-periodic solutions by low-rank tensors.
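The approximability claim can be checked on a one-dimensional toy example (a hypothetical setup: a function on [0, 1) sampled on a uniform grid of n cells with m points each): identifying f(x) with the bivariate array F[i, j] = f((i + j/m)/n) of cell index i and local coordinate j makes an exactly periodic function a rank-one matrix, and a slowly modulated one a low-rank matrix.

```python
import numpy as np

def two_scale_matrix(f, n_cells, m_local):
    """Sample f on a uniform grid of [0, 1) and reshape into the
    (cell index) x (local coordinate) two-scale representation,
    F[i, j] = f((i + j/m)/n)."""
    x = np.arange(n_cells * m_local) / (n_cells * m_local)
    return f(x).reshape(n_cells, m_local)

# An n_cells-periodic oscillation sin(2*pi*n_cells*x) depends only on
# the local coordinate j, so every row of F is identical: rank one.
```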