Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats
This paper is concerned with the approximation of tensors using tree-based
tensor formats, which are tensor networks whose graphs are dimension partition
trees. We consider Hilbert tensor spaces of multivariate functions defined on a
product set equipped with a probability measure. This includes the case of
multidimensional arrays corresponding to finite product sets. We propose and
analyse an algorithm for the construction of an approximation using only point
evaluations of a multivariate function, or evaluations of some entries of a
multidimensional array. The algorithm is a variant of higher-order singular
value decomposition which constructs a hierarchy of subspaces associated with
the different nodes of the tree and a corresponding hierarchy of interpolation
operators. Optimal subspaces are estimated using empirical principal component
analysis of interpolations of partial random evaluations of the function. The
algorithm is able to provide an approximation in any tree-based format with
either a prescribed rank or a prescribed relative error, with a number of
evaluations of the order of the storage complexity of the approximation format.
Under some assumptions on the estimation of principal components, we prove that
the algorithm provides either a quasi-optimal approximation with a given rank,
or an approximation satisfying the prescribed relative error, up to constants
depending on the tree and the properties of interpolation operators. The
analysis takes into account the discretization errors for the approximation of
infinite-dimensional tensors. Several numerical examples illustrate the main
results and the behavior of the algorithm for the approximation of
high-dimensional functions using hierarchical Tucker or tensor train tensor
formats, and the approximation of univariate functions using tensorization.
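As a concrete illustration of the empirical principal component analysis step at a single node, the sketch below estimates a subspace from partial random evaluations of a multivariate function and truncates it to a prescribed relative error. The function, sample size and tolerance are hypothetical, and the paper's full algorithm additionally builds interpolation operators and runs over all nodes of the dimension tree.

import numpy as np

def empirical_pca_subspace(evaluate_fiber, n_samples, tol=1e-4, rng=None):
    # Estimate a basis of a principal subspace from partial random evaluations:
    # each call to evaluate_fiber(rng) returns the function evaluated on the
    # full grid of the retained variable, for one random draw of the others.
    rng = np.random.default_rng() if rng is None else rng
    samples = np.column_stack([evaluate_fiber(rng) for _ in range(n_samples)])
    # empirical PCA = SVD of the sample matrix
    U, s, _ = np.linalg.svd(samples, full_matrices=False)
    # smallest rank reaching the prescribed relative error (on the sample)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol**2)) + 1
    return U[:, :min(r, U.shape[1])]

# toy usage: f(x, y1, y2) sampled on a grid in x for random (y1, y2)
x = np.linspace(0.0, 1.0, 50)
def evaluate_fiber(rng):
    y1, y2 = rng.random(2)
    return np.exp(-x * y1) * np.cos(np.pi * x * y2)

basis = empirical_pca_subspace(evaluate_fiber, n_samples=30)
print(basis.shape)  # (50, estimated rank)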
Stochastic spectral methods and model reduction for the propagation of parametric uncertainties in physical models
Uncertainty quantification appears as an essential route to improving the predictability of physical models.
Geometric Structures in Tensor Representations (Final Release)
The main goal of this paper is to study the geometric structures associated
with the representation of tensors in subspace based formats. To do this we use
a property of the so-called minimal subspaces which allows us to describe the
tensor representation by means of a rooted tree. By using the tree structure
and the dimensions of the associated minimal subspaces, we introduce, in the
underlying algebraic tensor space, the set of tensors in a tree-based format
with either bounded or fixed tree-based rank. This class contains the Tucker
format and the Hierarchical Tucker format (including the Tensor Train format).
In particular, we show that the set of tensors in the tree-based format with
bounded (respectively, fixed) tree-based rank of an algebraic tensor product of
normed vector spaces is an analytic Banach manifold. Indeed, the manifold
geometry for the set of tensors with fixed tree-based rank is induced by a
fibre bundle structure and the manifold geometry for the set of tensors with
bounded tree-based rank is given by a finite union of connected components. In
order to describe the relationship between these manifolds and the natural
ambient space, we introduce the definition of topological tensor spaces in the
tree-based format. We prove under natural conditions that any tensor of the
topological tensor space under consideration admits best approximations in the
manifold of tensors in the tree-based format with bounded tree-based rank. In
this framework, we also show that the tangent (Banach) space at a given tensor
is a complemented subspace in the natural ambient tensor Banach space and hence
the set of tensors in the tree-based format with bounded (respectively, fixed)
tree-based rank is an immersed submanifold. This fact allows us to extend the
Dirac-Frenkel variational principle in the framework of topological tensor
spaces.
Comment: Some errors are corrected and Lemma 3.22 is improved.
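For reference, the tree-based rank that defines these sets can be recalled as follows, in standard notation which may differ slightly from the paper's: for a tensor $v$ in an algebraic tensor space $V = \bigotimes_{\nu=1}^{d} V_\nu$ and a node $\alpha \subsetneq \{1,\dots,d\}$ of a dimension partition tree $T$, the minimal subspace $U_\alpha^{\min}(v) \subset \bigotimes_{\nu\in\alpha} V_\nu$ is the smallest subspace $U_\alpha$ such that

\[ v \in U_\alpha \otimes \Big(\bigotimes_{\nu\notin\alpha} V_\nu\Big), \qquad \operatorname{rank}_T(v) = \big(\dim U_\alpha^{\min}(v)\big)_{\alpha\in T}. \]

The sets studied in the paper are then $\{v : \operatorname{rank}_T(v) \le r\}$ (bounded tree-based rank) and $\{v : \operatorname{rank}_T(v) = r\}$ (fixed tree-based rank) for a given tuple $r = (r_\alpha)_{\alpha\in T}$.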
Projection based model order reduction methods for the estimation of vector-valued variables of interest
We propose and compare goal-oriented projection based model order reduction
methods for the estimation of vector-valued functionals of the solution of
parameter-dependent equations. The first projection method is a generalization
of the classical primal-dual method to the case of vector-valued variables of
interest. We highlight the role played by three reduced spaces: the
approximation space and the test space associated to the primal variable, and
the approximation space associated to the dual variable. Then we propose a
Petrov-Galerkin projection method based on a saddle point problem involving an
approximation space for the primal variable and an approximation space for an
auxiliary variable. A goal-oriented choice of the latter space, defined as the
sum of two spaces, allows us to improve the approximation of the variable of
interest compared to a primal-dual method using the same reduced spaces. Then,
for both approaches, we derive computable error estimates for the
approximations of the variable of interest and we propose greedy algorithms for
the goal-oriented construction of reduced spaces. The performance of the
algorithms is illustrated on numerical examples and compared with that of
standard (non goal-oriented) algorithms.
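For orientation, the sketch below spells out the classical scalar primal-dual correction that the first method generalizes to vector-valued variables of interest: a reduced primal solve, a reduced adjoint solve, and a residual-based correction of the quantity of interest. The matrices, right-hand sides and reduced bases are hypothetical placeholders, not the paper's examples.

import numpy as np

def corrected_qoi(A, b, l, V, W):
    # Primal-dual estimate of s = l @ u with A u = b: reduced primal solve,
    # reduced dual (adjoint) solve, then residual correction.
    a = np.linalg.solve(V.T @ A @ V, V.T @ b)    # reduced primal coordinates
    u_r = V @ a
    c = np.linalg.solve(W.T @ A.T @ W, W.T @ l)  # reduced adjoint coordinates
    q_r = W @ c
    return l @ u_r + q_r @ (b - A @ u_r)         # corrected estimate

# toy usage with random data and random (hypothetical) reduced bases
rng = np.random.default_rng(0)
n, rV, rW = 200, 10, 10
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b, l = rng.standard_normal(n), rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, rV)))
W, _ = np.linalg.qr(rng.standard_normal((n, rW)))
s_ref = l @ np.linalg.solve(A, b)
print(abs(s_ref - corrected_qoi(A, b, l, V, W)))  # error of the corrected estimate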
Principal bundle structure of matrix manifolds
In this paper, we introduce a new geometric description of the manifolds of
matrices of fixed rank. The starting point is a geometric description of the
Grassmann manifold of linear subspaces of
dimension in which avoids the use of equivalence classes.
The set is equipped with an atlas which provides
it with the structure of an analytic manifold modelled on
. Then we define an atlas for the set
of full rank matrices and prove that
the resulting manifold is an analytic principal bundle with base
and typical fibre , the general
linear group of invertible matrices in . Finally, we
define an atlas for the set of
non-full rank matrices and prove that the resulting manifold is an analytic
principal bundle with base and typical fibre . The atlas of
is indexed on the manifold itself,
which allows a natural definition of a neighbourhood for a given matrix, this
neighbourhood being proved to possess the structure of a Lie group. Moreover,
the set equipped with the topology
induced by the atlas is proven to be an embedded submanifold of the matrix
space equipped with the subspace topology. The
proposed geometric description then results in a description of the matrix
space , seen as the union of manifolds
, as an analytic manifold equipped with
a topology for which the matrix rank is a continuous map
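To make the bundle structure concrete, a local parameterization of the kind underlying such atlases (the paper's exact charts may differ) can be written around a rank-$r$ matrix $X_0 = U_0 G_0 V_0^\top$, with $U_0 \in \mathbb{R}^{n\times r}$ and $V_0 \in \mathbb{R}^{m\times r}$ of full column rank and $U_0^\perp$, $V_0^\perp$ bases of complementary subspaces:

\[ X(G, A, B) = (U_0 + U_0^\perp A)\, G\, (V_0 + V_0^\perp B)^\top, \qquad G \in GL_r,\ A \in \mathbb{R}^{(n-r)\times r},\ B \in \mathbb{R}^{(m-r)\times r}. \]

Here $(A, B)$ provide local coordinates on the base $\mathbb{G}_r(\mathbb{R}^n)\times\mathbb{G}_r(\mathbb{R}^m)$ (column and row spaces) and $G$ ranges over the typical fibre $GL_r$.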
Tensor-based numerical method for stochastic homogenisation
This paper addresses the complexity reduction of stochastic homogenisation of
a class of random materials for a stationary diffusion equation. A
cost-efficient approximation of the correctors is built using a method designed
to exploit quasi-periodicity. Accuracy and cost reduction are investigated for
local perturbations or small transformations of periodic materials as well as
for materials with no periodicity but a mesoscopic structure, for which the
limitations of the method are shown. Finally, for materials outside the scope
of this method, we propose to use the approximation of homogenised quantities
as control variates for variance reduction of a more accurate and costly Monte
Carlo estimator (using a multi-fidelity Monte Carlo method). The resulting cost
reduction is illustrated in a numerical experiment with a control variate from
weakly stochastic homogenisation for comparison, and the limits of this
variance reduction technique are tested on materials without periodicity or
mesoscopic structure.
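As an illustration of the variance reduction step, the sketch below implements a generic control-variate estimator in which a cheap surrogate quantity (here standing in for an approximate homogenised quantity) with known or cheaply estimated mean corrects a Monte Carlo estimator; all names and data are hypothetical.

import numpy as np

def control_variate_estimate(q_hi, q_lo, mean_lo, alpha=None):
    # Control-variate estimator of E[q_hi] using a cheap surrogate q_lo whose
    # mean mean_lo is known (or estimated from many inexpensive samples).
    # q_hi, q_lo: paired samples of the accurate and surrogate quantities.
    q_hi, q_lo = np.asarray(q_hi), np.asarray(q_lo)
    if alpha is None:
        # variance-optimal coefficient alpha = Cov(q_hi, q_lo) / Var(q_lo)
        cov = np.cov(q_hi, q_lo)
        alpha = cov[0, 1] / cov[1, 1]
    return q_hi.mean() - alpha * (q_lo.mean() - mean_lo)

# toy usage: the surrogate is strongly correlated with the accurate quantity
rng = np.random.default_rng(1)
xi = rng.standard_normal(100)                  # 100 "expensive" samples
q_hi = 2.0 + xi + 0.1 * rng.standard_normal(100)
q_lo = 2.0 + xi                                # cheap, correlated surrogate
mean_lo = 2.0                                  # its mean, assumed known
print(control_variate_estimate(q_hi, q_lo, mean_lo))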
Dynamical model reduction method for solving parameter-dependent dynamical systems
We propose a projection-based model order reduction method for the solution
of parameter-dependent dynamical systems. The proposed method relies on the
construction of time-dependent reduced spaces generated from evaluations of the
solution of the full-order model at some selected parameter values. The
approximation obtained by Galerkin projection is the solution of a reduced
dynamical system with a modified flux which takes into account the time
dependency of the reduced spaces. An a posteriori error estimate is derived and
a greedy algorithm using this error estimate is proposed for the adaptive
selection of parameter values. The resulting method can be interpreted as a
dynamical low-rank approximation method with a subspace point of view and a
uniform control of the error over the parameter set.
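To indicate how the modified flux arises, the sketch below writes the Galerkin-reduced system for an approximation u(t) ≈ V(t) a(t) with an orthonormal time-dependent basis V(t). The basis, its time derivative and the full-order model are hypothetical placeholders; in the paper, the reduced spaces are generated from full-order solutions at selected parameter values and an a posteriori error estimate drives their adaptive construction.

import numpy as np

def reduced_rhs(t, a, V, dV, f):
    # Right-hand side of the reduced system obtained by Galerkin projection of
    # u' = f(u, t) onto a time-dependent orthonormal basis V(t):
    #     a' = V(t)^T f(V(t) a, t) - V(t)^T V'(t) a,
    # the second term being the modified flux due to the motion of the space.
    Vt, dVt = V(t), dV(t)
    return Vt.T @ f(Vt @ a, t) - (Vt.T @ dVt) @ a

# toy usage: linear full-order model u' = A u, with a slowly rotating 2D basis
n = 50
A = -np.diag(np.linspace(1.0, 2.0, n))
f = lambda u, t: A @ u
E = np.eye(n)
def V(t):   # hypothetical time-dependent basis (rotation in span{e0, e1})
    c, s = np.cos(0.1 * t), np.sin(0.1 * t)
    return np.column_stack([c * E[:, 0] + s * E[:, 1], -s * E[:, 0] + c * E[:, 1]])
def dV(t):  # its time derivative
    c, s = np.cos(0.1 * t), np.sin(0.1 * t)
    return 0.1 * np.column_stack([-s * E[:, 0] + c * E[:, 1], -c * E[:, 0] - s * E[:, 1]])

# explicit Euler time integration of the reduced system
a, dt = np.array([1.0, 0.0]), 1e-3
for k in range(1000):
    a = a + dt * reduced_rhs(k * dt, a, V, dV, f)
print(V(1.0) @ a)   # reduced approximation of u at t = 1.0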
Recent developments in spectral stochastic methods for the numerical solution of stochastic partial differential equations
Uncertainty quantification appears today as a crucial point in numerous branches of science and engineering. In the last two decades, growing attention has been devoted to a new family of methods, called spectral stochastic methods, for the propagation of uncertainties through physical models governed by stochastic partial differential equations. These approaches rely on a fruitful marriage of probability theory and approximation theory in functional analysis. This paper provides a review of some recent developments in computational stochastic methods, with a particular emphasis on spectral stochastic approaches. After a review of different choices for the functional representation of random variables, we provide an overview of various numerical methods for the computation of these functional representations: projection, collocation and Galerkin approaches. A detailed presentation of Galerkin-type spectral stochastic approaches and related computational issues is provided. Recent developments on model reduction techniques in the context of spectral stochastic methods are also discussed. The aim of these techniques is to circumvent several drawbacks of spectral stochastic approaches (computing time, memory requirements, intrusive character) and to allow their use for large-scale applications. We particularly focus on model reduction techniques based on spectral decomposition and its generalizations.
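As a minimal example of the functional representations and non-intrusive projection methods mentioned above, the sketch below computes the Hermite polynomial chaos coefficients of a scalar quantity depending on a single standard Gaussian variable by Gauss-Hermite quadrature; the function and truncation degree are illustrative only.

import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(g, degree, n_quad=40):
    # Hermite polynomial chaos coefficients of Y = g(Xi), Xi ~ N(0, 1), by
    # projection: y_k = E[g(Xi) He_k(Xi)] / E[He_k(Xi)^2], with E[He_k^2] = k!.
    nodes, weights = hermegauss(n_quad)       # probabilists' Gauss-Hermite rule
    weights = weights / np.sqrt(2.0 * np.pi)  # normalise to the N(0, 1) density
    coeffs = []
    for k in range(degree + 1):
        e_k = np.zeros(k + 1); e_k[k] = 1.0   # coefficient vector selecting He_k
        coeffs.append(np.sum(weights * g(nodes) * hermeval(nodes, e_k)) / factorial(k))
    return np.array(coeffs)

# toy usage: for g = exp, the exact coefficients are exp(1/2) / k!
print(pce_coefficients(np.exp, degree=6))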
A generalized spectral decomposition technique to solve a class of linear stochastic partial differential equations
We propose a new robust technique for solving stochastic partial differential equations. The solution is approximated by a series of terms, each of which is the product of a scalar stochastic function and a deterministic function. None of these functions is fixed a priori; they are determined by solving a problem which can be interpreted as an "extended" eigenvalue problem. This technique generalizes the classical spectral decomposition, namely the Karhunen-Loève expansion. Ad hoc iterative techniques to build the approximation, inspired by the power method for classical eigenproblems, then transform the problem into the resolution of a few uncoupled deterministic problems and stochastic equations. This method drastically reduces the computational cost and memory requirements of classical resolution techniques used in the context of Galerkin stochastic finite element methods. Finally, this technique is particularly suitable for nonlinear and evolution problems since it enables the construction of a relevant reduced basis of deterministic functions which can be efficiently reused for subsequent resolutions.
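A minimal sketch of the power-type iteration for a single term is given below, assuming a symmetric positive definite parameter-dependent operator and a plain sample representation of the stochastic functions; the paper works with a Galerkin stochastic discretisation, so this only illustrates the alternating structure, and all names are hypothetical.

import numpy as np

def rank_one_correction(A_samples, R_samples, n_iter=20):
    # Power-type iteration for one term of a generalized spectral decomposition:
    # find a deterministic vector w and scalar values lam[k] at the samples xi_k
    # such that lam[k] * w approximately solves A(xi_k) d = r(xi_k).
    # A_samples: list of SPD matrices A(xi_k); R_samples: residuals r(xi_k).
    K = len(A_samples)
    lam = np.ones(K)
    for _ in range(n_iter):
        # deterministic problem: ( mean_k lam_k^2 A_k ) w = mean_k lam_k r_k
        A_bar = sum(l**2 * A for l, A in zip(lam, A_samples)) / K
        b_bar = sum(l * r for l, r in zip(lam, R_samples)) / K
        w = np.linalg.solve(A_bar, b_bar)
        # stochastic update: Galerkin projection on span{w}, sample by sample
        lam = np.array([(w @ r) / (w @ A @ w) for A, r in zip(A_samples, R_samples)])
    return w, lam

# toy usage: A(xi) = A0 + xi * A1, right-hand side b, samples of xi in (0, 1)
rng = np.random.default_rng(2)
n, K = 30, 200
A0 = np.diag(np.linspace(1.0, 3.0, n)); A1 = np.diag(np.linspace(0.0, 1.0, n))
b = np.ones(n)
A_samples = [A0 + xi * A1 for xi in rng.random(K)]
w, lam = rank_one_correction(A_samples, [b] * K)
print(np.linalg.norm(lam[0] * w - np.linalg.solve(A_samples[0], b)))  # rank-one error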
A Proper Generalized Decomposition for the solution of elliptic problems in abstract form by using a functional Eckart-Young approach
The Proper Generalized Decomposition (PGD) is a methodology initially proposed for the solution of partial differential equations (PDEs) defined in tensor product spaces. It consists in constructing a separated representation of the solution of a given PDE. In this paper we consider the mathematical analysis of this framework for a larger class of problems in an abstract setting. In particular, we introduce a generalization of the Eckart-Young theorem which allows us to prove the convergence of the so-called progressive PGD for a large class of linear problems defined in tensor product Hilbert spaces.
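For context, the classical statement and the progressive construction it generalizes can be summarized as follows, in generic notation and assuming a symmetric coercive problem (the paper's setting is more general): for $u \in \mathbb{R}^{n\times m}$ with singular value decomposition $u = \sum_i \sigma_i\, v_i \otimes w_i$, $\sigma_1 \ge \sigma_2 \ge \dots$, the Eckart-Young theorem states that $u_r = \sum_{i \le r} \sigma_i\, v_i \otimes w_i$ minimizes $\|u - z\|$ in the Frobenius norm over all $z$ of rank at most $r$. The progressive PGD mimics this for the solution $u$ of a variational problem $a(u, v) = \ell(v)$ posed in a tensor product Hilbert space: having built $u_{r-1} = \sum_{i < r} v_i \otimes w_i$, the next term is defined by

\[ (v_r, w_r) \in \arg\min_{v \otimes w}\, J(u_{r-1} + v \otimes w), \qquad J(z) = \tfrac{1}{2}\, a(z, z) - \ell(z), \]

and the generalized Eckart-Young result gives conditions under which $u_r \to u$ as $r \to \infty$.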
