
    Distributed Hierarchical SVD in the Hierarchical Tucker Format

    We consider tensors in the Hierarchical Tucker format and suppose the tensor data to be distributed among several compute nodes. We assume the compute nodes to be in a one-to-one correspondence with the nodes of the Hierarchical Tucker format such that connected nodes can communicate with each other. An appropriate tree structure in the Hierarchical Tucker format then allows for the parallelization of basic arithmetic operations between tensors with a parallel runtime which grows like $\log(d)$, where $d$ is the tensor dimension. We introduce parallel algorithms for several tensor operations, some of which can be applied to solve linear equations $\mathcal{A}X = B$ directly in the Hierarchical Tucker format using iterative methods like conjugate gradients or multigrid. We present weak scaling studies, which provide evidence that the runtime of our algorithms indeed grows like $\log(d)$. Furthermore, we present numerical experiments in which we apply our algorithms to solve a parameter-dependent diffusion equation in the Hierarchical Tucker format by means of a multigrid algorithm.
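
    The $\log(d)$ scaling reflects the depth of a balanced dimension tree. A full Hierarchical Tucker implementation is beyond the scope of an abstract, but the truncated-iteration idea behind such low-rank linear solvers can be sketched in the matrix case. The hedged Python sketch below runs a conjugate-gradient loop for a Lyapunov-like equation $\mathcal{A}X = AX + XA^T = B$ and re-compresses every iterate by a truncated SVD, mimicking the rank truncation that low-rank tensor solvers typically apply after each operation; the fixed-rank strategy, the test problem, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def truncate(X, rank):
    """Re-compress X to the given rank with a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def truncated_cg(apply_A, B, rank, iters=100, tol=1e-8):
    """CG for the SPD operator apply_A, truncating every iterate."""
    X = np.zeros_like(B)
    R = B - apply_A(X)          # residual
    P = R.copy()                # search direction
    for _ in range(iters):
        AP = apply_A(P)
        alpha = np.sum(R * R) / np.sum(P * AP)
        X = truncate(X + alpha * P, rank)   # keep the iterate low-rank
        R_new = B - apply_A(X)              # recompute residual after truncation
        if np.linalg.norm(R_new) <= tol * np.linalg.norm(B):
            break
        beta = np.sum(R_new * R_new) / np.sum(R * R)
        P = R_new + beta * P
        R = R_new
    return X

# Lyapunov-like test problem A X + X A^T = B with SPD diagonal A, rank-1 B.
n = 64
A = np.diag(np.linspace(1.0, 10.0, n))
b = np.random.default_rng(0).standard_normal((n, 1))
B = b @ b.T
X = truncated_cg(lambda Y: A @ Y + Y @ A.T, B, rank=8)
print(np.linalg.norm(A @ X + X @ A.T - B) / np.linalg.norm(B))
```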

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems, which would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.

    Tensor-Sparsity of Solutions to High-Dimensional Elliptic Partial Differential Equations

    A recurring theme in attempts to break the curse of dimensionality in the numerical approximation of solutions to high-dimensional partial differential equations (PDEs) is to employ some form of sparse tensor approximation. Unfortunately, there are only a few results that quantify the possible advantages of such an approach. This paper introduces a class $\Sigma_n$ of functions, which can be written as a sum of rank-one tensors using a total of at most $n$ parameters, and then uses this notion of sparsity to prove a regularity theorem for certain high-dimensional elliptic PDEs. It is shown, among other results, that whenever the right-hand side $f$ of the elliptic PDE can be approximated with a certain rate $\mathcal{O}(n^{-r})$ in the norm of ${\mathrm H}^{-1}$ by elements of $\Sigma_n$, then the solution $u$ can be approximated in ${\mathrm H}^1$ from $\Sigma_n$ to accuracy $\mathcal{O}(n^{-r'})$ for any $r' \in (0,r)$. Since these results require knowledge of the eigenbasis of the elliptic operator considered, we propose a second "basis-free" model of tensor sparsity and prove a regularity theorem for this second sparsity model as well. We then proceed to address the important question of the extent to which such regularity theorems translate into results on computational complexity. It is shown how this second model can be used to derive computational algorithms with performance that breaks the curse of dimensionality on certain model high-dimensional elliptic PDEs with tensor-sparse data.
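
    To make the sparsity class concrete: an element of $\Sigma_n$ is a sum of separable (rank-one) terms, so on a tensor grid an $r$-term sum with $m$ values per univariate factor costs $r \cdot d \cdot m$ parameters instead of the $m^d$ entries of the full grid. A minimal sketch of this representation follows; all sizes are illustrative choices, none taken from the paper.

```python
# Rank-one sum (canonical) representation behind the class Sigma_n:
# u(x_1,...,x_d) = sum_k prod_j u_{k,j}(x_j), stored via m grid values
# per univariate factor.
import numpy as np

d, m, r = 10, 32, 3
rng = np.random.default_rng(0)
# factors[k][j] holds the m grid values of the j-th factor of term k
factors = [[rng.standard_normal(m) for _ in range(d)] for _ in range(r)]

def evaluate(idx):
    """Evaluate the rank-r tensor at a multi-index idx of length d."""
    return sum(np.prod([factors[k][j][idx[j]] for j in range(d)])
               for k in range(r))

print("parameters stored:", r * d * m)       # 960
print("full grid would need:", m ** d)       # 32**10, about 1.1e15 entries
print("sample entry:", evaluate([0] * d))
```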

    Theory and Applications of Hierarchical Matrices (Theorie und Anwendungen Hierarchischer Matrizen)

    The modeling of physical properties often leads to the task of solving partial differential equations or integral equations. The results of some discretisation and linearisation process are matrix equations or linear systems of equations with special features. In the case of partial differential equations, one exploits the local character of differentiation by using a finite element method or finite difference scheme and obtains a sparse system matrix. In the case of (nonlocal) integral operators, low-rank approximations seem to be the method of choice; these are given either explicitly by some multipole method or panel-clustering technique, or implicitly by rank-revealing decompositions. Both types of matrices can be represented as so-called H-matrices. In this thesis we investigate algorithms that perform the addition, multiplication and inversion of H-matrices approximately. Under moderate assumptions, the complexity of this new arithmetic is almost linear (linear up to logarithmic factors of order 1 to 3). The arithmetic operations can be performed adaptively, that is, the relative error of each operation is kept below a given accuracy epsilon. The question arises under which circumstances the inverse of an H-matrix can itself be approximated by an H-matrix. The techniques used in this thesis require very restrictive assumptions, but the numerical examples in the last part indicate that the approximability does not depend on these assumptions.
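
    The rank-revealing compression mentioned above is easy to illustrate. The hedged sketch below builds an admissible (well-separated) block of the kernel $1/|x-y|$ and reports the rank a truncated SVD needs to reach a prescribed relative accuracy epsilon; the cluster geometry and the kernel are illustrative assumptions, not taken from the thesis.

```python
# Low-rank compression of an admissible H-matrix block: for well-separated
# clusters, the kernel block is numerically low-rank, and the rank grows
# only slowly as the target accuracy eps is tightened.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)                   # source cluster
y = np.linspace(2.0, 3.0, n)                   # well-separated target cluster
block = 1.0 / np.abs(x[:, None] - y[None, :])  # admissible kernel block

def adaptive_rank(block, eps):
    """Smallest rank whose truncated SVD meets relative accuracy eps."""
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.sum(s > eps * s[0]))

for eps in (1e-2, 1e-4, 1e-8):
    print(f"eps = {eps:.0e}: rank {adaptive_rank(block, eps)}")
```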

    Low rank surrogates for polymorphic fields with application to fuzzy-stochastic partial differential equations

    We consider a general form of fuzzy-stochastic PDEs depending on the interaction of probabilistic and non-probabilistic ("possibilistic") influences. Such a combined modelling of aleatoric and epistemic uncertainties can, for instance, be applied beneficially in an engineering context for real-world applications where probabilistic modelling and expert knowledge have to be accounted for. We examine the existence and well-definedness of polymorphic PDEs in appropriate function spaces. The fuzzy-stochastic dependence is described in a high-dimensional parameter space, which easily leads to exponential complexity in practical computations. To alleviate this severe obstacle in practice, a compressed low-rank approximation of the problem formulation and of the solution is derived. This is based on the Hierarchical Tucker format, which is constructed from solution samples by a non-intrusive tensor reconstruction algorithm. The performance of the proposed model order reduction approach is demonstrated with two examples. One of these is the ubiquitous groundwater flow model with a Karhunen-Loève coefficient field, which is generalized by a fuzzy correlation length.
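
    The reconstruction step can be sketched in a stripped-down, two-parameter matrix analogue: sample a quantity of interest on a tensor grid of one epistemic (fuzzy) and one stochastic parameter, then compress the snapshot matrix into a low-rank surrogate. The model function, grid sizes, and tolerance below are illustrative assumptions; the paper itself works with the Hierarchical Tucker format and a non-intrusive reconstruction algorithm rather than a plain SVD.

```python
# Two-parameter analogue of non-intrusive low-rank surrogate construction:
# a smooth QoI q(p1, p2) sampled on a tensor grid is compressed by a
# truncated SVD, and the surrogate needs far fewer parameters than the grid.
import numpy as np

p1 = np.linspace(0.1, 1.0, 50)     # epistemic (fuzzy) parameter, e.g. a correlation length
p2 = np.linspace(-3.0, 3.0, 200)   # aleatoric (stochastic) parameter
Q = np.exp(-np.outer(1.0 / p1, np.abs(p2)))  # illustrative model QoI: exp(-|p2|/p1)

U, s, Vt = np.linalg.svd(Q, full_matrices=False)
rank = int(np.sum(s > 1e-6 * s[0]))          # adaptive rank at tolerance 1e-6
surrogate = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
print("rank:", rank, "relative error:",
      np.linalg.norm(Q - surrogate) / np.linalg.norm(Q))
```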

    Rank Bounds for Approximating Gaussian Densities in the Tensor-Train Format

    Low-rank tensor approximations have shown great potential for uncertainty quantification in high dimensions, for example, to build surrogate models that can be used to speed up large-scale inference problems [M. Eigel, M. Marschall, and R. Schneider, Inverse Problems, 34 (2018), 035010; S. Dolgov et al., Stat. Comput., 30 (2020), pp. 603–625]. The feasibility and efficiency of such approaches depend critically on the rank that is necessary to represent or approximate the underlying distribution. In this paper, a priori rank bounds for approximations in the functional Tensor-Train representation are developed for the case of Gaussian models. It is shown that under suitable conditions on the precision matrix, the Gaussian density can be approximated to high accuracy without suffering from an exponential growth of complexity as the dimension increases. These results provide a rigorous justification of the suitability and the limitations of low-rank tensor methods in a simple but important model case. Numerical experiments confirm that the rank bounds capture the qualitative behavior of the rank structure when varying the parameters of the precision matrix and the accuracy of the approximation. Finally, the practical relevance of the theoretical results is demonstrated in the context of a Bayesian filtering problem.
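
    The qualitative claim is easy to probe numerically in a small, hedged example (not the paper's construction): discretize a three-dimensional Gaussian density with a banded precision matrix on a tensor grid and measure its TT ranks with a plain TT-SVD sweep. The grid, the precision matrix, and the tolerance are illustrative choices.

```python
# TT ranks of a discretized Gaussian density exp(-0.5 * x^T P x) with a
# tridiagonal precision matrix P, extracted by sequential truncated SVDs
# of the unfoldings (a plain TT-SVD sweep).
import numpy as np

d, m = 3, 24
x = np.linspace(-4.0, 4.0, m)
X = np.stack(np.meshgrid(x, x, x, indexing="ij"), axis=-1)  # shape (m, m, m, 3)
P = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])                              # SPD precision matrix
density = np.exp(-0.5 * np.einsum("...i,ij,...j", X, P, X))

def tt_ranks(tensor, eps=1e-8):
    """TT ranks of `tensor` at relative accuracy eps (TT-SVD sweep)."""
    ranks = []
    A = tensor.reshape(tensor.shape[0], -1)
    for k in range(tensor.ndim - 1):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        r = int(np.sum(s > eps * s[0]))
        ranks.append(r)
        if k < tensor.ndim - 2:
            A = (s[:r, None] * Vt[:r]).reshape(r * tensor.shape[k + 1], -1)
    return ranks

print("TT ranks:", tt_ranks(density))
```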