
    Stochastic methods for solving high-dimensional partial differential equations

    We propose algorithms for solving high-dimensional Partial Differential Equations (PDEs) that combine a probabilistic interpretation of PDEs, through the Feynman-Kac representation, with sparse interpolation. Monte-Carlo methods and time-integration schemes are used to estimate pointwise evaluations of the solution of a PDE. We use a sequential control variates algorithm, where control variates are constructed from successive approximations of the solution of the PDE. Two different algorithms are proposed, combining the sequential control variates algorithm and adaptive sparse interpolation in different ways. Numerical examples illustrate the behavior of these algorithms.
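    As a rough illustration of the probabilistic building block used here (not the paper's sequential control variates algorithm), the following sketch estimates a pointwise solution of the heat equation via the Feynman-Kac representation; the test function `g` and all parameters are illustrative assumptions.

```python
# Minimal Feynman-Kac sketch: for the heat equation u_t + (1/2)*Laplacian(u) = 0
# with terminal condition u(T, x) = g(x), the solution satisfies
# u(t, x) = E[g(x + W_{T-t})], so a pointwise value can be estimated
# by simulating Brownian increments.
import numpy as np

def feynman_kac_estimate(g, x, t, T, n_samples=100_000, rng=None):
    """Monte-Carlo estimate of u(t, x) for the d-dimensional heat equation."""
    rng = rng or np.random.default_rng(0)
    d = len(x)
    # Brownian increments over the remaining horizon T - t.
    w = rng.normal(scale=np.sqrt(T - t), size=(n_samples, d))
    samples = g(x + w)
    return samples.mean(), samples.std() / np.sqrt(n_samples)

# Example: g(x) = ||x||^2 in d = 10 dimensions; the exact solution is
# u(t, x) = ||x||^2 + d*(T - t), i.e. 20 at the point below.
x0 = np.ones(10)
est, err = feynman_kac_estimate(lambda y: (y**2).sum(axis=1), x0, t=0.0, T=1.0)
print(est, err)  # approx 20, with a small statistical error bar
```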

    A Direct Elliptic Solver Based on Hierarchically Low-rank Schur Complements

    A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between cyclic reduction and hierarchical matrix arithmetic yield a solver with $O(N \log^2 N)$ arithmetic complexity and $O(N \log N)$ memory footprint. We provide a baseline for performance and applicability by comparing a parallel implementation that leverages the concurrency features of the method with well-known implementations of the $\mathcal{H}$-LU factorization and algebraic multigrid. Numerical experiments reveal that the method is comparable with other fast direct solvers based on hierarchical matrices, such as $\mathcal{H}$-LU, and that it can tackle problems where algebraic multigrid fails to converge.
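    For intuition, here is a minimal dense-block sketch of the Schur-complement elimination underlying solvers of this kind (a block Thomas recursion rather than the paper's cyclic reduction); the paper's contribution is to carry out such eliminations in compressed hierarchical low-rank arithmetic, which this toy version does not attempt.

```python
# Dense-block Schur-complement elimination for a block tridiagonal system.
# In the paper's method the Schur complements S[i] are rank-compressible
# and stored/manipulated as H-matrices; here every block is kept dense.
import numpy as np

def block_tridiag_solve(D, L, U, b):
    """Solve a block tridiagonal system with diagonal blocks D[i],
    sub-diagonal blocks L[i] (coupling i to i-1, L[0] unused) and
    super-diagonal blocks U[i] (coupling i to i+1, U[-1] unused)."""
    n = len(D)
    S, y = [None] * n, [None] * n
    S[0], y[0] = D[0], b[0]
    for i in range(1, n):
        # Schur complement: S[i] = D[i] - L[i] S[i-1]^{-1} U[i-1]
        S[i] = D[i] - L[i] @ np.linalg.solve(S[i - 1], U[i - 1])
        y[i] = b[i] - L[i] @ np.linalg.solve(S[i - 1], y[i - 1])
    x = [None] * n
    x[-1] = np.linalg.solve(S[-1], y[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(S[i], y[i] - U[i] @ x[i + 1])
    return np.concatenate(x)

# Illustrative call with random, diagonally dominated blocks:
rng = np.random.default_rng(0)
m, n = 3, 4
D = [rng.normal(size=(m, m)) + 4 * np.eye(m) for _ in range(n)]
L = [None] + [rng.normal(size=(m, m)) for _ in range(n - 1)]
U = [rng.normal(size=(m, m)) for _ in range(n - 1)] + [None]
b = [rng.normal(size=m) for _ in range(n)]
x = block_tridiag_solve(D, L, U, b)
```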

    Tensor Product Approximation (DMRG) and Coupled Cluster method in Quantum Chemistry

    We present the Coupled Cluster (CC) method and the Density Matrix Renormalization Group (DMRG) method in a unified way, from the perspective of recent developments in tensor product approximation. We give an introduction to recently developed hierarchical tensor representations, in particular tensor trains, which are matrix product states in the language of physics. The discrete equations of the full CI approximation applied to the electronic Schrödinger equation are cast into a tensorial framework in the form of second quantization. A further approximation is then performed by tensor approximation within a hierarchical format, or equivalently a tree tensor network. We establish the (differential) geometry of low-rank hierarchical tensors and apply the Dirac-Frenkel principle to reduce the original high-dimensional problem to low dimensions. The DMRG algorithm is established as an optimization method in this format with alternating directional search. We briefly introduce the CC method, refer to our theoretical results, and compare this approach in the present discrete formulation with the CC method and its underlying exponential parametrization.
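    As a small illustration of the tensor train / matrix product state format the abstract refers to, the sketch below evaluates one entry of a tensor stored as TT cores; core shapes and values are illustrative, not taken from the paper.

```python
# A tensor train stores an order-d tensor as cores G[k] of shape
# (r_{k-1}, n_k, r_k) with r_0 = r_d = 1; the entry (i_1, ..., i_d)
# is the product of the matrices G[k][:, i_k, :].
import numpy as np

def tt_entry(cores, idx):
    """Evaluate a single tensor entry from its TT cores."""
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]  # contract one core at a time
    return v[0, 0]

# Random TT with mode sizes (4, 5, 6) and TT ranks (1, 3, 2, 1):
rng = np.random.default_rng(0)
shapes = [(1, 4, 3), (3, 5, 2), (2, 6, 1)]
cores = [rng.normal(size=s) for s in shapes]
print(tt_entry(cores, (2, 0, 4)))
```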

    Low rank approximation of multidimensional data

    In the last decades, numerical simulation has experienced tremendous improvements driven by the massive growth of computing power. Exascale computing has been achieved this year and will allow solving ever more complex problems. But such large systems produce colossal amounts of data, which leads to difficulties of its own. Moreover, many engineering problems, such as multiphysics or optimisation and control, require far more power than any computer architecture could achieve within the current scientific computing paradigm. In this chapter, we propose to shift the paradigm in order to break the curse of dimensionality by introducing decompositions that reduce the data. We present an extended review of data reduction techniques that intends to bridge the applied mathematics and computational mechanics communities. The chapter is organized into two parts. In the first, bivariate separation is studied, including discussions on the equivalence of proper orthogonal decomposition (POD, continuous framework) and singular value decomposition (SVD, discrete matrices). In the second part, a wide review of tensor formats and their approximation is proposed. Such work has already been provided in the literature, but either in separate papers or within a purely applied mathematics framework. Here, we offer the data-enthusiast scientist a description of the Canonical, Tucker, Hierarchical and Tensor train formats, including their approximation algorithms; where possible, a careful analysis of the link between continuous and discrete methods is performed.
    Funding: IV Research and Transfer Plan of the University of Sevilla; Institut Carnot; Junta de Andalucía; IDEX program of the University of Bordeaux.
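    As a pointer from the bivariate part of the review, the following sketch computes the best rank-$r$ approximation of a snapshot matrix by truncated SVD, the discrete counterpart of POD; the test field and the rank are illustrative choices, not from the chapter.

```python
# Truncated SVD = best rank-r approximation in Frobenius/spectral norm
# (Eckart-Young); for snapshot data this is the discrete analogue of POD.
import numpy as np

def truncated_svd(A, r):
    """Return factors of the best rank-r approximation A ~ U_r diag(s_r) V_r^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r], s[:r], Vt[:r, :]

# Smooth bivariate field f(x, t) sampled on a grid: rapidly decaying
# singular values make a small rank sufficient.
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 1, 100)[None, :]
A = np.exp(-(x - t) ** 2)
U, s, Vt = truncated_svd(A, r=10)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)  # small relative error already at rank 10
```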

    On thin plate spline interpolation

    We present a simple, PDE-based proof of the result [M. Johnson, 2001] that the error estimates of [J. Duchon, 1978] for thin plate spline interpolation can be improved by $h^{1/2}$. We illustrate that $\mathcal{H}$-matrix techniques can successfully be employed to solve very large thin plate spline interpolation problems.
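    For readers unfamiliar with the setup, here is a minimal dense sketch of 2-D thin plate spline interpolation posed as a saddle-point system; the paper's point is that the dense kernel block can be treated with $\mathcal{H}$-matrix techniques for very large numbers of sites, which this toy version does not attempt.

```python
# Thin plate spline in 2-D: kernel phi(r) = r^2 log r plus a linear
# polynomial part, with the polynomial enforced via side constraints:
#     [K  P] [w]   [f]
#     [P' 0] [c] = [0]
import numpy as np

def tps_fit(X, f):
    """Interpolate values f at 2-D sites X (shape (n, 2))."""
    n = len(X)
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(r2 > 0, 0.5 * r2 * np.log(r2), 0.0)  # r^2 log r
    P = np.hstack([np.ones((n, 1)), X])                   # basis 1, x, y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([f, np.zeros(3)]))
    return sol[:n], sol[n:]  # kernel weights, polynomial coefficients

rng = np.random.default_rng(1)
X = rng.random((50, 2))
w, c = tps_fit(X, np.sin(4 * X[:, 0]) * X[:, 1])
```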

    Tensor completion in hierarchical tensor representations

    Compressed sensing extends from the recovery of sparse vectors from undersampled measurements via efficient algorithms to the recovery of matrices of low rank from incomplete information. Here we consider a further extension to the reconstruction of tensors of low multi-linear rank in recently introduced hierarchical tensor formats from a small number of measurements. Hierarchical tensors are a flexible generalization of the well-known Tucker representation, which have the advantage that the number of degrees of freedom of a low rank tensor does not scale exponentially with the order of the tensor. While corresponding tensor decompositions can be computed efficiently via successive applications of (matrix) singular value decompositions, some important properties of the singular value decomposition do not extend from the matrix to the tensor case. This results in major computational and theoretical difficulties in designing and analyzing algorithms for low rank tensor recovery. For instance, a canonical analogue of the tensor nuclear norm is NP-hard to compute in general, which is in stark contrast to the matrix case. In this book chapter we consider versions of iterative hard thresholding schemes adapted to hierarchical tensor formats. A variant builds on methods from Riemannian optimization and uses a retraction mapping from the tangent space of the manifold of low rank tensors back to this manifold. We provide first partial convergence results based on a tensor version of the restricted isometry property (TRIP) of the measurement map. Moreover, an estimate of the number of measurements is provided that ensures the TRIP of a given tensor rank with high probability for Gaussian measurement maps. (Revised version, to be published in Compressed Sensing and Its Applications, edited by H. Boche, R. Calderbank, G. Kutyniok, J. Vybiral.)
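    As a simplified analogue of the schemes discussed, the following sketch applies iterative hard thresholding in the matrix case, where the rank-truncation step is an exact SVD projection; the chapter's hierarchical tensor version replaces this projection with a truncated hierarchical SVD or a Riemannian retraction. All sizes, the sampling rate and the step size are illustrative.

```python
# Iterative hard thresholding for low-rank matrix completion:
# gradient step on the data misfit, then projection onto rank-r matrices.
import numpy as np

def rank_project(X, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def iht_completion(Y, mask, r, n_iter=500, step=1.0):
    """Recover a rank-r matrix from the observed entries Y[mask]."""
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        X = rank_project(X + step * mask * (Y - X), r)
    return X

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 60))  # rank-4 ground truth
mask = rng.random(A.shape) < 0.5                         # observe half the entries
X = iht_completion(mask * A, mask, r=4)
print(np.linalg.norm(X - A) / np.linalg.norm(A))         # relative recovery error
```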

    Approximating turbulent and non-turbulent events with the Tensor Train decomposition method

    Low-rank multilevel approximation methods are often suited to attack high-dimensional problems successfully, and they allow very compact representations of large data sets. Specifically, hierarchical tensor product decomposition methods, e.g., the Tree-Tucker format and the Tensor Train format, emerge as a promising approach for application to data concerned with cascade-of-scales problems, as, e.g., in turbulent fluid dynamics. Beyond multilinear mathematics, these tensor formats are also successfully applied in, e.g., physics and chemistry, where they are used in many-body problems and quantum states. Here, we focus on two particular objectives: we aim at capturing self-similar structures that might be hidden in the data, and we present the reconstruction capabilities of the Tensor Train decomposition method tested with 3D channel turbulence flow data.
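    As a minimal sketch of the Tensor Train decomposition applied to a 3-D array, the TT-SVD below builds the cores by successive truncated SVDs; the synthetic field stands in for a flow snapshot and is not the channel data used in the study.

```python
# TT-SVD: unfold, truncate by SVD, absorb the remainder into the next
# unfolding; the retained ranks control the compression.
import numpy as np

def tt_svd(T, eps=1e-6):
    """Decompose an order-d array into TT cores by successive truncated SVDs."""
    cores, r, M = [], 1, T
    for n in T.shape[:-1]:
        M = M.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int((s > eps * s[0]).sum()))   # drop negligible modes
        cores.append(U[:, :keep].reshape(r, n, keep))
        M = s[:keep, None] * Vt[:keep, :]            # carry remainder forward
        r = keep
    cores.append(M.reshape(r, T.shape[-1], 1))
    return cores

# Synthetic separable 3-D field (32^3 grid):
x = np.linspace(0, 1, 32)
T = (np.sin(2 * np.pi * x)[:, None, None]
     * np.cos(np.pi * x)[None, :, None]
     * np.exp(-x)[None, None, :])
cores = tt_svd(T)
print([G.shape for G in cores])  # TT ranks collapse to 1 for separable data
```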