
    Block Circulant and Toeplitz Structures in the Linearized Hartree–Fock Equation on Finite Lattices: Tensor Approach

    This paper introduces and analyses a new grid-based tensor approach to the approximate solution of the elliptic eigenvalue problem for 3D lattice-structured systems. We consider the linearized Hartree-Fock equation over a spatial $L_1\times L_2\times L_3$ lattice in both periodic and non-periodic problem settings, discretized in a localized Gaussian-type orbitals basis. In the periodic case, the Galerkin system matrix obeys a three-level block-circulant structure that allows FFT-based diagonalization, while for finite extended systems in a box (Dirichlet boundary conditions) we arrive at a perturbed block-Toeplitz representation providing fast matrix-vector multiplication and low storage cost. The proposed grid-based tensor techniques offer twofold benefits: (a) the entries of the Fock matrix are computed by 1D operations using low-rank tensors represented on a 3D grid; (b) in the periodic case, the low-rank tensor structure in the diagonal blocks of the Fock matrix in Fourier space reduces the conventional 3D FFT to a product of 1D FFTs. Lattice-type systems in a box with Dirichlet boundary conditions are treated numerically by our previous tensor solver for single molecules, which makes calculations on rather large $L_1\times L_2\times L_3$ lattices possible due to the reduced numerical cost for 3D problems. The numerical simulations for both box-type and periodic $L\times 1\times 1$ lattice chains in a 3D rectangular "tube" with $L$ up to several hundred confirm the theoretical complexity bounds for the block-structured eigenvalue solvers in the limit of large $L$.
    Comment: 30 pages, 12 figures. arXiv admin note: substantial text overlap with arXiv:1408.383
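
    To make the key mechanism concrete, here is a minimal NumPy sketch (with illustrative random blocks, reduced to one level instead of the paper's three-level structure): a block-circulant matrix is decoupled by a single FFT across the block index into independent small blocks, whose spectra together form the spectrum of the full matrix.

```python
import numpy as np

L, m = 8, 3                                 # number of blocks, block size
rng = np.random.default_rng(0)
blocks = rng.standard_normal((L, m, m))     # first block column A_0, ..., A_{L-1}

# Assemble the full block-circulant matrix: block (i, j) equals A_{(i-j) mod L}.
C = np.block([[blocks[(i - j) % L] for j in range(L)] for i in range(L)])

# One FFT across the block index decouples C into L independent m x m blocks
# hat_A_k = sum_j A_j * exp(-2*pi*1j*j*k / L).
hat = np.fft.fft(blocks, axis=0)

# The spectrum of C is the union of the spectra of the hat_A_k blocks.
eig_fft = np.sort_complex(np.concatenate([np.linalg.eigvals(hat[k]) for k in range(L)]))
eig_ref = np.sort_complex(np.linalg.eigvals(C))
print(np.allclose(eig_fft, eig_ref))        # True (up to rounding)
```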

    A literature survey of low-rank tensor approximation techniques

    During the last few years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.

    Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format

    We apply the Tensor Train (TT) decomposition to construct the tensor product Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some quantities of interest (mean, variance, exceedance probabilities). We assume that the random diffusion coefficient is given as a smooth transformation of a Gaussian random field. In this case, the PCE is delivered by a complicated formula, which lacks an analytic TT representation. To construct its TT approximation numerically, we develop a new block TT cross algorithm, a method that computes the whole TT decomposition from a few evaluations of the PCE formula. The new method is conceptually similar to adaptive cross approximation in the TT format, but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. In addition, we demonstrate how to assemble the stochastic Galerkin matrix and how to compute the solution of the elliptic equation and its post-processing while staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial degree is bounded for each random variable independently. This provides higher accuracy than the sparse polynomial set or the Monte Carlo method, but the cardinality of the tensor product set grows exponentially with the number of random variables. However, when the PCE coefficients are implicitly approximated in the TT format, computations with the full tensor product polynomial set become possible. In the numerical experiments, we confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required.
    Comment: This is a major revision of the manuscript arXiv:1406.2816 with significantly extended numerical experiments. Some unused material is removed
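
    For readers unfamiliar with the format, the sketch below shows the basic TT-SVD construction that defines a TT decomposition. It is not the paper's block TT cross algorithm, which avoids ever forming the full tensor; the test function here is an illustrative assumption chosen to have small TT ranks.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    # Minimal TT-SVD: sequential truncated SVDs of the unfoldings. This is
    # the textbook construction behind the TT format, NOT the paper's block
    # TT cross algorithm, which works from a few entry evaluations instead.
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :r].reshape(rank, shape[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

# A smooth function of x + y + z has small TT ranks, mimicking the
# compressibility of the PCE coefficient tensor in favourable cases.
x = np.linspace(0.0, 1.0, 10)
T = np.sin(x[:, None, None] + x[None, :, None] + x[None, None, :])
cores = tt_svd(T)
print([c.shape for c in cores])             # TT ranks stay at 2 here

# Contract the cores back together and check the reconstruction.
full = cores[0]
for c in cores[1:]:
    full = np.tensordot(full, c, axes=1)    # contract last axis with first
print(np.allclose(full.squeeze(), T))       # True
```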

    Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations

    Stochastic Galerkin methods for non-affine coefficient representations are known to cause major difficulties from both theoretical and numerical points of view. In this work, an adaptive Galerkin FE method for linear parametric PDEs with lognormal coefficients discretized in Hermite chaos polynomials is derived. It employs problem-adapted function spaces to ensure solvability of the variational formulation. The inherently high computational complexity of the parametric operator is made tractable by using hierarchical tensor representations. For this, a new tensor train format of the lognormal coefficient is derived and verified numerically. The central novelty is the derivation of a reliable residual-based a posteriori error estimator, which can be regarded as a unique feature of stochastic Galerkin methods. It allows for an adaptive algorithm to steer the refinement of the physical mesh and of the anisotropic Wiener chaos polynomial degrees. To make the evaluation of the error estimator feasible, a numerically efficient tensor format discretization is developed. Benchmark examples with unbounded lognormal coefficient fields illustrate the performance of the proposed Galerkin discretization and the fully adaptive algorithm.
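
    As a minimal illustration of the lognormal setting (a one-variable sketch, not the paper's hierarchical tensor construction): for $g \sim N(0,1)$, the generating function of the probabilists' Hermite polynomials gives the closed-form chaos coefficients $\exp(\sigma g) = e^{\sigma^2/2}\sum_k (\sigma^k/k!)\,He_k(g)$, which the following code verifies numerically.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

sigma, degree = 0.7, 12

# Closed form: exp(sigma*g) = exp(sigma^2/2) * sum_k sigma^k / k! * He_k(g),
# from the generating function of the probabilists' Hermite polynomials.
coeffs = np.array([np.exp(sigma**2 / 2) * sigma**k / factorial(k)
                   for k in range(degree + 1)])

g = np.linspace(-3.0, 3.0, 7)
approx = hermeval(g, coeffs)                        # sum_k coeffs[k] * He_k(g)
print(np.max(np.abs(approx - np.exp(sigma * g))))   # small truncation error
```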

    Greedy algorithms for high-dimensional eigenvalue problems

    In this article, we present two new greedy algorithms for the computation of the lowest eigenvalue (and an associated eigenvector) of a high-dimensional eigenvalue problem, and we prove convergence results for these algorithms and their orthogonalized versions. The performance of the algorithms is illustrated on numerical test cases (including the computation of the buckling modes of a microstructured plate) and compared with that of another greedy algorithm for eigenvalue problems introduced by Ammar and Chinesta.
    Comment: 33 pages, 5 figures
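
    The following sketch conveys the flavour of such greedy/alternating computations on a two-factor operator $H = A\otimes I + I\otimes B + C\otimes D$ with random symmetric stand-ins (an illustrative assumption, not the paper's algorithm or benchmarks): with one factor of a rank-one vector $u\otimes v$ fixed, minimizing the Rayleigh quotient over the other factor reduces to a small symmetric eigenproblem.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
sym = lambda M: (M + M.T) / 2
A, B = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))
C, D = 0.1 * sym(rng.standard_normal((n, n))), 0.1 * sym(rng.standard_normal((n, n)))

# For unit vectors u, v the Rayleigh quotient of u (x) v under H is
#   q(u, v) = u'Au + v'Bv + (u'Cu)(v'Dv),
# so each half-step of the alternation is an ordinary small eigenproblem.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(50):
    u = np.linalg.eigh(A + (v @ D @ v) * C)[1][:, 0]   # best u for fixed v
    v = np.linalg.eigh(B + (u @ C @ u) * D)[1][:, 0]   # best v for fixed u

H = np.kron(A, np.eye(n)) + np.kron(np.eye(n), B) + np.kron(C, D)
x = np.kron(u, v)
print(x @ H @ x, np.linalg.eigvalsh(H)[0])  # rank-one upper bound vs exact minimum
```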

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When standard discretization techniques are used, the size of the linear system grows exponentially with the number of dimensions, making the use of classical iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive.
    Comment: 24 pages, 8 figures
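
    For orientation, here is a sketch of the truncated Richardson baseline mentioned above in its simplest matrix ($d=2$) incarnation, where rank truncation after every step keeps the iterate compressed. The operator and right-hand side are illustrative assumptions, the preconditioner is omitted, and $A$ is chosen well conditioned so that the plain iteration converges; the paper's point is precisely that harder operators need the preconditioned variants.

```python
import numpy as np

n, rank_max = 100, 15
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
A = np.eye(n) + L                                      # shifted: mild conditioning
B = np.outer(np.ones(n), np.ones(n))                   # rank-one right-hand side

lam = np.linalg.eigvalsh(A)
omega = 2 / (2 * lam[0] + 2 * lam[-1])   # optimal Richardson step for A(x)I + I(x)A

def truncate(X, r):
    # Re-truncation: project the iterate back to rank r via a truncated SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Truncated Richardson on A X + X A' = B, the matrix form of the Kronecker system.
X = np.zeros((n, n))
for _ in range(100):
    R = B - A @ X - X @ A.T                # residual
    X = truncate(X + omega * R, rank_max)  # gradient step + rank truncation
print(np.linalg.norm(B - A @ X - X @ A.T) / np.linalg.norm(B))  # near truncation level
```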