
    Block Circulant and Toeplitz Structures in the Linearized Hartree–Fock Equation on Finite Lattices: Tensor Approach

    This paper introduces and analyzes a new grid-based tensor approach to the approximate solution of the elliptic eigenvalue problem for 3D lattice-structured systems. We consider the linearized Hartree-Fock equation over a spatial $L_1\times L_2\times L_3$ lattice in both periodic and non-periodic problem settings, discretized in a basis of localized Gaussian-type orbitals. In the periodic case, the Galerkin system matrix obeys a three-level block-circulant structure that allows FFT-based diagonalization, while for finite extended systems in a box (Dirichlet boundary conditions) we arrive at a perturbed block-Toeplitz representation providing fast matrix-vector multiplication and low storage size. The proposed grid-based tensor techniques offer twofold benefits: (a) the entries of the Fock matrix are computed by 1D operations using low-rank tensors represented on a 3D grid, and (b) in the periodic case the low-rank tensor structure in the diagonal blocks of the Fock matrix in Fourier space reduces the conventional 3D FFT to a product of 1D FFTs. Lattice-type systems in a box with Dirichlet boundary conditions are treated numerically by our previous tensor solver for single molecules, which makes calculations on rather large $L_1\times L_2\times L_3$ lattices possible due to the reduced numerical cost of 3D problems. The numerical simulations for both box-type and periodic $L\times 1\times 1$ lattice chains in a 3D rectangular "tube" with $L$ up to several hundred confirm the theoretical complexity bounds for the block-structured eigenvalue solvers in the limit of large $L$.
    Comment: 30 pages, 12 figures. arXiv admin note: substantial text overlap with arXiv:1408.383
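    The key structural fact exploited in the periodic case is that a circulant matrix is diagonalized by the discrete Fourier transform, so its spectrum is obtained from a single FFT of its first column. A minimal one-level sketch in Python/NumPy (an illustration only, not the authors' multilevel solver; the size `L` and the column `c` are placeholder test data):

```python
# Minimal sketch: a circulant matrix is diagonalized by the DFT,
# so its eigenvalues are the FFT of its first column.
import numpy as np
from scipy.linalg import circulant

L = 16                             # hypothetical lattice size (placeholder)
c = np.random.rand(L)              # first column defining the circulant matrix
C = circulant(c)                   # dense L x L circulant matrix

lam = np.fft.fft(c)                # eigenvalues via one 1D FFT, O(L log L)
W = L * np.fft.ifft(np.eye(L))     # Fourier vectors W[j, k] = exp(2*pi*1j*j*k/L)

# C W = W diag(lam): the Fourier vectors are eigenvectors with eigenvalues fft(c).
assert np.allclose(C @ W, W * lam)
```

    In the paper's three-level block-circulant setting the same identity is applied level by level, which is what allows the multidimensional FFT diagonalization described in the abstract.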

    Centrosymmetric Matrices in the Sinc Collocation Method for Sturm-Liouville Problems

    Recently, we used the Sinc collocation method with the double exponential transformation to compute eigenvalues for singular Sturm-Liouville problems. In this work, we show that the computational complexity of the eigenvalues of such a differential eigenvalue problem can be considerably reduced when its operator commutes with the parity operator. In this case, the matrices resulting from the Sinc collocation method are centrosymmetric. Utilizing well-known properties of centrosymmetric matrices, we transform the problem of solving one large eigensystem into solving two smaller eigensystems. We show that only $1/(N+1)$ of all components need to be computed and stored in order to obtain all eigenvalues, where $2N+1$ corresponds to the dimension of the eigensystem. We applied our result to the Schrödinger equation with an anharmonic potential, and the numerical results section clearly illustrates the substantial gain in efficiency and accuracy when using the proposed algorithm.
    Comment: 11 pages, 4 figures
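    The reduction to two smaller eigensystems relies only on the fact that a centrosymmetric matrix commutes with the exchange (flip) matrix J and therefore preserves its symmetric and antisymmetric invariant subspaces. A minimal sketch of this splitting in Python/NumPy (using a generic centrosymmetric test matrix, not the actual Sinc collocation matrices):

```python
# Minimal sketch: a symmetric centrosymmetric matrix A (J A J = A) commutes with
# the exchange matrix J, so its eigenproblem of size 2N+1 decouples into a
# symmetric block of size N+1 and an antisymmetric block of size N.
import numpy as np

N = 5
n = 2 * N + 1
J = np.fliplr(np.eye(n))                    # exchange (flip) matrix

# Random symmetric centrosymmetric test matrix: A = (B + J B J) / 2.
B = np.random.rand(n, n); B = (B + B.T) / 2
A = (B + J @ B @ J) / 2
assert np.allclose(J @ A @ J, A)

# Orthonormal bases of the symmetric (Jv = v) and antisymmetric (Jv = -v) subspaces.
Qs = np.zeros((n, N + 1)); Qa = np.zeros((n, N))
for i in range(N):
    Qs[i, i] = Qs[n - 1 - i, i] = 1 / np.sqrt(2)
    Qa[i, i] = 1 / np.sqrt(2); Qa[n - 1 - i, i] = -1 / np.sqrt(2)
Qs[N, N] = 1.0                              # middle grid point belongs to the symmetric block

# Two decoupled eigenproblems of sizes N+1 and N reproduce the full spectrum.
eig_small = np.concatenate([np.linalg.eigvalsh(Qs.T @ A @ Qs),
                            np.linalg.eigvalsh(Qa.T @ A @ Qa)])
assert np.allclose(np.sort(eig_small), np.linalg.eigvalsh(A))
```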

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
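    As a concrete illustration of the compression idea, the standard TT-SVD algorithm factors a d-way array into a train of 3-way cores by sequential truncated SVDs. The sketch below is a generic Python/NumPy implementation (not taken from the monograph's accompanying toolboxes), with the tolerance `tol` as an assumed truncation parameter:

```python
# Minimal TT-SVD sketch: storage scales with the TT ranks rather than
# exponentially in the number of dimensions d.
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))          # truncation rank for this unfolding
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back to the full tensor (for checking only)."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

X = np.random.rand(4, 5, 6, 7)                           # small example tensor
cores = tt_svd(X)
assert np.allclose(tt_to_full(cores), X)
```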

    On the Distributions of the Lengths of the Longest Monotone Subsequences in Random Words

    We consider the distributions of the lengths of the longest weakly increasing and strongly decreasing subsequences in words of length $N$ from an alphabet of $k$ letters. We find Toeplitz determinant representations for the exponential generating functions (in $N$) of these distribution functions and show that they are expressible in terms of solutions of Painlevé V equations. We show further that in the weakly increasing case the generating function gives the distribution of the smallest eigenvalue in the $k\times k$ Laguerre random matrix ensemble and that the distribution itself has, after centering and normalizing, an $N\to\infty$ limit which is equal to the distribution function for the largest eigenvalue in the Gaussian Unitary Ensemble of $k\times k$ Hermitian matrices of trace zero.
    Comment: 30 pages, revised version corrects an error in the statement of Theorem
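    For intuition about the statistic whose distribution is studied, the following small Monte Carlo sketch (an illustration only, unrelated to the paper's Toeplitz/Painlevé analysis) samples random words of length N over k letters and tabulates the length of the longest weakly increasing subsequence via a patience-sorting variant:

```python
# Empirical distribution of the longest weakly increasing subsequence length
# in random words of length N over an alphabet of k letters (illustration only).
import numpy as np
from bisect import bisect_right
from collections import Counter

def longest_weakly_increasing(word):
    """Patience-sorting variant: bisect_right admits repeated letters (weak increase)."""
    tails = []
    for x in word:
        pos = bisect_right(tails, x)
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
    return len(tails)

rng = np.random.default_rng(0)
N, k, trials = 50, 4, 20000                  # word length, alphabet size, sample count
counts = Counter(longest_weakly_increasing(rng.integers(k, size=N)) for _ in range(trials))
for length in sorted(counts):
    print(f"length {length}: {counts[length] / trials:.4f}")
```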