
    Optimizing the energy with quantum Monte Carlo: A lower numerical scaling for Jastrow-Slater expansions

    We present an improved formalism for quantum Monte Carlo calculations of energy derivatives and properties (e.g. the interatomic forces), with a multideterminant Jastrow-Slater function. As a function of the number $N_e$ of Slater determinants, the numerical scaling of $O(N_e)$ per derivative we have recently reported is here lowered to $O(N_e)$ for the entire set of derivatives. As a function of the number of electrons $N$, the scaling to optimize the wave function and the geometry of a molecular system is lowered to $O(N^3) + O(N N_e)$, the same as computing the energy alone in the sampling process. The scaling is demonstrated on linear polyenes up to C$_{60}$H$_{62}$, and the efficiency of the method is illustrated with the structural optimization of butadiene and octatetraene with Jastrow-Slater wave functions comprising as many as 200000 determinants and 60000 parameters.
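
    The low scaling in the number of determinants rests on a standard building block: once a reference Slater matrix has been factorized, the ratio of a singly excited determinant to the reference reduces to one entry of a precomputed table. The Python sketch below illustrates only this table trick on made-up data; the paper's actual formalism for the full set of derivatives goes well beyond it.

        import numpy as np

        # Toy illustration of the determinant-ratio table (made-up data;
        # not the paper's derivative formalism).
        rng = np.random.default_rng(0)
        N, m = 6, 10                      # electrons, virtual orbitals (arbitrary)
        A = rng.standard_normal((N, N))   # reference Slater matrix (orbitals in columns)
        V = rng.standard_normal((N, m))   # virtual orbitals at the same electron positions

        # One O(N^3) solve builds the table T = A^{-1} V; afterwards each
        # single-excitation determinant ratio is an O(1) lookup.
        T = np.linalg.solve(A, V)

        i, a = 2, 5                       # replace occupied orbital i by virtual a
        ratio_fast = T[i, a]              # matrix determinant lemma: det(A')/det(A) = (A^{-1} v)_i

        A_sub = A.copy()
        A_sub[:, i] = V[:, a]             # brute-force check with explicit determinants
        assert np.isclose(ratio_fast, np.linalg.det(A_sub) / np.linalg.det(A))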

    The Sherman–Morrison–Woodbury formula for generalized linear matrix equations and applications

    We discuss the use of a matrix-oriented approach for numerically solving the dense matrix equation $AX + XA^T + M_1 X N_1 + \cdots + M_\ell X N_\ell = F$, with $\ell \ge 1$ and $M_i$, $N_i$, $i = 1, \ldots, \ell$, of low rank. The approach relies on the Sherman–Morrison–Woodbury formula, formally defined in the vectorized form of the problem but applied in the matrix setting. This allows one to solve medium-size dense problems with computational costs and memory requirements dramatically lower than with a Kronecker formulation. Application problems leading to medium-size equations of this form are illustrated, and the performance of the matrix-oriented method is reported. The application of the procedure as the core step in the solution of the large-scale problem is also shown. In addition, a new explicit method for linear tensor equations is proposed that uses the discussed matrix equation procedure as a key building block.
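
    To make the matrix-oriented idea concrete, the sketch below solves $AX + XA^T + \sum_i M_i X N_i = F$ by treating the low-rank terms as a Sherman–Morrison–Woodbury correction to the Sylvester operator, applying every operator inverse matrix-wise so that no Kronecker product is ever formed. It is a minimal sketch under simplifying assumptions (square real matrices, an invertible Sylvester operator, low-rank factors given explicitly), not the paper's implementation.

        import numpy as np
        from scipy.linalg import solve_sylvester

        def smw_solve(A, factors, F):
            """Solve A X + X A^T + sum_i M_i X N_i = F, where M_i = P Q^T and
            N_i = S T^T are given via their low-rank factors (P, Q, S, T)."""
            # Inverse Sylvester operator L(X) = A X + X A^T, applied matrix-wise.
            Linv = lambda Y: solve_sylvester(A, A.T, Y)

            # Vectorized, the low-rank terms read U W^T with U = [..., T_i (x) P_i, ...];
            # each Kronecker column, reshaped to a matrix, is an outer product.
            Ucols, Wcols = [], []
            for P, Q, S, T in factors:
                for a in range(T.shape[1]):
                    for b in range(P.shape[1]):
                        Ucols.append(np.outer(P[:, b], T[:, a]))
                        Wcols.append(np.outer(Q[:, b], S[:, a]))

            X0 = Linv(F)                   # L^{-1} f
            Z = [Linv(U) for U in Ucols]   # L^{-1} U, one Sylvester solve per column
            k = len(Ucols)
            C = np.eye(k)                  # capacitance matrix I + W^T L^{-1} U
            for i in range(k):
                for j in range(k):
                    C[i, j] += np.sum(Wcols[i] * Z[j])
            y = np.linalg.solve(C, np.array([np.sum(W * X0) for W in Wcols]))
            return X0 - sum(yj * Zj for yj, Zj in zip(y, Z))

        # Self-check on random data: n = 20, two rank-one terms
        rng = np.random.default_rng(0)
        n = 20
        A = rng.standard_normal((n, n)) - 5 * np.eye(n)   # shift keeps L well conditioned
        fac = [tuple(rng.standard_normal((n, 1)) for _ in range(4)) for _ in range(2)]
        F = rng.standard_normal((n, n))
        X = smw_solve(A, fac, F)
        res = A @ X + X @ A.T + sum(P @ Q.T @ X @ S @ T.T for P, Q, S, T in fac) - F
        assert np.linalg.norm(res) < 1e-6 * np.linalg.norm(F)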

    Split representation of adaptively compressed polarizability operator

    The polarizability operator plays a central role in density functional perturbation theory and other perturbative treatments of first-principles electronic structure theories. The cost of computing the polarizability operator generally scales as $\mathcal{O}(N_e^4)$, where $N_e$ is the number of electrons in the system. The recently developed adaptively compressed polarizability operator (ACP) formulation [L. Lin, Z. Xu and L. Ying, Multiscale Model. Simul. 2017] reduces this complexity to $\mathcal{O}(N_e^3)$ for the first time in the context of phonon calculations with a large basis set, and demonstrates its effectiveness for model problems. In this paper, we improve the performance of the ACP formulation by splitting the polarizability into a near-singular component that is statically compressed and a smooth component that is adaptively compressed. The new split representation maintains the $\mathcal{O}(N_e^3)$ complexity and accelerates nearly all components of the ACP formulation, including the Chebyshev interpolation of energy levels, the iterative solution of the Sternheimer equations, and the convergence of the Dyson equations. For the simulation of real materials, we discuss how to incorporate nonlocal pseudopotentials and finite temperature effects. We demonstrate the effectiveness of our method using a one-dimensional model problem in the insulating and metallic regimes, as well as its accuracy for real molecules and solids.
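
    As a concrete picture of one component named above, the sketch below solves Sternheimer-type equations for a toy one-dimensional tight-binding Hamiltonian: for each occupied orbital, the first-order response is obtained from a linear system projected onto the unoccupied subspace. The model, the perturbation and all parameters are invented for illustration; none of the ACP machinery (compression, Chebyshev interpolation, Dyson iteration) is reproduced here.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        # Toy 1D tight-binding model (all numbers invented for illustration)
        rng = np.random.default_rng(0)
        n, n_occ = 64, 32
        H = -np.eye(n, k=1) - np.eye(n, k=-1) + np.diag(rng.uniform(0.0, 0.01, n))
        eps, psi = np.linalg.eigh(H)
        occ = psi[:, :n_occ]                       # occupied orbitals
        Q = np.eye(n) - occ @ occ.T                # projector onto the unoccupied subspace

        dV = np.diag(np.linspace(-0.05, 0.05, n))  # sample perturbing potential

        # Sternheimer equations (H - eps_i) dpsi_i = -Q dV psi_i, solved in the
        # unoccupied subspace; with a spectral gap the projected operator is
        # positive definite there, so plain conjugate gradients applies.
        dpsi = np.zeros_like(occ)
        for i in range(n_occ):
            rhs = -Q @ (dV @ occ[:, i])
            op = LinearOperator((n, n),
                                matvec=lambda v, e=eps[i]: Q @ (H @ (Q @ v)) - e * (Q @ v))
            sol, info = cg(op, rhs)
            assert info == 0                       # converged
            dpsi[:, i] = Q @ sol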

    On the conformity of strong, regularized, embedded and smeared discontinuity approaches for the modeling of localized failure in solids

    Once strain localization occurs in softening solids, inelastic loading behavior is restricted within a narrow band while the bulk unloads elastically. Accordingly, localized failure in solids can be approached by embedding or smearing a traction-based inelastic discontinuity (band) within an (equivalent) elastic matrix along a specific orientation. In this context, the conformity of the strong/regularized and embedded/smeared discontinuity approaches is investigated with regard to the strategies dealing with the kinematics and statics. On the one hand, the traction continuity condition imposed in weak form results in the strong and regularized discontinuity approaches, with respect to the approximation of displacement and strain discontinuities. In addition to the elastic bulk, consistent plastic-damage cohesive models for the discontinuities are established. The conformity between the strong discontinuity approach and its regularized counterpart is shown through a fracture energy analysis. On the other hand, the traction continuity condition can also be enforced pointwise in strong form so that the standard principle of virtual work applies. In this case, the static constraint resulting from traction continuity can be used to eliminate the kinematic variable associated with the discontinuity (band) at the material level. This strategy leads to embedded and smeared discontinuity models for the overall weakened solid, which can also be cast into the elastoplastic degradation framework with a different kinematic decomposition. Being equivalent to the kinematic constraint guaranteeing stress continuity upon strain localization, Mohr’s maximization postulate is adopted for the determination of the discontinuity orientation. Closed-form results are presented in plane stress conditions, with the classical Rankine, Mohr–Coulomb, von Mises and Drucker–Prager criteria as illustrative examples. The orientation of the discontinuity (band) and the stress-based failure criteria consistent with the given traction-based counterparts are derived. Finally, a generic failure criterion of either elliptic, parabolic or hyperbolic type, appropriate for the modeling of mixed-mode failure, is analyzed in a unified manner. Furthermore, a novel method is proposed to calibrate the involved mesoscopic parameters from available macroscopic test data, which is then validated against Willam’s numerical test.
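
    As a small numerical illustration of Mohr’s maximization postulate invoked above, the sketch below scans candidate plane orientations in plane stress and selects the one maximizing a traction-based failure function. The criterion forms and the stress state are simplified assumptions for illustration, not the paper’s calibrated models.

        import numpy as np

        def tractions(sig, th):
            n = np.array([np.cos(th), np.sin(th)])   # candidate discontinuity normal
            m = np.array([-np.sin(th), np.cos(th)])  # in-plane tangent
            t = sig @ n                              # traction vector on the plane
            return t @ n, t @ m                      # normal / tangential components

        def mohr_orientation(sig, criterion, samples=3601):
            """Brute-force Mohr maximization: the angle of the discontinuity
            normal that maximizes the traction-based failure function."""
            thetas = np.linspace(0.0, np.pi, samples)
            return max(thetas, key=lambda th: criterion(*tractions(sig, th)))

        # Simplified traction-based criteria (assumed forms; cohesion omitted
        # since additive constants do not affect the maximizer)
        rankine = lambda tn, tt: tn                                    # max normal traction
        coulomb = lambda tn, tt, phi=np.radians(30): abs(tt) + np.tan(phi) * tn

        sig = np.array([[10.0, 3.0], [3.0, 2.0]])    # made-up plane-stress state
        print(np.degrees(mohr_orientation(sig, rankine)))   # major principal direction
        print(np.degrees(mohr_orientation(sig, coulomb)))   # shifted by the friction angle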

    Computing the density of states for optical spectra by low-rank and QTT tensor approximation

    In this paper, we introduce a new interpolation scheme to approximate the density of states (DOS) for a class of rank-structured matrices, with application to the Tamm-Dancoff approximation (TDA) of the Bethe-Salpeter equation (BSE). The presented approach for approximating the DOS is based on two main techniques. First, we propose an economical method for calculating the traces of parametric matrix resolvents at interpolation points by taking advantage of the block-diagonal plus low-rank matrix structure described in [6, 3] for the BSE/TDA problem. Second, we show that a regularized or smoothed DOS discretized on a fine grid of size $N$ can be accurately represented by a low-rank quantized tensor train (QTT) tensor that can be determined through a least-squares fitting procedure. The latter provides good approximation properties for strongly oscillating DOS functions with multiple gaps and requires asymptotically only $O(\log N)$ function calls instead of the full grid size $N$. This approach allows us to overcome the computational difficulties of traditional schemes by avoiding both stochastic sampling and interpolation by problem-independent functions such as polynomials. Numerical tests indicate that the QTT approach yields accurate recovery of the DOS associated with problems that contain relatively large spectral gaps. The QTT tensor rank depends only weakly on the size of the molecular system, which paves the way for treating large-scale spectral problems.
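
    For intuition, the regularized DOS in question can be expressed through resolvent traces, $\phi_\eta(E) = -\tfrac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\,[(E + i\eta)I - H]^{-1}$, i.e. a Lorentzian-broadened eigenvalue histogram. The dense toy sketch below evaluates this definition directly on a synthetic matrix; the paper's actual contributions, economical traces exploiting block-diagonal plus low-rank structure and a QTT fit needing only $O(\log N)$ evaluations, are not reproduced here.

        import numpy as np

        # Lorentzian-regularized DOS via resolvent traces (dense toy version):
        #   dos(E) = -(1/pi) * Im Tr[ (E + i*eta) I - H ]^{-1}
        rng = np.random.default_rng(1)
        n = 200
        G = rng.standard_normal((n, n))
        H = (G + G.T) / (2 * np.sqrt(n))   # synthetic symmetric matrix, O(1) spectrum

        eta = 0.05                         # regularization (broadening) width
        grid = np.linspace(-2.0, 2.0, 400)
        I = np.eye(n)
        dos = np.empty_like(grid)
        for k, E in enumerate(grid):
            R = np.linalg.inv((E + 1j * eta) * I - H)
            dos[k] = -np.trace(R).imag / np.pi

        # Sanity check: the broadened DOS integrates to n, the number of eigenvalues
        assert abs(dos.sum() * (grid[1] - grid[0]) - n) / n < 0.1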

    Learning Models over Relational Data using Sparse Tensors and Functional Dependencies

    Integrated solutions for analytics over relational databases are of great practical importance, as they avoid the costly repeated loop data scientists have to deal with on a daily basis: select features from data residing in relational databases using feature extraction queries involving joins, projections, and aggregations; export the training dataset defined by such queries; convert this dataset into the format of an external learning tool; and train the desired model using this tool. These integrated solutions are also fertile ground for theoretically fundamental and challenging problems at the intersection of relational and statistical data models. This article introduces a unified framework for training and evaluating a class of statistical learning models over relational databases. This class includes ridge linear regression, polynomial regression, factorization machines, and principal component analysis. We show that, by synergizing key tools from database theory, such as schema information, query structure, functional dependencies, and recent advances in query evaluation algorithms, with tools from linear algebra, such as tensor and matrix operations, one can formulate relational analytics problems and design efficient (query and data) structure-aware algorithms to solve them. This theoretical development informed the design and implementation of the AC/DC system for structure-aware learning. We benchmark the performance of AC/DC against R, MADlib, libFM, and TensorFlow. For typical retail forecasting and advertisement planning applications, AC/DC can learn polynomial regression models and factorization machines with at least the same accuracy as its competitors and up to three orders of magnitude faster, whenever the competitors do not run out of memory, exceed the 24-hour timeout, or encounter internal design limitations.
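
    A tiny sketch of the structure-aware idea: models such as ridge regression need only aggregates like $X^\top X$ and $X^\top y$, and over a key join these sums factor into per-key partial aggregates, so the join output never has to be materialized. The schema and data below are made up, and the snippet conveys only the flavor of the approach, not the AC/DC system itself.

        import numpy as np
        import pandas as pd

        # Made-up schema: R(key, x1) and S(key, x2); the training set is R ⋈ S.
        R = pd.DataFrame({"key": [1, 1, 2], "x1": [1.0, 2.0, 3.0]})
        S = pd.DataFrame({"key": [1, 2, 2], "x2": [4.0, 5.0, 6.0]})

        # Per-key partial aggregates, one pass over each relation
        gR = R.groupby("key")["x1"].agg(cnt="count", s1="sum", s11=lambda c: (c**2).sum())
        gS = S.groupby("key")["x2"].agg(cnt="count", s2="sum", s22=lambda c: (c**2).sum())
        g = gR.join(gS, how="inner", lsuffix="_r", rsuffix="_s")

        # Entries of X^T X over the join: every R-tuple of a key pairs with
        # every S-tuple of the same key, so the sums factor per key.
        sum_x1x2 = (g.s1 * g.s2).sum()        # cross term
        sum_x1x1 = (g.s11 * g.cnt_s).sum()    # squared term, weighted by join fan-out
        sum_x2x2 = (g.s22 * g.cnt_r).sum()

        # Check against the materialized join
        J = R.merge(S, on="key")
        assert np.isclose(sum_x1x2, (J.x1 * J.x2).sum())
        assert np.isclose(sum_x1x1, (J.x1 ** 2).sum())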