
    Highly Accurate Random Phase Approximation Methods With Linear Time Complexity

    One of the key challenges of electronic structure theory is to find formulations that compute electronic ground-state energies with high accuracy while remaining applicable to a wide range of chemical problems. For systems beyond the few-atom scale, computations that achieve higher accuracy than the so-called double-hybrid density functional approximations often become prohibitively expensive. Here, the random phase approximation, which is known to yield such higher-accuracy results, has been developed from a theory applicable only to molecules on the scale of tens of atoms into a highly accurate and widely applicable theory. To this end, a mathematical understanding has been developed that, without changing the computational complexity, makes it possible to eliminate the error introduced by the resolution-of-the-identity approximation used in the previous formulation. Furthermore, this work presents a new formulation of the random phase approximation for molecules that achieves linear scaling of compute time with molecular size, thereby expanding the realm of molecules that can be treated at this level of theory to up to a thousand atoms on a simple desktop computer. Finally, the theory has been matured to allow the use of even extensive basis sets without drastically increasing runtimes. Overall, the presented theory is at least as accurate as, and even faster than, the original formulation for all molecules for which compute time is significant, and it opens new possibilities for the highly accurate description of large quantum chemical systems.
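    For orientation, the quantities the abstract refers to can be written in their standard textbook form; these are the usual adiabatic-connection fluctuation-dissipation expression for the RPA correlation energy and the usual resolution-of-the-identity factorization of the two-electron integrals, shown only as a sketch of what is being computed, not as the thesis' specific working equations:

    E_c^{\mathrm{RPA}} = \frac{1}{2\pi}\int_0^{\infty}\mathrm{Tr}\!\left[\ln\!\left(\mathbf{1}-\boldsymbol{\chi}_0(i\omega)\,\mathbf{v}\right)+\boldsymbol{\chi}_0(i\omega)\,\mathbf{v}\right]\mathrm{d}\omega,
    \qquad
    (\mu\nu|\lambda\sigma) \;\approx\; \sum_{P,Q}(\mu\nu|P)\,[\mathbf{V}^{-1}]_{PQ}\,(Q|\lambda\sigma)

    Here \chi_0 is the non-interacting response function, \mathbf{v} the Coulomb kernel, and P, Q label auxiliary basis functions; the second relation is the resolution-of-the-identity approximation whose error the work eliminates.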

    Generalized Additive Models for Gigadata: Modeling the U.K. Black Smoke Network Daily Data

    We develop scalable methods for fitting penalized regression spline based generalized additive models with on the order of 10⁴ coefficients to up to 10⁸ data points. Computational feasibility rests on: (i) a new iteration scheme for estimation of model coefficients and smoothing parameters, avoiding poorly scaling matrix operations; (ii) parallelization of the iteration's pivoted block Cholesky and basic matrix operations; (iii) the marginal discretization of model covariates to reduce memory footprint, with efficient scalable methods for computing the required crossproducts directly from the discrete representation. Marginal discretization enables much finer discretization than joint discretization would permit. We were motivated by the need to model four decades' worth of daily particulate data from the U.K. Black Smoke and Sulphur Dioxide Monitoring Network. Although reduced in size recently, over 2000 stations have at some time been part of the network, resulting in some 10 million measurements. Modeling at a daily scale is desirable for accurate trend estimation and mapping, and to provide daily exposure estimates for epidemiological cohort studies. Because of the dataset size, previous work has focused on modeling time- or space-averaged pollution levels, but this is unsatisfactory from a health perspective, since it is often acute exposure, locally and on the time scale of days, that matters most in driving adverse health outcomes. If computed by conventional means, our black smoke model would require half a terabyte of storage just for the model matrix, whereas we are able to compute with it on a desktop workstation. The best previously available reduced-memory-footprint method would have required three orders of magnitude more computing time than our new method. Supplementary materials for this article are available online.
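    The memory saving from marginal discretization can be sketched as follows. This is a minimal illustration of the idea named in the abstract, not the paper's actual algorithm: the names discretize and weighted_crossprod are hypothetical, and a plain polynomial basis stands in for the penalized regression spline bases. Because row i of the full model matrix is just a copy of row k[i] of a small matrix evaluated at the unique discretized covariate values, the n-row matrix is never formed and the weighted crossproduct is accumulated from per-bin weight totals.

    import numpy as np

    def discretize(x, m):
        # Bin covariate x into m marginal values; return bin midpoints and a bin index per datum.
        edges = np.quantile(x, np.linspace(0.0, 1.0, m + 1))
        k = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, m - 1)
        return 0.5 * (edges[:-1] + edges[1:]), k

    def weighted_crossprod(Xbar, k, w):
        # X'WX where row i of the (never formed) full model matrix X equals Xbar[k[i]].
        wbar = np.bincount(k, weights=w, minlength=Xbar.shape[0])  # total weight per bin
        return Xbar.T @ (wbar[:, None] * Xbar)                     # work scales with m, not n

    rng = np.random.default_rng(0)
    x = rng.normal(size=1_000_000)                 # 10^6 data points
    w = rng.uniform(0.5, 1.5, size=x.size)         # working weights from the fitting iteration
    xbar, k = discretize(x, 100)                   # only 100 unique covariate values
    Xbar = np.vander(xbar, 4)                      # basis evaluated at the unique values only
    print(weighted_crossprod(Xbar, k, w).shape)    # (4, 4)

    The crossproduct is exact for the discretized covariate because the full matrix's rows repeat: summing w_i over each bin before forming the product gives the same result as working with all 10⁶ rows.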