155 research outputs found

    Structure/Function Studies of Proteins Using Linear Scaling Quantum Mechanical Methodologies

    Exploration of Reaction Pathways and Chemical Transformation Networks

    For the investigation of chemical reaction networks, the identification of all relevant intermediates and elementary reactions is mandatory. Many algorithmic approaches exist that perform such explorations efficiently and automatically. These approaches differ in their range of applicability, the completeness of the exploration, and the amount of heuristics and human intervention required. Here, we describe and compare the different approaches based on these criteria. Future directions leveraging the strengths of chemical heuristics, human interaction, and physical rigor are discussed.
    Comment: 48 pages, 4 figures
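    As a schematic illustration of the kind of exploration loop such algorithms automate, the sketch below runs a breadth-first search over toy string "molecules" with two invented reaction rules; the rules, names, and species are hypothetical and not taken from any specific exploration code:

    ```python
    from collections import deque

    def explore_network(start_species, reaction_rules, max_depth=3):
        """Breadth-first exploration: apply every rule to every known species,
        collecting new intermediates and elementary steps until exhaustion."""
        seen = set(start_species)
        steps = []                                  # (reactant, rule, product)
        queue = deque((s, 0) for s in start_species)
        while queue:
            species, depth = queue.popleft()
            if depth >= max_depth:
                continue
            for name, rule in reaction_rules.items():
                for product in rule(species):
                    steps.append((species, name, product))
                    if product not in seen:         # new intermediate found
                        seen.add(product)
                        queue.append((product, depth + 1))
        return seen, steps

    # Toy rules on string "molecules": dimerization and homolytic cleavage.
    rules = {
        "dimerize": lambda s: [s + s] if len(s) <= 2 else [],
        "cleave":   lambda s: [s[: len(s) // 2]] if len(s) >= 2 else [],
    }
    species, steps = explore_network({"A"}, rules)
    ```

    Real exploration codes replace the string rules with electronic-structure calculations and filter candidate steps by energy, which is where the trade-offs between completeness, heuristics, and human intervention enter.
    
    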

    A Sparse SCF algorithm and its parallel implementation: Application to DFTB

    We present an algorithm and its parallel implementation for solving the self-consistent field problem encountered in Hartree-Fock and density functional theory. The algorithm exploits the sparsity of matrices through the use of local molecular orbitals, and the implementation makes efficient use of modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight-binding (DFTB) method, for which most of the computational time is spent in linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single-point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations on intermediate-size systems (1,000-100,000 atoms) are also strongly accelerated and run efficiently on standard servers, and (iii) the error in the total energy due to the cut-off in the molecular orbital coefficients can be controlled so that it remains smaller than the SCF convergence criterion.
    Comment: 13 pages, 11 figures
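    The coefficient cut-off of point (iii) can be sketched as follows; the matrix sizes, cut-off value, and toy data are all hypothetical illustrations, not the paper's implementation:

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix

    def truncate_mo_coefficients(C, cutoff):
        """Zero out MO coefficients below `cutoff` in magnitude and store the
        result sparsely; localized orbitals make most entries negligible."""
        return csr_matrix(np.where(np.abs(C) >= cutoff, C, 0.0))

    def density_matrix(C_occ):
        """Closed-shell density matrix P = 2 C_occ C_occ^T."""
        return 2.0 * (C_occ @ C_occ.T)

    # Toy "localized" coefficients: small random entries plus a dominant
    # diagonal block, mimicking local molecular orbitals.
    rng = np.random.default_rng(0)
    C = rng.normal(scale=0.05, size=(200, 50))
    C[np.arange(50), np.arange(50)] += 1.0

    C_sparse = truncate_mo_coefficients(C, cutoff=0.05)
    sparsity = 1.0 - C_sparse.nnz / C.size

    # The truncation error propagates to the density matrix in a
    # controllable way: shrinking the cutoff shrinks the error.
    err = np.abs(density_matrix(C) - density_matrix(C_sparse.toarray())).max()
    ```

    In an actual sparse SCF, the truncated coefficients would feed back into the Fock build each iteration, with the cut-off chosen so the resulting energy error stays below the SCF convergence threshold.
    
    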

    Diagrammatic Coupled Cluster Monte Carlo

    We propose a modified coupled cluster Monte Carlo algorithm that stochastically samples connected terms within the truncated Baker--Campbell--Hausdorff expansion of the similarity-transformed Hamiltonian by constructing coupled cluster diagrams on the fly. Our new approach, diagCCMC, allows propagation to be performed using only the connected components of the similarity-transformed Hamiltonian, greatly reducing the memory cost associated with the stochastic solution of the coupled cluster equations. We show that for perfectly local, noninteracting systems, diagCCMC is able to represent the coupled cluster wavefunction with a memory cost that scales linearly with system size. The favorable memory cost is observed under the sole assumption of fixed stochastic granularity and is valid for arbitrary levels of coupled cluster theory. A significant reduction in memory cost is also shown to appear smoothly upon dissociation of a finite chain of helium atoms. The approach is further shown not to break down in the presence of strong correlation, through the example of a stretched nitrogen molecule. Our methodology moves the theoretical basis of coupled cluster Monte Carlo closer to deterministic approaches.
    Comment: 31 pages, 6 figures
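    The memory argument rests on sampling terms stochastically rather than storing them all. A generic toy version of such term sampling (not the diagCCMC algorithm itself) is an importance-sampled sum estimator, where only the currently sampled term needs to be instantiated:

    ```python
    import bisect
    import random

    def stochastic_sum(terms, weights, n_samples, seed=1):
        """Unbiased Monte Carlo estimate of sum(terms): draw term i with
        probability p_i proportional to |weights[i]| and average terms[i]/p_i,
        so only one term is ever held at a time."""
        rng = random.Random(seed)
        norm = sum(abs(w) for w in weights)
        probs = [abs(w) / norm for w in weights]
        cum, running = [], 0.0
        for p in probs:                      # cumulative distribution table
            running += p
            cum.append(running)
        acc = 0.0
        for _ in range(n_samples):
            i = min(bisect.bisect_left(cum, rng.random()), len(cum) - 1)
            acc += terms[i] / probs[i]
        return acc / n_samples

    # With p_i proportional to the term itself (all positive here), each
    # sample contributes exactly sum(terms): a zero-variance estimator.
    terms = [1.0 / (k + 1) ** 2 for k in range(1000)]
    exact = sum(terms)
    estimate = stochastic_sum(terms, terms, n_samples=1000)
    ```

    In diagCCMC the sampled objects are coupled cluster diagrams rather than scalar terms, but the same principle underlies the linear-scaling memory footprint.
    
    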

    Molecular propensity as a driver for explorative reactivity studies

    Quantum chemical studies of reactivity involve calculations on a large number of molecular structures and comparison of their energies. The very set-up of these calculations already limits the scope of the results, because several system-specific variables, such as charge and spin, must be fixed before the calculation. For a reliable exploration of reaction mechanisms, a considerable number of calculations with varying global parameters must therefore be carried out; otherwise, important facts about the reactivity of the system under consideration can go undetected. For example, one could miss crossings of potential energy surfaces for different spin states, or fail to notice that a molecule is prone to oxidation. Here, we introduce the concept of molecular propensity to account for the predisposition of a molecular system to react across different electronic states in certain nuclear configurations. Within our real-time quantum chemistry framework, we developed an algorithm that alerts us to such a propensity of the system under consideration.
    Comment: 10 pages, 7 figures
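    A minimal sketch of one such alert, assuming two precomputed model potential energy curves for different spin states; the surfaces, geometries, and threshold below are invented for illustration and are not the authors' algorithm:

    ```python
    def flag_crossings(coords, e_state_a, e_state_b, threshold=0.05):
        """Return nuclear configurations where two electronic states come
        within `threshold` of each other: candidate surface crossings where
        the system may be prone to react on the other surface."""
        return [x for x, ea, eb in zip(coords, e_state_a, e_state_b)
                if abs(ea - eb) <= threshold]

    # Toy model surfaces along a single coordinate, crossing near x = 1.0.
    xs = [0.1 * i for i in range(30)]
    singlet = [(x - 0.5) ** 2 for x in xs]
    triplet = [0.5 * (x - 1.5) ** 2 + 0.1 for x in xs]

    alerts = flag_crossings(xs, singlet, triplet, threshold=0.05)
    ```

    A real-time framework would evaluate such a criterion on the fly during structure manipulation, raising the alert the moment the gap between electronic states narrows.
    
    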

    Speed and accuracy: Having your cake and eating it too

    Since the first ab initio methods were developed, the ultimate goal of quantum chemistry has been to provide insights, not readily accessible through experiment, into chemical phenomena. Over the years, two different paths to this end have been taken. The first provides as accurate a description of relatively small systems as modern computer hardware will allow. The second follows the desire to simulate systems of physically relevant sizes while sacrificing a certain level of accuracy. The merging of these two paths has allowed for the accurate modeling of large molecular systems through the use of novel theoretical methods. The largest barrier to accurate calculations on large systems has been the computational cost of many modern theoretical methods: while they are capable of providing the desired accuracy, their prohibitive requirements can limit system sizes to tens of atoms. By decomposing large chemical systems into more computationally tractable pieces, fragmentation methods can lower this barrier and allow for highly accurate descriptions of large molecular systems such as proteins, bulk-phase solutions, polymers, and nanoscale systems.

    Fragmentation Methods: A Route to Accurate Calculations on Large Systems

    Theoretical chemists have always striven to perform quantum mechanics (QM) calculations on larger and larger molecules and molecular systems, as well as condensed-phase species, which are frequently much larger than the current state of the art would suggest is possible. The desire to study species (with acceptable accuracy) that are larger than appears feasible has naturally led to the development of novel methods, including semiempirical approaches, reduced-scaling methods, and fragmentation methods. The focus of the present review is on fragmentation methods, in which a large molecule or molecular system is made more computationally tractable by explicitly considering only one part (fragment) of the whole in any particular calculation. If one can divide a species of interest into fragments, employ some level of ab initio QM to calculate the wave function, energy, and properties of each fragment, and then combine the results from the fragment calculations to predict the same properties for the whole, the possibility exists that the accuracy of the outcome can approach that which would be obtained from a full (nonfragmented) calculation. It is this goal that drives the development of fragmentation methods.
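    The combine step described above is, in many fragmentation methods, a many-body expansion truncated at low order: E ≈ Σᵢ Eᵢ + Σᵢ<ⱼ (Eᵢⱼ − Eᵢ − Eⱼ). A toy two-body version is sketched below, with a hypothetical pairwise "energy" standing in for an actual QM calculation:

    ```python
    from itertools import combinations

    def fragment_energy(fragments, energy_fn):
        """Two-body many-body expansion:
        E ≈ sum_i E_i + sum_{i<j} (E_ij - E_i - E_j)."""
        e1 = {f: energy_fn(f) for f in fragments}       # monomer energies
        total = sum(e1.values())
        for a, b in combinations(fragments, 2):          # dimer corrections
            total += energy_fn(a + b) - e1[a] - e1[b]
        return total

    def pair_energy(atoms):
        """Toy pairwise-additive potential over 1-D atom positions
        (a stand-in for a real QM energy, where the expansion would
        only be approximate)."""
        return sum(-1.0 / abs(x - y) for x, y in combinations(atoms, 2))

    # Three single-atom "fragments" on a line; for a pairwise-additive
    # potential the two-body expansion reproduces the full energy exactly.
    chain = [(0.0,), (3.0,), (7.0,)]
    exact = pair_energy(tuple(x for f in chain for x in f))
    approx = fragment_energy(chain, pair_energy)
    ```

    With a genuine QM energy, three-body and higher terms no longer vanish, and the accuracy of a fragmentation method hinges on how quickly those corrections decay with fragment separation.
    
    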