ASCR/HEP Exascale Requirements Review Report
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of demand on the 2025 timescale is at least two orders of magnitude greater than currently available capacity, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex datasets. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To make the best use of ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) the ability to map workflows onto HPC resources, c) the ability of ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Comment: 77 pages, 13 figures; draft report, subject to further revision
The OpenMolcas Web: A Community-Driven Approach to Advancing Computational Chemistry
The developments of the open-source OpenMolcas chemistry software environment since spring 2020 are described, with a focus on novel functionalities accessible in the stable branch of the package or via interfaces with other packages. These developments span a wide range of topics in computational chemistry and are presented in thematic sections: electronic structure theory, electronic spectroscopy simulations, analytic gradients and molecular structure optimizations, ab initio molecular dynamics, and other new features. This report offers an overview of the chemical phenomena and processes OpenMolcas can address, while showing that OpenMolcas is an attractive platform for state-of-the-art atomistic computer simulations.
Comparison of Computational Strategies for the Calculation of the Electronic Coupling in Intermolecular Energy and Electron Transport Processes
Electronic couplings in intermolecular electron and energy transfer processes calculated by six different existing computational techniques are compared to nonorthogonal configuration interaction for fragments (NOCI-F) results. The paper addresses the calculation of the electronic coupling in diketopyrrolopyrrole, tetracene, 5,5′-difluoroindigo, and benzene–Cl for hole and electron transport, as well as for the local exciton and singlet fission couplings. NOCI-F provides a rigorous computational scheme for calculating these couplings, but its computational cost is rather high. The ab initio Frenkel–Davydov (AIFD), dimer projection (DIPRO), transition dipole moment coupling, Michl–Smith, effective Hamiltonian, and Mulliken–Hush approaches considered here are computationally less demanding. The comparison shows that the NOCI-F couplings for hole and electron transport are predicted rather accurately by the more approximate schemes, but that the NOCI-F exciton transfer and singlet fission couplings are more difficult to reproduce.
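For orientation, the simplest of the approximate schemes named above, the transition dipole moment (point-dipole) coupling, can be evaluated directly from the fragments' transition dipoles and their separation. The sketch below implements the generic textbook point-dipole formula in atomic units; it is not code from the paper, and the dipole vectors and geometry in the example are hypothetical.

```python
import numpy as np

def point_dipole_coupling(mu_d, mu_a, r_d, r_a):
    """Point-dipole (Forster-type) exciton coupling in atomic units.

    mu_d, mu_a : transition dipole moments (a.u.) of donor and acceptor
    r_d, r_a   : positions (bohr) of the donor and acceptor dipole centers

    V = [mu_d . mu_a - 3 (mu_d . R_hat)(mu_a . R_hat)] / R^3
    """
    mu_d = np.asarray(mu_d, float)
    mu_a = np.asarray(mu_a, float)
    r = np.asarray(r_a, float) - np.asarray(r_d, float)
    dist = np.linalg.norm(r)
    r_hat = r / dist
    return (mu_d @ mu_a - 3.0 * (mu_d @ r_hat) * (mu_a @ r_hat)) / dist**3

# Hypothetical example: two parallel 2 a.u. dipoles stacked 8 bohr apart.
if __name__ == "__main__":
    V = point_dipole_coupling([2.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                              [0.0, 0.0, 0.0], [0.0, 0.0, 8.0])
    print(f"Coupling: {V:.4f} hartree ({V * 27.2114:.3f} eV)")
```

Because the point-dipole formula ignores orbital overlap and short-range exchange contributions, it is plausible that schemes of this type struggle most for the exciton and singlet fission couplings, consistent with the comparison reported above.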
Molecular and Dissociative Adsorption of Water on (TiO2)n Clusters, n = 1–4
The low energy structures of the (TiO2)n(H2O)m (n ≤ 4, m ≤ 2n) and (TiO2)8(H2O)m (m = 3, 7, 8) clusters were predicted using a global geometry optimization approach, with a number of new lowest energy isomers being found. Water can molecularly or dissociatively adsorb on pure and hydrated TiO2 clusters. Dissociative adsorption is the dominant reaction for the first two H2O adsorption reactions for n = 1, 2, and 4, for the first three H2O adsorption reactions for n = 3, and for the first four H2O adsorption reactions for n = 8. As more H2O molecules are added to the hydrated (TiO2)n cluster, dissociative adsorption becomes less exothermic as all the Ti centers become 4-coordinate. Two types of bonds can be formed between the molecularly adsorbed water and TiO2 clusters: a Lewis acid–base Ti–O(H2) bond or an O···H hydrogen bond. The coupled cluster CCSD(T) results show that at 0 K the H2O adsorption energy at a 4-coordinate Ti center is ∼15 kcal/mol for the Lewis acid–base molecular adsorption and ∼7 kcal/mol for the H-bond molecular adsorption, in comparison to 8–10 kcal/mol for the dissociative adsorption. The cluster size- and geometry-independent dehydration reaction energy, E_D, for the general reaction 2(–TiOH) → –TiOTi– + H2O at 4-coordinate Ti centers was estimated from the aggregation reaction of nTi(OH)4 to form the monocyclic ring cluster (TiO3H2)n + nH2O. E_D is estimated to be −8 kcal/mol, showing that intramolecular and intermolecular dehydration reactions are intrinsically thermodynamically allowed for the hydrated (TiO2)n clusters with all of the Ti centers 4-coordinate, although they can be hindered by the cluster geometry changes such processes cause. Bending force constants for the TiOTi and OTiO bonds are determined to be 7.4 and 56.0 kcal/(mol·rad²), respectively. Infrared vibrational spectra were calculated using density functional theory, and the new bands appearing upon water adsorption were assigned.
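To make the estimation step explicit: reading the scheme in the abstract literally, the aggregation of n Ti(OH)4 monomers into the monocyclic ring releases n waters, so the per-step dehydration energy follows by dividing the total aggregation energy by n. A minimal worked rendering of that bookkeeping follows; the symbol ΔE_agg is introduced here for illustration and does not appear in the abstract.

```latex
% Aggregation reaction used to estimate the dehydration energy E_D:
\[
  n\,\mathrm{Ti(OH)_4} \;\longrightarrow\; (\mathrm{TiO_3H_2})_n \;+\; n\,\mathrm{H_2O},
  \qquad \Delta E_{\mathrm{agg}}
\]
% Each of the n condensation steps 2(-TiOH) -> -TiOTi- + H2O is taken as
% energetically equivalent, giving the size- and geometry-independent estimate
\[
  E_D \;\approx\; \frac{\Delta E_{\mathrm{agg}}}{n} \;\approx\; -8\ \mathrm{kcal/mol}.
\]
```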
Developments in computer architecture and the birth and growth of computational chemistry
It goes almost without saying that the impressive development of computational chemistry in the past 50-60 years is a direct consequence of the even more impressive developments in computer hardware and software in that same period. However, this development also required the vision and skills of pioneering scientists who saw the new possibilities early, such as Roothaan and students at the University of Chicago, Slater's group at MIT, and the Boys group at Cambridge, United Kingdom. Yet we might recall in this context that it was a chemical physics problem, the calculation of the dielectric constant of helium, that inspired John Atanasoff, working on his PhD thesis in Madison, Wisconsin in 1930, to think about designing an electronic calculating machine. About 10 years later, at Ames, Iowa, with the help of his student Clifford Berry, he succeeded in building the first electronic computer: the ABC, the Atanasoff-Berry computer. Because of war circumstances, this invention was never patented, and only in October 1973 was it legally settled that the ABC, and not the ENIAC, was the first electronic computer ever built. A fairly complete account of this interesting episode in computer history can be found at www.columbia.edu/~td2177/JVAtanasoff/JVAtanasoff.html
Bringing large-scale multiple genome analysis one step closer: ScalaBLAST and beyond
Genome sequence comparisons of exponentially growing data sets form the foundation for the comparative analysis tools provided by community biological data resources such as the Integrated Microbial Genomes (IMG) system at the Joint Genome Institute (JGI). We present an example of how ScalaBLAST, a high-throughput sequence analysis program, harnesses increasingly critical high-performance computing to perform sequence analysis, a critical component of maintaining a state-of-the-art sequence data repository.

The Integrated Microbial Genomes (IMG) system [1] is a data management and analysis platform for microbial genomes hosted at the JGI. IMG contains both draft and complete JGI genomes integrated with other publicly available microbial genomes of all three domains of life. IMG provides tools and viewers for interactive analysis of genomes, genes, and functions, individually or in a comparative context. Most of these tools are based on pre-computed pairwise sequence similarities involving millions of genes. These computations are becoming prohibitively time consuming with the rapid increase in the number of newly sequenced genomes incorporated into IMG and the need to refresh the content of IMG regularly to reflect changes in the annotations of existing genomes. Thus, building IMG 2.0 (released on December 1st, 2006) entailed reloading from NCBI's RefSeq all the genomes in the previous version of IMG (IMG 1.6, as of September 1st, 2006) together with 1,541 new public microbial, viral, and eukaryal genomes, bringing the total of IMG genomes to 2,301. A critical part of building IMG 2.0 involved using PNNL's ScalaBLAST software to compute pairwise similarities for over 2.2 million genes in under 26 hours on 1,000 processors, illustrating the impact that new-generation bioinformatics tools are poised to make in biology.

The BLAST algorithm [2, 3] is a familiar bioinformatics application for computing sequence similarity and has become a workhorse in large-scale genomics projects. The rapid growth of genome resources such as IMG cannot be sustained without more powerful tools such as ScalaBLAST that use large-scale computing resources more effectively to perform the core BLAST calculations. ScalaBLAST is a high-performance computing application designed to deliver high-throughput BLAST results on high-end supercomputers. Other parallel sequence comparison applications have been developed [4-6]; however, scaling problems generally prevent these applications from being used for very large searches. ScalaBLAST [7] is the first BLAST application to be highly scalable both in the size of the database and in the number of processors, on high-end hardware and on commodity clusters. ScalaBLAST achieves high throughput by parsing a large collection of query sequences into independent subgroups. These smaller tasks are assigned to independent process groups. Efficient scaling is achieved by sharing (transparently to the user) a single copy of the target database across all processors using the Global Array toolkit [8, 9], which provides a software implementation of a shared-memory interface. ScalaBLAST was initially deployed on the 1,960-processor MPP2 cluster in the William R. Wiley Environmental Molecular Sciences Laboratory at Pacific Northwest National Laboratory, and has since been ported to a variety of Linux-based clusters and shared-memory architectures, including SGI Altix, AMD Opteron, and Intel Xeon-based clusters. Future targets include IBM BlueGene, Cray, and SGI Altix XE architectures.

The importance of performing high-throughput calculations rapidly lies in the rate of growth of sequence data. For a genome sequencing center to provide multiple-genome comparison capabilities, it must keep pace with the exponentially growing collection of protein data, both from its own genomes and from public genome information. As sequence data continues to grow exponentially, this challenge will only increase with time. Solving the BLAST throughput challenge for centralized data resources like IMG has the potential to unlock the power of emerging analysis methods which, until recently, were limited by the availability of multiple-genome comparison data. Fig. 1 illustrates how the run time achieved by efficient scaling in ScalaBLAST enabled the IMG all-vs-all BLAST calculations to complete in roughly one day. Note that to keep pace with the growing IMG database, we will have to double the number of processors used in these calculations during the upcoming year. Grid-based solutions for improving the throughput of BLAST searches have become a popular and attractive option for some centers. The Institute for Genomic Research (http://www.tigr.org/), for instance, has implemented a grid-based BLAST tool allowing users to submit requests to be farmed out to available computers on an on-demand basis.
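The scheduling idea described above (split the query collection into independent subgroups, fan them out to worker groups, and share one read-only copy of the target database) can be sketched in a few lines. The following is a minimal single-node Python analogue, not ScalaBLAST itself: the `mock_blast` scoring function and the group count are hypothetical stand-ins, and real deployments use MPI process groups and the Global Array toolkit rather than `multiprocessing`.

```python
import multiprocessing as mp
from itertools import islice

# Hypothetical stand-in for a real BLAST search against one target sequence.
def mock_blast(query: str, target: str) -> int:
    # Score = length of the longest common prefix (illustration only).
    return next((i for i, (a, b) in enumerate(zip(query, target)) if a != b),
                min(len(query), len(target)))

# One read-only copy of the target database, inherited by every worker:
# the single-node analogue of sharing the database via Global Arrays.
TARGET_DB = ["ACGTACGT", "ACGGTTAA", "TTGACCAA", "ACGTTTTT"]

def process_subgroup(queries):
    """Each worker group handles its query subgroup independently."""
    return [(q, max(range(len(TARGET_DB)),
                    key=lambda i: mock_blast(q, TARGET_DB[i])))
            for q in queries]

def chunk(seq, n_groups):
    """Parse the query collection into n_groups independent subgroups."""
    k, it = -(-len(seq) // n_groups), iter(seq)
    return [batch for batch in iter(lambda: list(islice(it, k)), [])]

if __name__ == "__main__":
    queries = ["ACGTAAAA", "TTGACGGT", "ACGGTTAA", "ACGTACGA"]
    with mp.Pool(processes=2) as pool:            # two "process groups"
        results = pool.map(process_subgroup, chunk(queries, 2))
    for group in results:
        for query, best in group:
            print(f"{query} -> best match: {TARGET_DB[best]}")
```

Because the subgroups share no state, throughput scales with the number of worker groups until the shared database becomes the bottleneck, which is precisely the pressure the Global Array-based sharing described above is meant to relieve.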