Locality Error Free Effective Core Potentials for 3d Transition Metal Elements Developed for the Diffusion Monte Carlo Method
Pseudopotential locality errors have hampered the applications of the
diffusion Monte Carlo (DMC) method in materials containing transition metals,
in particular oxides. We have developed locality error free effective core
potentials, pseudo-Hamiltonians, for transition metals ranging from Cr to Zn.
We have modified a procedure published by some of us in [M. C. Bennett et al.,
JCTC 18 (2022)]. We carefully optimized our pseudo-Hamiltonians and achieved
transferability errors comparable to the best semilocal pseudopotentials used
with DMC, but without incurring locality errors. Our pseudo-Hamiltonian set
(named OPH23) has the potential to significantly improve the accuracy of
many-body first-principles calculations in fundamental-science research on
complex materials involving transition metals.
QMCPACK: Advances in the development, efficiency, and application of auxiliary field and real-space variational and diffusion Quantum Monte Carlo
We review recent advances in the capabilities of the open source ab initio
Quantum Monte Carlo (QMC) package QMCPACK and the workflow tool Nexus used for
greater efficiency and reproducibility. The auxiliary field QMC (AFQMC)
implementation has been greatly expanded to include k-point symmetries,
tensor-hypercontraction, and accelerated graphical processing unit (GPU)
support. These scaling and memory reductions greatly increase the number of
orbitals that can practically be included in AFQMC calculations, increasing
accuracy. Advances in real space methods include techniques for accurate
computation of band gaps and for systematically improving the nodal surface of
ground state wavefunctions. Results of these calculations can be used to
validate application of more approximate electronic structure methods including
GW and density functional based techniques. To provide an improved foundation
for these calculations we utilize a new set of correlation-consistent effective
core potentials (pseudopotentials) that are more accurate than previous sets;
these can also be applied in quantum-chemical and other many-body applications,
not only QMC. These advances increase the efficiency, accuracy, and range of
properties that can be studied in both molecules and materials with QMC and
QMCPACK.
Software engineering to sustain a high-performance computing scientific application: QMCPACK
We provide an overview of the software engineering efforts and their impact
in QMCPACK, a production-level ab-initio Quantum Monte Carlo open-source code
targeting high-performance computing (HPC) systems. Aspects included are: (i)
strategic expansion of continuous integration (CI) targeting CPUs, using GitHub
Actions runners, and NVIDIA and AMD GPUs in pre-exascale systems, using
self-hosted hardware; (ii) incremental reduction of memory leaks using
sanitizers; (iii) incorporation of Docker containers for CI and
reproducibility; and (iv) refactoring efforts to improve maintainability,
testing coverage, and memory lifetime management. We quantify the value of
these improvements by providing metrics to illustrate the shift towards a
predictive, rather than reactive, sustainable maintenance approach. Our goal,
in documenting the impact of these efforts on QMCPACK, is to contribute to the
body of knowledge on the importance of research software engineering (RSE) for
the sustainability of community HPC codes and scientific discovery at scale.Comment: Accepted at the first US-RSE Conference, USRSE2023,
https://us-rse.org/usrse23/, 8 pages, 3 figures, 4 table
Transductive Learning for Spatial Data Classification
Learning classifiers of spatial data presents several issues, such as the heterogeneity of spatial objects, the implicit definition of spatial relationships among objects, spatial autocorrelation, and the abundance of unlabelled data, which potentially conveys a large amount of information. The first three issues are due to the inherent structure of spatial units of analysis, which can be easily accommodated if a (multi-)relational data mining approach is considered. The fourth issue demands the adoption of a transductive setting, which aims to make predictions for a given set of unlabelled data. Transduction is also motivated by the contiguity of the concept of positive autocorrelation, which typically affects spatial phenomena, with the smoothness assumption that characterizes the transductive setting. In this work, we investigate a relational approach to spatial classification in a transductive setting. Computational solutions to the main difficulties met in this approach are presented. In particular, a relational upgrade of the naïve Bayes classifier is proposed as the discriminative model, an iterative algorithm is designed for the transductive classification of unlabelled data, and a distance measure between relational descriptions of spatial objects is defined in order to determine the k-nearest neighbours of each example in the dataset. The computational solutions have been tested on two real-world spatial datasets. The transformation of spatial data into a multi-relational representation and the experimental results are reported and commented on.
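The iterative transductive classification described above can be sketched as follows. This is a minimal toy illustration, not the authors' system: the function name `transductive_knn`, plain Euclidean distance (in place of the paper's relational distance measure), and majority voting (in place of the relational naïve Bayes model) are all simplifying assumptions.

```python
import numpy as np

def transductive_knn(X_lab, y_lab, X_unlab, k=3, n_iters=10):
    """Toy transductive classification: iteratively label the unlabelled
    examples by majority vote of their k nearest labelled neighbours,
    feeding newly assigned labels back into subsequent passes.
    Plain Euclidean distance stands in for a relational distance measure."""
    X = np.vstack([X_lab, X_unlab])
    y = np.concatenate([y_lab, np.full(len(X_unlab), -1)])  # -1 = unlabelled
    for _ in range(n_iters):
        y_new = y.copy()
        for i in range(len(X_lab), len(X)):
            # indices of all currently labelled examples other than i
            labelled = np.flatnonzero(y != -1)
            labelled = labelled[labelled != i]
            if labelled.size == 0:
                continue
            # k nearest labelled neighbours of example i
            d = np.linalg.norm(X[labelled] - X[i], axis=1)
            nn = labelled[np.argsort(d)[:k]]
            y_new[i] = np.bincount(y[nn]).argmax()  # majority vote
        y = y_new  # labels assigned this pass are reused in the next
    return y[len(X_lab):]
```

The key transductive ingredient is that labels assigned to unlabelled examples in one pass participate in the neighbourhood votes of later passes, propagating labels smoothly across the (auto-correlated) data.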
"Now he walks and walks, as if he didn't have a home where he could eat": food, healing, and hunger in Quechua narratives of madness
In the Quechua-speaking peasant communities of southern Peru, mental disorder is understood less as individualized pathology and more as a disturbance in family and social relationships. For many Andeans, food and feeding are ontologically fundamental to such relationships. This paper uses data from interviews and participant observation in a rural province of Cuzco to explore the significance of food and hunger in local discussions of madness. Carers' narratives, explanatory models, and theories of healing all draw heavily from idioms of food sharing and consumption in making sense of affliction, and these concepts structure understandings of madness that differ significantly from those assumed by formal mental health services. Greater awareness of the salience of these themes could strengthen the input of psychiatric and psychological care with this population and enhance knowledge of the alternative treatments that they use. Moreover, this case provides lessons for the global mental health movement on the importance of openness to the ways in which indigenous cultures may construct health, madness, and sociality. Such local meanings should be considered by mental health workers delivering services in order to provide care that can adjust to the alternative ontologies of sufferers and carers.
Fast relational learning using bottom clause propositionalization with artificial neural networks
Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly, by manipulating first-order rules, or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by the ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph while being generally faster. BCP achieved a statistically significant improvement in accuracy over RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90% of the features can be achieved with only a small loss of accuracy.
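The mapping from bottom clauses to numerical vectors can be sketched as below. This is a simplified illustration under stated assumptions, not the CILP++ implementation: bottom clauses are given directly as sets of literal strings (in practice Progol/Aleph construct them from mode declarations), and the function name `bcp_features` is hypothetical.

```python
def bcp_features(bottom_clauses):
    """Toy Bottom Clause Propositionalization: map each bottom clause
    (represented here as a set of body-literal strings) to a binary
    vector over the shared vocabulary of all literals seen, so that
    position j is 1 iff literal vocab[j] occurs in the clause."""
    vocab = sorted(set().union(*bottom_clauses))      # shared feature space
    index = {lit: j for j, lit in enumerate(vocab)}
    vectors = []
    for clause in bottom_clauses:
        v = [0] * len(vocab)
        for lit in clause:
            v[index[lit]] = 1                         # literal present
        vectors.append(v)
    return vocab, vectors
```

For example, the clauses {parent(A,B), male(A)} and {parent(A,B), female(A)} share one literal and differ in one, yielding vectors that a standard attribute-value learner (here, a neural network) can consume directly.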
The James Webb Space Telescope Mission
Twenty-six years ago a small committee report, building on earlier studies,
expounded a compelling and poetic vision for the future of astronomy, calling
for an infrared-optimized space telescope with an aperture of at least .
With the support of their governments in the US, Europe, and Canada, 20,000
people realized that vision as the James Webb Space Telescope. A
generation of astronomers will celebrate their accomplishments for the life of
the mission, potentially as long as 20 years, and beyond. This report and the
scientific discoveries that follow are extended thank-you notes to the 20,000
team members. The telescope is working perfectly, with much better image
quality than expected. In this and accompanying papers, we give a brief
history, describe the observatory, outline its objectives and current observing
program, and discuss the inventions and people who made it possible. We cite
detailed reports on the design and the measured performance on orbit.
Comment: Accepted by PASP for the special issue on The James Webb Space
Telescope Overview, 29 pages, 4 figures.