Accelerating Dynamical Density Response Code on Summit and Its Application for Computing the Density Response Function of Vanadium Sesquioxide
This thesis details the process of porting the Eguiluz group's dynamical density response computational platform to the hybrid CPU+GPU environment of the Summit supercomputer at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory (ORNL). The baseline CPU-only version is a Gordon Bell-winning platform within the formally exact time-dependent density functional theory (TD-DFT) framework using the linearized augmented plane wave (LAPW) basis set. The code is accelerated using a combination of the OpenACC programming model and GPU libraries -- namely, the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library -- as well as by exploiting the sparsity pattern of the matrices involved in the matrix-matrix multiplications. Benchmarks show a 12.3x speedup over the CPU-only version. This performance boost should accelerate computational discovery in materials and condensed matter physics. After the hybrid CPU+GPU code has been sufficiently optimized, it is used to study the dynamical density response function of vanadium sesquioxide, and the results are compared with spectroscopic data from non-resonant inelastic X-ray scattering (NIXS) experiments.
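The sparsity-exploiting matrix-matrix multiplication can be illustrated in miniature: store only the nonzero blocks of a block-sparse operand and multiply just those, skipping the zero blocks entirely. On the GPU, each surviving block product would be dispatched to a batched GEMM (e.g. via MAGMA). This NumPy sketch is an illustration of the technique, not the thesis code:

```python
import numpy as np

def block_sparse_matmul(A_blocks, B, block_size):
    """Multiply a block-sparse matrix by a dense matrix, skipping zero blocks.

    A_blocks maps (block_row, block_col) -> dense block of shape
    (block_size, block_size); absent keys denote all-zero blocks.
    """
    n_block_rows = 1 + max(i for i, _ in A_blocks)
    C = np.zeros((n_block_rows * block_size, B.shape[1]))
    for (i, j), blk in A_blocks.items():
        # Only stored (nonzero) blocks contribute; on a GPU each of these
        # small products would go into one batched GEMM call.
        C[i * block_size:(i + 1) * block_size] += (
            blk @ B[j * block_size:(j + 1) * block_size]
        )
    return C
```

The win over a dense multiply scales with the fraction of blocks that are identically zero, which is exactly the structure the thesis reports exploiting.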
Self-reported pregnancy exposures and placental DNA methylation in the MARBLES prospective autism sibling study.
Human placenta is a fetal-derived tissue that offers a unique sample of epigenetic and environmental exposures present in utero. In the MARBLES prospective pregnancy study of high-risk younger siblings of children with autism spectrum disorder (ASD), pregnancy and environmental factors collected by maternal interviews were examined as predictors of placental DNA methylation, including partially methylated domains (PMDs), an embryonic feature of the placental methylome. DNA methylation data from MethylC-seq analysis of 47 placentas of children clinically diagnosed at 3 years with ASD or typical development using standardized assessments were examined in relation to: the child's gestational age, birth weight, and diagnosis; maternal pre-pregnancy body mass index, smoking, education, parity, height, and prenatal vitamin and folate intake; home ownership; and pesticides professionally applied to lawns or gardens or inside homes, and pet flea/tick pouches, collars, or soaps/shampoos used in the 3 months prior to or during pregnancy. Sequencing run, order, and coverage, and child race and sex were considered as potential confounders. The Akaike information criterion was used to select the most parsimonious among candidate models. Final prediction models used sandwich estimators to produce heteroscedasticity-robust estimates of the 95% confidence interval (CI), and P-values controlled the false discovery rate at 5%. The strongest, most robust associations were between pesticides professionally applied outside the home and both higher average methylation over PMDs [0.45 (95% CI 0.17 to 0.72), P = 0.03] and a reduced proportion of the genome in PMDs [-0.42 (95% CI -0.67 to -0.17), P = 0.03]. Pesticide exposures could alter placental DNA methylation more than other factors.
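Controlling the false discovery rate at 5%, as the abstract describes, is conventionally done with the Benjamini-Hochberg step-up procedure. A minimal sketch of that standard procedure (an illustration, not the study's analysis code):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return a boolean list marking
    which hypotheses are rejected while controlling the FDR at level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha ...
    k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k = rank
    # ... and reject the k smallest p-values (step-up: all below the cutoff).
    rejected = [False] * m
    for idx in order[:k]:
        rejected[idx] = True
    return rejected
```

Note the step-up character: a p-value above its own threshold can still be rejected if a larger-ranked p-value clears its threshold.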
Dynamics of Temporal Difference Reinforcement Learning
Reinforcement learning has been successful across several applications in
which agents have to learn to act in environments with sparse feedback.
However, despite this empirical success there is still a lack of theoretical
understanding of how the parameters of reinforcement learning models and the
features used to represent states interact to control the dynamics of learning.
In this work, we use concepts from statistical physics to study the
typical-case learning curves for temporal difference learning of a value function with
linear function approximators. Our theory is derived under a Gaussian
equivalence hypothesis where averages over the random trajectories are replaced
with temporally correlated Gaussian feature averages, and we validate our
assumptions on small-scale Markov Decision Processes. We find that the
stochastic semi-gradient noise due to subsampling the space of possible
episodes leads to significant plateaus in the value error, unlike in
traditional gradient descent dynamics. We study how learning dynamics and
plateaus depend on feature structure, learning rate, discount factor, and
reward function. We then analyze how strategies like learning rate annealing
and reward shaping can favorably alter learning dynamics and plateaus. To
conclude, our work introduces new tools to open a new direction towards
developing a theory of learning dynamics in reinforcement learning.
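The object of study, semi-gradient temporal difference learning of a value function with linear function approximation, can be written in a few lines. This sketch (illustrative, not the paper's code) shows why the update is called a semi-gradient: the bootstrapped target r + γ·w·φ(s') is treated as a constant when forming the update direction:

```python
import numpy as np

def td0_linear(features, rewards, next_features, w, lr=0.1, gamma=0.9):
    """One pass of semi-gradient TD(0) with linear value approximation.

    V(s) = w . phi(s). The update bootstraps from the current estimate of
    the next state's value, so it follows a semi-gradient rather than the
    true gradient of any fixed objective.
    """
    for phi, r, phi_next in zip(features, rewards, next_features):
        td_error = r + gamma * (w @ phi_next) - (w @ phi)
        w = w + lr * td_error * phi  # semi-gradient step: target held fixed
    return w
```

On a single self-looping state with reward 1 and discount γ, repeated passes converge to the true value 1/(1 - γ), which makes a convenient sanity check.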
Fidelity-Weighted Learning
Training deep neural networks requires many training samples, but in practice
training labels are expensive to obtain and may be of varying quality, as some
may be from trusted expert labelers while others might be from heuristics or
other sources of weak supervision such as crowd-sourcing. This creates a
fundamental quality-versus-quantity trade-off in the learning process. Do we
learn from the small amount of high-quality data or the potentially large
amount of weakly-labeled data? We argue that if the learner could somehow know
and take the label-quality into account when learning the data representation,
we could get the best of both worlds. To this end, we propose
"fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach
for training deep neural networks using weakly-labeled data. FWL modulates the
parameter updates to a student network (trained on the task we care about) on a
per-sample basis according to the posterior confidence of its label-quality
estimated by a teacher (who has access to the high-quality labels). Both
student and teacher are learned from the data. We evaluate FWL on two tasks in
information retrieval and natural language processing where we outperform
state-of-the-art alternative semi-supervised methods, indicating that our
approach makes better use of strong and weak labels, and leads to better
task-dependent data representations.

Comment: Published as a conference paper at ICLR 201
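The core idea, modulating per-sample parameter updates by the teacher's confidence in each label, can be sketched with a linear student. The linear model and squared-error loss here are illustrative stand-ins; FWL's actual student is a deep network and its teacher is learned from the high-quality labels:

```python
import numpy as np

def fidelity_weighted_update(w, X, y_weak, confidence, lr=0.1):
    """One gradient step on a linear student, with each sample's squared-error
    gradient scaled by the teacher's confidence in its (weak) label.

    confidence values lie in [0, 1]: 1 = fully trusted label,
    0 = the sample contributes nothing to the update.
    """
    preds = X @ w
    # Per-sample residuals are reweighted by confidence before aggregation.
    grad = X.T @ (confidence * (preds - y_weak)) / len(y_weak)
    return w - lr * grad
```

A sample with confidence 0 leaves the parameters untouched no matter how wrong its weak label is, which is exactly the quality-versus-quantity trade-off the abstract describes being managed per sample.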
The use of aggregate from demolition rubble in the making of ordinary and structural concretes
The aim of this thesis is to introduce the concept of recycling demolished concrete as aggregate which is then used in fresh concrete - to be known as "recycled concrete". Various aspects of concrete technology are covered, and in this way recycled concrete is compared to conventional concrete. The work was performed in three phases, and it should serve as a guide to prospective users.

Phase 1: Various recycled aggregates were tested according to standard specifications and were found to be satisfactory in most respects. Recycled fine aggregate is very coarse, though, and should be used with caution. The absorption and porosity of recycled aggregates should always be determined to enable their use in concrete. The specific gravity of such an aggregate should also be found to enable more accurate mix calculations. The highest compressive strengths normally possible for recycled concretes are between 56 and 71 MPa, but an average strength of 50 MPa should not be exceeded without thorough investigation, even though it is easily attainable.

Phase 2: A wet-batching method of mix design was investigated and satisfactory recycled concretes were produced. Strength charts for such concretes are given. Methods of dry-batching are also presented, but are more complex than the wet-batch method. The water demand of recycled fine aggregates was found to be considerably higher than that of natural sands, and again the use of fine recycled aggregate should be carefully considered.

Phase 3: The mechanical properties of recycled concretes were tested, and little difference was found between recycled and conventional concretes. The compressive strengths were satisfactory and the elastic moduli sufficiently high, even though they were 15 to 20 percent lower than those of corresponding dense concretes. The shrinkage of recycled concrete is comparable to that of conventional concrete, and the creep potential somewhat greater, although not excessively so.
The use of recycled coarse aggregate in both plain and structural concrete is then recommended as an alternative to the dwindling supply of natural aggregates. The use of recycled fine aggregate, however, is not recommended, although its use in low-grade or mass concrete is condoned.
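The mix calculations that the absorption measurement enables follow standard concrete-batching arithmetic: an aggregate drier than its saturated surface-dry (SSD) state soaks up mix water, so extra water must be batched; a wetter stockpile carries free water into the mix. A sketch of that adjustment, using generic mix arithmetic and illustrative figures rather than values from the thesis, and especially relevant for high-absorption recycled aggregates:

```python
def adjusted_batch_water(free_water_kg, agg_mass_ssd_kg,
                         absorption_pct, moisture_pct):
    """Adjust batch water for the aggregate's moisture state.

    absorption_pct: water the aggregate can absorb (oven-dry to SSD),
                    as a percentage of dry mass.
    moisture_pct:   actual total moisture content of the stockpile,
                    as a percentage of dry mass.
    Returns the water to batch so the effective free water stays constant.
    """
    dry_mass = agg_mass_ssd_kg / (1 + absorption_pct / 100)
    # Positive when the stockpile is drier than SSD (water will be absorbed),
    # negative when it is wetter (free surface water enters the mix).
    water_deficit = dry_mass * (absorption_pct - moisture_pct) / 100
    return free_water_kg + water_deficit
```

With a stockpile exactly at SSD the correction vanishes, which is why determining absorption (and hence the SSD state) is a prerequisite for accurate recycled-aggregate mix design.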
Gradual Optimization Learning for Conformational Energy Minimization
Molecular conformation optimization is crucial to computer-aided drug
discovery and materials design. Traditional energy minimization techniques rely
on iterative optimization methods that use molecular forces calculated by a
physical simulator (oracle) as anti-gradients. However, this is a
computationally expensive approach that requires many interactions with a
physical simulator. One way to accelerate this procedure is to replace the
physical simulator with a neural network. Despite recent progress in neural
networks for molecular conformation energy prediction, such models are prone to
distribution shift, leading to inaccurate energy minimization. We find that the
quality of energy minimization with neural networks can be improved by
providing optimization trajectories as additional training data. Still, it
takes around additional conformations to match the physical
simulator's optimization quality. In this work, we present the Gradual
Optimization Learning Framework (GOLF) for energy minimization with neural
networks that significantly reduces the required additional data. The framework
consists of an efficient data-collecting scheme and an external optimizer. The
external optimizer utilizes gradients from the energy prediction model to
generate optimization trajectories, and the data-collecting scheme selects
additional training data to be processed by the physical simulator. Our results
demonstrate that the neural network trained with GOLF performs on par with the
oracle on a benchmark of diverse drug-like molecules using x less
additional data.

Comment: 17 pages, 5 figures
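The interplay described above, with the cheap surrogate supplying gradients for the optimization trajectory while the expensive oracle is queried only occasionally to collect extra training conformations, can be caricatured in a few lines. This is an illustration only; GOLF's actual data-collecting scheme and external optimizer are more involved:

```python
import numpy as np

def golf_style_minimize(x0, surrogate_grad, oracle_energy,
                        steps=100, lr=0.01, query_every=25):
    """Gradient-descent sketch in the spirit of GOLF: the surrogate (a neural
    network in the paper) drives every optimization step, and the physical
    simulator (oracle) is consulted sparingly to label new conformations
    for further training.
    """
    x = np.asarray(x0, dtype=float)
    collected = []  # (conformation, oracle energy) pairs for retraining
    for t in range(1, steps + 1):
        x = x - lr * surrogate_grad(x)  # cheap surrogate gradient step
        if t % query_every == 0:
            # Expensive oracle call, made only every `query_every` steps.
            collected.append((x.copy(), oracle_energy(x)))
    return x, collected
```

The ratio of steps to oracle queries (here 25:1) is the lever that reduces the amount of additional simulator data needed.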
DSPSR: Digital Signal Processing Software for Pulsar Astronomy
DSPSR is a high-performance, open-source, object-oriented, digital signal
processing software library and application suite for use in radio pulsar
astronomy. Written primarily in C++, the library implements an extensive range
of modular algorithms that can optionally exploit both multiple-core processors
and general-purpose graphics processing units. After over a decade of research
and development, DSPSR is now stable and in widespread use in the community.
This paper presents a detailed description of its functionality, justification
of major design decisions, analysis of phase-coherent dispersion removal
algorithms, and demonstration of performance on some contemporary
microprocessor architectures.

Comment: 15 pages, 10 figures, to be published in PAS
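Phase-coherent dispersion removal treats the interstellar medium as a known phase-only filter acting on the voltage signal and deconvolves it exactly in the frequency domain. A minimal sketch of the deconvolution step (the real dispersion phase depends on the dispersion measure and observing band, which is omitted here; this is not DSPSR's implementation):

```python
import numpy as np

def coherent_dedisperse(signal, transfer_phase):
    """Phase-coherent dedispersion sketch.

    The ISM is modeled as a phase-only transfer function H(f) = exp(i*phi(f)),
    so multiplying the voltage spectrum by the inverse filter H*(f) removes
    the dispersive delay exactly (within one FFT block).
    transfer_phase gives phi(f) sampled at the FFT bin frequencies.
    """
    spectrum = np.fft.fft(signal)
    # Because |H(f)| = 1, the inverse filter is simply the complex conjugate.
    return np.fft.ifft(spectrum * np.exp(-1j * transfer_phase))
```

Because the filter is unit-modulus, the operation is exactly invertible: dispersing and then dedispersing with the same phase recovers the input, which is the property that distinguishes coherent dedispersion from the lossy incoherent (filterbank) approach.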