
    Calculation of the nucleon axial charge in lattice QCD

    Protons and neutrons have a rich structure in terms of their constituents, the quarks and gluons. Understanding this structure requires solving Quantum Chromodynamics (QCD). However, QCD is extremely complicated, so we must solve the equations of QCD numerically using a method known as lattice QCD. Here we describe a typical lattice QCD calculation by examining our recent computation of the nucleon axial charge. Comment: Prepared for Scientific Discovery through Advanced Computing (SciDAC 2006), Denver, Colorado, June 25-29, 2006

    Toward Five-dimensional Core-collapse Supernova Simulations

    The computational difficulty of six-dimensional neutrino radiation hydrodynamics has spawned a variety of approximations, provoking a long history of uncertainty in the core-collapse supernova explosion mechanism. Under the auspices of the Terascale Supernova Initiative, we are honoring the physical complexity of supernovae by meeting the computational challenge head-on, undertaking the development of a new adaptive mesh refinement code for self-gravitating, six-dimensional neutrino radiation magnetohydrodynamics. This code, called GenASiS (General Astrophysical Simulation System), is designed for modularity and extensibility of the physics. Presently in use or under development are capabilities for Newtonian self-gravity, Newtonian and special relativistic magnetohydrodynamics (with 'realistic' equation of state), and special relativistic energy- and angle-dependent neutrino transport, including full treatment of the energy and angle dependence of scattering and pair interactions. Comment: 5 pages. Proceedings of SciDAC 2005, Scientific Discovery through Advanced Computing, San Francisco, CA, 26-30 June 2005

    Computational issues and algorithm assessment for shock/turbulence interaction problems

    The paper provides an overview of the challenges involved in the computation of flows with interactions between turbulence, strong shockwaves, and sharp density interfaces. The prediction and physics of such flows are the focus of an ongoing project in the Scientific Discovery through Advanced Computing (SciDAC) program. While the project is fundamental in nature, there are many important potential applications of scientific and engineering interest, ranging from inertial confinement fusion to exploding supernovae. The essential challenges will be discussed, and some representative numerical results that highlight these challenges will be shown. In addition, the overall approach taken in this project will be outlined.

    Lattice Simulations of the Thermodynamics of Strongly Interacting Elementary Particles and the Exploration of New Phases of Matter in Relativistic Heavy Ion Collisions

    At high temperatures or densities, matter formed by strongly interacting elementary particles (hadronic matter) is expected to undergo a transition to a new form of matter, the quark gluon plasma, in which elementary particles (quarks and gluons) are no longer confined inside hadrons but are free to propagate in a thermal medium much larger in extent than the typical size of a hadron. The transition to this new form of matter as well as properties of the plasma phase are studied in large-scale numerical calculations based on the theory of strong interactions, Quantum Chromodynamics (QCD). Experimentally, properties of hot and dense elementary particle matter are studied in relativistic heavy ion collisions such as those currently performed at the Relativistic Heavy Ion Collider (RHIC) at BNL. We review here recent results from studies of thermodynamic properties of strongly interacting elementary particle matter performed on Teraflops computers. We present results on the QCD equation of state and discuss the status of studies of the phase diagram at non-vanishing baryon number density. Comment: 10 pages, invited plenary talk given at the conference 'Scientific Discovery through Advanced Computing', SciDAC 2006, June 25-29, Denver, USA

    Globally convergent algorithms for finding zeros of duplomonotone mappings

    We introduce a new class of mappings, called duplomonotone, which is strictly broader than the class of monotone mappings. We study some of the main properties of duplomonotone functions and provide various examples, including nonlinear duplomonotone functions arising from the study of systems of biochemical reactions. Finally, we present three variations of a derivative-free line search algorithm for finding zeros of systems of duplomonotone equations, and we prove their linear convergence to a zero of the function. This work was supported by the National Research Fund, Luxembourg, co-funded under the Marie Curie Actions of the European Commission (FP7-COFUND), and by the U.S. Department of Energy, Offices of Advanced Scientific Computing Research and Biological and Environmental Research, as part of the Scientific Discovery through Advanced Computing program, grant #DE-SC0010429.
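    As a rough illustration of the derivative-free line search idea, the following sketch steps along the negative residual direction with a backtracking acceptance test on the squared residual norm. The function name and the parameters `lam`, `beta`, and `sigma` are hypothetical, and this is not one of the paper's three variants, whose step rules use the duplomonotonicity constants to guarantee linear convergence.

```python
import numpy as np

def derivative_free_zero(F, x0, lam=1.0, beta=0.5, sigma=1e-4,
                         tol=1e-10, max_iter=500):
    """Backtracking line search along -F(x) that drives ||F(x)|| to zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        norm2 = float(Fx @ Fx)
        if norm2 < tol:
            break  # residual small enough: x is an approximate zero
        t = lam
        while True:
            x_new = x - t * Fx  # derivative-free step: no Jacobian required
            F_new = F(x_new)
            # accept once the squared residual decreases sufficiently
            if float(F_new @ F_new) <= (1.0 - sigma * t) * norm2 or t < 1e-12:
                break
            t *= beta  # otherwise shrink the step and retry
        x = x_new
    return x
```

    For example, applied to the monotone (hence duplomonotone) map F(x) = x^3 + x, the iteration converges to the unique zero at the origin.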

    Accelerating NOvA's Feldman-Cousins procedure using high performance computing platforms

    Spring 2019. Includes bibliographical references. In order to assess the compatibility between models containing physically constrained parameters and small-signal data, uncertainties often must be calculated by Monte Carlo simulation to account for non-normally distributed errors. This is the case for neutrino oscillation experiments, where neutrino-matter weak interactions are rare and beam intensity at the far site is low. The NuMI Off-axis νe Appearance (NOvA) collaboration attempts to measure the parameters governing neutrino oscillations within the PMNS oscillation model by comparing model predictions to a small data set of neutrino interactions. To account for non-normality, NOvA uses the computationally intensive Feldman-Cousins (FC) procedure, which involves fitting thousands of independent pseudoexperiments to generate empirical distribution functions that are used to calculate the significance of observations. I, along with collaborators on the NOvA and Scientific Discovery through Advanced Computing: High Energy Physics Data Analytics (SciDAC-4) collaborations, have implemented the FC procedure utilizing the High Performance Computing (HPC) facilities at the National Energy Research Scientific Computing Center (NERSC). With this implementation, we have successfully processed NOvA's complete FC corrections for our recent neutrino + antineutrino appearance analysis in 36 hours: a speedup factor of 50 compared to the methods used in previous analyses.
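    The FC procedure described above, generating pseudoexperiments at a fixed parameter value and ranking the observed test statistic against the resulting empirical distribution, can be illustrated with a toy one-parameter Gaussian model. The names `delta_chi2` and `fc_significance` are illustrative only and are not taken from NOvA's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_chi2(data, theta):
    # test statistic: chi^2 at the fixed theta minus chi^2 at the best fit
    best = data.mean()  # best-fit mean for a unit-variance Gaussian model
    chi2 = lambda t: np.sum((data - t) ** 2)
    return chi2(theta) - chi2(best)

def fc_significance(observed, theta, n_pseudo=2000, n_events=50):
    # build the empirical distribution of the test statistic at theta
    # by fitting many independent pseudoexperiments generated at theta
    stats = np.array([
        delta_chi2(rng.normal(theta, 1.0, n_events), theta)
        for _ in range(n_pseudo)
    ])
    # p-value: fraction of pseudoexperiments at least as extreme as observed
    return float(np.mean(stats >= observed))
```

    In this well-behaved toy case the empirical distribution approaches a chi-squared distribution with one degree of freedom, so `fc_significance(3.84, 0.0)` lands near 0.05; the point of the full FC construction is that real small-signal analyses with physical boundaries do not enjoy this asymptotic shortcut, which is why the pseudoexperiments must actually be fitted.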

    Parallel Local Approximation MCMC for Expensive Models

    Performing Bayesian inference via Markov chain Monte Carlo (MCMC) can be exceedingly expensive when posterior evaluations invoke the evaluation of a computationally expensive model, such as a system of PDEs. In recent work [J. Amer. Statist. Assoc., 111 (2016), pp. 1591-1607] we described a framework for constructing and refining local approximations of such models during an MCMC simulation. These posterior-adapted approximations harness regularity of the model to reduce the computational cost of inference while preserving asymptotic exactness of the Markov chain. Here we describe two extensions of that work. First, we prove that samplers running in parallel can collaboratively construct a shared posterior approximation while ensuring ergodicity of each associated chain, providing a novel opportunity for exploiting parallel computation in MCMC. Second, focusing on the Metropolis-adjusted Langevin algorithm, we describe how a proposal distribution can successfully employ gradients and other relevant information extracted from the approximation. We investigate the practical performance of our approach using two challenging inference problems, the first in subsurface hydrology and the second in glaciology. Using local approximations constructed via parallel chains, we successfully reduce the run time needed to characterize the posterior distributions in these problems from days to hours and from months to days, respectively, dramatically improving the tractability of Bayesian inference. Supported by the United States Department of Energy, Office of Science, Scientific Discovery through Advanced Computing (SciDAC) Program (award DE-SC0007099); the Natural Sciences and Engineering Research Council of Canada; and the United States Office of Naval Research.
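    As a reference point for the second extension, a minimal Metropolis-adjusted Langevin sampler looks roughly as follows; in the paper's setting the log-posterior and its gradient would come from the cheap local approximation rather than the expensive model. The names `log_post` and `grad_log_post` are generic placeholders, and this sketch is not the authors' implementation:

```python
import numpy as np

def mala(log_post, grad_log_post, x0, step=0.1, n_samples=5000, rng=None):
    """Metropolis-adjusted Langevin: drift toward higher density, then MH-correct."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x.size
    samples = np.empty((n_samples, d))
    lp, g = log_post(x), grad_log_post(x)
    for i in range(n_samples):
        mean = x + 0.5 * step * g  # Langevin drift using the gradient
        prop = mean + np.sqrt(step) * rng.standard_normal(d)
        lp_p, g_p = log_post(prop), grad_log_post(prop)
        mean_back = prop + 0.5 * step * g_p
        # asymmetric-proposal correction: log q(x|prop) - log q(prop|x)
        log_q_back = -np.sum((x - mean_back) ** 2) / (2.0 * step)
        log_q_fwd = -np.sum((prop - mean) ** 2) / (2.0 * step)
        if np.log(rng.random()) < lp_p - lp + log_q_back - log_q_fwd:
            x, lp, g = prop, lp_p, g_p  # accept the move
        samples[i] = x
    return samples
```

    On a standard normal target the chain reproduces the correct mean and variance; the gradient-informed drift is what lets MALA mix faster than a random-walk proposal once gradients of the approximation are cheaply available.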

    The Open Science Grid Status and Architecture. The Open Science Grid Executive Board, on behalf of the OSG Consortium: Ruth Pordes, Don Petravick; Fermi National Accelerator Laboratory

    Abstract. The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. The OSG project[1] is funded by the National Science Foundation and the Department of Energy Scientific Discovery through Advanced Computing program. The OSG project provides specific activities for the operation and evolution of the common infrastructure. The US ATLAS and US CMS collaborations contribute to and depend on OSG as the US infrastructure contributing to the Worldwide LHC Computing Grid, on which the LHC experiments distribute and analyze their data. Other stakeholders include the STAR RHIC experiment, the Laser Interferometer Gravitational-Wave Observatory (LIGO), the Dark Energy Survey (DES), and several Fermilab Tevatron experiments (CDF, D0, MiniBooNE, etc.). The OSG implementation architecture brings a pragmatic approach to enabling vertically integrated, community-specific distributed systems over a common horizontal set of shared resources and services. More information can be found at the OSG web site: www.opensciencegrid.org.