
    Pion-mass dependence of three-nucleon observables

    We use an effective field theory (EFT) that contains only short-range interactions to study the dependence of a variety of three-nucleon observables on the pion mass. The pion-mass dependence of the input quantities in our "pionless" EFT is obtained from a recent chiral EFT calculation. To the order at which we work, these quantities are the 1S0 scattering length and effective range, the deuteron binding energy, the 3S1 effective range, and the binding energy of one three-nucleon bound state. The chiral EFT input we use has the inverse 3S1 and 1S0 scattering lengths vanishing at mpi_c = 197.8577 MeV. At this "critical" pion mass, the triton has infinitely many excited states with an accumulation point at the three-nucleon threshold. We compute the binding energies of these states up to next-to-next-to-leading order in the pionless EFT and study the convergence pattern of the EFT in the vicinity of the critical pion mass. Furthermore, we use the pionless EFT to predict how the doublet and quartet nd scattering lengths depend on mpi in the region between the physical pion mass and mpi = mpi_c. Comment: 24 pages, 9 figures
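    The accumulation of excited triton states at threshold described above is an Efimov-like spectrum. As general background (a standard result, not a number taken from this abstract), when both two-body scattering lengths diverge the binding energies of successive three-body states approach a geometric ladder,

    \[
      \frac{B_n}{B_{n+1}} \;\to\; e^{2\pi/s_0} \approx 515,
      \qquad s_0 \approx 1.00624 ,
    \]

    so each excited state is roughly 515 times shallower than the one below it as the critical pion mass mpi_c is approached.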

    Adjusting process count on demand for petascale global optimization

    There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for such applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
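    To make the mechanism described above concrete, the following is a minimal sketch, in C with MPI, of a process that checks available memory and spawns extra workers when it runs low. The /proc/meminfo probe, the 512 MB threshold, the spawn count, and the "worker" executable name are all illustrative assumptions; this is not code from pVTdirect.

        #include <mpi.h>
        #include <stdio.h>

        /* Illustrative helper: available memory in MB from /proc/meminfo
           (Linux-specific); returns -1 if it cannot be determined. */
        static long available_mb(void)
        {
            FILE *f = fopen("/proc/meminfo", "r");
            char line[256];
            long kb = -1;
            if (!f) return -1;
            while (fgets(line, sizeof line, f))
                if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
                    break;
            fclose(f);
            return kb > 0 ? kb / 1024 : -1;
        }

        int main(int argc, char **argv)
        {
            int rank, need_more = 0;
            MPI_Comm workers = MPI_COMM_NULL;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* ... optimization iterations would run here ... */

            /* Rank 0 decides whether memory is low; the decision is broadcast
               because MPI_Comm_spawn is collective over the communicator. */
            if (rank == 0) {
                long mb = available_mb();
                need_more = (mb >= 0 && mb < 512);
            }
            MPI_Bcast(&need_more, 1, MPI_INT, 0, MPI_COMM_WORLD);

            if (need_more) {
                /* Spawn 4 extra worker processes; redistribution of data over
                   the new intercommunicator is omitted in this sketch. */
                MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                               0, MPI_COMM_WORLD, &workers, MPI_ERRCODES_IGNORE);
            }

            MPI_Finalize();
            return 0;
        }

    In the real setting the decision would be taken inside the optimization loop, and the existing work and data would be repartitioned onto the enlarged set of processes.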

    DISPATCH: A Numerical Simulation Framework for the Exa-scale Era. I. Fundamentals

    We introduce a high-performance simulation framework that permits the semi-independent, task-based solution of sets of partial differential equations, typically manifesting as updates to a collection of 'patches' in space-time. A hybrid MPI/OpenMP execution model is adopted, in which work tasks are controlled by a rank-local 'dispatcher' that selects, from a set of tasks generally much larger than the number of physical cores (or hardware threads), tasks that are ready for updating. The definition of a task can vary: some tasks may solve the equations of ideal magnetohydrodynamics (MHD), others non-ideal MHD, radiative transfer, or particle motion, and yet others may apply particle-in-cell (PIC) methods. Tasks do not have to be grid-based, and tasks that are may use either Cartesian or orthogonal curvilinear meshes. Patches may be stationary or moving. Mesh refinement can be static or dynamic. A feature of decisive importance for the overall performance of the framework is that time steps are determined and applied locally; this allows potentially large reductions in the total number of updates required when the signal speed varies greatly across the computational domain, and therefore a corresponding reduction in computing time. Another feature is a load-balancing algorithm that operates 'locally' and aims to minimise load and communication imbalance simultaneously. The framework generally relies on already existing solvers, whose performance is augmented when run under the framework due to more efficient cache usage, vectorisation, local time-stepping, plus near-linear and, in principle, unlimited OpenMP and MPI scaling. Comment: 17 pages, 8 figures. Accepted by MNRAS
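    To illustrate the local time-stepping idea, here is a hypothetical, much-simplified dispatcher loop in C with OpenMP. The Task structure, the readiness test (no neighbour may lag behind the task, so guard-zone data exist), and the scheduling policy are assumptions made for this sketch and omit all synchronization; they are not the actual DISPATCH implementation.

        #include <stddef.h>

        typedef struct Task Task;
        struct Task {
            double t, dt;              /* current task time and local step    */
            Task   **nbor;             /* neighbouring patches                */
            size_t n_nbor;
            void  (*update)(Task *);   /* e.g. an MHD, RT or PIC solver step  */
        };

        /* Illustrative readiness test: every neighbour has reached (or passed)
           this task's time, so boundary data for the next update are available. */
        static int ready(const Task *task)
        {
            for (size_t i = 0; i < task->n_nbor; i++)
                if (task->nbor[i]->t < task->t)
                    return 0;
            return 1;
        }

        /* One sweep of a rank-local dispatcher: idle threads pick ready tasks
           and advance each one by its own, locally determined time step. */
        void dispatch(Task **tasks, size_t n_tasks)
        {
            #pragma omp parallel for schedule(dynamic)
            for (size_t i = 0; i < n_tasks; i++) {
                if (ready(tasks[i])) {
                    tasks[i]->update(tasks[i]);
                    tasks[i]->t += tasks[i]->dt;
                }
            }
        }

    Because each task advances by its own dt, regions with slow signal speeds are updated far less often than regions with fast ones, which is the source of the cost reduction described in the abstract.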

    Chiral extrapolation of light resonances from one and two-loop unitarized Chiral Perturbation Theory versus lattice results

    We study the pion-mass dependence of the rho(770) and f_0(600) masses and widths from one- and two-loop unitarized Chiral Perturbation Theory. We show the consistency of one-loop calculations with lattice results for M_rho, f_pi, and the isospin-2 scattering length a_20. Then, we develop and apply the modified Inverse Amplitude Method formalism for two-loop ChPT. In contrast to the f_0(600), the rho(770) is rather sensitive to the two-loop ChPT parameters, which are our main source of systematic uncertainty. We thus provide two-loop unitarized fits constrained by lattice information on M_rho and f_pi, by the leading qqbar 1/N_c behavior of the rho, and by existing estimates of the low-energy constants. These fits yield relatively stable predictions up to m_pi ~ 300-350 MeV for the rho coupling and width as well as for all the f_0(600) parameters. We confirm, to two loops, the weak m_pi dependence of the rho coupling and the KSRF relation, and the existence of two virtual f_0(600) poles for sufficiently high m_pi. At two loops one of these poles becomes a bound state when m_pi is somewhat larger than 300 MeV. Comment: 15 pages, to appear in Phys. Rev.
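    For reference, two standard formulas relevant to the abstract (general background, not expressions quoted from the paper): the elastic one-loop Inverse Amplitude Method resums the leading- and next-to-leading-order ChPT partial waves t_2 and t_4 as

    \[
      t^{\mathrm{IAM}}(s) \;=\; \frac{t_2(s)^2}{t_2(s) - t_4(s)} ,
    \]

    and the KSRF relation referred to above reads

    \[
      M_\rho^2 \;\simeq\; 2\, g_{\rho\pi\pi}^2\, f_\pi^2 .
    \]

    The modified IAM and its two-loop extension used in the paper involve additional terms (e.g. to treat the Adler zero correctly) that are not reproduced here.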

    Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS

    GROMACS is a widely used package for biomolecular simulation, and over the last two decades it has evolved from small-scale efficiency to advanced heterogeneous acceleration and multi-level parallelism targeting some of the largest supercomputers in the world. Here, we describe some of the ways we have been able to realize this through parallelization on all levels, combined with a constant focus on absolute performance. Release 4.6 of GROMACS uses SIMD acceleration on a wide range of architectures, GPU offloading, and both OpenMP and MPI parallelism within and between nodes, respectively. The recent work on acceleration made it necessary to revisit the fundamental algorithms of molecular simulation, including the concept of neighbor searching, and we discuss the present and future challenges we see for exascale simulation, in particular very fine-grained task parallelism. We also discuss the software management, code peer review, and continuous integration testing required for a project of this complexity. Comment: EASC 2014 conference proceedings
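    As a toy illustration of the neighbor-searching concept mentioned above, and explicitly not GROMACS's cluster-pair SIMD/GPU kernels, the following C sketch builds a simple Verlet pair list with a buffered cutoff; the names rlist and build_pairlist are invented for this example, and periodic boundaries are ignored.

        typedef struct { double x, y, z; } Vec3;

        /* Build a pair list of all particle pairs closer than rlist, where
           rlist = interaction cutoff + buffer so the list can be reused for
           several MD steps before it must be rebuilt. O(N^2) for clarity;
           production codes use cell or cluster decompositions to reach ~O(N). */
        void build_pairlist(const Vec3 *pos, int n, double rlist,
                            int (*pairs)[2], int *npairs)
        {
            double r2max = rlist * rlist;
            int k = 0;
            for (int i = 0; i < n - 1; i++) {
                for (int j = i + 1; j < n; j++) {
                    double dx = pos[i].x - pos[j].x;
                    double dy = pos[i].y - pos[j].y;
                    double dz = pos[i].z - pos[j].z;
                    if (dx*dx + dy*dy + dz*dz < r2max) {
                        pairs[k][0] = i;
                        pairs[k][1] = j;
                        k++;
                    }
                }
            }
            *npairs = k;
        }

    The buffer is what keeps the list valid while particles drift between rebuilds; the fine-grained task parallelism discussed in the abstract concerns how such kernels are scheduled across SIMD lanes, GPUs, and nodes.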