
    A benchmark study on mantle convection in a 3-D spherical shell using CitcomS

    As high-performance computing facilities and sophisticated modeling software become available, modeling mantle convection in a three-dimensional (3-D) spherical shell geometry with realistic physical parameters and processes becomes increasingly feasible. However, comprehensive benchmark studies for 3-D spherical mantle convection are still lacking. Here we present benchmark and test calculations using the finite element code CitcomS for 3-D spherical convection. Two classes of model calculations are presented: Stokes flow, and thermal and thermochemical convection. For Stokes flow, response functions of characteristic flow velocity, topography, and geoid at the surface and core-mantle boundary (CMB) at different spherical harmonic degrees are computed using CitcomS and compared with analytic solutions obtained with a propagator matrix method. For thermal and thermochemical convection, 24 cases are computed with different model parameters, including Rayleigh number (7 × 10^3 or 10^5) and viscosity contrast due to temperature dependence (1 to 10^7). For each case, time-averaged quantities at steady state are computed, including surface and CMB Nusselt numbers, RMS velocity, averaged temperature, maximum and minimum flow velocity and temperature at mid-mantle depth, and their standard deviations. For the thermochemical convection cases, in addition to the outputs for thermal convection, we also quantify the entrainment of an initially dense component of the convection and the relative error in conserving its volume. For nine thermal convection cases that have small viscosity variations and for which previously published results are available, we find that the CitcomS results are mostly consistent with those previously published, with less than 1% relative differences in globally averaged quantities, including Nusselt numbers and RMS velocities.
    For the other 15 cases, with either strongly temperature-dependent viscosity or thermochemical convection, no previous calculations are available for comparison, but these 15 test calculations from CitcomS are useful for future code development and comparison. We also present parallel efficiency results for CitcomS, showing that the code achieves 57% efficiency with 3072 cores on the Texas Advanced Computing Center's parallel supercomputer Ranger.
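A surface Nusselt number like the one benchmarked above can be illustrated with a minimal sketch. This is not CitcomS's implementation; it only shows the common convention in which the surface heat flux of a spherical shell is normalized by the purely conductive solution (so Nu = 1 for pure conduction). The radii and the temperature profile below are illustrative assumptions.

```python
import numpy as np

# Illustrative Earth-like inner/outer radii for the shell
r_i, r_o = 0.55, 1.0
r = np.linspace(r_i, r_o, 2001)

# Purely conductive profile for a spherical shell, T(r_i)=1, T(r_o)=0
T_cond = r_i * (r_o - r) / (r * (r_o - r_i))

def surface_nusselt(T, r, r_i, r_o):
    """Nu at the outer boundary from a radial temperature profile."""
    dTdr_surface = (T[-1] - T[-2]) / (r[-1] - r[-2])   # one-sided FD
    # analytic conductive gradient at r_o: -r_i / (r_o * (r_o - r_i))
    dTdr_cond = -r_i / (r_o * (r_o - r_i))
    return dTdr_surface / dTdr_cond

print(surface_nusselt(T_cond, r, r_i, r_o))   # close to 1 for conduction
```

A convecting case would replace `T_cond` with the laterally averaged radial temperature profile from the simulation, giving Nu > 1.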

    GPU ACCELERATION OF THE ISO–7 NUCLEAR REACTION NETWORK USING OPENCL

    We looked at the potential performance increases available through OpenCL and its parallel computing capabilities, including GPU computing, as applied to the time integration of nuclear reaction networks. The particular method chosen in this work was the trapezoidal BDF-2 method using Picard iteration, a non-linear second-order method. Nuclear reaction network integration is by itself a sequential process and not easily accelerated via parallel computation. However, in tackling a problem like modeling supernova dynamics, a spatial discretization of the volume of the star is necessary, and in many cases is combined with the computational technique of operator splitting. Every spatial cell then has its own reaction network independent of the others, which is where parallel computation proves useful. The particular reaction network analyzed is the iso–7 reaction network, which follows the dynamics of 7 of the more dominant nuclides in supernovae. Computational performance was compared between the CPU and the GPU, with the GPU showing performance increases of up to 8 times. This increase was realized at small scale, because the computations were limited to running on a single device at any given time. However, these performance gains would only increase as the problem size was scaled up.
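The implicit time-integration-by-Picard-iteration idea can be sketched in a few lines. The example below is a hedged toy: it applies the implicit trapezoidal rule (one ingredient of the trapezoidal/BDF-2 family named above), solved by Picard fixed-point iteration, to a two-species decay chain rather than the actual iso–7 network, whose rates are not reproduced here.

```python
import numpy as np

def rhs(y, lam=1.0):
    # Toy "network": species 0 decays into species 1 at rate lam
    return np.array([-lam * y[0], lam * y[0]])

def trapezoidal_picard_step(y, dt, iters=50, tol=1e-12):
    """One implicit trapezoidal step solved by Picard iteration."""
    y_new = y + dt * rhs(y)                     # explicit Euler predictor
    for _ in range(iters):
        y_next = y + 0.5 * dt * (rhs(y) + rhs(y_new))
        if np.max(np.abs(y_next - y_new)) < tol:
            return y_next
        y_new = y_next
    return y_new

y = np.array([1.0, 0.0])
dt, t_end = 0.01, 1.0
for _ in range(int(t_end / dt)):
    y = trapezoidal_picard_step(y, dt)

print(y[0])      # ~ exp(-1) ≈ 0.368 for the decaying species
print(y.sum())   # total abundance is conserved
```

In the GPU setting described above, each spatial cell of the star would run an independent copy of such a step, which is what makes the problem data-parallel after operator splitting.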

    Simulating Turbulence Using the Astrophysical Discontinuous Galerkin Code TENET

    In astrophysics, the two main methods traditionally in use for solving the Euler equations of ideal fluid dynamics are smoothed particle hydrodynamics and finite volume discretization on a stationary mesh. However, the goal to efficiently make use of future exascale machines, with their ever higher degree of parallel concurrency, motivates the search for more efficient and more accurate techniques for computing hydrodynamics. Discontinuous Galerkin (DG) methods represent a promising class of methods in this regard, as they can be straightforwardly extended to arbitrarily high order while requiring only small stencils. Especially for applications involving comparatively smooth problems, higher-order approaches promise significant gains in computational speed for reaching a desired target accuracy. Here, we introduce our new astrophysical DG code TENET designed for applications in cosmology, and discuss our first results for 3D simulations of subsonic turbulence. We show that our new DG implementation provides accurate results for subsonic turbulence, at considerably reduced computational cost compared with traditional finite volume methods. In particular, we find that DG needs about 1.8 times fewer degrees of freedom to achieve the same accuracy, and at the same time is more than 1.5 times faster, confirming its substantial promise for astrophysical applications.
    Comment: 21 pages, 7 figures, to appear in Proceedings of the SPPEXA symposium, Lecture Notes in Computational Science and Engineering (LNCSE), Springer
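The degrees-of-freedom comparison in the abstract can be made concrete with a small sketch. This assumes a modal DG basis of polynomials of total degree at most k in 3D, one common choice (tensor-product bases would give (k+1)^3 instead, and TENET's exact basis is not specified here); finite volume corresponds to the k = 0 case with one value per cell.

```python
def dg_dofs_per_cell(k):
    """DOFs per 3D cell for polynomials of total degree <= k."""
    return (k + 1) * (k + 2) * (k + 3) // 6

for k in range(4):
    print(k, dg_dofs_per_cell(k))   # 0:1, 1:4, 2:10, 3:20
```

The point of the comparison is that although a higher-order DG cell carries many more DOFs than a finite volume cell, far fewer cells are needed for the same accuracy on smooth problems, yielding the net savings reported above.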

    OPENMENDEL: A Cooperative Programming Project for Statistical Genetics

    Statistical methods for genomewide association studies (GWAS) continue to improve. However, the increasing volume and variety of genetic and genomic data make computational speed and ease of data manipulation mandatory in future software. In our view, a collaborative effort of statistical geneticists is required to develop open source software targeted to genetic epidemiology. Our attempt to meet this need is called the OPENMENDEL project (https://openmendel.github.io). It aims to (1) enable interactive and reproducible analyses with informative intermediate results, (2) scale to big data analytics, (3) embrace parallel and distributed computing, (4) adapt to rapid hardware evolution, (5) allow cloud computing, (6) allow integration of varied genetic data types, and (7) foster easy communication between clinicians, geneticists, statisticians, and computer scientists. This article reviews and makes recommendations to the genetic epidemiology community in the context of the OPENMENDEL project.
    Comment: 16 pages, 2 figures, 2 tables