9 research outputs found

    Tabulated equations of state with a many-tasking execution model

    Abstract: The addition of nuclear and neutrino physics to general relativistic fluid codes allows for a more realistic description of hot nuclear matter in neutron star and black hole systems. This additional microphysics requires that each processor have access to large tables of data, such as equations of state, and in large simulations the memory required to store these tables locally can become excessive unless an alternative execution model is used. In this work we present relativistic fluid evolutions of a neutron star obtained using a message-driven multi-threaded execution model known as ParalleX. The goal of this work is to reduce the negative performance impact of distributing the tables. We introduce a component based on the notion of a "future", or non-blocking encapsulated delayed computation, for accessing large tables of data, including out-of-core sized tables. The proposed technique does not impose substantial memory overhead and can hide increased network latency.
    Keywords: Astrophysics applications, ParalleX, HPX, Futures
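
    The table-access component described in the abstract is built around futures: a processor asks for a table entry and gets back a handle immediately, overlapping local work with the (possibly remote) fetch. The fragment below is a minimal sketch of that idea only, using standard C++ std::async/std::future rather than the paper's ParalleX/HPX components, and with a hypothetical lookup_eos() stub standing in for the remote equation-of-state lookup.

        #include <future>
        #include <cmath>
        #include <cstdio>

        // Hypothetical stand-in for a remote equation-of-state table lookup.
        // In the paper's model this would be an action on the locality that
        // owns the table block; here it is a local placeholder.
        double lookup_eos(double density, double temperature) {
            return 1.0e14 * std::pow(density, 4.0 / 3.0) + 0.1 * temperature;
        }

        int main() {
            double rho = 2.7e14, T = 10.0;

            // Request the table entry; the call returns a future immediately
            // instead of blocking until the value arrives.
            std::future<double> pressure =
                std::async(std::launch::async, lookup_eos, rho, T);

            // Other local work can overlap with the pending lookup,
            // hiding the latency of the table access.
            double kinetic = 0.5 * rho * 0.01 * 0.01;

            // Block only when the value is actually needed.
            std::printf("p = %g, kinetic = %g\n", pressure.get(), kinetic);
            return 0;
        }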

    paullric/tempestmodel: DCMIP2016 Release

    Tempest atmosphere / Earth-system model

    Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars

    We present a highly scalable demonstration of a portable asynchronous many-task programming model and runtime system applied to a grid-based adaptive mesh refinement hydrodynamic simulation of a double white dwarf merger with 14 levels of refinement, spanning 17 orders of magnitude in astrophysical densities. The code uses the portable C++ parallel programming model that is embodied in the HPX library and is being incorporated into the ISO C++ standard. The model represents a significant shift from existing bulk synchronous parallel programming models under consideration for exascale systems. Through the use of the Futurization technique, seemingly sequential code is transformed into wait-free asynchronous tasks. We demonstrate the potential of our model by showing results from strong scaling runs on National Energy Research Scientific Computing Center's Cori system (658,784 Intel Knights Landing cores) that achieve a parallel efficiency of 96.8% using billions of asynchronous tasks.
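
    The Futurization technique mentioned in the abstract rewrites sequential-looking steps as tasks connected through futures, so the dependency between stages is carried by the future rather than by program order. The fragment below is a hedged sketch of that pattern in standard C++ with std::async; the stage names (compute_flux, apply_source) are illustrative placeholders, and in HPX the dependent stage would typically be attached with a continuation (future::then) so that no thread blocks while waiting.

        #include <future>
        #include <cstdio>

        // Illustrative placeholder stages of a hydrodynamics update.
        double compute_flux(double u)  { return 0.5 * u * u; }
        double apply_source(double f)  { return f + 1.0; }

        int main() {
            double u = 2.0;

            // Sequential form:
            //   double f = compute_flux(u);
            //   double s = apply_source(f);
            //
            // Futurized form: each stage becomes a task, and the result of
            // the first is consumed by the second through its future.
            std::future<double> flux =
                std::async(std::launch::async, compute_flux, u);

            std::future<double> result = std::async(std::launch::async,
                [f = std::move(flux)]() mutable { return apply_source(f.get()); });

            std::printf("result = %g\n", result.get());
            return 0;
        }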