44 research outputs found

    The use of primitives in the calculation of radiative view factors

    Compilations of radiative view factors (often in closed analytical form) are readily available in the open literature for commonly encountered geometries. For more complex three-dimensional (3D) scenarios, however, the effort required to evaluate the multi-dimensional integrations needed to estimate a given view factor can be daunting, to say the least. In such cases, a combination of finite element methods (where the geometry in question is sub-divided into a large number of uniform, often triangular, elements) and Monte Carlo Ray Tracing (MC-RT) has been developed, although the software implementations are frequently suitable only for a limited set of geometrical scenarios. Driven initially by a need to calculate the radiative heat transfer occurring within an operational fibre-drawing furnace, this research set out to examine options whereby MC-RT could be used to cost-effectively calculate any generic 3D radiative view factor using current vectorisation technologies.
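
    The basic MC-RT estimator behind such calculations is simple to state: launch many diffusely (cosine-weighted) distributed rays from one surface and take the view factor as the fraction that strike the other. The sketch below illustrates that estimator for two directly opposed unit rectangles, a configuration whose analytical view factor is roughly 0.2; it is a minimal illustration of the general technique under assumed names and geometry, not the vectorised implementation developed in this work.

    ```python
    import numpy as np

    def view_factor_mc(n_rays=200_000, w=1.0, h=1.0, gap=1.0, seed=0):
        """Estimate the view factor from a w x h rectangle at z = 0 to an
        identical, directly opposed rectangle at z = gap, via Monte Carlo
        ray tracing: F_12 ~ (rays from 1 that hit 2) / (rays emitted)."""
        rng = np.random.default_rng(seed)

        # Uniform emission points on the lower rectangle.
        x0 = rng.uniform(0.0, w, n_rays)
        y0 = rng.uniform(0.0, h, n_rays)

        # Cosine-weighted (Lambertian) emission directions about the +z normal.
        u1, u2 = rng.uniform(size=n_rays), rng.uniform(size=n_rays)
        sin_t, cos_t = np.sqrt(u1), np.sqrt(1.0 - u1)
        phi = 2.0 * np.pi * u2
        dx, dy, dz = sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t

        # Intersect each ray with the plane z = gap and test the hit point
        # against the bounds of the upper rectangle.
        t = gap / dz
        xh, yh = x0 + t * dx, y0 + t * dy
        hits = (xh >= 0.0) & (xh <= w) & (yh >= 0.0) & (yh <= h)
        return hits.mean()

    if __name__ == "__main__":
        # Tabulated value for unit squares at unit spacing is about 0.1998.
        print(f"F_12 ~ {view_factor_mc():.4f}")
    ```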

    Fast algorithm for real-time rings reconstruction

    The GAP project is dedicated to studying the application of GPUs in several contexts in which real-time response is important for decision making. The definition of real-time depends on the application under study, ranging from response times of microseconds up to several hours in the case of very compute-intensive tasks. During this conference we presented our work on low-level triggers [1] [2] and high-level triggers [3] in high-energy physics experiments, and on specific applications for nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm developed for trigger applications to accelerate ring reconstruction in RICH detectors when seeds for the reconstruction are not available from external trackers.
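
    As a point of reference for the per-candidate work such a trigger must perform, the sketch below shows a standard algebraic (Kåsa) least-squares circle fit from photodetector hit coordinates. It is a generic single-ring building block included only as an illustration, with assumed names and numbers, not the seed-less multi-ring GPU algorithm presented in this contribution; its appeal in a trigger context is that it reduces to a small, branch-free linear solve that is easy to batch across candidates.

    ```python
    import numpy as np

    def kasa_circle_fit(x, y):
        """Algebraic least-squares (Kasa) circle fit: solve the linear system
        2*a*x + 2*b*y + c = x^2 + y^2 for the centre (a, b), with
        c = r^2 - a^2 - b^2, then recover the radius r."""
        A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return a, b, np.sqrt(c + a**2 + b**2)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        phi = rng.uniform(0.0, 2.0 * np.pi, 64)
        # Synthetic hits on a ring of radius 11 centred at (3, -2), with noise.
        x = 3.0 + 11.0 * np.cos(phi) + rng.normal(0.0, 0.3, phi.size)
        y = -2.0 + 11.0 * np.sin(phi) + rng.normal(0.0, 0.3, phi.size)
        print(kasa_circle_fit(x, y))   # roughly (3.0, -2.0, 11.0)
    ```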

    Advanced Simulation and Computing FY12-13 Implementation Plan, Volume 2, Revision 0.5

    Software for Exascale Computing - SPPEXA 2016-2019

    This open access book summarizes the research done and results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer’s series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA’s first funding phase, and provides an overview of SPPEXA’s contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    Enhancing Monte Carlo Particle Transport for Modern Many-Core Architectures

    Since nearly the beginning of electronic computing, Monte Carlo particle transport has been a fundamental approach for solving computational physics problems. Due to the high computational demands and inherently parallel nature of these applications, Monte Carlo transport is often performed in the supercomputing environment. Supercomputers are changing, however: parallelism within each node has increased dramatically, including the regular inclusion of many-core devices. Monte Carlo transport, like all applications that run on supercomputers, will be forced to make significant changes to its design in order to utilize these new architectures effectively. This dissertation presents solutions for central challenges that face Monte Carlo particle transport in this changing environment, specifically in the areas of threading models, tracking algorithms, tally data collection, and heterogeneous load balancing. In addition, the dissertation culminates with a study that combines all of the presented techniques in a production application at scale on Lawrence Livermore National Laboratory's RZAnsel supercomputer.
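
    A recurring design choice in this area is between history-based tracking, where each thread follows one particle from birth to termination, and event-based tracking, where each step is applied to an entire bank of in-flight particles at once and therefore maps more naturally onto wide vector units and many-core devices. The sketch below illustrates the event-based pattern on a toy mono-energetic 1D slab problem; it is a hedged illustration of the general idea under assumed parameters, not code or data from the dissertation.

    ```python
    import numpy as np

    def transmission_event_based(n=100_000, sigma_t=1.0, absorb_prob=0.3,
                                 slab_width=5.0, seed=0):
        """Toy event-based Monte Carlo transport through a 1D slab: every
        step advances the whole bank of alive particles at once instead of
        following one history at a time."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n)                      # particle positions
        mu = np.ones(n)                      # flight directions (+1 or -1)
        alive = np.ones(n, dtype=bool)
        transmitted = 0

        while alive.any():
            idx = np.flatnonzero(alive)

            # Sample the distance to the next collision for all alive particles.
            d = -np.log(1.0 - rng.uniform(size=idx.size)) / sigma_t
            x[idx] += mu[idx] * d

            # Retire particles that leaked out of the slab; tally transmissions.
            out = (x[idx] < 0.0) | (x[idx] > slab_width)
            transmitted += np.count_nonzero(x[idx] > slab_width)
            alive[idx[out]] = False

            # Collide the rest: absorb a fraction, isotropically scatter the others.
            coll = idx[~out]
            absorbed = rng.uniform(size=coll.size) < absorb_prob
            alive[coll[absorbed]] = False
            scattered = coll[~absorbed]
            mu[scattered] = rng.choice([-1.0, 1.0], size=scattered.size)

        return transmitted / n               # leakage fraction through the far face

    if __name__ == "__main__":
        print(f"transmission ~ {transmission_event_based():.4f}")
    ```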

    Large Scale Computing and Storage Requirements for High Energy Physics

    Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    Remnants of compact binary mergers and next-generation numerical relativity codes

    Numerical relativity (NR) simulations are crucial for studying the coalescence of compact binaries. Based on NR data, we produce a model for the mass and spin of the remnant black hole (BH) for the coalescence of black hole-neutron star systems, discussing its crucial role in gravitational wave (GW) modeling and in the parameter estimation of the two signals GW200105 and GW200115. In the context of binary neutron star merger simulations, we perform the first systematic study comparing results obtained with various neutrino treatments, the presence of turbulent viscosity, and different grid resolutions. We find that the time of BH formation after merger is heavily affected by grid resolution and turbulent viscosity. An early BH formation limits matter ejection from the accretion disc, as the BH swallows a significant portion of it. Our results indicate that more reliable kilonova light curves are obtained only if the various ejecta components are present. Moreover, robust r-process nucleosynthesis yields require the inclusion of both neutrino emission and reabsorption in simulations. Advanced neutrino schemes and turbulent viscosity in simulations resolved beyond current standards appear necessary for reliable astrophysical predictions. To carry out computationally demanding simulations of growing complexity, next-generation NR codes that can efficiently leverage the latest pre-exascale many-core and heterogeneous infrastructures are required. To this end we develop GR-Athena++, a new dynamical spacetime solver built on top of Athena++, which shows high-order convergence properties and excellent parallel scalability up to O(10^5) cores in full 3D binary black hole (BBH) merger simulations. Finally, we present GR-AthenaK, the first performance-portable spacetime solver, obtained by refactoring GR-Athena++ with the Kokkos programming model. We demonstrate the correctness and convergence properties of GR-AthenaK with BBH runs on GPUs. GR-AthenaK shows a speedup of ∼50 on one GPU compared to GR-Athena++ on a single CPU core.