    Roadmap on Electronic Structure Codes in the Exascale Era

    Electronic structure calculations have been instrumental in providing many important insights into a range of physical and chemical properties of various molecular and solid-state systems. Their importance to various fields, including materials science, chemical sciences, computational chemistry and device physics, is underscored by the large fraction of available public supercomputing resources devoted to these calculations. As we enter the exascale era, exciting new opportunities to increase simulation numbers, sizes, and accuracies present themselves. To realize this promise, however, the community of electronic structure software developers will first have to tackle a number of challenges pertaining to the efficient use of new architectures that rely heavily on massive parallelism and hardware accelerators. This roadmap provides a broad overview of the state of the art in electronic structure calculations and of the various new directions being pursued by the community. It covers 14 electronic structure codes, presenting their current status, their development priorities over the next five years, and their plans for tackling the challenges and leveraging the opportunities presented by the advent of exascale computing.

    Space Decomposition Based Parallelisation Solutions for the Combined Finite-Discrete Element Method in 2D.

    PhD thesis. The Combined Finite-Discrete Element Method (FDEM), originally invented by Munjiza, has become a tool of choice for problems of discontinua, where particles are deformable and can fracture or fragment. The downside of FDEM is that it is CPU intensive; as a consequence, it is difficult to analyse large-scale problems on sequential CPU hardware, and parallelisation becomes necessary. In this work a novel approach to the parallelisation of FDEM in 2D, aimed at clusters and desktop computers, is developed. Dynamic domain-decomposition-based parallelisation solvers covering all aspects of FDEM have been developed. These have been implemented into the open source Y2D software package using the Message Passing Interface (MPI) and have been tested on a PC cluster. The overall performance and scalability of the parallel code have been studied using numerical examples. The state of the art, the proposed solvers and the test results are described in the thesis in detail.
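    The core idea of space decomposition can be sketched in miniature: split the 2D domain into a grid of subdomains and assign each element (here, a point) to the subdomain that owns it, so each subdomain can be handed to one process. This is an illustrative sketch only, not the Y2D implementation; the function name and the static uniform split are assumptions (the thesis uses dynamic decomposition with MPI communication, omitted here).

```python
# Illustrative sketch of static space decomposition in 2D.
# Not the Y2D/FDEM code: the real solvers are dynamic and use MPI.

def decompose(points, nx, ny, xmin, xmax, ymin, ymax):
    """Assign each (x, y) point to a cell of an nx-by-ny subdomain grid."""
    dx = (xmax - xmin) / nx
    dy = (ymax - ymin) / ny
    domains = {}
    for i, (x, y) in enumerate(points):
        cx = min(int((x - xmin) / dx), nx - 1)  # clamp the right edge
        cy = min(int((y - ymin) / dy), ny - 1)  # clamp the top edge
        domains.setdefault(cy * nx + cx, []).append(i)
    return domains

# Each subdomain's element list would then be owned by one MPI rank;
# elements near subdomain boundaries require halo exchange (not shown).
parts = decompose([(0.1, 0.1), (0.9, 0.9), (0.6, 0.2)],
                  2, 2, 0.0, 1.0, 0.0, 1.0)
```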

    SPH modeling of water-related natural hazards

    This paper collects recent smoothed particle hydrodynamics (SPH) applications in the field of natural hazards connected to rapidly varied flows of both water and dense granular mixtures, including sediment erosion and bed-load transport. It gathers together and outlines the basic aspects of relevant works dealing with flooding on complex topography, sediment scouring, fast landslide dynamics, and induced surge waves. Additionally, preliminary results of a new study on the post-failure dynamics of rainfall-induced shallow landslides are presented. The paper also shows the latest advances in the use of high-performance computing (HPC) techniques to accelerate computational fluid dynamics (CFD) codes through the efficient use of current computational resources. This aspect is extremely important when simulating complex three-dimensional problems, which carry a high computational cost and are typical of models of water-related natural hazards of practical interest. An overview is given of some widespread SPH free open-source software (FOSS) codes applied to multiphase problems of theoretical and practical interest in hydraulic engineering. The aim is to provide insight into the SPH modeling of some relevant physical aspects involved in water-related natural hazards (e.g., sediment erosion and non-Newtonian rheology). Future perspectives of SPH in this application field are finally pointed out.
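    As a pointer to what SPH evaluation looks like in code, the sketch below computes the standard SPH density summation, rho_i = sum_j m_j W(r_ij, h), with a 1D cubic spline kernel. It is a generic, hedged illustration under the stated assumptions (1D, all neighbours summed naively), not code from any of the FOSS packages the paper surveys.

```python
import math

def cubic_spline_1d(r, h):
    """1D cubic spline kernel W(r, h); normalization factor 2/(3h)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0  # compact support: W vanishes beyond 2h

def sph_density(positions, masses, h):
    """Density summation: rho_i = sum_j m_j * W(x_i - x_j, h)."""
    return [sum(m * cubic_spline_1d(xi - xj, h)
                for xj, m in zip(positions, masses))
            for xi in positions]
```

    Production SPH codes replace the all-pairs sum with neighbour lists or cell-linked lists, which is where the HPC acceleration discussed above pays off.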

    Fast Monte Carlo Simulations for Quality Assurance in Radiation Therapy

    Monte Carlo (MC) simulation is generally considered the most accurate method for dose calculation in radiation therapy. However, it suffers from low simulation efficiency (hours to days) and complex configuration, which impede its application in clinical studies. The recent rise of MRI-guided radiation platforms (e.g. ViewRay’s MRIdian system) brings an urgent need for fast MC algorithms, because the strong magnetic field they introduce can cause large errors in other algorithms. My dissertation focuses on resolving the conflict between the accuracy and efficiency of MC simulations through four approaches: (1) GPU parallel computation; (2) transport-mechanism simplification; (3) variance reduction; (4) DVH constraints. Accordingly, we took several steps to thoroughly study the performance and accuracy impact of these methods. As a result, three Monte Carlo simulation packages, named gPENELOPE, gDPMvr and gDVH, were developed to balance performance and accuracy in different application scenarios. For example, the most accurate, gPENELOPE, is usually used as a gold standard for radiation meter modeling, while the fastest, gDVH, is usually used for quick in-patient dose calculation, reducing the calculation time from 5 hours to 1.2 minutes (250 times faster) with only 1% error introduced. In addition, a cross-platform GUI integrating the simulation kernels and 3D visualization was developed to make the toolkit more user-friendly. After the fast MC infrastructure was established, we successfully applied it to four radiotherapy scenarios: (1) validating the vendor-provided Co-60 radiation head model by comparing the dose calculated by gPENELOPE to experimental data; (2) quantitatively studying the effect of the magnetic field on the dose distribution and proposing a strategy to improve treatment-planning efficiency; (3) evaluating the accuracy of the built-in MC algorithm of MRIdian’s treatment planning system; (4) performing quick quality assurance (QA) for “online adaptive radiation therapy”, which does not permit enough time for experimental QA. Many other time-sensitive applications (e.g. motional dose accumulation) will also benefit greatly from our fast MC infrastructure.
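    To make the accuracy/efficiency trade-off concrete, the toy sketch below scores a depth histogram of photon first-interaction sites by sampling free paths from the exponential attenuation law, the basic sampling step underlying MC dose engines. It is a deliberately simplified illustration (primary interactions only, no scatter, no secondaries, no magnetic field), not the gPENELOPE/gDPMvr/gDVH code; the function name and parameters are hypothetical.

```python
import math
import random

def first_interaction_histogram(n_photons, mu, depth, nbins, seed=1):
    """Score the depth of each photon's first interaction in a slab.

    mu is the linear attenuation coefficient; free paths are sampled
    from the exponential law s = -ln(u)/mu. Toy model: no scatter,
    no secondary particles, no field effects.
    """
    rng = random.Random(seed)
    hist = [0.0] * nbins
    width = depth / nbins
    for _ in range(n_photons):
        u = 1.0 - rng.random()        # u in (0, 1], avoids log(0)
        s = -math.log(u) / mu         # sampled free path
        if s < depth:
            hist[int(s / width)] += 1.0 / n_photons
    return hist

hist = first_interaction_histogram(20000, 5.0, 1.0, 10)
```

    Statistical noise in such a score falls only as 1/sqrt(N), which is why the GPU parallelism and variance-reduction approaches listed in the abstract matter: they reach the same statistical quality with less wall-clock time or fewer histories.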