2 research outputs found

    Plasma Physics Computations on Emerging Hardware Architectures

    This thesis explores the potential of emerging hardware architectures to increase the impact of high performance computing in fusion plasma physics research. For next-generation tokamaks like ITER, realistic simulations and data-processing tasks will demand significantly more computational resources than current facilities do. It is therefore essential to investigate how emerging hardware such as the graphics processing unit (GPU) and the field-programmable gate array (FPGA) can provide the required computing power for large data-processing tasks and large-scale simulations in plasma-physics-specific computations. The use of emerging technology is investigated in three areas relevant to nuclear fusion: (i) a GPU is used to process the large amount of raw data produced by the synthetic aperture microwave imaging (SAMI) plasma diagnostic; (ii) a GPU is used to accelerate the solution of the Bateman equations, which model the evolution of nuclide number densities under neutron irradiation in tokamaks; and (iii) an FPGA-based dataflow engine is applied to compute massive matrix multiplications, a feature of many computational problems in fusion and, more generally, in scientific computing. The GPU data-processing code for SAMI provides a 60x acceleration over the previous IDL-based code, enabling inter-shot analysis in future campaigns and the data-mining (and therefore analysis) of stored raw data from previous MAST campaigns. The feasibility of porting the whole Bateman solver to a GPU system is demonstrated and verified against the industry-standard FISPACT code. Finally, a dataflow approach to matrix multiplication is shown to provide a substantial acceleration compared with CPU-based approaches and, whilst not performing as well as a GPU for this particular problem, is shown to be much more energy efficient.
Emerging hardware technologies will no doubt continue to make a positive contribution to performance in many areas of fusion research, and several exciting new developments are on the horizon with tighter integration of GPUs and FPGAs with their host central processing units. This should not only improve performance and reduce data-transfer bottlenecks, but also allow more user-friendly programming tools to be developed. All of this has implications for ITER and beyond, where emerging hardware technologies will no doubt provide the key to delivering the computing power required to handle the large amounts of data and more realistic simulations demanded by these complex systems.
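The Bateman equations referred to above form a linear system of ODEs, dN/dt = A N, over the nuclide number densities. As a minimal illustration of what such a solver computes (this is a plain-Python sketch of the underlying mathematics, not the thesis's GPU implementation, and the decay constants used are arbitrary example values), a two-nuclide decay chain 1 -> 2 can be solved analytically and cross-checked against simple forward-Euler time-stepping:

```python
import math

def bateman_two_chain(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for a two-nuclide decay chain 1 -> 2.

    Nuclide 1 decays with constant lam1 into nuclide 2, which decays
    with constant lam2 (lam1 != lam2 assumed).
    """
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

def euler_two_chain(n1_0, lam1, lam2, t, steps=100_000):
    """Forward-Euler integration of dN/dt = A N for the same chain."""
    dt = t / steps
    n1, n2 = n1_0, 0.0
    for _ in range(steps):
        dn1 = -lam1 * n1                 # losses from nuclide 1
        dn2 = lam1 * n1 - lam2 * n2      # gains from 1, losses from 2
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2
```

A production inventory code such as FISPACT handles thousands of coupled nuclides with decay constants spanning many orders of magnitude, which makes the system stiff and the per-step linear algebra the natural target for GPU acceleration.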

    The way towards thermonuclear fusion simulators

    Abstract: In parallel to the ITER project itself, many initiatives address complementary technological issues relevant to a fusion reactor, as well as many remaining scientific issues. One of the next decade's scientific challenges consists of merging the scientific knowledge accumulated during the past 40 years into a reliable set of validated simulation tools, accessible and useful for ITER prediction and interpretation activity, as well as for the conceptual design of future reactors. Obviously such simulators involve a high degree of “integration” in several respects: integration of multi-space, multi-scale (time and space) physics, integration of physics and technology models, inter-discipline integration, etc. This very distinctive feature, in the framework of a rather long-term and worldwide activity, strongly constrains the choices to be made at all levels of development. A European task force on integrated tokamak modelling has been activated with the long-term aim of providing the EU with a set of codes necessary for preparing and analysing future ITER discharges, with the highest degree of flexibility and reliability. In parallel with the development of simulation tools and the software environment, the long-term evolution of hardware needs is also discussed at several levels (EU, EU–Japan broader approach, high performance computing, grid technology, data access, etc.), and progress in this domain is reported. Finally, the ITM task force is also working towards worldwide compatibility through regular collaboration with the similar integrated modelling structures which already exist or are being put in place by the other ITER partners.