24 research outputs found

    Grid Computing for Fusion Research

    Get PDF

    Modelling of Fast Ion Losses in Tokamaks

    No full text

    Orbit-following simulation of fast ions in the ASDEX Upgrade tokamak in the presence of magnetic ripple and radial electric field

    Get PDF
    Magnetic confinement of plasma inside a tokamak is presently the most promising form of controlled fusion. A key issue for future fusion devices such as ITER is the interaction between the hot plasma and the cold material surfaces. Density control and the exhaust of impurities must be effected in a way that does not cause excessive heat and particle loads. Edge localized modes (ELMs), intermittent bursts of energy and particles, characterize the standard high-confinement (H) mode. In the recently discovered quiescent H-mode (QH-mode), they are replaced by so-called edge harmonic oscillations of a more continuous nature. The QH-mode is obtained only with counter-injected neutral beams, indicating that fast ions may affect the edge stability properties and thus ELMs. In this thesis, the neutral-beam-originated fast ions in the ASDEX Upgrade tokamak are modelled using the orbit-following Monte Carlo code ASCOT. The modelling results include the edge fast ion slowing-down distribution and the surface loads caused by fast ion losses for co- and counter-injected neutral beams, corresponding to H-mode and QH-mode, respectively. The effects of magnetic field ripple, arising from the finite number of toroidal field coils, and of the radial electric field Er are included in the analysis. In addition to neutral beam ions, the relation between the surface distribution of tritium and the flux of tritons created in deuterium-deuterium fusion reactions is also addressed. Due to the difference in the direction of the gradient drift, counter-injected beams are prone to higher losses than co-injected beams. This leads to substantial wall loads, but also to a higher edge fast ion density. The distribution of the fast ions in velocity space also differs. The ripple-induced stochastic diffusion increases the losses, thereby increasing the wall load and reducing the density. The orbit width effects, squeezing for counter-injected and widening for co-injected beams, and the orbit transitions caused by Er further increase the losses and the wall load. Nevertheless, they also lead to a higher edge fast ion density and changes in the velocity distribution. The obtained 4D distribution functions could be used to gain insight into the roots of the QH-mode by analyzing the stability properties of the edge for the two injection directions.
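    To make the slowing-down distribution mentioned above concrete, here is a minimal Monte Carlo sketch in Python. It is not ASCOT (no orbits, no ripple, no Er): the injection energy, slowing-down time and critical energy are assumed illustrative values, and the residence time of markers per energy bin is used to approximate the steady-state distribution.

        # Minimal Monte Carlo sketch of a beam-ion slowing-down distribution.
        # Illustrative only: constant, assumed tau_s and E_c; ASCOT follows full
        # orbits with ripple and Er, which is far beyond this toy model.
        import numpy as np

        E_BEAM = 60.0   # injection energy [keV] (assumed, typical NBI scale)
        E_TH   = 3.0    # energy at which ions are counted as thermalized [keV]
        TAU_S  = 0.05   # Spitzer slowing-down time [s] (assumed)
        E_CRIT = 20.0   # critical energy where ion drag takes over [keV] (assumed)

        def slowing_down_rate(E):
            """dE/dt from electron plus ion drag, classic two-term form."""
            return -2.0 * E / TAU_S * (1.0 + (E_CRIT / E) ** 1.5)

        def slowing_down_distribution(n_markers=200, dt=1e-5, bins=60, seed=1):
            """Histogram of marker residence time per energy bin ~ steady-state f(E)."""
            rng = np.random.default_rng(seed)
            edges = np.linspace(E_TH, E_BEAM + 3.0, bins + 1)
            weight = np.zeros(bins)
            for _ in range(n_markers):
                E = rng.normal(E_BEAM, 1.0)          # small spread around injection energy
                while E > E_TH:
                    i = min(np.searchsorted(edges, E) - 1, bins - 1)
                    weight[i] += dt                  # residence time in this bin
                    E += slowing_down_rate(E) * dt   # Euler step of dE/dt
            centers = 0.5 * (edges[:-1] + edges[1:])
            return centers, weight / weight.sum()

        if __name__ == "__main__":
            E, f = slowing_down_distribution()
            print("peak of f(E) near E =", E[np.argmax(f)], "keV")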

    Heiz- und Stromprofile bei Neutralteilcheninjektion in Tokamakplasmen (Heating and current profiles with neutral beam injection in tokamak plasmas)

    No full text

    Advances in Grid Computing

    Get PDF
    This book approaches grid computing from the perspective of the latest achievements in the field, providing insight into current research trends and advances and presenting a broad range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence or genetic algorithms, together with quantum encryption, are considered in order to address two main aspects of grid computing: resource management and data management. The book also addresses aspects of grid computing concerning architecture and development, and includes a diverse range of applications, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning and complex water systems.

    Montera: A Framework for Efficient Execution of Monte Carlo Codes on Grid Infrastructures

    Get PDF
    The objective of this work is to improve the performance of Monte Carlo codes on grid production infrastructures. To do so, the codes and the grid sites are characterized with simple parameters that model their behavior. A new performance model for grid infrastructures is then proposed, and an algorithm that employs this information is described. This algorithm dynamically calculates the number and size of the tasks to execute on each site in order to maximize performance and reduce makespan. Finally, a newly developed framework called Montera is presented. Montera handles the execution of Monte Carlo codes in an unattended way, isolating the complexity of the problem from the final user. By employing two fusion Monte Carlo codes as example cases, along with the described characterizations and scheduling algorithm, a performance improvement of up to 650% over the current best results is obtained on a real production infrastructure, together with enhanced stability and robustness.
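    A minimal sketch of the kind of throughput-proportional task splitting described above (not Montera's actual algorithm; the site throughput figures, slot counts and minimum chunk size are assumptions for the example):

        # Illustrative sketch of throughput-proportional task splitting for a
        # Monte Carlo run on several grid sites. This is NOT Montera's actual
        # scheduling algorithm; the site figures and minimum chunk size are
        # assumptions made for the example.
        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            samples_per_hour: float   # estimated Monte Carlo throughput of the site
            max_tasks: int            # job slots the site is expected to grant

        def split_samples(total_samples, sites, min_chunk=10000):
            """Assign samples to sites proportionally to their estimated throughput,
            then cut each share into equally sized tasks."""
            total_rate = sum(s.samples_per_hour for s in sites)
            plan = []
            for s in sites:
                share = round(total_samples * s.samples_per_hour / total_rate)
                n_tasks = max(1, min(s.max_tasks, share // min_chunk))
                chunk = max(1, share // n_tasks)
                plan.append((s.name, n_tasks, chunk))
            return plan

        if __name__ == "__main__":
            sites = [Site("site-a", 2.0e6, 50), Site("site-b", 5.0e5, 20)]
            for name, n_tasks, chunk in split_samples(10_000_000, sites):
                print(f"{name}: {n_tasks} tasks x {chunk} samples")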

    DTT NBI fast particle modelling with Monte Carlo ASCOT code

    Get PDF
    The present thesis deals with the analysis and modelling of the behavior of Energetic Particles (EPs) injected by a neutral beam into a tokamak plasma. Neutral Beam Injection (NBI) makes it possible to achieve the high temperatures needed for fusion reactions in plasmas, but also to drive current and provide torque. In order to study EPs, the orbit-following ASCOT Monte Carlo code is used. Good confinement of EPs is essential both for plasma performance and to avoid potentially harmful EP losses from the confined plasma to the machine first wall. For this reason, EP modelling is used to predict their interaction with the plasma and, eventually, to set limitations on NBI use depending on the plasma parameters. In particular, fast ion losses can be caused by particle orbits that cross the plasma boundary (orbit losses) or by injected neutral particles that are not ionized in the plasma (shine-through losses). After a brief introduction presenting the foreseen advantages of employing fusion energy and the concepts of plasma physics relevant to this thesis, the theory of fast ion confinement and orbits is reviewed. The case of the Divertor Tokamak Test, an experimental device under construction in Frascati (Italy), is then analyzed, with a classification of possible EP orbits through a topological map in the phase space defined by the EP constants of motion and the adiabatic invariant of the system. EP orbit topologies are shown for different DTT plasmas and for different EP injection energies, already giving a grasp of the expected EP confinement and losses in the approximation of collisionless orbits. ASCOT modelling is then used to verify the different situations analyzed and to understand the role of EP collisions. The ASCOT numerical results also give estimates of EP loss channels other than orbit losses, such as shine-through losses, showing their dependence on the plasma density as foreseen by theory. The theoretical study of EP orbits and the numerical modelling of the NBI-plasma interaction contribute to the understanding of the predicted EP confinement and losses for the forthcoming DTT device, in order to allow the most effective application of NBI in future DTT operations.
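    To illustrate the density dependence of shine-through losses mentioned above, a minimal sketch assuming a flat density profile and a constant, illustrative beam-stopping cross-section (not DTT parameters and not ASCOT's beam-stopping model):

        # Illustrative sketch of why shine-through falls with plasma density:
        # the un-ionized beam fraction decays as exp(-integral of n_e * sigma_stop dl)
        # along the injection path. The flat density profile and constant stopping
        # cross-section used here are assumptions for the example, not DTT values.
        import numpy as np

        SIGMA_STOP = 2.0e-20   # effective beam-stopping cross-section [m^2] (assumed)
        PATH_LEN   = 2.0       # beam path length through the plasma [m] (assumed)

        def shine_through_fraction(n_e, n_steps=200):
            """Fraction of injected neutrals that crosses the plasma un-ionized."""
            dl = PATH_LEN / n_steps
            # flat density profile: the line integral reduces to n_e * sigma * L
            optical_depth = np.sum(np.full(n_steps, n_e) * SIGMA_STOP * dl)
            return np.exp(-optical_depth)

        if __name__ == "__main__":
            for n_e in (1e19, 5e19, 1e20):   # electron densities [m^-3]
                print(f"n_e = {n_e:.0e} m^-3 -> shine-through = "
                      f"{shine_through_fraction(n_e):.1%}")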

    Efficient multilevel scheduling in grids and clouds with dynamic provisioning

    Get PDF
    Thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 12-01-2016.
    The consolidation of large Distributed Computing infrastructures has resulted in a High-Throughput Computing platform that is ready for high loads, whose best exponents are the current grid federations. On the other hand, Cloud Computing promises to be more flexible, usable, available and simple than Grid Computing, covering also many more computational needs than those required to carry out distributed calculations. In any case, because of the dynamism and heterogeneity present in grids and clouds, calculating the best match between computational tasks and resources in an effectively characterised infrastructure is, by definition, an NP-complete problem, and only sub-optimal solutions (schedules) can be found for these environments. Nevertheless, the characterisation of the resources of both kinds of infrastructures is far from being achieved. The available information systems do not provide accurate data about the status of the resources that would allow the advanced scheduling required by the different needs of distributed applications. The issue was not solved during the last decade for grids, and the recently established cloud infrastructures have the same problem.
    In this framework, brokers can only improve the throughput of very long calculations, but do not provide estimations of their duration. Complex scheduling has traditionally been tackled by other tools such as workflow managers, self-schedulers and the production management systems of certain research communities. Nevertheless, the low performance achieved by these early-binding methods is noticeable. Moreover, the diversity of cloud providers and, mainly, their lack of standardised programming interfaces and brokering tools to distribute the workload hinder the massive portability of legacy applications to cloud environments...
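    Since the optimal task-to-resource match is NP-complete, such environments rely on heuristics that yield sub-optimal schedules; a minimal greedy list-scheduling sketch is shown below (the task runtimes and resource speeds are assumptions for the example, and the heuristic is not the one proposed in the thesis):

        # Minimal sketch of a greedy list-scheduling heuristic: the optimal
        # task-to-resource assignment is NP-complete, so each task is simply placed
        # on the resource with the earliest estimated finish time. The task runtimes
        # and resource speeds below are assumptions for the example.
        import heapq

        def greedy_schedule(task_runtimes, resource_speeds):
            """Return per-resource task assignments and the resulting makespan."""
            # heap of (time when the resource becomes free, resource index)
            free_at = [(0.0, r) for r in range(len(resource_speeds))]
            heapq.heapify(free_at)
            assignment = {r: [] for r in range(len(resource_speeds))}
            # longest tasks first usually tightens the makespan (LPT rule)
            for t, runtime in sorted(enumerate(task_runtimes), key=lambda x: -x[1]):
                ready, r = heapq.heappop(free_at)
                finish = ready + runtime / resource_speeds[r]
                assignment[r].append(t)
                heapq.heappush(free_at, (finish, r))
            makespan = max(time for time, _ in free_at)
            return assignment, makespan

        if __name__ == "__main__":
            tasks = [30, 10, 25, 40, 5, 15]   # nominal runtimes [min]
            speeds = [1.0, 0.5]               # relative resource speeds
            plan, makespan = greedy_schedule(tasks, speeds)
            print(plan, "makespan =", round(makespan, 1), "min")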