    An adjustable focusing system for a 2 MeV H- ion beam line based on permanent magnet quadrupoles

    A compact adjustable focusing system for a 2 MeV H- RFQ Linac is designed, constructed and tested, based on four permanent magnet quadrupoles (PMQs). A PMQ model is realised using finite element simulations, providing an integrated field gradient of 2.35 T with a maximal field gradient of 57 T/m. A prototype is constructed and its magnetic field is measured, demonstrating good agreement with the simulation. Particle-tracking simulations provide initial values for the quadrupole positions. Accordingly, four PMQs are constructed and assembled on the beam line, and their positions are then tuned to obtain a minimal beam spot size of (1.2 x 2.2) mm^2 on target. This paper describes an adjustable PMQ beam line for an external ion beam. The novel compact design, based on commercially available NdFeB magnets, allows high flexibility for ion beam applications. Comment: published in JINST (4th Feb 2013).
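
    As a rough, illustrative aside (not a calculation from the paper): the thin-lens focal length implied by the quoted integrated gradient follows from the beam rigidity via f = (B rho)/(gL). The sketch below assumes a 2 MeV H- beam and standard textbook formulas; every number other than the 2.35 T integrated gradient is derived here, not quoted from the abstract.

        import math

        # Thin-lens focal-length estimate for one PMQ (illustrative sketch,
        # not the paper's method). Assumes a 2 MeV H- beam.
        m_c2 = 938.272 + 2 * 0.511                 # H- rest energy [MeV]: proton + 2 electrons
        E_k  = 2.0                                 # kinetic energy [MeV]
        p_c  = math.sqrt(E_k**2 + 2 * E_k * m_c2)  # momentum * c [MeV]
        brho = p_c / 299.792458                    # beam rigidity [T*m]

        gL = 2.35                                  # integrated field gradient [T] (from the abstract)
        f  = brho / gL                             # thin-lens focal length [m]
        print(f"rigidity = {brho:.3f} T*m, focal length = {100 * f:.1f} cm")
        # -> roughly 0.205 T*m and ~8.7 cm, consistent with a compact beam line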

    Numerical Simulation of Multicomponent Ion Beam from Ion Sources

    A program library for the numerical simulation of multicomponent charged-particle beams from ion sources is presented. The library is aimed at the simulation of high-current, low-energy multicomponent ion beams from the ion source through the beamline, and is implemented with a Windows user interface for the IBM PC. It is used for the simulation and optimization of beam dynamics and is based on the successive and consistent application of two methods: the method of moments of the distribution function (RMS technique) and the particle-in-cell method. The library has been used to simulate and optimize the transport of a tantalum ion beam from the laser ion source (CERN) and a calcium ion beam from the ECR ion source (JINR, Dubna).
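
    To make the "RMS technique" concrete, the sketch below computes the second-order moments and RMS emittance of a particle ensemble and transports it through a simple drift. The function name, the drift-only transport and the Gaussian test beam are illustrative assumptions, not the library's actual interface.

        import numpy as np

        def rms_moments(x, xp):
            """Second-order moments and RMS emittance of a particle ensemble
            (illustrative sketch of the moments/RMS approach)."""
            dx, dxp = x - x.mean(), xp - xp.mean()
            sig_xx = (dx * dx).mean()
            sig_pp = (dxp * dxp).mean()
            sig_xp = (dx * dxp).mean()
            eps_rms = np.sqrt(sig_xx * sig_pp - sig_xp**2)  # RMS emittance
            return sig_xx, sig_pp, sig_xp, eps_rms

        # Example: one species of a multicomponent beam, drifting 0.5 m.
        rng = np.random.default_rng(0)
        x  = rng.normal(0.0, 1e-3, 10_000)    # positions [m]
        xp = rng.normal(0.0, 1e-3, 10_000)    # divergences [rad]
        print(rms_moments(x + 0.5 * xp, xp))  # moments after the drift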

    COMPUTATIONAL SCIENCE CENTER

    Object-oriented simulation for the Superconducting Super Collider

    Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets

    The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyses voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when a misbehaviour of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system, and are updated and monitored by human operators. This paper proposes a novel approach to the monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art deep learning algorithms. The authors examine the performance of LSTM recurrent neural networks for modelling the voltage time series of the magnets. To address this challenging task, different network architectures and hyper-parameters were explored to achieve the best possible performance. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account in the prediction. The best result, RMSE = 0.00104, was obtained for a network with 128 LSTM cells in its internal layer and a 16-step history buffer.
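
    A minimal sketch of the kind of model the abstract describes, assuming a univariate voltage signal, the 16-step history buffer and the single 128-cell LSTM layer of the best-reported configuration; the windowing helper, optimizer and training settings below are illustrative assumptions, not the authors' setup.

        import numpy as np
        import tensorflow as tf

        HISTORY = 16  # history buffer length, as in the reported best result

        def make_windows(series, history=HISTORY):
            """Slice a 1-D voltage series into (history window -> next value) pairs."""
            X = np.stack([series[i:i + history] for i in range(len(series) - history)])
            return X[..., None], series[history:]  # X: (N, history, 1)

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(HISTORY, 1)),
            tf.keras.layers.LSTM(128),  # 128 LSTM cells in the internal layer
            tf.keras.layers.Dense(1),   # one-step-ahead voltage prediction
        ])
        model.compile(optimizer="adam", loss="mse",
                      metrics=[tf.keras.metrics.RootMeanSquaredError()])

        X, y = make_windows(np.sin(np.linspace(0, 100, 5000)))  # stand-in signal
        model.fit(X, y, epochs=2, batch_size=64, verbose=0)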

    Mission critical database for SPS accelerator measurements

    In order to maintain efficient control over the hadron and lepton beams in CERN's SPS accelerator, measurements are of vital importance. Beam parameters such as intensities, positions and losses need to be rapidly available in the SPS control room to allow the operators to monitor, judge and act on beam physics conditions. For the 1994 SPS startup, a completely redesigned measurement system based on client and server C programs running on UNIX workstations was introduced. The kernel of this new measurement system is an on-line ORACLE database. The NIAM method was used for the database design, as was a technique to tag synchronized data with timeslots instead of timestamps. Great attention was paid to proper storage allocation for tables and indices, since this has a major impact on the efficiency of the database, given its time-critical nature. Many new features of Oracle7 were exploited to reduce the surrounding software. During the 1994 SPS physics run, this new measurement system was commissioned successfully and the infrastructure proved acceptably reliable. Hence, for the 1995 startup, the measurement system was expanded drastically to fulfill a variety of measurement needs. This proliferation of measurements beyond the initial scope confirmed the soundness of the system's design, while also exposing the performance limitations of the actual hardware configuration. This paper describes the overall design and discusses performance issues of this critical system.
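
    The timeslot idea can be illustrated with a toy sketch: each measurement is tagged with the index of a fixed-length acquisition slot rather than its raw timestamp, so data taken during the same slot can be joined on an equality key instead of fuzzy timestamp proximity. The slot length, epoch and helper below are assumptions for illustration, not the system's actual schema.

        from datetime import datetime, timezone

        SLOT_LENGTH_S = 1.2                                # assumed acquisition period
        EPOCH = datetime(1994, 1, 1, tzinfo=timezone.utc)  # arbitrary reference

        def timeslot(ts: datetime) -> int:
            """Map a timestamp to its acquisition-slot index; measurements
            taken during the same slot get the same tag and join exactly."""
            return int((ts - EPOCH).total_seconds() // SLOT_LENGTH_S)

        t1 = datetime(1994, 6, 1, 12, 0, 0, 300_000, tzinfo=timezone.utc)
        t2 = datetime(1994, 6, 1, 12, 0, 0, 900_000, tzinfo=timezone.utc)
        assert timeslot(t1) == timeslot(t2)  # same slot -> same join key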

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To make the best use of ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision.

    New Capabilities of the FLUKA Multi-Purpose Code

    FLUKA is a general-purpose Monte Carlo code able to describe the transport and interaction of any particle and nucleus type in complex geometries over an energy range extending from thermal neutrons to ultrarelativistic hadron collisions. It has many different applications in accelerator design, detector studies, dosimetry, radiation protection, medical physics, and space research. In 2019, CERN and INFN, as FLUKA copyright holders, together decided to end their formal collaboration framework, allowing them henceforth to pursue different pathways aimed at meeting the evolving requirements of the FLUKA user community and at ensuring the long-term sustainability of the code. To this end, CERN set up the FLUKA.CERN Collaboration. This paper illustrates the physics processes that have been newly released, or are currently implemented, in the code distributed by the FLUKA.CERN Collaboration under new licensing conditions that are meant to further facilitate access to the code, as well as intercomparisons.

    The description of coherent effects experienced by high-energy hadron beams in crystal devices, relevant to promising beam manipulation techniques, and charged-particle tracking in vacuum regions subject to an electric field, overcoming a former limitation, have already been made available to users. Other features, namely the different kinds of low-energy deuteron interactions as well as synchrotron radiation emission during charged-particle transport in vacuum regions subject to magnetic fields, are currently undergoing systematic testing and benchmarking prior to release. FLUKA is widely used to evaluate radiobiological effects, with the powerful support of the Flair graphical interface, whose new generation (available at http://flair.cern) now offers additional capabilities, e.g. advanced 3D visualization with photorealistic rendering and support for industry-standard volume visualization of medical phantoms. FLUKA has also been playing an extensive role in the characterization of radiation environments in which electronics operate. In parallel, it has been used to evaluate the response of electronics to a variety of conditions not included in radiation testing guidelines and standards for space and accelerators, and not accessible through conventional ground-level testing. Instructive results have been obtained from Single Event Effects (SEE) simulations and, where possible, benchmarks, for various radiation types and energies.

    The code has reached a high level of maturity, from which the FLUKA.CERN Collaboration is planning a substantial evolution of its present architecture. Moving towards a modern programming language makes it possible to overcome fundamental constraints that have limited development options. Our long-term goal, in addition to improving and extending the code's physics performance under even more rigorous scientific oversight, is to modernize its structure so as to integrate independent contributions more easily and to formalize quality assurance through state-of-the-art software deployment techniques. This includes a continuous integration pipeline to automatically validate the codebase, as well as automatic processing and analysis of a tailored suite of physics test cases. With regard to these objectives, several paths are currently envisaged, such as finding synergies with Geant4 at both the core-structure and interface levels, thereby offering the user the possibility to run different Monte Carlo codes with the same input and to cross-check the results.
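
    As a generic illustration of the Monte Carlo transport principle underlying codes of this kind (and in no way FLUKA's internals), the distance a particle travels before its next interaction in a uniform medium can be sampled from the exponential free-path law, as in the sketch below; the cross-section value and the slab test are made up for illustration.

        import math
        import random

        def sample_free_path(sigma_total: float) -> float:
            """Sample the distance to the next interaction (exponential law).
            sigma_total is the macroscopic cross section [1/cm]; textbook
            Monte Carlo sampling, not FLUKA's implementation."""
            return -math.log(1.0 - random.random()) / sigma_total

        # Fraction of particles crossing a 5 cm purely absorbing slab without
        # interacting; should approach exp(-sigma * 5).
        sigma, n = 0.3, 100_000
        survived = sum(sample_free_path(sigma) > 5.0 for _ in range(n))
        print(survived / n, math.exp(-sigma * 5.0))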