
    Data-driven cross-talk modeling of beam losses in LHC collimators

    The Large Hadron Collider at CERN is equipped with a collimation system to intercept potentially dangerous beam halo particles before they damage its sensitive equipment. The collimator settings are determined following a beam-based alignment procedure, in which the collimator jaws are moved towards the beam until losses appear in the beam loss monitors. When the collimator reaches the beam envelope, beam losses propagate mainly in the direction of the beam and are therefore also observed by other nearby beam loss monitors. This phenomenon is known as cross-talk. Because of this, collimators are aligned sequentially, so that the losses generated by each collimator can be identified and any cross-talk across closely positioned beam loss monitors is avoided. This paper seeks to quantify the levels of cross-talk observed by beam loss monitors when multiple collimators are moving, in order to determine the actual beam loss signals generated by their corresponding collimators. The resulting model successfully predicted the amount of cross-talk observed in each of the cases tested in this study. It was then extended to predict loss-map case studies and the proton impacts at each collimator, which were compared to simulations.
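    The cross-talk quantification described above can be pictured as a signal-unmixing problem. Below is a minimal sketch, assuming the observed BLM signals are approximately linear combinations of the losses produced by each moving collimator; the function names and the least-squares formulation are illustrative assumptions, not the paper's actual model.

    ```python
    # Hypothetical sketch of a data-driven cross-talk model: estimate a
    # BLM-by-collimator mixing matrix from sequential-alignment data, then
    # invert it to attribute parallel-alignment losses to each collimator.
    import numpy as np
    from scipy.optimize import nnls

    def fit_crosstalk_matrix(per_collimator_losses, blm_signals):
        """per_collimator_losses: (n_samples, n_collimators) losses recorded
        while only one collimator moved at a time; blm_signals:
        (n_samples, n_blms) corresponding BLM readings."""
        coeffs, *_ = np.linalg.lstsq(per_collimator_losses, blm_signals,
                                     rcond=None)
        return coeffs.T  # (n_blms, n_collimators) mixing matrix

    def unmix_losses(mixing, blm_snapshot):
        """Recover the per-collimator losses behind one BLM snapshot;
        non-negative least squares keeps the attributed losses physical."""
        losses, _ = nnls(mixing, blm_snapshot)
        return losses
    ```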

    Operational results on the fully automatic LHC collimator alignment

    The Large Hadron Collider has a complex collimation system installed to protect its sensitive equipment from normal and abnormal beam losses. The collimators are set around the beam following a multistage transverse setting hierarchy. The insertion position of each collimator is established using beam-based alignment techniques to determine the local beam position and rms beam size at each collimator. During previous years, collimator alignments were performed semiautomatically, with collimation experts present to oversee and control the alignment. During Run II, a new fully automatic alignment tool was developed and used for collimator alignments throughout 2018. This paper discusses the improvements made to the alignment software to automate it using machine learning, focusing on the operational results obtained when testing the new software in the LHC. The alignment tests were conducted with both proton and ion beams, and angular alignments were performed with proton beams. This upgraded software successfully decreased the alignment time by a factor of 3 and made the results more reproducible, which is particularly important when performing angular alignments.
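    As a rough illustration of the alignment step this software automates, the loop below moves a jaw towards the beam until the downstream loss signal shows an alignment spike. All interfaces (move_jaw, read_blm, is_alignment_spike) are hypothetical placeholders, not the actual LHC control-system API.

    ```python
    # Schematic of one beam-based alignment step; values and interfaces are
    # illustrative assumptions.
    STEP_UM = 5.0  # jaw step size in micrometres (illustrative)

    def align_jaw(collimator, move_jaw, read_blm, is_alignment_spike):
        """Step one collimator jaw towards the beam until the downstream BLM
        shows the characteristic loss spike, i.e. the jaw touched the halo."""
        while True:
            move_jaw(collimator, STEP_UM)        # move towards the beam
            losses = read_blm(collimator)        # recent loss-signal window
            if is_alignment_spike(losses):       # ML classifier in practice
                return collimator.jaw_position   # jaw sits at the beam envelope
    ```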

    Software architecture for automatic LHC collimator alignment using machine learning

    The Large Hadron Collider at CERN relies on a collimation system to absorb unavoidable beam losses before they reach the superconducting magnets. The collimators are positioned close to the beam in a transverse setting hierarchy achieved by aligning each collimator with a precision of a few tens of micrometres. In previous years, collimator alignments were performed semi-automatically, requiring collimation experts to be present to oversee and control the entire process. In 2018, expert control of the alignment procedure was replaced by dedicated machine learning algorithms, and this new software was used for collimator alignments throughout the year. This paper gives an overview of the software re-design required to achieve fully automatic collimator alignments, describing in detail the software architecture and controls systems involved. Following this successful deployment, this software will be used in the future as the default alignment software for the LHC.

    An off-momentum beam loss feedback controller and graphical user interface for the LHC

    During LHC operation, a campaign to validate the configuration of the LHC collimation system is conducted every few months. This is performed by means of loss maps, where specific beam losses are deliberately generated and the resulting loss patterns compared to expectations. The LHC collimators have to protect the machine from both betatron and off-momentum losses. In order to validate the off-momentum protection, beam losses are generated by shifting the RF frequency using a low-intensity beam. This is a delicate process that, in the past, often led to the beam being dumped due to excessive losses. To avoid this, a feedback system based on the 100 Hz data stream from the LHC Beam Loss system has been implemented. When given a target RF frequency, the feedback system approaches this frequency in steps while monitoring the losses until the selected loss pattern conditions are reached, thus avoiding the excessive losses that lead to a beam dump. This paper describes the LHC off-momentum beam loss feedback system and the results achieved.
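    The control logic described above lends itself to a compact sketch: step the RF frequency towards the target, and hold each step until the monitored losses decay. The sketch below is an illustration under stated assumptions; set_rf_shift, read_losses and the numerical values are hypothetical, not the operational implementation.

    ```python
    # Illustrative stepwise RF-frequency feedback loop (hypothetical API).
    import time

    def off_momentum_feedback(target_shift_hz, set_rf_shift, read_losses,
                              step_hz=10.0, loss_limit=0.8):
        """Approach the target RF frequency shift in steps, holding each step
        until the losses fall below loss_limit (a fraction of the beam-dump
        threshold), so the beam is never dumped by excessive losses."""
        shift = 0.0
        sign = 1.0 if target_shift_hz >= 0 else -1.0
        while abs(shift) < abs(target_shift_hz):
            shift += sign * min(step_hz, abs(target_shift_hz) - abs(shift))
            set_rf_shift(shift)
            while read_losses() > loss_limit:
                time.sleep(0.01)  # poll at ~100 Hz, like the BLM data stream
        return shift
    ```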

    Automatic beam loss threshold selection for LHC collimator alignment

    The collimation system used in the Large Hadron Collider at CERN is positioned around the beam with a hierarchy that protects sensitive equipment from unavoidable beam losses. The collimator settings are determined using a beam-based alignment technique, where collimator jaws are moved towards the beam until the beam losses exceed a predefined threshold. This threshold needs to be updated dynamically, in response to changes in the beam losses. The current method for aligning collimators is semi-automated, requiring a collimation expert to monitor the loss signals and continuously select and update the threshold accordingly. The human element in this procedure is a major bottleneck for speeding up the alignment. This paper therefore proposes a method to fully automate this threshold selection. A data set was formed from previous alignment campaigns and analysed to define an algorithm that produces results consistent with the user selections. In over 90% of the cases the difference between the two was negligible, and the algorithm presented in this study was used for collimator alignments throughout 2018.
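    To make the idea of a dynamically updated threshold concrete, the sketch below derives a threshold from a rolling window of recent loss samples. The statistic used (median plus a few standard deviations) is an assumption for illustration; the paper's actual algorithm is defined from the alignment-campaign data set.

    ```python
    # Illustrative dynamic loss-threshold selection over a rolling window.
    from collections import deque
    import numpy as np

    class DynamicThreshold:
        def __init__(self, window=100, margin=3.0):
            self.history = deque(maxlen=window)  # recent BLM samples
            self.margin = margin                 # sigmas above the baseline

        def update(self, loss_sample):
            """Record a new loss sample and return the updated threshold."""
            self.history.append(loss_sample)
            baseline = np.median(self.history)
            spread = np.std(self.history)
            return baseline + self.margin * spread
    ```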

    Prospects to apply machine learning to optimize the operation of the crystal collimation system at the LHC

    Crystal collimation relies on the use of bent crystals to coherently deflect halo particles onto dedicated collimator absorbers. This scheme is planned to be used at the LHC to improve the betatron cleaning efficiency with high-intensity ion beams. Only particles with impinging angles below 2.5 µrad relative to the crystalline planes can be efficiently channeled at the LHC nominal top energy of 7 Z TeV. For this reason, crystals must be kept in optimal alignment with respect to the circulating beam envelope to maximize the efficiency of the channeling process. Given the small angular acceptance, achieving optimal channeling conditions is particularly challenging. Furthermore, the different phases of the LHC operational cycle involve important dynamic changes of the local orbit and optics, requiring an optimized control of the position and angle of the crystals relative to the beam. To this end, the possibility of applying machine learning to the alignment of the crystals, in a dedicated setup and in standard operation, is considered. In this paper, possible solutions for automatic adaptation to the changing beam parameters are highlighted and plans for the LHC ion runs starting in 2022 are discussed.
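    One baseline any machine-learning approach would have to beat is a plain angular scan, sketched below: rotate the crystal across its angular range and take the orientation where the local losses dip, which signals channeling. rotate_crystal and read_local_losses are hypothetical control-system calls, and the scan parameters are illustrative assumptions.

    ```python
    # Illustrative angular scan to find the channeling orientation; channeling
    # appears as a marked dip in the losses recorded at the crystal.
    import numpy as np

    def find_channeling_angle(rotate_crystal, read_local_losses,
                              scan_range_urad=100.0, step_urad=1.0):
        """Scan the crystal angle and return the orientation with the deepest
        loss reduction; the ~2.5 urad acceptance motivates the fine step."""
        angles = np.arange(-scan_range_urad, scan_range_urad, step_urad)
        losses = []
        for angle in angles:
            rotate_crystal(angle)
            losses.append(read_local_losses())
        return float(angles[int(np.argmin(losses))])
    ```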

    Application of machine learning techniques at the CERN Large Hadron Collider

    Machine learning techniques have been used extensively in several domains of science and engineering for decades. These powerful tools have also been applied to high-energy physics, in the analysis of data from particle collisions, for many years. Accelerator physics, however, did not start exploiting machine learning until very recently. Several activities are now flourishing in this domain in laboratories worldwide, with the aim of providing new insights into beam dynamics in circular accelerators. This is, for instance, the case at the CERN Large Hadron Collider, where exploratory studies have been carried out for the past few years. A broad range of topics has been addressed, such as anomaly detection for beam position monitors, analysis of optimal correction tools for linear optics, optimisation of the collimation system, lifetime and performance optimisation, and detection of hidden correlations in the huge data set of beam dynamics observables collected during LHC Run 2. Furthermore, very recently, machine learning techniques have been scrutinised for the advanced analysis of numerical simulation data, with a view to improving our models of dynamic aperture evolution.

    Magnetization on rough ferromagnetic surfaces

    Using Ising-model Monte Carlo simulations, we show a strong dependence of surface magnetization on surface roughness. On ferromagnetic surfaces with spin-exchange coupling larger than that of the bulk, the surface magnetic ordering temperature decreases toward the bulk Curie temperature with increasing roughness. For surfaces with spin-exchange coupling smaller than that of the bulk, a crossover behavior occurs: at low temperature, the surface magnetization decreases with increasing roughness; at high temperature, the reverse is true.
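    For readers who want to reproduce the qualitative setup, a minimal Metropolis sketch with a distinct surface coupling is given below. It is a deliberately simplified 2D toy (the study's rough-surface geometry is richer), and all parameter values are illustrative assumptions.

    ```python
    # Toy 2D Ising model with a modified spin-exchange coupling on the
    # "surface" rows; Metropolis updates at temperature T (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    L, J_BULK, J_SURF, T = 32, 1.0, 1.5, 2.0
    spins = rng.choice([-1, 1], size=(L, L))

    def local_coupling(i):
        return J_SURF if i in (0, L - 1) else J_BULK  # rows 0, L-1 = surface

    def metropolis_sweep(spins):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nn = spins[i, (j - 1) % L] + spins[i, (j + 1) % L]  # periodic in j
            if i > 0:
                nn += spins[i - 1, j]      # open boundary in i: the surface
            if i < L - 1:
                nn += spins[i + 1, j]
            dE = 2.0 * local_coupling(i) * spins[i, j] * nn
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    for _ in range(200):
        metropolis_sweep(spins)
    print("surface magnetisation:", abs(spins[0].mean()))
    ```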

    Automation of the LHC Collimator Beam-Based Alignment Procedure for Nominal Operation

    The CERN Large Hadron Collider (LHC) is the largest particle accelerator in the world, built to accelerate and collide two counter-rotating beams. The LHC is susceptible to unavoidable beam losses, and a complex collimation system, made up of around 100 collimators, is therefore installed to protect its superconducting magnets and sensitive equipment. The collimators are positioned around the beam following a multi-stage hierarchy. Their settings are calculated following a beam-based alignment (BBA) technique, which determines the local beam position and beam size at each collimator. This procedure is currently semi-automated, such that a collimation expert must continuously analyse the signal from the Beam Loss Monitoring (BLM) device positioned downstream of each collimator. Additionally, angular alignments are carried out to determine the optimal angle for enhanced performance. The human element, in both the standard and angular BBA, is a major bottleneck in speeding up the alignment. This limits the frequency at which alignments can be performed to the bare minimum, and this dissertation therefore seeks to improve the process by fully automating the BBA. This work proposes to automate the human task of spike detection using machine learning models. A data set was collated from previous alignment campaigns and fourteen manually engineered features were extracted. Six machine learning models were trained, analysed in depth and thoroughly tested, obtaining a precision of over 95%. To automate the threshold selection task, data from previous alignment campaigns was analysed to define an algorithm that executes in real time, as the threshold needs to be updated dynamically in response to changes in the beam losses. The thresholds selected by the algorithm were consistent with the user selections, and all automatically selected thresholds were suitable. Finally, this work seeks to identify the losses generated by each collimator, such that any cross-talk across BLM devices is avoided. This involves building a cross-talk model to automate the parallel selection of collimators and to determine the actual beam loss signals generated by their corresponding collimators. Manual, expert control of the alignment procedure was replaced by these dedicated algorithms, and the software was re-designed to achieve fully automatic collimator alignments. The software runs in a real-time environment, with the fully automatic BBA implemented on top of the semi-automatic BBA, so that both alignment tools remain available together and backward compatibility with all previous functionality is maintained. This new software was used for collimator alignments in 2018, for both standard and angular alignments. Automatically aligning the collimators decreased the alignment time by 70%, whilst maintaining the accuracy of the results. The work described in this dissertation was successfully adopted by CERN for LHC operation in 2018, and will continue to be used in the future as the default collimator alignment software for the LHC.
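    As an illustration of the spike-detection pipeline described above, the sketch below extracts a handful of hand-crafted features from a BLM loss window and trains a classifier on labelled alignment data. The features and model choice here are stand-ins; the dissertation's fourteen engineered features and six trained models are not reproduced.

    ```python
    # Illustrative feature-based spike classifier (assumed features/model).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(window):
        """A few hand-crafted features of one loss-signal window."""
        w = np.asarray(window, dtype=float)
        rise = w.max() - w[0]              # height of the sharp rise
        decay = w.max() - w[-1]            # how far the signal decayed
        peak_pos = np.argmax(w) / len(w)   # relative position of the peak
        return [rise, decay, peak_pos, w.std()]

    def train_spike_classifier(windows, labels):
        """windows: loss-signal windows; labels: 1 = alignment spike, 0 = not."""
        X = np.array([extract_features(w) for w in windows])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, labels)
        return clf
    ```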

    New Machine Learning Model Application for the Automatic LHC Collimator Beam-Based Alignment

    A collimation system is installed in the Large Hadron Collider (LHC) to protect its sensitive equipment from unavoidable beam losses. An alignment procedure determines the settings of each collimator by moving the collimator jaws towards the beam until a characteristic loss pattern, consisting of a sharp rise followed by a slow decay, is observed in downstream beam loss monitors. This indicates that the collimator jaw has intercepted the reference beam halo and is thus aligned to the beam. The latest alignment software, introduced in 2018, relies on supervised machine learning (ML) to detect such spike patterns in real time*. This enables the automatic alignment of the collimators with a significant reduction in the alignment time**. This paper analyses the first-use performance of this new software, focusing on solutions to the identified bottleneck caused by waiting a fixed duration of time when detecting spikes. It is proposed to replace the supervised ML model with a Long Short-Term Memory (LSTM) model able to detect spikes in time windows of varying lengths, waiting for a variable duration determined by the spike itself. This will allow the automatic alignment to be sped up further.
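    A minimal sketch of the proposed replacement is given below: an LSTM that emits a spike probability for a loss window of any length, so the alignment can stop waiting as soon as the spike itself is conclusive. The architecture and sizes are illustrative assumptions, not the model from the paper.

    ```python
    # Illustrative variable-length spike classifier (assumed architecture).
    import torch
    import torch.nn as nn

    class SpikeLSTM(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, window):
            # window: (batch, time, 1) loss samples; time may vary per call.
            _, (h, _) = self.lstm(window)
            return torch.sigmoid(self.head(h[-1]))  # P(spike) per window

    model = SpikeLSTM()
    print(model(torch.randn(1, 20, 1)).item())   # short window
    print(model(torch.randn(1, 120, 1)).item())  # longer window, same model
    ```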