14 research outputs found
Application of machine learning techniques at the CERN Large Hadron Collider
Machine learning techniques have been used extensively in several domains of science and engineering for decades. These powerful tools have also been applied to high-energy physics, in the analysis of data from particle collisions, for years already. Accelerator physics, however, did not start exploiting machine learning until very recently. Several activities are now flourishing in this domain in laboratories worldwide, with a view to providing new insights into beam dynamics in circular accelerators. This is, for instance, the case for the CERN Large Hadron Collider, where exploratory studies have been carried out for a few years. A broad range of topics has been addressed, such as anomaly detection for beam position monitors, analysis of optimal correction tools for linear optics, optimisation of the collimation system, lifetime and performance optimisation, and detection of hidden correlations in the huge data set of beam dynamics observables collected during LHC Run 2. Furthermore, very recently, machine learning techniques have been scrutinised for the advanced analysis of numerical simulation data, with a view to improving our models of dynamic aperture evolution.
Magnetization on rough ferromagnetic surfaces
Using Ising-model Monte Carlo simulations, we show a strong dependence of surface magnetization on surface roughness. On ferromagnetic surfaces with spin-exchange coupling larger than that of the bulk, the surface magnetic ordering temperature decreases toward the bulk Curie temperature with increasing roughness. For surfaces with spin-exchange coupling smaller than that of the bulk, a crossover behavior occurs: at low temperature, the surface magnetization decreases with increasing roughness; at high temperature, the reverse is true.
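As a rough illustration of the kind of simulation behind these results, here is a minimal Metropolis Monte Carlo sketch for a 2D Ising strip whose top row plays the role of the surface, with its own exchange coupling J_surf. The geometry, coupling values, and the absence of explicit roughness are all simplifying assumptions; the paper works with genuinely rough surfaces.

```python
import numpy as np

rng = np.random.default_rng(42)

def metropolis_sweep(spins, T, J_bulk=1.0, J_surf=1.5):
    """One Metropolis sweep of an L x L Ising strip whose top row (i = 0)
    is the 'surface': bonds touching row 0 carry J_surf, all others J_bulk.
    Periodic boundary in j, open boundary in i. A toy geometry only."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        h = 0.0  # effective field from the (up to) four neighbours
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, (j + dj) % L
            if 0 <= ni < L:
                J = J_surf if (i == 0 or ni == 0) else J_bulk
                h += J * spins[ni, nj]
        dE = 2.0 * spins[i, j] * h  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

# usage: compare surface and bulk magnetisation after equilibration
L = 32
spins = rng.choice([-1, 1], size=(L, L)).astype(float)
for _ in range(200):
    metropolis_sweep(spins, T=2.5)
print("surface m:", abs(spins[0].mean()), " bulk m:", abs(spins[L // 2].mean()))
```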
Data-driven modeling of beam loss in the LHC
In the Large Hadron Collider, the beam losses are continuously measured for machine protection. By design, most of the particle losses occur in the collimation system, where particles with high oscillation amplitudes or large momentum error are scraped from the beams. The particle loss level is typically optimized manually by changing control parameters, among which are the currents in the focusing and defocusing magnets. It is generally challenging to model and predict losses based only on the control parameters, due to the presence of various (non-linear) effects in the system, such as electron clouds and resonance effects, and multiple sources of uncertainty. At the same time, understanding the influence of control parameters on the losses is extremely important in order to improve the operation, performance, and future design of accelerators. Prior work [1] showed that modeling the losses as an instantaneous function of the control parameters does not generalize well to data from a different year, an indication that the leveraged statistical associations do not capture the actual mechanisms, which should be invariant from one year to the next. Given that this is most likely due to lagged effects, we propose to model the losses as a function of not only instantaneous but also previously observed control parameters, as well as previous loss values. Using a standard reparameterization, we reformulate the model as a Kalman Filter (KF), which allows for a flexible and efficient estimation procedure. We consider two main variants: one with a scalar loss output, and a second with a 4D output comprising the loss, the horizontal and vertical emittances, and the aggregated heat load. Once learned, the two models can be run for a number of steps into the future, and the second model can forecast the evolution of quantities that are relevant to predicting the loss itself. Our results show that the proposed models, trained on the beam loss data from 2017, are able to predict the losses for the 2018 data as well, on a time horizon of several minutes, and successfully identify both local and global trends in the losses.
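For readers unfamiliar with the filter itself, the sketch below shows the standard linear-Gaussian Kalman filter predict/update recursions in NumPy, followed by a predict-only loop for multi-step forecasting. The state layout, dynamics matrix, and noise covariances are placeholders, not the paper's actual reparameterization of the lagged loss model.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Time update: propagate the state mean and covariance one step."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Measurement update with observation z."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# toy dimensions: a 4D observed state [loss, emit_h, emit_v, heat_load];
# lagged control parameters would enlarge the state in the real model
n = 4
F = 0.99 * np.eye(n)                       # placeholder dynamics
H = np.eye(n)                              # all components observed
Q, R = 1e-4 * np.eye(n), 1e-2 * np.eye(n)

rng = np.random.default_rng(0)
measurements = 1.0 + 0.1 * rng.normal(size=(100, n))  # stand-in for 2017 data

x, P = np.zeros(n), np.eye(n)
for z in measurements:                     # filtering pass over training data
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, z, H, R)

forecast = []
for _ in range(60):                        # predict-only multi-step forecast
    x, P = kf_predict(x, P, F, Q)
    forecast.append(x.copy())
```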
Machine learning applications for Hadron Colliders: LHC lifetime optimization
The Large Hadron Collider is an extremely complicated system with numerous intertwined subsystems, each of which impacts the dynamics and stability of the protons in its own way. As such, building a model of the particle losses occurring within the LHC is a daunting task, but it would offer valuable insight into the inner workings of the machine and could potentially be used to further optimize the working points of its systems. This project aims to characterize the beam losses during the energy ramp and to use machine learning to build models of the LHC capable of predicting the intensity lifetimes of the beams at injection energy given a set of input parameters, such as tunes, chromaticities, intensities, octupole currents, and a number of other features. The main goal is to determine the optimal settings of these parameters, which would help operators in the decision-making process when striving to improve the performance and physics output of such colliders. Two input feature sets are defined: one encompasses as many lifetime-relevant measurements as possible and allows us to train a model that is as information-complete as possible; the other, more practical feature set covers only the parameters that operators can control, the knobs of the machine. The models are trained on experimental data using a gradient boosted decision tree supervised learning algorithm. From the models and covariance calculations we are able to extract the most relevant features in terms of their contribution to the beam lifetimes. These results are useful for understanding the machine on a fill-by-fill basis and also shed light on the physics mechanisms behind the lifetime variations. To go further, a similar method will be applied to simulation data; in parallel, a numerical model has been developed to physically explain the results gleaned from the machine learning analysis. The aim is then to use this model instead of the extremely computationally expensive SixTrack simulations, so that the CPU time gained can be spent elsewhere, allowing more thorough studies.
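The following is a minimal sketch of this kind of workflow with scikit-learn's GradientBoostingRegressor: fit a gradient boosted decision tree on per-fill features and rank them by impurity-based importance. The feature names and the synthetic "lifetime" target are invented for illustration; the actual training data and hyperparameters are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# toy stand-in for per-fill machine data: columns are hypothetical knobs
feature_names = ["Q_h", "Q_v", "chroma_h", "chroma_v", "I_oct", "intensity"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
y = 20 + 3 * X[:, 0] - 2 * X[:, 4] + rng.normal(scale=0.5, size=500)  # fake lifetime

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("R^2 on held-out fills:", model.score(X_te, y_te))

# impurity-based feature importances: which knobs drive the lifetime model
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:10s} {imp:.3f}")
```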
MD 4510: Working point exploration for use in lifetime optimization by machine learning
Supervised machine learning models are fundamentally reliant on the data on which they are trained. Prior to this MD, the available data, although plentiful, lacked variety, as the working point is rarely changed: many of the beam and machine parameters are left unchanged during operation and from fill to fill. Therefore, previous work done with these data at injection energy showed promising results but restricted prediction power due to this lack of exploration. This MD served to generate a wider training data sample in more exotic configurations at injection energy. The goal is to explore the possibility of optimizing the beam lifetime of the LHC by the use of machine learning algorithms. Previous machine learning studies had predicted some tentative trends, which were confirmed with this MD.
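A scan of this kind can be expressed as a simple grid over machine knobs. The sketch below enumerates hypothetical working points over fractional tunes and chromaticity, skipping settings too close to the coupling resonance Q_h = Q_v; the ranges and the resonance margin are illustrative assumptions, not the MD 4510 settings.

```python
import itertools
import numpy as np

# hypothetical scan ranges around nominal injection fractional tunes;
# the actual MD trim values are not reproduced here
q_h = np.round(np.arange(0.26, 0.32, 0.01), 3)
q_v = np.round(np.arange(0.29, 0.34, 0.01), 3)
chroma = [5.0, 15.0, 25.0]            # placeholder chromaticity settings

trim_list = [
    {"Q_h": qh, "Q_v": qv, "chroma": dq}
    for qh, qv, dq in itertools.product(q_h, q_v, chroma)
    if abs(qh - qv) > 0.005           # stay off the Q_h = Q_v coupling line
]
print(len(trim_list), "working points to trim, e.g.", trim_list[0])
```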
Modeling Particle Stability Plots for Accelerator Optimization Using Adaptive Sampling
One key aspect of accelerator optimization is to maximize the dynamic aperture (DA) of a ring. Given the number of adjustable parameters and the computational intensity of DA simulations, this task can benefit significantly from efficient search algorithms over the available parameter space. We propose to gradually train and improve a surrogate model of the DA from SixTrack simulations while exploring the parameter space with adaptive sampling methods. Here we report on a first model of the particle stability plots using convolutional generative adversarial networks (GANs) trained on a subset of SixTrack numerical simulations for different ring configurations of the Large Hadron Collider at CERN.
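As a sketch of the modelling ingredient, the code below sets up a minimal DCGAN-style generator and discriminator for 32x32 single-channel stability maps, together with one adversarial training step in PyTorch. The architecture sizes are arbitrary, the "real" batch is random stand-in data, and the conditioning on ring configuration used in the paper is omitted.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector to a 32x32 stability map in [0, 1]."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),      # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),       # 16x16
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),                         # 32x32
        )
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """Outputs a real/fake logit per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),    # 16x16
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),   # 8x8
            nn.Conv2d(64, 1, 8), nn.Flatten(),               # 1x1 logit
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = (torch.rand(16, 1, 32, 32) > 0.5).float()  # stand-in for SixTrack stability maps
fake = G(torch.randn(16, 64))

# discriminator step: real -> 1, fake -> 0
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# generator step: try to make the discriminator output 1 on fakes
g_loss = bce(D(fake), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```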
A Novel Method for Detecting Unidentified Falling Object Loss Patterns in the LHC
Understanding and mitigating particle losses in the Large Hadron Collider (LHC) is essential for both machine safety and efficient operation. Abnormal loss distributions are telltale signs of abnormal beam behaviour or incorrect machine configuration. By leveraging the advancements made in the field of machine learning, a novel data-driven method of detecting anomalous loss distributions during machine operation has been developed. A neural network anomaly detection model was trained to detect Unidentified Falling Object events using stable-beam Beam Loss Monitor (BLM) data acquired during the operation of the LHC. Data-driven models, such as the one presented, could lead to significant improvements in the autonomous labelling of abnormal loss distributions, ultimately bolstering the ongoing effort toward improving the understanding and mitigation of these events.
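The abstract does not specify the model architecture. One common realisation of such a detector is a reconstruction-based autoencoder: train it on nominal stable-beam BLM vectors only, then flag samples whose reconstruction error exceeds a threshold. The sketch below follows that assumption, with the number of BLM channels, the data, and the threshold percentile all chosen arbitrarily.

```python
import torch
import torch.nn as nn

N_BLM = 100  # hypothetical number of BLM signals per sample

class BLMAutoencoder(nn.Module):
    """Dense autoencoder that compresses a BLM loss vector to a small latent code."""
    def __init__(self, n_in=N_BLM, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(),
                                 nn.Linear(32, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                 nn.Linear(32, n_in))
    def forward(self, x):
        return self.dec(self.enc(x))

model = BLMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

nominal = torch.rand(1024, N_BLM)          # stand-in for stable-beam BLM data
for _ in range(200):                       # train to reconstruct nominal losses only
    opt.zero_grad()
    loss = loss_fn(model(nominal), nominal)
    loss.backward()
    opt.step()

# flag candidate UFO events: reconstruction error above a percentile threshold
with torch.no_grad():
    err = ((model(nominal) - nominal) ** 2).mean(dim=1)
threshold = torch.quantile(err, 0.99)      # arbitrary cut, tuned in practice
is_anomalous = err > threshold
```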
Detection and Classification of Collective Beam Behaviour in the LHC
Collective instabilities can lead to a severe deterioration of beam quality, in terms of reduced beam intensity and increased beam emittance, and consequently a reduction of the collider's luminosity. It is therefore crucial for the operation of CERN's Large Hadron Collider to understand the conditions in which they appear, in order to find appropriate mitigation measures. Using bunch-by-bunch and turn-by-turn beam amplitude data, courtesy of the transverse damper's observation box (ObsBox), a novel machine learning based approach is developed to both detect and classify these instabilities. By training an autoencoder neural network on the ObsBox amplitude data and using the model's reconstruction error, instabilities and other phenomena are separated from nominal beam behaviour. Additionally, the latent space encoding of this autoencoder offers a unique image-like representation of the beam amplitude signal. Leveraging this latent space representation allows us to cluster the various types of anomalous signals.
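The clustering step can be sketched as follows: encode each amplitude window into the autoencoder's latent space, then run an off-the-shelf clustering algorithm on the latent codes. Here the trained encoder is mocked by a random linear projection and the number of clusters is a free choice; neither is taken from the paper.

```python
import torch
from sklearn.cluster import KMeans

torch.manual_seed(0)
encoder = torch.nn.Linear(200, 16)         # stand-in for the trained encoder
windows = torch.rand(500, 200)             # stand-in for ObsBox amplitude windows

with torch.no_grad():
    latent = encoder(windows).numpy()      # latent codes of the anomalous signals

# group the latent codes into candidate instability classes
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(latent)
print("signals per cluster:", [int((labels == k).sum()) for k in range(5)])
```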