    Handling 1 MW losses with the LHC collimation system

    The LHC superconducting magnets in the dispersion suppressor of IR7 are the most exposed to beam losses leaking from the betatron collimation system and represent the main limitation for halo cleaning. In 2013, quench tests were performed at 4 TeV to improve the quench limit estimates, which determine the maximum allowed beam loss rate for a given collimation cleaning performance. The main goal of the collimation quench test was to try to quench the magnets by increasing losses at the collimators. Losses of up to 1 MW over a few seconds were generated by blowing up the beam, reaching total losses of about 5.8 MJ. These controlled losses exceeded the collimation design value by a factor of 2, and the magnets did not quench.
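
    A rough consistency check of the quoted figures, as a minimal sketch in Python (the per-proton energy and physical constants are assumed here, not taken from the paper):

        # Back-of-the-envelope check of the quoted loss figures.
        E_PROTON_EV = 4.0e12       # beam energy per proton [eV] (test was at 4 TeV)
        EV_TO_J = 1.602176634e-19  # joules per electronvolt
        E_TOTAL_J = 5.8e6          # quoted total loss [J]
        P_PEAK_W = 1.0e6           # quoted peak loss power [W]

        energy_per_proton = E_PROTON_EV * EV_TO_J        # ~6.4e-7 J
        n_protons_lost = E_TOTAL_J / energy_per_proton   # ~9e12 protons
        equivalent_duration = E_TOTAL_J / P_PEAK_W       # ~5.8 s at a constant 1 MW

        print(f"protons lost: {n_protons_lost:.1e}")
        print(f"equivalent duration at 1 MW: {equivalent_duration:.1f} s")

    The ~5.8 s equivalent duration is consistent with the "few seconds" quoted in the abstract.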

    Quench tests at the Large Hadron Collider with collimation losses at 3.5 Z TeV

    The Large Hadron Collider (LHC) has been operating since 2010 at 3.5 TeV and 4.0 TeV without experiencing quenches induced by losses from circulating beams. This situation might change at 7 TeV, where the quench margins of the superconducting magnets are reduced. The critical locations are the dispersion suppressors (DSs) on either side of the cleaning and experimental insertions, where dispersive losses are maximum. It is therefore crucial to understand the quench limits for beam loss distributions like those occurring in standard operation. To address this, quench tests were performed by inducing large beam losses on the primary collimators of the betatron cleaning insertion, for proton and lead-ion beams at 3.5 Z TeV, to probe the quench limits of the DS magnets. Losses of up to 500 kW were achieved without quenches. The measurement technique and the results obtained are presented, together with observations of heat loads in the cryogenic system.
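
    For context, "3.5 Z TeV" means the beam energy scales with the ion charge Z at a fixed magnetic setting. A minimal sketch for fully stripped 208Pb ions (the charge and mass numbers are standard nuclear data, not values from the paper):

        # Energy of a Pb ion beam at a "3.5 Z TeV" machine setting.
        Z, A = 82, 208               # lead: charge and mass number
        E_PER_CHARGE_TEV = 3.5       # machine setting [TeV per unit of charge]

        e_per_ion = Z * E_PER_CHARGE_TEV   # 287 TeV per ion
        e_per_nucleon = e_per_ion / A      # ~1.38 TeV per nucleon

        print(f"{e_per_ion:.0f} TeV/ion, {e_per_nucleon:.2f} TeV/nucleon")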

    Collimation for the LHC high intensity beams

    The unprecedented design intensities of the LHC require several important advances in beam collimation. With more than 100 collimators, acting on both beams and in various planes, the LHC collimation system is the largest and highest-performance such system ever designed and constructed. The solution adopted for LHC collimation is explained, the technical components are introduced, and the initial performance is presented. Residual beam leakage from the system is analysed. Measurements and simulations show that collimation efficiencies better than 99.97% have been achieved with the 3.5 TeV proton beams of the LHC, in excellent agreement with expectations.
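
    To make the quoted efficiency concrete, here is a minimal sketch of what it implies for leakage to the cold magnets (the halo loss rate below is an assumed illustrative number, not a measurement from the paper):

        # Leakage implied by a 99.97% cleaning efficiency.
        efficiency = 0.9997
        leakage_fraction = 1.0 - efficiency      # 3e-4 of the halo escapes

        loss_rate = 1.0e9                        # assumed halo loss rate [protons/s]
        leaked = leakage_fraction * loss_rate    # ~3e5 protons/s reach cold regions

        print(f"leakage fraction: {leakage_fraction:.0e}, leaked rate: {leaked:.0e} p/s")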

    Measuring Coalescence Radii and Flow Using Identified Protons and Deuterons from NA44

    The momentum acceptance, Δp/p, is ±20% of the nominal momentum setting. Tracks through the spectrometer are reconstructed with 3 wire chambers with pad and strip cathode readout (σ ≈ 200 μm) and 3 hodoscopes with an average time-of-flight resolution of 100 ps. Two threshold Cherenkov counters are used to veto electrons and pions. This analysis compares deuterons from the 8 GeV/c momentum setting to protons from the 4 GeV/c setting at the same velocity. The rapidity range of the data is 1.9 ≤ y ≤ 2.3. The centrality trigger selects the 8.7%, 10.7% and 27% of events with the highest multiplicity for S+S, S+Pb and Pb+Pb collisions, respectively. No further selection on higher centrality was performed. The beam was provided by the CERN SPS accelerator at an energy of 200 A GeV for the sulphur beam and 158 A GeV for the lead beam. The contamination of …
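
    The comparison at equal velocity reflects the standard coalescence relation, in which the invariant deuteron yield at momentum p_d = 2 p_p scales with the square of the invariant proton yield. A minimal sketch (the yield values are placeholders, not NA44 results):

        # Coalescence parameter B2, evaluated at p_d = 2 * p_p (same velocity).
        def b2(inv_yield_d, inv_yield_p):
            # B2 = (E_d d3N/dp3)_d / ((E_p d3N/dp3)_p)**2, typical units GeV^2/c^3
            return inv_yield_d / inv_yield_p**2

        # placeholder invariant yields, for illustration only
        print(b2(inv_yield_d=1.0e-4, inv_yield_p=2.0e-1))  # -> 2.5e-3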

    Beam Diagnostic Challenges for High Energy Hadron Colliders

    Two high-energy hadron colliders, RHIC and the LHC, are currently in the operational phase of their life cycle. A major upgrade of the LHC, HL-LHC, planned for 2023, aims to accumulate ten times the design integrated luminosity by 2035. Still further in the future, the SppC and FCC studies are investigating machines with a center-of-mass energy of up to 100 TeV and a circumference of up to 100 km. The existing machines pose considerable diagnostic challenges, which will become even more critical with any increase in size and energy. Cryogenic environments create additional difficulties for diagnostics and further limit the applicability of intercepting devices, making non-invasive profile and halo measurements essential. The sheer size of these colliders requires radiation-tolerant read-out electronics in the tunnel and low-noise, low-loss signal transmission. It also implies a very large number of beam position and loss monitors, all of which have to be highly reliable. To fully understand the machine and tackle beam instabilities, bunch-by-bunch measurements become increasingly important for all diagnostic systems. This contribution discusses current developments in the field.

    Beam Loss Monitoring for Demanding Environments

    Beam loss monitoring (BLM) is a key protection system for machines using beams with damage potential, and an essential beam diagnostic tool for any machine. All BLM systems are based on the observation of secondary particle showers originating from escaping beam particles. With ever-higher beam energies and intensities, the loss of even a tiny fraction of the beam can lead to damage or, in the case of superconducting machines, to quenches. Losses also lead to material ageing and activation and should therefore be well controlled and reduced to a minimum. The ideal BLM system would have full machine coverage and the capability to accurately quantify the number of lost beam particles from the measured secondary shower. Position and time resolution, dynamic range, noise levels and radiation hardness all have to be considered, while at the same time optimising the system for reliability, availability and maintainability. This contribution focuses on design choices for BLM systems operating in demanding environments, with a special emphasis on measuring particle losses in the presence of synchrotron radiation and other background sources.
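
    A minimal sketch of the kind of conversion such a system aims at, assuming an illustrative per-proton chamber response (the real coefficient depends on the loss location and is obtained from shower simulations):

        # Estimated proton loss rate from a measured ionization-chamber current.
        RESPONSE_C_PER_PROTON = 5.0e-14   # assumed chamber charge per lost proton [C]

        def loss_rate(current_a):
            """Lost protons per second for a measured chamber current [A]."""
            return current_a / RESPONSE_C_PER_PROTON

        # span comparable to the LHC BLM dynamic range (2 pA to 1 mA,
        # quoted in the thesis abstract below)
        for current in (2e-12, 1e-3):
            print(f"{current:.0e} A -> {loss_rate(current):.1e} protons/s")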

    Final conference 2019 - DPS

    Magdalena Kowalska and Eva Barbara Holzer

    Identification of beam loss mechanisms at the LHC (a deterministic treatment of loss patterns)

    CERN's Large Hadron Collider (LHC) is the largest machine ever built, with a circumference of 26.7 km, and the most powerful particle accelerator ever, both in beam energy and in beam intensity. The main magnets are superconducting and keep the particles in two counter-circulating beams, which collide at four interaction points. CERN and the LHC are described in chap. 1. The superconducting magnets of the LHC have to be protected against particle losses: depending on the number of lost particles, the magnet coils can quench (become normal-conducting) and/or be damaged. To avoid such events, a beam loss monitoring (BLM) system was installed to measure the particle loss rates. If the predefined safe loss-rate thresholds are exceeded, the beams are extracted from the accelerator ring towards the beam dump. The detectors of the BLM system are mainly ionization chambers located outside the cryostats; in total, about 3500 ionization chambers are installed. Further challenges include the high dynamic range of the losses, with chamber currents ranging from 2 pA to 1 mA. The BLM system is described in chap. 2.
    The subject of this thesis is to study loss patterns and to find the origin of losses in a deterministic way, by comparing measured losses to well-understood loss scenarios. This is done through a case study: different techniques were applied to a restricted set of loss scenarios, as a proof of concept that information can be extracted from a loss profile. Finding the origin of the losses should make it possible to act in response; a justification of the doctoral work is given at the end of chap. 2. The thesis then focuses on the theoretical understanding and implementation of the decomposition of a measured loss profile as a linear combination of reference scenarios, and on the evaluation of the recomposition error and its validity. The principles of vector decomposition are developed in chap. 3. An ensemble of well-controlled loss scenarios (such as vertical and horizontal blow-up of the beams, or momentum offset during collimator loss maps) was gathered, allowing the study and creation of reference vectors. The vector decomposition uses linear algebra (matrix inversion) via the numerical Singular Value Decomposition (SVD) algorithm; additionally, a specific code was developed for vector projection onto a non-orthogonal basis of a hyperplane. The implementation of the vector decomposition on LHC data is described in chap. 4. The decomposition tools were then applied systematically to the time evolution of the losses: first as a study of the second-by-second variations, then by comparison to a calculated default loss profile; the different ways to evaluate the variation are studied in chap. 5. Chapter 6 gathers the decomposition results applied to the beam losses of 2011: the vector decomposition is applied to every second of the ''stable beams'' periods, as a study of the spatial distribution of the losses, and several comparisons with measurements from other LHC instruments provide different validations. Finally, a global conclusion on the value of the vector decomposition is given. Appendix A describes the code developed to access the BLM data, to represent them in a meaningful way, and to store them; this includes connecting to different databases. The tool uses ROOT objects to send SQL queries to the databases, as well as a Java API, and is coded in Python. A short glossary of the acronyms used in the thesis can be found at the end, before the bibliography.
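
    A minimal sketch of the decomposition idea described in the abstract, using NumPy with synthetic data (illustrative only, not the thesis code):

        import numpy as np

        # Columns of R are reference loss profiles, one BLM reading per row;
        # m is a measured loss profile to decompose.
        rng = np.random.default_rng(0)
        n_blms, n_scenarios = 3500, 3          # e.g. H blow-up, V blow-up, momentum offset
        R = rng.random((n_blms, n_scenarios))  # stand-in reference vectors
        true_c = np.array([0.7, 0.2, 0.1])
        m = R @ true_c + 1e-3 * rng.standard_normal(n_blms)  # measurement + noise

        # SVD-based least squares: find c minimising ||R c - m||.
        c, residual, rank, svals = np.linalg.lstsq(R, m, rcond=None)

        # Equivalent projection onto the non-orthogonal basis via the Gram matrix.
        c_proj = np.linalg.solve(R.T @ R, R.T @ m)

        recomposition_error = np.linalg.norm(R @ c - m) / np.linalg.norm(m)
        print(c, c_proj, recomposition_error)

    When the reference vectors are linearly independent, the two routes give the same coefficients; the SVD additionally exposes near-degenerate scenario combinations through small singular values, which is what makes the recomposition error and its validity assessable.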