ATLAS Upgrade Instrumentation in the US
Planned upgrades of the LHC over the next decade should allow the machine to
operate at a center-of-mass energy of 14 TeV with instantaneous luminosities in
the range 5--7 × 10^34 cm^-2 s^-1. With these parameters, ATLAS could collect 3,000
fb^-1 of data in approximately 10 years. However, the conditions under which
this data would be acquired are much harsher than those currently encountered
at the LHC. For example, the number of proton-proton interactions per bunch
crossing will rise from the level of 20--30 per 50 ns crossing observed in 2012
to 140--200 every 25 ns. In order to deepen our understanding of the newly
discovered Higgs boson and to extend our searches for physics beyond that new
particle, the ATLAS detector, trigger, and readout will have to undergo
significant upgrades. In this whitepaper we describe R&D necessary for ATLAS to
continue to run effectively at the highest luminosities foreseen from the LHC.
Emphasis is placed on those R&D efforts in which US institutions are playing a
leading role.
Comment: Snowmass contributed paper, 24 pages, 12 figures
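The dataset target quoted above can be sanity-checked with simple arithmetic. The effective seconds of luminosity production per year below is an assumed round number (machine availability folded in), not a figure from the whitepaper:

```python
# Back-of-the-envelope check of the 3,000 fb^-1 target quoted above.
# The effective live time per year is an assumption for illustration.

def integrated_lumi_fb(inst_lumi_cm2_s, effective_seconds):
    """Convert instantaneous luminosity (cm^-2 s^-1) x live time (s) to fb^-1."""
    # 1 fb^-1 corresponds to 1e39 cm^-2
    return inst_lumi_cm2_s * effective_seconds / 1e39

# At 5e34 cm^-2 s^-1 with an assumed ~6e6 effective seconds per year:
per_year = integrated_lumi_fb(5e34, 6e6)   # 300 fb^-1 per year
ten_years = 10 * per_year                  # 3000 fb^-1 over ten years
print(per_year, ten_years)
```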
On the use of heterogeneous computing in high-energy particle physics at the ATLAS detector
A dissertation submitted in fulfillment of the requirements
for the degree of Master of Physics
in the
School of Physics
November 1, 2017.
The ATLAS detector at the Large Hadron Collider (LHC) at CERN is
undergoing upgrades to its instrumentation, as well as the hardware and
software that comprise its Trigger and Data Acquisition (TDAQ) system.
The increased energy will yield larger cross sections for interesting physics
processes, but will also lead to increased artifacts in on-line reconstruction
in the trigger, as well as increased trigger rates, beyond the current system’s
capabilities. To meet these demands it is likely that the massive parallelism
of General-Purpose computing on Graphics Processing Units (GPGPU)
will be utilised. This dissertation addresses the problem of integrating GPGPU
into the existing TDAQ platform, detailing and analysing
GPGPU performance in the context of a high-throughput,
on-line environment like ATLAS. Preliminary tests show low to moderate
speed-up with GPU relative to CPU, indicating that to achieve a more significant
performance increase it may be necessary to alter the current platform
beyond pairing suitable GPUs to CPUs in an optimum ratio. Possible
solutions are proposed and recommendations for future work are given.
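The low-to-moderate speed-ups reported above are the expected behaviour when only part of the workload can be offloaded: Amdahl's law caps the overall gain no matter how fast the GPU kernels are. This is a generic illustration, not the dissertation's own model:

```python
# Amdahl's-law illustration: if only a fraction of an event's processing
# is offloadable to the GPU, overall speed-up saturates regardless of
# how fast the accelerated kernels run. Numbers below are illustrative.

def amdahl_speedup(parallel_fraction, accel_factor):
    """Overall speed-up when a fraction of the work is accelerated."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / accel_factor)

# A 100x kernel speed-up on 60% of the processing gives under 2.5x overall:
print(amdahl_speedup(0.6, 100.0))
```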
Charged Particle Tracking in Real-Time Using a Full-Mesh Data Delivery Architecture and Associative Memory Techniques
We present a flexible and scalable approach to address the challenges of
charged particle track reconstruction in real-time event filters (Level-1
triggers) in collider physics experiments. The method described here is based
on a full-mesh architecture for data distribution and relies on the Associative
Memory approach to implement a pattern recognition algorithm that quickly
identifies and organizes hits associated with trajectories of particles
originating from particle collisions. We describe a successful implementation
of a demonstration system composed of several innovative hardware and
algorithmic elements. The implementation of a full-size system relies on the
assumption that an Associative Memory device with sufficient pattern
density becomes available in the future, either as a dedicated ASIC or in a
modern FPGA. We demonstrate excellent performance in terms of track
reconstruction efficiency, purity, momentum resolution, and processing time
measured with data from a simulated LHC-like tracking detector.
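The Associative Memory pattern recognition described above can be sketched in software: a bank of precomputed coarse-resolution "roads" (one super-strip per detector layer) is compared against the coarsened hits of an event, and a road fires when every layer matches. The bank, layer count, and granularity below are invented for illustration:

```python
# Illustrative sketch of Associative-Memory-style pattern matching.
# All numbers (super-strip width, bank contents) are assumptions.

def coarsen(hit, superstrip_width=8):
    """Map a fine-granularity hit position to a super-strip id."""
    return hit // superstrip_width

def matched_roads(pattern_bank, event_hits_per_layer):
    """Return the stored roads whose super-strip is hit in every layer.

    pattern_bank: list of tuples, one super-strip id per layer.
    event_hits_per_layer: list of sets of fine hit positions, per layer.
    """
    coarse = [{coarsen(h) for h in hits} for hits in event_hits_per_layer]
    return [road for road in pattern_bank
            if all(ss in coarse[layer] for layer, ss in enumerate(road))]

bank = [(1, 2, 3, 4), (5, 5, 6, 6)]        # two stored 4-layer roads
event = [{12, 40}, {17, 41}, {30}, {33}]   # fine hits, one set per layer
print(matched_roads(bank, event))          # only the first road fires
```

In hardware the comparison runs for all stored patterns in parallel as the hits stream in, which is what makes the approach fast enough for a Level-1 trigger.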
Systems and algorithms for low-latency event reconstruction for upgrades of the Level-1 trigger of the CMS experiment at CERN
With the increasing centre-of-mass energy and luminosity of the Large Hadron Collider
(LHC), the Compact Muon Solenoid (CMS) experiment is undertaking upgrades to its triggering system
in order to maintain its data-taking efficiency. In 2016, the Phase-1 upgrade to the CMS
Level-1 Trigger (L1T) was commissioned, which required the development of tools for validation of
changes to the trigger algorithm firmware and for ongoing monitoring of the trigger system
during data-taking. A Phase-2 upgrade to the CMS L1T is currently underway, in preparation
for the High-Luminosity upgrade of the LHC (HL-LHC). The HL-LHC environment is expected
to be particularly challenging for the CMS L1T due to the increased number of simultaneous
interactions per bunch crossing, known as pileup. In order to mitigate the effect of pileup, the
CMS Phase-2 Outer Tracker is being upgraded with capabilities which will allow it to provide
tracks to the L1T for the first time.
A key to mitigating pileup is the ability to identify the location and decay products of the signal
vertex in each event. For this purpose, two conventional algorithms have been investigated, with
a baseline being proposed and demonstrated in FPGA hardware. To extend and complement the
baseline vertexing algorithm, Machine Learning techniques were used to evaluate how different
track parameters can be included in the vertex reconstruction process. This work culminated
in the creation of a deep convolutional neural network, capable of both position reconstruction
and association through the intermediate storage of tracks into a z histogram where the optimal
weighting of each track can be learned. The position reconstruction part of this end-to-end model
was implemented and when compared to the baseline algorithm, a 30% improvement on the
vertex position resolution in tt̄ events was observed.
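The baseline histogram-based vertexing described above can be sketched generically: bin the tracks' longitudinal impact parameters (z0) and take the centre of the heaviest sliding window. The bin width, window size, and track sample below are illustrative choices, not the thesis's tuned parameters:

```python
# Minimal sketch of histogram-based primary-vertex finding in z.
# Bin width, range, and window size are illustrative assumptions.

def find_vertex_z(z0s, weights=None, bin_width=0.3, z_range=15.0, window=3):
    nbins = int(2 * z_range / bin_width)
    hist = [0.0] * nbins
    weights = weights or [1.0] * len(z0s)
    for z, w in zip(z0s, weights):
        b = int((z + z_range) / bin_width)
        if 0 <= b < nbins:
            hist[b] += w
    # slide a fixed-size window over the histogram, keep the heaviest
    best = max(range(nbins - window + 1),
               key=lambda i: sum(hist[i:i + window]))
    return -z_range + (best + window / 2) * bin_width

tracks_z0 = [0.1, 0.2, 0.15, 0.45, -5.0, 7.3]  # four tracks near z ~ 0.15
print(round(find_vertex_z(tracks_z0), 2))      # → 0.15
```

The thesis's neural-network extension effectively learns the per-track weights in this histogram instead of using a fixed weighting.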
An FPGA-based architecture for real-time cluster finding in the LHCb silicon pixel detector
The data acquisition system of the LHCb experiment has been substantially
upgraded for the LHC Run 3, with the unprecedented capability of reading out
and fully reconstructing all proton-proton collisions in real time, occurring
with an average rate of 30 MHz, for a total data flow of approximately
32 Tb/s. The high demand of computing power required by this task has
motivated a transition to a hybrid heterogeneous computing architecture,
where a farm of graphics cores, GPUs, is used in addition to general-purpose
processors, CPUs, to speed up the execution of reconstruction algorithms. In
a continuing effort to improve the real-time processing capabilities of this new
DAQ system, also with a view to further luminosity increases in the future,
low-level, highly parallelizable tasks are increasingly being addressed at the
earliest stages of the data acquisition chain, using special-purpose computing
accelerators. A promising solution is offered by custom-programmable FPGA
devices, which are well suited to performing high-volume computations with
high throughput, a high degree of parallelism, limited power consumption, and
low latency. In this context, a two-dimensional FPGA-friendly cluster-finding
algorithm has been developed to reconstruct hit positions in the new vertex
pixel detector (VELO) of the LHCb Upgrade experiment. The associated
firmware architecture, implemented in VHDL, has been integrated
within the VELO readout, without the need for extra cards, as a further
enhancement of the DAQ system. This pre-processing allows the first level
of the software trigger to accept an 11% higher rate of events, as the ready-made
hit coordinates accelerate the track reconstruction, while leading to a
drop in electrical power consumption, as the FPGA implementation requires
O(50x) less power than the GPU one. The tracking performance of this novel
system, being indistinguishable from that of a full-fledged software implementation,
allows the raw pixel data to be dropped immediately at the readout level,
yielding the additional benefit of a 14% reduction in data flow. The clustering
architecture was commissioned during the start of LHCb Run 3 and
currently runs in real time during physics data taking, reconstructing VELO
hit coordinates on the fly at the LHC collision rate.
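A software analogue of the two-dimensional cluster finding described above is connected-component grouping of neighbouring active pixels, with each cluster's centroid taken as the reconstructed hit position. This is a generic sketch, not the VHDL architecture itself; the pixel map is invented:

```python
# Generic 2D cluster finder: group neighbouring active pixels
# (8-connectivity) and report each cluster's centroid as the hit.

def find_clusters(active):
    """active: set of (row, col) pixels over threshold -> list of clusters."""
    remaining, clusters = set(active), []
    while remaining:
        stack = [remaining.pop()]
        cluster = []
        while stack:
            r, c = stack.pop()
            cluster.append((r, c))
            for dr in (-1, 0, 1):           # visit the 8 neighbours
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in remaining:
                        remaining.remove(n)
                        stack.append(n)
        clusters.append(sorted(cluster))
    return clusters

def centroid(cluster):
    rs, cs = zip(*cluster)
    return (sum(rs) / len(rs), sum(cs) / len(cs))

pixels = {(0, 0), (0, 1), (1, 1), (5, 5)}   # two separated clusters
cl = find_clusters(pixels)
print([centroid(c) for c in cl])
```

The FPGA version streams pixels through a pipeline rather than iterating, but the grouping logic it implements is the same.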
Simulation Studies of Digital Filters for the Phase-II Upgrade of the Liquid-Argon Calorimeters of the ATLAS Detector at the High-Luminosity LHC
The Large Hadron Collider and the ATLAS detector are undergoing a comprehensive upgrade split into multiple phases. This effort also affects the liquid-argon calorimeters, whose main readout electronics will be replaced completely during the final phase. The electronics consist of an analog and a digital portion: the former amplifies the signal and shapes it to facilitate sampling, while the latter executes an energy reconstruction algorithm. Both must be improved during the upgrade so that the detector can accurately reconstruct interesting collision events and efficiently suppress uninteresting ones.
In this thesis, simulation studies are presented that optimize both the analog and the digital readout of the liquid-argon calorimeters. The simulation is verified using calibration data measured during Run 2 of the ATLAS detector. The influence of several parameters of the analog shaping stage on the energy resolution is analyzed, and the utility of an increased signal sampling rate of 80 MHz is investigated. Furthermore, a number of linear and non-linear energy reconstruction algorithms are reviewed, and the performance of a selection of them is compared.
It is demonstrated that increasing the order of the Optimal Filter, the algorithm currently in use, improves the energy resolution by 2 to 3 % in all detector regions. The Wiener filter with forward correction, a non-linear algorithm, gives an improvement of up to 10 % in some regions, but degrades the resolution in others. A link between this behavior and the probability of falsely detected calorimeter hits is shown, and possible solutions are discussed.

1 Introduction
2 An Overview of High-Energy Particle Physics
2.1 The Standard Model of Particle Physics
2.2 Verification of the Standard Model
2.3 Beyond the Standard Model
3 LHC, ATLAS, and the Liquid-Argon Calorimeters
3.1 The Large Hadron Collider
3.2 The ATLAS Detector
3.3 The ATLAS Liquid-Argon Calorimeters
4 Upgrades to the ATLAS Liquid-Argon Calorimeters
4.1 Physics Goals
4.2 Phase-I Upgrade
4.3 Phase-II Upgrade
5 Noise Suppression With Digital Filters
5.1 Terminology
5.2 Digital Filters
5.3 Wiener Filter
5.4 Matched Wiener Filter
5.5 Matched Wiener Filter Without Bias
5.6 Timing Reconstruction, Optimal Filtering, and Selection Criteria
5.7 Forward Correction
5.8 Sparse Signal Restoration
5.9 Artificial Neural Networks
6 Simulation of the ATLAS Liquid-Argon Calorimeter Readout Electronics
6.1 AREUS
6.2 Hit Generation and Sampling
6.3 Pulse Shapes
6.4 Thermal Noise
6.5 Quantization
6.6 Digital Filters
6.7 Statistical Analysis
7 Results of the Readout Electronics Simulation Studies
7.1 Statistical Treatment
7.2 Simulation Verification Using Run-2 Data
7.3 Dependence of the Noise on the Shaping Time
7.4 The Analog Readout Electronics and the ADC
7.5 The Optimal Filter (OF)
7.6 The Wiener Filter
7.7 The Wiener Filter with Forward Correction (WFFC)
7.8 Final Comparison and Conclusions
8 Conclusions and Outlook
Appendices
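The Optimal Filter discussed in the abstract above reconstructs the deposited energy as a fixed linear combination of pedestal-subtracted pulse samples, E = Σᵢ aᵢ (sᵢ − pedestal). The coefficients, pulse shape, and pedestal below are invented for illustration; in practice the coefficients are derived from the known pulse shape and the measured noise autocorrelation:

```python
# Minimal illustration of Optimal-Filter-style energy reconstruction.
# Coefficients and pulse shape are assumptions, normalised so that a
# pulse of unit amplitude reconstructs to 1.

def of_energy(samples, coeffs, pedestal=0.0):
    """Linear energy estimate from digitized pulse samples."""
    assert len(samples) == len(coeffs)
    return sum(a * (s - pedestal) for a, s in zip(coeffs, samples))

pulse = [0.0, 0.45, 1.0, 0.6, 0.2]        # assumed shape, peak = amplitude
coeffs = [-0.1, 0.2, 0.8, 0.2, -0.05]     # invented; sum(a_i * pulse_i) = 1

amplitude = 123.4
samples = [amplitude * p + 50.0 for p in pulse]   # pedestal of 50 ADC counts
print(of_energy(samples, coeffs, pedestal=50.0))  # ≈ 123.4
```

Increasing the filter order, as studied in the thesis, simply means using more samples (and thus more coefficients) in this sum.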
Complexity-reduced hardware-based track-trigger for CMS upgrade
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC)
is designed to study the results of proton-proton collisions. The Tracker
sub-detector is designed to detect and reconstruct the trajectories of charged
particles produced by the collisions. During the lifetime of the CMS detector,
there have been several upgrades aimed at increasing the chance of discovering
new physics through increased luminosity levels and instrumentation of
advanced technology. The High-Luminosity upgrade of the LHC will deliver
collisions with an average of 200 proton-proton
interactions per bunch crossing. The Level-1 Trigger system promptly analyses
and filters collisions using hardware to reduce the data volume in real-time. For
the upgrade, the trigger mechanism will use a particle trajectory estimator that
discriminates between particles based on their transverse momentum (pT).
Particles with pT ≥ 2 GeV/c will be transmitted to the Level-1 Track-Trigger
system for trajectory reconstruction within a fixed 3 μs latency. This thesis
presents a novel Hardware-based Multivariate Linear Fitter (MVLF) system
focusing on robustness in tracking efficiency and reduction in logic resource
usage within the specified latency. The system components are implemented in
Field Programmable Gate Arrays (FPGA), targeting 16 nm FinFET UltraScale+
silicon technology. The development was performed using the High-Level
Synthesis (HLS) automation tools and the Hardware acceleration platform for
Application-Specific Integrated Circuits (ASIC). A firmware demonstrator has
been assembled to verify the feasibility and compatibility of the scaled system
with the CMS Level-1 Track-Trigger infrastructure. The system’s performance is
compared to past and current system developments, and the results are
presented accordingly.
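The multivariate linear fit at the heart of such a system can be illustrated generically: for fixed layer radii, the least-squares solution for the track parameters is a matrix of constants applied to the hit coordinates, which maps naturally onto FPGA multiply-accumulate logic. The radii, the straight-line r-z model, and all numbers below are illustrative assumptions, not the thesis's design:

```python
# Sketch of a precomputed linear track fit: for fixed layer radii the
# least-squares matrix F = (A^T A)^(-1) A^T depends only on geometry,
# so fitting a track reduces to a few dot products with stored constants.

RADII = [25.0, 35.0, 50.0, 70.0, 90.0, 110.0]   # assumed layer radii (cm)

def fit_matrix(radii):
    """Precompute the 2 x n least-squares matrix for the model z = z0 + t*r."""
    n = len(radii)
    sr = sum(radii)
    srr = sum(r * r for r in radii)
    det = n * srr - sr * sr
    row_z0 = [(srr - sr * r) / det for r in radii]
    row_t = [(n * r - sr) / det for r in radii]
    return row_z0, row_t

F_Z0, F_T = fit_matrix(RADII)   # stored once, reused for every track

def fit_track(z_hits):
    """Apply the stored matrix: two dot products per track."""
    z0 = sum(f * z for f, z in zip(F_Z0, z_hits))
    t = sum(f * z for f, z in zip(F_T, z_hits))
    return z0, t

# A straight track with z0 = 2.0 cm and slope 0.1 is recovered exactly:
hits = [2.0 + 0.1 * r for r in RADII]
print(fit_track(hits))
```

Because F is fixed per detector region, the hardware never inverts a matrix online, which is what keeps the latency within the Level-1 budget.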