The artificial retina processor for track reconstruction at the LHC crossing rate
We present results of an R&D study for a specialized processor capable of
precisely reconstructing, in pixel detectors, hundreds of charged-particle
tracks from high-energy collisions at a 40 MHz rate. We apply a highly parallel
pattern-recognition algorithm, inspired by studies of how the brain processes
visual images in nature, and describe in detail an efficient hardware
implementation in high-speed, high-bandwidth FPGA devices.
This is the first detailed demonstration of reconstruction of offline-quality
tracks at 40 MHz and makes the device suitable for processing Large Hadron
Collider events at the full crossing frequency.
Comment: 4th draft of WIT proceedings modified according to JINST referee's comments. 10 pages, 6 figures, 2 tables.
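The retina approach described in this abstract can be illustrated in a few lines of Python. This is a toy sketch of the general idea only, not the paper's actual implementation: a grid of cells in track-parameter space (slope m and intercept q of a straight track y = m·x + q), where each cell accumulates a Gaussian-weighted response from every hit, and the cell with the maximum response identifies the track. All names and parameter values here are illustrative assumptions.

```python
import math

def retina_response(hits, m, q, sigma=0.5):
    """Response of one cell: sum of Gaussian weights of hit-to-track residuals."""
    r = 0.0
    for x, y in hits:
        d = y - (m * x + q)          # residual of this hit w.r.t. the cell's track
        r += math.exp(-d * d / (2 * sigma * sigma))
    return r

def find_track(hits, m_grid, q_grid):
    """Scan the whole cell grid and return the (m, q) of the maximum response."""
    best = max((retina_response(hits, m, q), m, q)
               for m in m_grid for q in q_grid)
    return best[1], best[2]

# Hits from a straight track y = 2x + 1 on five detector layers:
hits = [(x, 2 * x + 1) for x in range(1, 6)]
grid = [i * 0.5 for i in range(-8, 9)]      # coarse illustrative parameter grid
m, q = find_track(hits, grid, grid)
# m, q land on the cell nearest the true parameters (2, 1)
```

In the FPGA device every cell response is evaluated in parallel; the sequential double loop here merely stands in for that hardware parallelism.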
Trigger design studies at future high-luminosity colliders
The LHC will enter its high-luminosity phase in 2026, delivering a peak instantaneous luminosity that will produce events with an average pile-up of 200. In order to pursue its ambitious physics programme, the CMS experiment will undergo a major upgrade, and the level-1 trigger will be replaced with a new system able to run the particle flow algorithm. An algorithm that reconstructs jets and computes energy sums from particles found by the particle flow algorithm is presented in this thesis. The algorithm provides performance similar to offline reconstruction while keeping the same thresholds as in previous CMS runs. The algorithm was implemented in firmware, and an agreement rate of 96% was obtained in a small-scale demonstrator running on a Xilinx FPGA. The full-scale algorithm is expected to use around 41.5% of the LUTs, 11.6% of the flip-flops, and 2.9% of the DSPs of a Xilinx VU9P FPGA running at a frequency of 360 MHz.

The FCC-hh project studies the feasibility of a hadron collider operating at a centre-of-mass energy of 100 TeV after the LHC operations have ended. The collider is expected to operate at baseline and peak instantaneous luminosities corresponding to an average pile-up of 200 and 1000, respectively. The rates of a trigger system for a detector at the FCC-hh were estimated by scaling the rates of the Phase-2 CMS level-1 trigger and by developing a parameterised simulation of the Phase-1 trigger system. The results showed that the 100-kHz threshold is expected at 85 GeV, 170 GeV, and 350 GeV for single-muon, e/γ, and jet triggers, respectively.
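As a rough illustration of the kind of jet reconstruction described above, the following is a minimal software sketch of a seeded-cone jet finder and an HT energy sum over particle-flow candidates. It is not the thesis firmware; the cone size, seed threshold, and HT cut are assumed values chosen for the example.

```python
import math

def delta_r(a, b):
    """Angular distance in (eta, phi) between two (pt, eta, phi) candidates."""
    deta = a[1] - b[1]
    dphi = (a[2] - b[2] + math.pi) % (2 * math.pi) - math.pi  # wrap phi
    return math.hypot(deta, dphi)

def seeded_cone_jets(particles, cone=0.4, seed_pt=5.0):
    """particles: list of (pt, eta, phi). Returns jets as (pt, eta, phi)."""
    remaining = sorted(particles, reverse=True)   # highest-pT candidate first
    jets = []
    while remaining and remaining[0][0] >= seed_pt:
        seed = remaining[0]
        in_cone = [p for p in remaining if delta_r(p, seed) < cone]
        jets.append((sum(p[0] for p in in_cone), seed[1], seed[2]))
        remaining = [p for p in remaining if delta_r(p, seed) >= cone]
    return jets

def ht(jets, min_pt=30.0):
    """Scalar sum of jet pT above threshold."""
    return sum(pt for pt, _, _ in jets if pt >= min_pt)

parts = [(50.0, 0.1, 0.2), (20.0, 0.15, 0.25), (40.0, -1.0, 2.0), (3.0, 2.0, -1.0)]
jets = seeded_cone_jets(parts)
# two jets: 70 GeV around (0.1, 0.2) and 40 GeV around (-1.0, 2.0)
```

The firmware version processes all candidates in parallel within a fixed latency budget; this sequential version only conveys the clustering logic.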
The model of an anomaly detector for HiLumi LHC magnets based on Recurrent Neural Networks and adaptive quantization
This paper examines the applicability of Recurrent Neural Network models for
detecting anomalous behavior of the CERN superconducting magnets. In order to
conduct the experiments, the authors designed and implemented an adaptive
signal quantization algorithm and a custom GRU-based detector, and developed a
method for selecting the detector parameters. Three
different datasets were used for testing the detector. Two artificially
generated datasets were used to assess the raw performance of the system
whereas the 231 MB dataset composed of the signals acquired from HiLumi magnets
was intended for real-life experiments and model training. Several different
setups of the developed anomaly detection system were evaluated and compared
with state-of-the-art OC-SVM reference model operating on the same data. The
OC-SVM model was equipped with a rich set of feature extractors accounting for
a range of the input signal properties. It was determined in the course of the
experiments that the detector, along with its supporting design methodology,
reaches an F1 score equal or very close to 1 for almost all test sets. Due to the
profile of the data, the best_length setup of the detector turned out to
perform the best among all five tested configuration schemes of the detection
system. The quantization parameters have the biggest impact on the overall
performance of the detector with the best values of input/output grid equal to
16 and 8, respectively. The proposed detection solution significantly
outperformed the OC-SVM-based detector in most of the cases, with much more
stable performance across all the datasets.
Comment: Related to arXiv:1702.0083
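A hedged sketch of what adaptive signal quantization can look like follows. This illustrates the general technique, not the paper's specific algorithm: bin edges are placed at empirical quantiles of a reference signal, so each of the output symbols is used roughly equally often, giving dense sampling where the signal spends most of its time. The function names and the 16-level choice are assumptions for the example.

```python
import bisect

def fit_quantizer(reference, n_levels=16):
    """Return n_levels - 1 internal bin edges taken at empirical quantiles."""
    s = sorted(reference)
    return [s[len(s) * k // n_levels] for k in range(1, n_levels)]

def quantize(signal, edges):
    """Map each sample to an integer symbol in [0, len(edges)]."""
    return [bisect.bisect_right(edges, x) for x in signal]

ref = [0.01 * i for i in range(1000)]            # toy reference signal
edges = fit_quantizer(ref, n_levels=16)
symbols = quantize([0.0, 5.0, 9.99], edges)
# symbols span the bottom, middle, and top of the 16-level grid
```

In a detector pipeline the quantized symbol stream would then feed the GRU-based model, which predicts the next symbol and flags large prediction errors as anomalies.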
On the use of heterogeneous computing in high-energy particle physics at the ATLAS detector
A dissertation submitted in fulfillment of the requirements
for the degree of Master of Physics
in the
School of Physics
November 1, 2017.

The ATLAS detector at the Large Hadron Collider (LHC) at CERN is
undergoing upgrades to its instrumentation, as well as the hardware and
software that comprise its Trigger and Data Acquisition (TDAQ) system.
The increased energy will yield larger cross sections for interesting physics
processes, but will also lead to increased artifacts in on-line reconstruction
in the trigger, as well as increased trigger rates, beyond the current system’s
capabilities. To meet these demands it is likely that the massive parallelism
of General-Purpose computing on Graphics Processing Units (GPGPU)
will be utilised. This dissertation addresses the problem of integrating GPGPU
into the existing Trigger and TDAQ platforms, detailing and analysing
GPGPU performance in the context of performing in a high-throughput,
on-line environment like ATLAS. Preliminary tests show low to moderate
speed-up with GPU relative to CPU, indicating that to achieve a more significant
performance increase it may be necessary to alter the current platform
beyond pairing suitable GPUs to CPUs in an optimum ratio. Possible
solutions are proposed and recommendations for future work are given.
Systems and algorithms for low-latency event reconstruction for upgrades of the level-1 trigger of the CMS experiment at CERN
With the increasing centre-of-mass energy and luminosity of the Large Hadron Collider
(LHC), the Compact Muon Solenoid (CMS) experiment is undertaking upgrades to its triggering system
in order to maintain its data-taking efficiency. In 2016, the Phase-1 upgrade to the CMS Level-
1 Trigger (L1T) was commissioned which required the development of tools for validation of
changes to the trigger algorithm firmware and for ongoing monitoring of the trigger system
during data-taking. A Phase-2 upgrade to the CMS L1T is currently underway, in preparation
for the High-Luminosity upgrade of the LHC (HL-LHC). The HL-LHC environment is expected
to be particularly challenging for the CMS L1T due to the increased number of simultaneous
interactions per bunch crossing, known as pileup. In order to mitigate the effect of pileup, the
CMS Phase-2 Outer Tracker is being upgraded with capabilities which will allow it to provide
tracks to the L1T for the first time.
A key to mitigating pileup is the ability to identify the location and decay products of the signal
vertex in each event. For this purpose, two conventional algorithms have been investigated, with
a baseline being proposed and demonstrated in FPGA hardware. To extend and complement the
baseline vertexing algorithm, Machine Learning techniques were used to evaluate how different
track parameters can be included in the vertex reconstruction process. This work culminated
in the creation of a deep convolutional neural network, capable of both position reconstruction
and association through the intermediate storage of tracks into a z histogram where the optimal
weighting of each track can be learned. The position reconstruction part of this end-to-end model
was implemented and when compared to the baseline algorithm, a 30% improvement on the
vertex position resolution in tt̄ events was observed.
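The histogram-based baseline vertexing idea mentioned above can be sketched in software. This is illustrative only, not the thesis implementation; the bin count, window size, and z range are assumed values. Track z0 values are filled into a histogram and the primary-vertex position is taken from the sliding window with the largest pT-weighted content.

```python
def histo_vertex(tracks, z_range=(-15.0, 15.0), n_bins=128, window=3):
    """tracks: list of (z0, pt). Returns the estimated vertex z position."""
    lo, hi = z_range
    width = (hi - lo) / n_bins
    hist = [0.0] * n_bins
    for z0, pt in tracks:
        b = int((z0 - lo) / width)           # bin index of this track's z0
        if 0 <= b < n_bins:
            hist[b] += pt                    # pT-weighted fill
    # slide a fixed-size window and keep the highest-sum position
    best_sum, best_i = -1.0, 0
    for i in range(n_bins - window + 1):
        s = sum(hist[i:i + window])
        if s > best_sum:
            best_sum, best_i = s, i
    return lo + (best_i + window / 2) * width

# 20 signal tracks near z = 2.5 plus two scattered pile-up tracks
tracks = [(2.5 + 0.01 * i, 2.0) for i in range(20)] + [(-7.0, 1.0), (5.0, 1.0)]
z_pv = histo_vertex(tracks)
```

The neural-network extension described in the abstract effectively learns a per-track weight for this fill step instead of using raw pT.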
A SciFi tracker for the LHCb experiment
The quest to understand the prevalence of matter over antimatter in the observable universe drives the Large Hadron Collider Beauty (LHCb) Experiment at CERN, situated beneath the France-Switzerland border. This thesis focuses on a detector upgrade crucial to enhancing the sensitivity of the LHCb Experiment. A key ingredient of this upgrade is the Scintillating Fibre (SciFi) Tracker. The introduction of the SciFi replaced key components like the Outer and Inner Tracker, improving tracking efficiency and spatial resolution.

To ensure the SciFi's radiation resilience, comprehensive tests were conducted, which revealed effects on Field-Programmable Gate Arrays (FPGAs), including speed degradation, leakage current, loss of re-programmability, Single Event Upsets (SEUs), and Single Event Latch-ups (SELs). Results indicated that speed degradation, leakage current, and SELs were manageable over the detector's lifetime. However, FPGAs became unprogrammable after a certain radiation exposure, necessitating operational planning. Mitigation strategies, like triple modular redundancy, reduced SEUs to an acceptable level.

Mass-produced SciFi modules and readout electronics underwent their first particle beam test, allowing optimization of the operating parameters of the front-end electronics, such as clustering coefficients, thresholds, and shaper settings. Resolution analysis demonstrated compliance with the detector specifications. With an efficiency surpassing 99% and a spatial resolution better than 70 µm, the SciFi is validated for LHCb operation. As the SciFi is commissioned, the configurations explored in this thesis offer valuable insights for optimizing the detector during commissioning and beyond.
An FPGA-based architecture for real-time cluster finding in the LHCb silicon pixel detector
The data acquisition system of the LHCb experiment has been substantially
upgraded for the LHC Run 3, with the unprecedented capability of reading out
and fully reconstructing all proton-proton collisions in real time, occurring
with an average rate of 30 MHz, for a total data flow of approximately
32 Tb/s. The high demand for computing power required by this task has
motivated a transition to a hybrid heterogeneous computing architecture,
where a farm of graphics processing units (GPUs) is used in addition to
general-purpose processors (CPUs) to speed up the execution of reconstruction
algorithms. In a continuing effort to improve the real-time processing
capabilities of this new DAQ system, also with a view to further luminosity
increases in the future, low-level, highly parallelizable tasks are
increasingly being addressed at the earliest stages of the data acquisition
chain, using special-purpose computing accelerators. A promising solution is
offered by custom-programmable FPGA devices, which are well suited to perform
high-volume computations with high throughput, a high degree of parallelism,
limited power consumption, and low latency. In this context, a two-dimensional
FPGA-friendly cluster-finding algorithm has been developed to reconstruct hit
positions in the new vertex pixel detector (VELO) of the LHCb Upgrade
experiment. The associated firmware architecture, implemented in VHDL, has
been integrated within the VELO readout, without the need for extra cards, as
a further enhancement of the DAQ system. This pre-processing allows the first
level of the software trigger to accept an 11% higher rate of events, as the
ready-made hit coordinates accelerate the track reconstruction, while leading
to a drop in electrical power consumption, as the FPGA implementation requires
O(50x) less power than the GPU one. The tracking performance of this novel
system, indistinguishable from a full-fledged software implementation, allows
the raw pixel data to be dropped immediately at the readout level, yielding
the additional benefit of a 14% reduction in data flow. The clustering
architecture was commissioned during the start of LHCb Run 3 and currently
runs in real time during physics data taking, reconstructing VELO hit
coordinates on the fly at the LHC collision rate.
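A simple software analogue of two-dimensional cluster finding (not the actual VHDL architecture, whose dataflow is very different) groups touching active pixels with a flood fill and emits each cluster's centroid as the reconstructed hit coordinate:

```python
def find_clusters(pixels):
    """pixels: set of (col, row) active pixels. Returns a list of centroids."""
    todo, centroids = set(pixels), []
    while todo:
        stack = [todo.pop()]
        cluster = []
        while stack:                          # flood fill one cluster
            c, r = stack.pop()
            cluster.append((c, r))
            for dc in (-1, 0, 1):             # 8-connected neighbourhood
                for dr in (-1, 0, 1):
                    nb = (c + dc, r + dr)
                    if nb in todo:
                        todo.remove(nb)
                        stack.append(nb)
        n = len(cluster)
        centroids.append((sum(c for c, _ in cluster) / n,
                          sum(r for _, r in cluster) / n))
    return centroids

hits = {(10, 10), (10, 11), (11, 10), (40, 5)}   # one 3-pixel cluster + one single
cents = sorted(find_clusters(hits))
# cents -> [(10.33..., 10.33...), (40.0, 5.0)]
```

Emitting centroids instead of raw pixels is what allows the downstream trigger to skip pixel decoding, the source of the data-flow reduction quoted above.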
Nanosecond anomaly detection with decision trees for high energy physics and real-time application to exotic Higgs decays
We present a novel implementation of the artificial intelligence autoencoding
algorithm, used as an ultrafast and ultraefficient anomaly detector, built with
a forest of deep decision trees on field-programmable gate arrays (FPGAs).
Scenarios at the Large Hadron Collider at CERN are considered, for which the
autoencoder is trained using known physical processes of the Standard Model.
The design is then deployed in real-time trigger systems for anomaly detection
of new unknown physical processes, such as the detection of exotic Higgs
decays, on events that fail conventional threshold-based algorithms. The
inference is made within a latency value of 25 ns, the time between successive
collisions at the Large Hadron Collider, at percent-level resource usage. Our
method offers anomaly detection at the lowest latency values for edge AI users
with tight resource constraints.
Comment: 26 pages, 9 figures, 1 table
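The autoencoder-with-trees idea can be caricatured in a few lines of Python. The sketch below is a deliberately tiny stand-in, not the paper's method: each "tree" here is a depth-1 regression stump that predicts one input feature from another, trained only on background-like events, and the anomaly score is the squared reconstruction error, small for events resembling the training set and large otherwise.

```python
def fit_stump(x, y):
    """Find a threshold on x minimising squared error of piecewise-constant y."""
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((yi - ml) ** 2 for yi in left)
               + sum((yi - mr) ** 2 for yi in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda v: ml if v <= t else mr    # the trained "tree"

def anomaly_score(event, stumps):
    """Sum of squared errors between each feature and its reconstruction."""
    return sum((event[j] - s(event[k])) ** 2 for (j, k), s in stumps.items())

# Background: feature 1 tracks feature 0 (two clusters of events)
bkg = [(0.0, 0.0)] * 50 + [(1.0, 1.0)] * 50
x0 = [e[0] for e in bkg]
x1 = [e[1] for e in bkg]
stumps = {(1, 0): fit_stump(x0, x1), (0, 1): fit_stump(x1, x0)}
s_bkg = anomaly_score((1.0, 1.0), stumps)    # background-like: score ~0
s_anom = anomaly_score((0.0, 1.0), stumps)   # broken correlation: large score
```

In the FPGA design all tree traversals run in parallel within the 25 ns budget; the point of the sketch is only the scoring principle, that events violating the learned correlations reconstruct poorly.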