Optimization of high-throughput real-time processes in physics reconstruction
The current thesis has been developed in collaboration between
Universidad de Sevilla and the European Organization for Nuclear
Research, CERN.
The LHCb detector is one of the four large detectors situated along the Large Hadron Collider, LHC. In LHCb, particles are collided at high energies in order to understand the difference between matter and antimatter. Due to the massive quantity of data generated by the detector, it is necessary to filter the data in real time, informed by the current knowledge collected in the Standard Model of particle physics. The filtering stage, also known as the High Level Trigger, processes a throughput of 40 Tb/s of data and performs a selection of approximately 1000:1. The throughput is thus reduced to roughly 40 Gb/s of output data, which is then stored for subsequent analysis.
The High Level Trigger process is subdivided into two stages: High Level Trigger 1 (HLT1) and High Level Trigger 2 (HLT2). HLT1 runs in real time and yields a data reduction of approximately 30:1. HLT1 consists of a series of software processes that reconstruct what happened in each particle collision. The HLT1 reconstruction analyzes only the trajectories of particles produced in the collision, solving a problem known as track reconstruction, which determines whether the collision data is kept or discarded. In contrast, HLT2 is a finer-grained process, which requires more time to execute and reconstructs all the subdetectors composing LHCb.
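As a rough cross-check of the figures quoted above, the two reduction factors can be chained; the following minimal sketch uses only the numbers given in this abstract (all variable names are illustrative):

```python
# Back-of-the-envelope check of the quoted HLT reduction factors.
input_tbps = 40.0                # HLT input throughput, Tb/s
hlt1_reduction = 30.0            # HLT1 keeps roughly 1 event in 30
overall_reduction = 1000.0       # full HLT reduces ~1000:1

after_hlt1_tbps = input_tbps / hlt1_reduction        # ~1.33 Tb/s into HLT2
hlt2_reduction = overall_reduction / hlt1_reduction  # HLT2 must reduce ~33:1
output_gbps = input_tbps / overall_reduction * 1000  # ~40 Gb/s stored

print(f"after HLT1: {after_hlt1_tbps:.2f} Tb/s")
print(f"implied HLT2 reduction: {hlt2_reduction:.0f}:1")
print(f"storage output: {output_gbps:.0f} Gb/s")
```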
Towards 2020, the LHCb detector and all components of the data acquisition system will be upgraded. As part of the data acquisition system, the servers that process HLT1 and HLT2 will also be upgraded. At the same time, the LHC accelerator will be upgraded, increasing the amount of data generated in every bunch crossing roughly fivefold. Due to the accelerator and detector upgrades, the amount of data that the full HLT must process is expected to increase roughly 40-fold.
Scalability projections of the existing software towards 2020 underestimated the resources required to cope with the increase in data throughput. As a consequence, studies of all algorithms composing HLT1 and HLT2, together with a modernization of the code base, were carried out in order to improve performance and increase the processing capability of the hardware foreseen for the upgrade.
In this thesis, several algorithms of the LHCb reconstruction are explored. The track reconstruction problem is analyzed in depth, and new algorithms are proposed for its solution. Since the analyzed problems are massively parallel, these algorithms are implemented in languages specialized for modern graphics cards (GPUs), exploiting their inherently parallel architecture. Two track reconstruction algorithms are designed in this work. Furthermore, four decoding algorithms and a clustering algorithm, addressing problems also found in HLT1, have been designed and implemented. In addition, a parallel Kalman filter algorithm has been designed and implemented, which can be used in both HLT stages.
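To illustrate the kind of computation a Kalman filter performs in track fitting, below is a minimal, self-contained sketch of the predict/update steps for a straight track in one projection. It is an illustrative toy under simplified assumptions (a two-component state, position-only measurements), not the parallel LHCb implementation described in the thesis:

```python
import numpy as np

# Toy Kalman filter for a straight track in one projection:
# state x = (position, slope), measurements are hit positions at known z.
def kalman_fit(z_planes, hits, meas_var=0.01**2):
    x = np.array([hits[0], 0.0])        # initial state from the first hit
    C = np.diag([1.0, 1.0])             # loose initial covariance
    H = np.array([[1.0, 0.0]])          # we measure position only
    for z_prev, z, m in zip(z_planes[:-1], z_planes[1:], hits[1:]):
        dz = z - z_prev
        F = np.array([[1.0, dz], [0.0, 1.0]])   # straight-line propagation
        x = F @ x                                # predict state
        C = F @ C @ F.T                          # predict covariance
        r = m - (H @ x)[0]                       # residual
        S = (H @ C @ H.T)[0, 0] + meas_var       # residual variance
        K = (C @ H.T)[:, 0] / S                  # Kalman gain
        x = x + K * r                            # update state
        C = C - np.outer(K, H @ C)               # update covariance
    return x, C

z = [0.0, 1.0, 2.0, 3.0]
hits = [0.02, 0.51, 1.03, 1.49]                  # noisy hits on a slope-0.5 track
state, cov = kalman_fit(z, hits)
print("fitted (position, slope):", state)
```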
The developed algorithms satisfy the requirements set by the LHCb collaboration for the upgrade. In order to execute the algorithms efficiently on GPUs, a software framework specialized for GPUs is developed, which allows reconstruction sequences to be executed in parallel on GPUs. Combining the developed algorithms with the framework, an execution sequence is completed that lays the foundations of a GPU HLT1.
During the research carried out in this thesis, the aforementioned developments, together with a small team of collaborators coordinated by the author, led to the completion of a full GPU HLT1 sequence. The performance obtained on GPUs allows a reconstruction sequence to be executed in real time under LHCb upgrade conditions. The developed GPU HLT1 constitutes the first GPU high level trigger ever developed for an LHC experiment. Finally, various possible realizations of the GPU HLT1, integrated into a production GPU-equipped data acquisition system, are detailed.
Search by triplet: An efficient local track reconstruction algorithm for parallel architectures
Millions of particles are collided every second at the LHCb detector placed inside the Large Hadron Collider at CERN. The particles produced as a result of these collisions pass through various detecting devices which will produce a combined raw data rate of up to 40 Tbps by 2021. These data will be fed through a data acquisition system which reconstructs individual particles and filters the collision events in real time. This process will occur in a heterogeneous farm employing exclusively off-the-shelf CPU and GPU hardware, in a two-stage process known as the High Level Trigger. The reconstruction of charged particle trajectories in physics detectors, also referred to as track reconstruction or tracking, determines the position, charge and momentum of particles as they pass through detectors. The Vertex Locator subdetector (VELO) is the closest such detector to the beamline, placed outside of the region where the LHCb magnet produces a sizable magnetic field. It is used to reconstruct straight particle trajectories which serve as seeds for reconstruction in other subdetectors and to locate collision vertices. The VELO subdetector will detect up to 10⁹ particles every second, which need to be reconstructed in real time in the High Level Trigger. We present Search by triplet, an efficient track reconstruction algorithm. Our algorithm is designed to run efficiently across parallel architectures. We build on previous work and explain the algorithm's evolution since its inception. We show the scaling of our algorithm under various situations, analyse the amortized complexity of each of its constituent parts, and profile its performance. Our algorithm is the current state-of-the-art in VELO track reconstruction on SIMT architectures, and we qualify its improvements over previous results.
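To make the triplet idea concrete, the following toy sketch forms hit triplets across three consecutive layers and keeps the combinations most compatible with a straight line. All names and tolerances are illustrative assumptions, as is the equal-z layout of each layer; the production Search by triplet algorithm adds sorted hit windows, forwarding, and parallel scheduling on top of this core idea:

```python
import numpy as np

# Toy triplet seeding: extrapolate each (layer0, layer1) hit pair to layer 2
# and keep the triplet if a layer-2 hit lies close to the extrapolation.
# Hits are np.array([x, y, z]); all hits in a layer share the same z here.
def seed_triplets(layer0, layer1, layer2, max_scatter=0.01):
    seeds = []
    for h0 in layer0:
        for h1 in layer1:
            dz01 = h1[2] - h0[2]
            dz12 = layer2[0][2] - h1[2]
            pred = h1[:2] + (h1[:2] - h0[:2]) / dz01 * dz12
            dists = [np.hypot(*(h2[:2] - pred)) for h2 in layer2]
            best = int(np.argmin(dists))
            if dists[best] < max_scatter:
                seeds.append((h0, h1, layer2[best]))
    return seeds

l0 = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.50, 0.0])]
l1 = [np.array([0.1, 0.05, 1.0]), np.array([1.1, 0.55, 1.0])]
l2 = [np.array([0.2, 0.10, 2.0]), np.array([1.2, 0.60, 2.0])]
print(len(seed_triplets(l0, l1, l2)), "seeds found")   # 2
```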
Fast algorithm for real-time rings reconstruction
The GAP project is dedicated to studying the application of GPUs in several contexts in which real-time response is important for decision making. The definition of real time depends on the application under study, ranging from response times of microseconds up to several hours in the case of very computing-intensive tasks. During this conference we presented our work on low level triggers [1] [2] and high level triggers [3] in high energy physics experiments, and on specific applications for nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm developed for trigger applications, to accelerate ring reconstruction in RICH detectors when it is not possible to obtain seeds for the reconstruction from external trackers.
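A common building block in ring reconstruction is a least-squares circle fit; the sketch below shows an algebraic (Kåsa) fit to noisy ring hits. This is only an illustrative ingredient, not the seedless multi-ring trigger algorithm presented in this contribution:

```python
import numpy as np

# Algebraic (Kasa) circle fit: rewrite (x-cx)^2 + (y-cy)^2 = r^2 as the
# linear system x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
# and solve it in least squares.
def fit_circle(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2, sol[1] / 2
    r = np.sqrt(sol[2] + cx**2 + cy**2)
    return cx, cy, r

# Noisy hits on a ring of radius 1 centred at (2, -1):
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, 50)
x = 2 + np.cos(phi) + rng.normal(0, 0.02, 50)
y = -1 + np.sin(phi) + rng.normal(0, 0.02, 50)
print(fit_circle(x, y))   # ~ (2.0, -1.0, 1.0)
```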
Allen: A high-level trigger on GPUs for LHCb
We describe a fully GPU-based implementation of the first level trigger for the upgrade of the LHCb detector, due to start data taking in 2021. We demonstrate that our implementation, named Allen, can process the 40 Tbit/s data rate of the upgraded LHCb detector and perform a wide variety of pattern recognition tasks. These include finding the trajectories of charged particles, finding proton-proton collision points, identifying particles as hadrons or muons, and finding the displaced decay vertices of long-lived particles. We further demonstrate that Allen can be implemented in around 500 scientific or consumer GPU cards, that it is not I/O bound, and that it can be operated at the full LHC collision rate of 30 MHz. Allen is the first complete high-throughput GPU trigger proposed for a HEP experiment.
We would like to thank N. Neufeld and T. Colombo for many fruitful discussions. We also thank the LHCb RTA team for supporting this publication and reviewing this work. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (The Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MSHE (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (UK); DOE NP and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (The Netherlands), PIC (Spain), GridPP (UK), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open-source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany); EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union); ANR, Labex P2IO and OCEVU, and Région Auvergne-Rhône-Alpes (France); Key Research Program of Frontier Sciences of CAS, CAS PIFI, and the Thousand Talents Program (China); RFBR, RSF and Yandex LLC (Russia); GVA, XuntaGal and GENCAT (Spain); the Royal Society and the Leverhulme Trust (UK). J. Albrecht acknowledges support of the European Research Council Starting Grant PRECISION 714536. D. vom Bruch, V. V. Gligorov, F. Reiss and R. Quagliani acknowledge support of the European Research Council Consolidator Grant RECEPT 724777. H. Stevens and L. Funke acknowledge support of the Collaborative Research Center SFB 876. T. Boettcher and M. Williams are supported by US NSF Grant PHY-1912836. D. Craik is supported by US NSF Grants OAC-1836650 and PHY-1904160. A. Ustyuzhanin is supported by the Russian Science Foundation Grant Agreement No. 19-71-30020. D. Martínez Santos and A. Brea Rodríguez acknowledge support from the European Research Council Starting Grant BSMFLEET 639068.
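As a back-of-the-envelope check of the figures quoted in this abstract (40 Tbit/s, around 500 cards, 30 MHz), the implied per-card load can be derived directly; the sketch below is arithmetic only, with illustrative variable names:

```python
# Implied per-GPU load from the Allen figures quoted above.
total_tbit_s = 40.0          # detector output, Tbit/s
n_gpus = 500                 # quoted number of GPU cards
event_rate_hz = 30e6         # LHC collision rate handled, 30 MHz

per_gpu_gbit_s = total_tbit_s * 1000 / n_gpus           # ~80 Gbit/s per card
event_size_kb = total_tbit_s * 1e12 / event_rate_hz / 8 / 1000  # ~167 kB/event
per_gpu_khz = event_rate_hz / n_gpus / 1000             # ~60 kHz per card

print(f"{per_gpu_gbit_s:.0f} Gbit/s and {per_gpu_khz:.0f} kHz per GPU, "
      f"~{event_size_kb:.0f} kB per event")
```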
MLPF: Efficient machine-learned particle-flow reconstruction using graph neural networks
In general-purpose particle detectors, the particle-flow algorithm may be
used to reconstruct a comprehensive particle-level view of the event by
combining information from the calorimeters and the trackers, significantly
improving the detector resolution for jets and the missing transverse momentum.
In view of the planned high-luminosity upgrade of the CERN Large Hadron
Collider (LHC), it is necessary to revisit existing reconstruction algorithms
and ensure that both the physics and computational performance are sufficient
in an environment with many simultaneous proton-proton interactions (pileup).
Machine learning may offer a prospect for computationally efficient event
reconstruction that is well-suited to heterogeneous computing platforms, while
significantly improving the reconstruction quality over rule-based algorithms
for granular detectors. We introduce MLPF, a novel, end-to-end trainable,
machine-learned particle-flow algorithm based on parallelizable,
computationally efficient, and scalable graph neural networks optimized using a
multi-task objective on simulated events. We report the physics and
computational performance of the MLPF algorithm on a Monte Carlo dataset of top
quark-antiquark pairs produced in proton-proton collisions in conditions
similar to those expected for the high-luminosity LHC. The MLPF algorithm
improves the physics response with respect to a rule-based benchmark algorithm
and demonstrates computationally scalable particle-flow reconstruction in a
high-pileup environment.
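To illustrate the message-passing idea underlying such graph neural networks, the sketch below performs a single aggregation step over node features. It is a toy with made-up shapes and random weights, not the MLPF architecture itself:

```python
import numpy as np

# One toy message-passing step: each node averages its neighbours' features
# and mixes them with its own through small weight matrices.
def message_passing_step(X, A, W_self, W_neigh):
    # X: (n_nodes, n_feat) node features; A: (n_nodes, n_nodes) adjacency
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = (A @ X) / deg                 # aggregate neighbour features
    return np.tanh(X @ W_self + neigh_mean @ W_neigh)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))                    # 5 nodes, 8 features each
A = (rng.uniform(size=(5, 5)) > 0.5).astype(float)
np.fill_diagonal(A, 0.0)                       # no self-edges
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(message_passing_step(X, A, W1, W2).shape)   # (5, 8)
```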
An FPGA-based architecture for real-time cluster finding in the LHCb silicon pixel detector
The data acquisition system of the LHCb experiment has been substantially upgraded for the LHC Run 3, with the unprecedented capability of reading out and fully reconstructing all proton-proton collisions in real time, occurring with an average rate of 30 MHz, for a total data flow of approximately 32 Tb/s. The high demand of computing power required by this task has motivated a transition to a hybrid heterogeneous computing architecture, where a farm of graphics processing units (GPUs) is used in addition to general-purpose processors (CPUs) to speed up the execution of reconstruction algorithms. In a continuing effort to improve the real-time processing capabilities of this new DAQ system, also with a view to further luminosity increases in the future, low-level, highly parallelizable tasks are increasingly being addressed at the earliest stages of the data acquisition chain, using special-purpose computing accelerators. A promising solution is offered by custom-programmable FPGA devices, which are well suited to perform high-volume computations with high throughput and degree of parallelism, limited power consumption and latency. In this context, a two-dimensional FPGA-friendly cluster-finding algorithm has been developed to reconstruct hit positions in the new vertex pixel detector (VELO) of the LHCb Upgrade experiment. The associated firmware architecture, implemented in the VHDL language, has been integrated within the VELO readout, without the need for extra cards, as a further enhancement of the DAQ system. This pre-processing allows the first level of the software trigger to accept an 11% higher rate of events, as the ready-made hit coordinates accelerate the track reconstruction, while leading to a drop in electrical power consumption, as the FPGA implementation requires O(50x) less power than the GPU one. The tracking performance of this novel system, being indistinguishable from that of a full-fledged software implementation, allows the raw pixel data to be dropped immediately at the readout level, yielding the additional benefit of a 14% reduction in data flow. The clustering architecture was commissioned during the start of LHCb Run 3 and currently runs in real time during physics data taking, reconstructing VELO hit coordinates on the fly at the LHC collision rate.
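The underlying computational task here is connected-component labelling of active pixels. The sketch below shows a minimal sequential 4-connected cluster finder that reports cluster centroids; the VHDL firmware described above implements a highly parallel variant of this idea, and the names and data layout here are illustrative:

```python
from collections import deque

# Minimal 4-connected cluster finder on a pixel grid: group neighbouring
# active pixels with a flood fill and report each cluster's centroid.
def find_clusters(active_pixels):
    remaining = set(active_pixels)
    clusters = []
    while remaining:
        seed = remaining.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            r, c = queue.popleft()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    queue.append(nb)
                    cluster.append(nb)
        rows, cols = zip(*cluster)
        clusters.append((sum(rows) / len(rows), sum(cols) / len(cols)))
    return clusters

hits = [(0, 0), (0, 1), (1, 1), (5, 5), (5, 6)]
print(find_clusters(hits))   # two clusters with their centroids
```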
Coprocessor integration for real-time event processing in particle physics detectors
High-energy physics experiments today have higher energies, more accurate sensors, and more flexible means of data collection than ever before. Their rapid progress requires ever more computational power; and massively parallel hardware, such as graphics cards, holds the promise to provide this power at a much lower cost than traditional CPUs. Yet, using this hardware requires new algorithms and new approaches to organizing data that can be difficult to integrate with existing software.
In this work, I explore the problem of using parallel algorithms within existing CPU-oriented frameworks and propose a compromise between the different trade-offs. The solution is a service that communicates with multiple event-processing pipelines, gathers data into batches, and submits them to hardware-accelerated parallel algorithms.
I integrate this service with Gaudi — a framework underlying the software environments of two of the four major experiments at the Large Hadron Collider. I examine the overhead the service adds to parallel algorithms. I perform a case study of using the service to run a parallel track reconstruction algorithm for the LHCb experiment's prospective VELO Pixel subdetector and look at the performance characteristics of using different data batch sizes. Finally, I put the findings into perspective within the context of the LHCb trigger's requirements.
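The core of such a batching service can be pictured as a queue shared by many event pipelines, drained in fixed-size batches towards the accelerator. The following minimal sketch demonstrates the idea; the names, the fixed batch size, and the shutdown sentinel are illustrative assumptions, while the real service integrates with Gaudi's scheduler and adds timeouts and back-pressure:

```python
import queue
import threading

# Minimal batching service: pipelines push work items into a shared queue;
# a service thread gathers them into fixed-size batches and hands each
# batch to an accelerator-side function.
def batcher(in_queue, run_batch, batch_size=4):
    batch = []
    while True:
        item = in_queue.get()
        if item is None:                 # shutdown sentinel: flush and stop
            if batch:
                run_batch(batch)
            return
        batch.append(item)
        if len(batch) == batch_size:
            run_batch(batch)
            batch = []

q = queue.Queue()
t = threading.Thread(target=batcher, args=(q, lambda b: print("batch:", b)))
t.start()
for event_id in range(10):               # pipelines submitting events
    q.put(event_id)
q.put(None)
t.join()
```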