22 research outputs found

    RenderCore – a new WebGPU-based rendering engine for ROOT-EVE

    ROOT-Eve (REve), the new generation of the ROOT event-display module, uses a web server-client model to guarantee exact data translation from the experiments’ data analysis frameworks to users’ browsers. Data is then displayed in various views, including high-precision 2D and 3D graphics views, currently driven by the THREE.js rendering engine based on WebGL technology. RenderCore, a computer-graphics research-oriented rendering engine, has been integrated into REve to optimize rendering performance and enable the use of state-of-the-art techniques for object highlighting and object selection. It also allowed for the implementation of optimized instanced rendering through the use of custom shaders and rendering-pipeline modifications. To further the impact of this investment and ensure the long-term viability of REve, RenderCore is being refactored on top of WebGPU, the next-generation GPU interface for browsers that supports compute shaders and storage textures and introduces significant improvements in GPU utilization. This has led to optimized interchange data formats, decreased server-client traffic, and improved offloading of data-visualization algorithms to the GPU. FireworksWeb, a physics-analysis-oriented event display of the CMS experiment, is used to demonstrate the results, focusing on high-granularity calorimeters and targeting high data-volume events of heavy-ion collisions and the High-Luminosity LHC. The next steps and directions are also discussed.
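The core idea of instanced rendering mentioned above is that one shared mesh plus a buffer of per-instance data replaces many individual draw calls. The following is a minimal CPU-side sketch of that idea only; it is an illustrative assumption, not RenderCore's actual API, where the per-instance transforms would live in a GPU buffer and be applied by a custom vertex shader inside a single instanced draw call.

```python
# Conceptual sketch of instanced rendering (CPU-side analogue, for
# illustration only -- not RenderCore's real implementation).
TEMPLATE = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # one shared triangle mesh

def draw_instanced(template, instance_offsets):
    """Expand one shared mesh with per-instance data, the way a single
    instanced draw call does, instead of one draw call per object."""
    return [
        [(x + dx, y + dy) for (x, y) in template]
        for (dx, dy) in instance_offsets
    ]

# Three instances of the same mesh at different positions,
# submitted as one "draw call".
frames = draw_instanced(TEMPLATE, [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)])
```

For detector geometries with thousands of identical sensor shapes, this reduces the work per frame from thousands of submissions to one submission plus a compact instance buffer.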

    Reconstruction of Charged Particle Tracks in Realistic Detector Geometry Using a Vectorized and Parallelized Kalman Filter Algorithm

    One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD and SIMT architectures that are now prevalent in high-performance hardware. Previously we observed significant parallel speedups, with physics performance comparable to CMS standard tracking, on Intel Xeon, Intel Xeon Phi, and (to a limited extent) NVIDIA GPUs. While early tests were based on artificial events occurring inside an idealized barrel detector, we showed subsequently that our mkFit software builds tracks successfully from complex simulated events (including detector pileup) occurring inside a geometrically accurate representation of the CMS-2017 tracker. Here, we report on advances in both the computational and physics performance of mkFit, as well as progress toward integration with CMS production software. Recently we have improved the overall efficiency of the algorithm by preserving short track candidates at a relatively early stage rather than attempting to extend them over many layers. Moreover, mkFit formerly produced an excess of duplicate tracks; these are now explicitly removed in an additional processing step. We demonstrate that with these enhancements, mkFit becomes a suitable choice for the first iteration of CMS tracking, and eventually for later iterations as well. We plan to test this capability in the CMS High Level Trigger during Run 3 of the LHC, with an ultimate goal of using it in both the CMS HLT and offline reconstruction for the HL-LHC CMS tracker.
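The Kalman update at the heart of such track building can be illustrated in one dimension: a predicted state is fused with a new hit, weighted by their uncertainties. This is a deliberately simplified sketch; the real mkFit filter propagates multi-parameter track states through detector material, which this toy omits.

```python
# Minimal 1D Kalman filter update (illustrative toy, not mkFit's
# 6-parameter track-state filter with material effects).
def kalman_update(x, P, z, R):
    """Combine a predicted state estimate (x, variance P)
    with a measurement (z, variance R)."""
    K = P / (P + R)          # Kalman gain: how much to trust the hit
    x_new = x + K * (z - x)  # updated state, pulled toward the hit
    P_new = (1.0 - K) * P    # variance always shrinks after an update
    return x_new, P_new

# Fuse a rough prediction (variance 4.0) with a precise hit (variance 1.0):
# the result lands close to the hit and has smaller uncertainty than either.
x, P = kalman_update(x=10.0, P=4.0, z=12.0, R=1.0)
```

Track building repeats this update layer by layer, extending each candidate with compatible hits, which is the incremental trajectory construction the abstract describes.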

    Generalizing mkFit and its Application to HL-LHC

    mkFit is an implementation of the Kalman filter-based track reconstruction algorithm that exploits both thread- and data-level parallelism. In the past few years the project transitioned from the R&D phase to deployment in the Run-3 offline workflow of the CMS experiment. CMS tracking performs a series of iterations, targeting reconstruction of tracks of increasing difficulty after removing hits associated with tracks found in previous iterations. mkFit has been adopted for several of the tracking iterations, which contribute the majority of reconstructed tracks. When tested under standard conditions for production jobs, speedups in track pattern recognition average around 3.5x for the iterations where it is used (3-7x depending on the iteration). Multiple factors contribute to the observed speedups, including vectorization and a lightweight geometry description, as well as improved memory management and single precision. Efficient vectorization is achieved with both the icc and the gcc (default in CMSSW) compilers and relies on a dedicated library for small-matrix operations, Matriplex, which has recently been released in a public repository. While the mkFit geometry description already featured levels of abstraction from the actual Phase-1 CMS tracker, several components of the implementation were still tied to that specific geometry. We have further generalized the geometry description and the configuration of the run-time parameters, in order to enable support for the Phase-2 upgraded tracker geometry for the HL-LHC and potentially other detector configurations. The implementation strategy and high-level code changes required for the HL-LHC geometry are presented. Speedups in track building from mkFit imply that track fitting becomes a comparably time-consuming step of the tracking chain. Prospects for an mkFit implementation of the track fit are also discussed.
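The vectorization strategy behind a Matriplex-style library is a structure-of-arrays layout: the same element of N small matrices is stored contiguously, so a batched matrix multiply becomes stride-1 loops over the batch that a compiler can auto-vectorize. The sketch below is a simplified pure-Python illustration of that layout only; the real Matriplex library is a C++ template library and differs in detail.

```python
# "Matriplex"-style structure-of-arrays sketch (simplified illustration;
# the real library is C++ and SIMD-oriented, this shows only the layout).
def make_matriplex(mats):
    """Pack a list of 2x2 matrices element-major: entry (i, j) holds
    that element for every matrix in the batch, contiguously."""
    return {
        (i, j): [m[i][j] for m in mats]
        for i in range(2) for j in range(2)
    }

def matriplex_mul(A, B, n):
    """Multiply two batches of 2x2 matrices. The innermost loop runs
    over the whole batch with unit stride -- the vectorizable axis."""
    C = {(i, j): [0.0] * n for i in range(2) for j in range(2)}
    for i in range(2):
        for j in range(2):
            for k in range(2):
                a, b, c = A[(i, k)], B[(k, j)], C[(i, j)]
                for t in range(n):  # contiguous, stride-1 batch loop
                    c[t] += a[t] * b[t]
    return C

A = make_matriplex([[[1, 2], [3, 4]], [[2, 0], [0, 2]]])
B = make_matriplex([[[5, 6], [7, 8]], [[1, 1], [1, 1]]])
C = matriplex_mul(A, B, 2)  # two independent 2x2 products at once
```

Because the batch loop is innermost and contiguous, the same code pattern maps directly onto SIMD lanes in a compiled language, which is what the abstract credits for the efficient vectorization under both icc and gcc.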

    Electron track reconstruction in the ATLAS experiment

    Before entering the hardware production phase of a HEP experiment, the detector elements chosen during the planning process need to be thoroughly tested. At the LHC, silicon detectors will operate in a high-rate environment which requires low-noise electronics with a shaping time of 25 ns. A prototype silicon-strip half-module equipped with the analogue read-out chip SCTA128-HC was put in a 200 GeV pion beam. An analysis of the collected data is presented. The tested module was found to conform to the SCT-module specification for the ATLAS experiment. Electron reconstruction in the ATLAS detector is compromised by the large amount of material in the tracking volume, which leads to frequent emission of hard bremsstrahlung photons. This affects the measurement of the transverse projections of track parameters in the inner detector as well as the measurement of energy and azimuthal angle in the EM calorimeter for p_T < 20 GeV. Reconstruction and electron-identification efficiencies are both degraded. A detailed analysis of bremsstrahlung effects has been performed and it is shown that a first-order treatment is unsatisfactory. An algorithm for efficient and robust electron reconstruction has been developed which places a strong emphasis on bremsstrahlung detection and recuperation so as to provide the best possible track parameters from the ID reconstruction. The algorithm was benchmarked against simulated data: single electrons, electrons in jets, and electrons in the presence of high-luminosity pile-up. The B → J/ψ → ee and Z → ee processes, crucial for EM calorimeter calibration, as well as the potential Higgs discovery via H → ZZ^(*) → 4e (for m_H = 130, 150, 180, and 200 GeV), were studied using the developed electron reconstruction algorithm. Typically, a 10% improvement of reconstruction efficiency has been achieved for these channels with respect to the results quoted in the ATLAS Physics Performance TDR. At low and intermediate electron p_T values, the experimental parameter resolution has likewise been improved through a better understanding and handling of bremsstrahlung effects.

    Cosmic muons: events recorded during magnet test (November 2014)

    The CMS magnet was switched on for a test in November. These events, recorded during the test, show cosmic muons. The event displays are produced using Fireworks.
