Measurement of the high-mass Drell-Yan differential cross section in the electron-positron channel with the ATLAS experiment at √s = 7 TeV
This thesis reports studies for a new beam pipe within the ATLAS experiment and a measurement of the high-mass Drell-Yan (DY) differential cross section using 4.9 fb⁻¹ of data recorded in 2011. A study using secondary hadronic-interaction vertices in the 2010-2012 data indicates excellent stability. A simulation study for the design of a new beam pipe estimates the gain in radiation length when changing the material and shows no significant difference between three candidate positions for a new vacuum flange. For the DY cross-section measurement, the electron identification efficiency is evaluated up to a transverse energy ET of 500 GeV. The results agree well with previous measurements below 50 GeV and with Monte Carlo simulations above that. The DY differential cross section is reported as a function of the electron-positron invariant mass, mee, for 116 GeV < mee < 1500 GeV within the fiducial volume. The results are largely consistent with the theoretical predictions
Recent progress with the top to bottom approach to vectorization in GeantV
SIMD acceleration can potentially boost application throughput by sizeable factors. Achieving efficient SIMD vectorization for scalar code with complex data flow and branching logic, however, goes well beyond breaking a few loop dependencies and relying on the compiler. Since the refactoring effort scales with the number of lines of code, it is important to understand what kind of performance gains can be expected in such complex cases. A couple of years ago we started to investigate a top-to-bottom vectorization approach to particle transport simulation. Percolating vector data down to the algorithms was mandatory, since not all components can vectorize internally. Vectorizing low-level algorithms is certainly necessary, but not sufficient to achieve relevant SIMD gains; in addition, the overheads of maintaining the concurrent vector data flow and of copying data have to be minimized. In the context of a vectorization R&D for simulation, we developed a framework that allows different categories of scalar and vectorized components to co-exist, handling data-flow management and real-time heuristic optimizations. The paper describes our approach to coordinating SIMD vectorization at the framework level, with a detailed quantitative analysis of the SIMD gain versus the overheads, broken down by component in terms of geometry, physics and magnetic field propagation. We also present the more general context of this R&D work and the goals for 2018
Electromagnetic physics vectorization in the GeantV transport framework
The development of the GeantV electromagnetic (EM) physics package has followed two necessary paths towards code modernization. The first phase required a revision of the main electromagnetic physics models and their implementation. The main objectives were to improve their accuracy, extend them to the new high-energy frontier posed by the Future Circular Collider (FCC) programme, and allow better adaptation to a multi-particle flow. Most of the EM physics models in GeantV have been reviewed from a theoretical perspective and rewritten with vector-friendly implementations, and are now available in scalar mode in the alpha release. The second phase consists of a thorough investigation of the possibility of vectorizing the most CPU-intensive parts of the physics code, such as final-state sampling. We have shown the feasibility of implementing electromagnetic physics models that take advantage of SIMD/SIMT architectures, thus obtaining performance gains. After this phase, the time has come for the GeantV project to take a step forward towards the final proof of concept, in the form of tests of the full simulation chain (transport + physics + geometry) running in vectorized mode. In this paper we present the first benchmark results obtained after vectorizing a full set of electromagnetic physics models
Search for Scalar Diphoton Resonances in the Mass Range 65-600 GeV with the ATLAS Detector in Collision Data at √s = 8 TeV
A search for scalar particles decaying via narrow resonances into two photons in the mass range 65-600 GeV is performed using collision data collected with the ATLAS detector at the Large Hadron Collider. The recently discovered Higgs boson is treated as a background. No significant evidence for an additional signal is observed. The results are presented as 95% confidence-level limits on the production cross section of a scalar boson times its branching ratio into two photons, in a fiducial volume where the reconstruction efficiency is approximately independent of the event topology. The upper limits set extend over a considerably wider mass range than previous searches