Persistent Data Layout and Infrastructure for Efficient Selective Retrieval of Event Data in ATLAS
The ATLAS detector at CERN has completed its first full year of recording
collisions at 7 TeV, resulting in billions of events and petabytes of data. At
these scales, physicists must have the capability to read only the data of
interest to their analyses, with the importance of efficient selective access
increasing as data taking continues. ATLAS has developed a sophisticated
event-level metadata infrastructure and supporting I/O framework allowing event
selections by explicit specification, by back navigation, and by selection
queries to a TAG database via an integrated web interface. These systems and
their performance have been reported on elsewhere. The ultimate success of such
a system, however, depends significantly upon the efficiency of selective event
retrieval. Supporting such retrieval can be challenging, as ATLAS stores its
event data in column-wise orientation using ROOT trees for a number of reasons,
including compression considerations, histogramming use cases, and more. For
2011 data, ATLAS will utilize new capabilities in ROOT to tune the persistent
storage layout of event data, and to significantly speed up selective event
reading. The new persistent layout strategy and its implications for I/O
performance are described in this paper.
Comment: Proceedings of the DPF-2011 Conference, Providence, RI, August 8-13, 2011. 8 pages
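As a rough illustration of the access pattern whose efficiency is discussed above, the sketch below reads a handful of pre-selected entries from a ROOT TTree while activating only the branches an analysis needs. The file, tree, and branch names are hypothetical, not the actual ATLAS persistent schema.

```cpp
// Sketch: selective event reading from a ROOT TTree. Tree and branch
// names ("CollectionTree", "EventInfo_eventNumber") are hypothetical.
#include <TFile.h>
#include <TTree.h>
#include <iostream>
#include <memory>
#include <vector>

int main() {
    std::unique_ptr<TFile> file(TFile::Open("AOD.pool.root", "READ"));
    if (!file || file->IsZombie()) return 1;

    auto* tree = file->Get<TTree>("CollectionTree");

    // Column-wise storage lets us disable everything and re-enable only
    // the branches this analysis actually needs.
    tree->SetBranchStatus("*", 0);
    tree->SetBranchStatus("EventInfo_eventNumber", 1);

    ULong64_t eventNumber = 0;
    tree->SetBranchAddress("EventInfo_eventNumber", &eventNumber);

    // Entry numbers selected upstream, e.g. by a TAG-database query.
    const std::vector<Long64_t> selected = {12, 347, 9001};
    for (Long64_t entry : selected) {
        tree->GetEntry(entry);   // reads only the enabled branches
        std::cout << "event " << eventNumber << '\n';
    }
    return 0;
}
```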
Optimizing ATLAS data storage: The impact of compression algorithms on ATLAS physics analysis data formats
The increased footprint foreseen for Run-3 and HL-LHC data will soon expose the limits of currently available storage and CPU resources. Data formats are already optimized according to the processing chain for which they are designed. ATLAS events are stored in ROOT-based reconstruction output files called Analysis Object Data (AOD), which are then processed within the derivation framework to produce Derived AOD (DAOD) files. Numerous DAOD formats, tailored for specific physics and performance groups, have been in use throughout the ATLAS Run-2 phase. In view of Run-3, ATLAS has changed its analysis model, which entailed a significant reduction of the existing DAOD flavors. Two new unfiltered formats, skimmable on read and designed to meet the requirements of the majority of analysis workflows, have been proposed as replacements: DAOD_PHYS and DAOD_PHYSLITE, the latter a smaller format containing already-calibrated physics objects. As ROOT-based formats, they natively support four lossless compression algorithms: lzma, lz4, zlib and zstd. In this study, the effects of different compression settings on file size, compression time, compression factor and reading speed are investigated for both DAOD_PHYS and DAOD_PHYSLITE. Moreover, the impact of the AutoFlush parameter, which controls how in-memory data structures are serialized to ROOT files, has been evaluated. This study yields new quantitative results that can guide compression decisions for different ATLAS use cases. As an example, for both formats the lz4 library exhibits the fastest reading speed but results in the largest files, whereas the lzma algorithm provides larger compression factors at the cost of significantly slower reading speeds. In addition, guidelines for setting appropriate AutoFlush values are outlined.
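For readers unfamiliar with the knobs studied here, the following minimal sketch shows where the compression algorithm/level and the AutoFlush setting enter when writing a ROOT file. File name, tree name, and values are illustrative only, not the settings recommended by the study.

```cpp
// Sketch: setting the compression algorithm/level and AutoFlush when
// writing a ROOT file. Names and values are illustrative.
#include <TFile.h>
#include <TTree.h>
#include <Compression.h>

int main() {
    // Pick an algorithm/level pair, e.g. lz4 at level 4 (fast reads,
    // larger files) versus lzma (smaller files, slower reads).
    TFile file("demo.root", "RECREATE", "",
               ROOT::CompressionSettings(ROOT::kLZ4, 4));

    TTree tree("CollectionTree", "compression demo");
    float pt = 0.f;
    tree.Branch("pt", &pt);

    // AutoFlush: write out baskets and start a new cluster every N
    // entries; this controls the granularity of later selective reads.
    tree.SetAutoFlush(10000);

    for (int i = 0; i < 100000; ++i) {
        pt = 0.1f * i;
        tree.Fill();
    }
    tree.Write();
    return 0;
}
```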
Integration of RNTuple in ATLAS Athena
After using ROOT’s TTree I/O subsystem for over two decades and storing more than an exabyte of compressed High Energy Physics (HEP) data, advances in storage technology have motivated a complete redesign, RNTuple, which breaks backward compatibility to take better advantage of modern storage options. The RNTuple I/O subsystem has been designed to address performance bottlenecks and other shortcomings of TTree. Specifically, RNTuple comes with an updated, more compact binary data format that can be stored both in ROOT files and natively in object stores. It is designed for modern storage hardware (e.g. high-throughput, low-latency NVMe SSDs) and provides robust, easy-to-use interfaces. The binary format of RNTuple is scheduled to become production grade in 2024 and has recently become mature enough to start exploring its integration into software used by HEP experiments. In this contribution, we discuss the developments to support the features required by the ATLAS analysis Event Data Model (EDM) in RNTuple, which will enable its integration into the Athena software framework. With these developments in place, we evaluate the I/O performance of RNTuple-based ATLAS data sets and compare it to that of TTree.
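A minimal sketch of the RNTuple write/read cycle is given below, assuming the ROOT::Experimental API as it stood while the format was still maturing (header names and namespaces have shifted across ROOT releases); the field name is illustrative.

```cpp
// Sketch: writing and reading an RNTuple (ROOT::Experimental API;
// details vary by ROOT version). The "pt" field is illustrative.
#include <ROOT/RNTuple.hxx>
#include <ROOT/RNTupleModel.hxx>
#include <iostream>

using ROOT::Experimental::RNTupleModel;
using ROOT::Experimental::RNTupleReader;
using ROOT::Experimental::RNTupleWriter;

int main() {
    // Write: the model declares the schema; MakeField returns a bound pointer.
    {
        auto model = RNTupleModel::Create();
        auto pt = model->MakeField<float>("pt");
        auto writer = RNTupleWriter::Recreate(std::move(model),
                                              "Events", "data.root");
        for (int i = 0; i < 1000; ++i) {
            *pt = 0.1f * i;
            writer->Fill();
        }
    } // writer destructor commits the dataset

    // Read back a single column without touching the others.
    auto reader = RNTupleReader::Open("Events", "data.root");
    auto ptView = reader->GetView<float>("pt");
    for (auto i : reader->GetEntryRange())
        std::cout << ptView(i) << '\n';
    return 0;
}
```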
Parallel IO Libraries for Managing HEP Experimental Data
The computing and storage requirements of the energy and intensity frontiers will grow significantly during Runs 4 and 5 and the HL-LHC era. Similarly, in the intensity frontier, with larger trigger readouts during supernova explosions, the Deep Underground Neutrino Experiment (DUNE) will have unique computing challenges that could be addressed by the use of parallel and accelerated data-processing capabilities. Most of the requirements of the energy- and intensity-frontier experiments rely on increasing the role of high performance computing (HPC) in the HEP community. In this presentation, we will describe our ongoing efforts to use HPC resources for the next generation of HEP experiments. The HEPCCE (High Energy Physics Center for Computational Excellence) IOS (Input/Output and Storage) group has been developing approaches to map HEP data to HDF5, an I/O library optimized for HPC platforms, to store intermediate HEP data. Complex HEP data products are serialized using ROOT to allow for experiment-independent, general mapping approaches of HEP data to the HDF5 format. These mapping approaches can be optimized for high-performance parallel I/O. Similarly, simpler data can be directly mapped into HDF5, which can also be suitable for offloading directly onto GPUs. We will present our work on both complex and simple data models.
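The serialization idea described above can be sketched as follows: ROOT turns an arbitrary streamable object into a byte buffer, and HDF5 stores that buffer as an opaque dataset. This is a minimal illustration of the concept under assumed names, not the HEPCCE implementation.

```cpp
// Sketch: serialize a ROOT-streamable object with TBufferFile and store
// the bytes as an opaque HDF5 dataset. File/dataset names are made up.
#include <TBufferFile.h>
#include <TH1F.h>
#include <hdf5.h>

int main() {
    // Any ROOT-streamable object works; a histogram keeps the example small.
    TH1F hist("h", "demo", 10, 0., 1.);
    hist.Fill(0.5);

    // ROOT handles the experiment-specific serialization...
    TBufferFile buffer(TBuffer::kWrite);
    buffer.WriteObject(&hist);

    // ...and HDF5 stores the opaque bytes, where parallel I/O layers
    // can take over.
    hid_t file = H5Fcreate("events.h5", H5F_ACC_TRUNC,
                           H5P_DEFAULT, H5P_DEFAULT);
    hsize_t dims[1] = {static_cast<hsize_t>(buffer.Length())};
    hid_t space = H5Screate_simple(1, dims, nullptr);
    hid_t dset = H5Dcreate2(file, "event_blob", H5T_NATIVE_UCHAR, space,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(dset, H5T_NATIVE_UCHAR, H5S_ALL, H5S_ALL, H5P_DEFAULT,
             buffer.Buffer());

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```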
Framework for custom event sample augmentations for ATLAS analysis data
For HEP event processing, data is typically stored in column-wise synchronized containers, most prominently ROOT’s TTree, which has been used for several decades to store by now over an exabyte. These containers combine the row-wise association capabilities needed by most HEP event-processing frameworks (e.g. Athena for ATLAS) with column-wise storage, which typically results in better compression and more efficient support for many analysis use cases. One disadvantage is that these containers, TTree in the HEP use case, require the same attributes for each entry/row (representing events), which can make extending the list of attributes very costly in storage, even if those attributes are only required for a small subsample of events. Since its initial design, the ATLAS software framework has featured powerful navigational infrastructure for storing custom data extensions for subsamples of events in separate but synchronized containers. This allows adding event augmentations to ATLAS standard data products (such as DAOD_PHYS or PHYSLITE) while avoiding duplication of those core data products and limiting their size increase. For this functionality the framework does not rely on any associations made by the I/O technology (i.e. ROOT); however, it supports TTree friends and builds the associated index to allow for analysis outside of the ATLAS framework. A prototype based on a Long-Lived Particle search has been implemented, and preliminary results with this prototype will be presented. At this point, augmented data are stored within the same file as the core data. Storing them in separate files will be investigated in the future, as this could provide more flexibility: e.g. certain sites may want only a subset of several augmentations, or augmentations can be archived to tape once their analysis is complete.
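The TTree-friend mechanism mentioned above can be sketched as follows, assuming hypothetical tree and branch names; a real implementation must also handle events with no matching augmentation entry, which the sentinel value below only gestures at.

```cpp
// Sketch: joining a sparse augmentation tree to a core tree via ROOT's
// TTree-friend mechanism and a (run, event) index. All names hypothetical.
#include <TFile.h>
#include <TTree.h>
#include <iostream>
#include <memory>

int main() {
    std::unique_ptr<TFile> file(TFile::Open("daod_phys.root", "READ"));
    if (!file || file->IsZombie()) return 1;

    auto* core = file->Get<TTree>("CollectionTree");
    auto* aug  = file->Get<TTree>("AugmentationTree"); // subsample only

    // Index the augmentation tree on the (run, event) pair so the core
    // tree can look up the matching entry, if any.
    aug->BuildIndex("runNumber", "eventNumber");
    core->AddFriend(aug);

    UInt_t run = 0;
    ULong64_t event = 0;
    float llpScore = -1.f; // hypothetical augmentation attribute
    core->SetBranchAddress("runNumber", &run);
    core->SetBranchAddress("eventNumber", &event);
    aug->SetBranchAddress("llpScore", &llpScore);

    for (Long64_t i = 0; i < core->GetEntries(); ++i) {
        llpScore = -1.f;   // stays at sentinel when no friend entry matches
        core->GetEntry(i); // also positions the friend through the index
        std::cout << run << ':' << event << " llpScore=" << llpScore << '\n';
    }
    return 0;
}
```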
The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe
The preponderance of matter over antimatter in the early Universe, the
dynamics of the supernova bursts that produced the heavy elements necessary for
life and whether protons eventually decay --- these mysteries at the forefront
of particle physics and astrophysics are key to understanding the early
evolution of our Universe, its current state and its eventual fate. The
Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed
plan for a world-class experiment dedicated to addressing these questions. LBNE
is conceived around three central components: (1) a new, high-intensity
neutrino source generated from a megawatt-class proton accelerator at Fermi
National Accelerator Laboratory, (2) a near neutrino detector just downstream
of the source, and (3) a massive liquid argon time-projection chamber deployed
as a far detector deep underground at the Sanford Underground Research
Facility. This facility, located at the site of the former Homestake Mine in
Lead, South Dakota, is approximately 1,300 km from the neutrino source at
Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino
charge-parity symmetry violation and mass ordering effects. This ambitious yet
cost-effective design incorporates scalability and flexibility and can
accommodate a variety of upgrades and contributions. With its exceptional
combination of experimental configuration, technical capabilities, and
potential for transformative discoveries, LBNE promises to be a vital facility
for the field of particle physics worldwide, providing physicists from around
the globe with opportunities to collaborate in a twenty to thirty year program
of exciting science. In this document we provide a comprehensive overview of
LBNE's scientific objectives, its place in the landscape of neutrino physics
worldwide, the technologies it will incorporate and the capabilities it will
possess.
Comment: Major update of previous version. This is the reference document for the LBNE science program and current status. Chapters 1, 3, and 9 provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. 288 pages, 116 figures
Search for CP Violation in the Decay Z -> b (b bar) g
About three million hadronic decays of the Z collected by ALEPH in the years
1991-1994 are used to search for anomalous CP violation beyond the Standard
Model in the decay Z -> b \bar{b} g. The study is performed by analyzing
angular correlations between the two quarks and the gluon in three-jet events
and by measuring the differential two-jet rate. No signal of CP violation is
found. For the combinations of anomalous CP-violating couplings, \hat{h}_b and h^{\ast}_{b}, limits of \hat{h}_b < 0.59 and h^{\ast}_{b} < 3.02 are given at 95% CL.
Comment: 8 pages, 1 postscript figure, uses here.sty, epsfig.sty
Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector
A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb−1 of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV, assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.
Tau hadronic branching ratios
From 64492 selected \tau-pair events produced at the Z^0 resonance, the measurement of tau decays into hadrons from a global analysis using 1991, 1992 and 1993 ALEPH data is presented. Special emphasis is given to the reconstruction of photons and \pi^0's, and the removal of fake photons. A detailed study of the systematics entering the \pi^0 reconstruction is also given. A complete and consistent set of tau hadronic branching ratios is presented for 18 exclusive modes. Most measurements are more precise than the present world average. The new level of precision reached allows a stringent test of \tau-\mu universality in hadronic decays, g_\tau/g_\mu = 1.0013 \pm 0.0095, and the first measurement of the vector and axial-vector contributions to the non-strange hadronic \tau decay width: R_{\tau,V} = 1.788 \pm 0.025 and R_{\tau,A} = 1.694 \pm 0.027. The ratio (R_{\tau,V} - R_{\tau,A}) / (R_{\tau,V} + R_{\tau,A}), equal to (2.7 \pm 1.3)%, is a measure of the importance of QCD non-perturbative contributions to the hadronic \tau decay width.
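As a quick consistency check on the quoted central values, the ratio follows directly from R_{\tau,V} and R_{\tau,A} above:

(R_{\tau,V} - R_{\tau,A}) / (R_{\tau,V} + R_{\tau,A}) = (1.788 - 1.694) / (1.788 + 1.694) = 0.094 / 3.482 \approx 2.7%

The quoted uncertainty of \pm 1.3% depends on the correlation between the two measurements, which the abstract does not state.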