Testing framework and monitoring system for the ATLAS EventIndex
The ATLAS EventIndex is a global catalogue of the events collected, processed or generated by the ATLAS experiment. The system was upgraded in advance of LHC Run 3, with a migration of the Run 1 and Run 2 data from HDFS MapFiles to HBase tables with a Phoenix interface. Two frameworks for testing the functionality and performance of the new system have been developed, and two types of tests run regularly on the existing system. The first, a functional test, checks the correct functioning of the import chain: it runs event picking over a random set of recently imported data to verify that the data have been imported correctly and can be accessed by both the CLI and the PanDA client. The second, a performance test, generates event lookup queries on sets of the EventIndex data and measures the response times; these tests enable studies of how the response time depends on the amount of requested data and on the data sample type and size. The results of the regular tests, as well as the statuses of the main EventIndex subsystems (service health, loader status, filesystem usage, etc.), are sent to InfluxDB in JSON format via HTTP requests and are displayed on Grafana monitoring dashboards. If the system, or part of it, misbehaves or becomes unresponsive, alarms are raised by the monitoring system
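The reporting path described above (test results pushed to InfluxDB as JSON over HTTP) can be sketched as follows. This is a minimal illustration only: the endpoint URL and the measurement, tag and field names are assumptions, not the actual EventIndex monitoring schema.

```python
import json
import urllib.request

# Hypothetical monitoring endpoint; the real deployment URL is not given in
# the abstract.
MONITORING_URL = "http://influxdb.example.cern.ch:8086/write"

def build_payload(test_name: str, status: str, response_time_s: float) -> bytes:
    """Serialise one test result as a JSON document (field names assumed)."""
    return json.dumps({
        "measurement": "eventindex_tests",
        "tags": {"test": test_name},
        "fields": {"status": status, "response_time_s": response_time_s},
    }).encode()

def report_test_result(test_name: str, status: str, response_time_s: float) -> int:
    """POST the result to the monitoring endpoint; returns the HTTP status code."""
    req = urllib.request.Request(
        MONITORING_URL,
        data=build_payload(test_name, status, response_time_s),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

A Grafana dashboard can then be built on the stored measurements, with alert rules firing when a status field reports a failure or a response time exceeds a threshold.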
The ATLAS EventIndex using the HBase/Phoenix storage solution
The ATLAS EventIndex provides a global event catalogue and event-level metadata for ATLAS analysis groups and users. The LHC Run 3, starting in 2022, will see increased data-taking and simulation production rates, with which the current infrastructure would still cope but may be stretched to its limits by the end of Run 3. This talk describes the implementation of a new core storage service that will provide at least the same functionality as the current one for increased data ingestion and search rates, and with increasing volumes of stored data. It is based on a set of HBase tables, coupled to Apache Phoenix for data access; in this way the advantages of a BigData-based storage system are complemented by the possibility of SQL as well as NoSQL data access, which allows the re-use of most of the existing code for metadata integration
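The combination of HBase storage with SQL access through Phoenix can be illustrated with a short event-lookup sketch. The table and column names below are assumptions for illustration (the abstract does not give the schema), and the third-party `phoenixdb` client is one common way to reach a Phoenix query server from Python.

```python
def event_lookup_sql(n_events: int) -> str:
    """Parameterised Phoenix SQL picking a run's events (schema assumed)."""
    placeholders = ", ".join("?" for _ in range(n_events))
    return (
        "SELECT runnumber, eventnumber, guid FROM eventindex "
        f"WHERE runnumber = ? AND eventnumber IN ({placeholders})"
    )

def lookup_events(run_number: int, event_numbers: list[int]):
    """Run the lookup through a Phoenix query server (URL is a placeholder)."""
    import phoenixdb  # third-party client for the Phoenix query server
    conn = phoenixdb.connect("http://phoenix-queryserver:8765/", autocommit=True)
    try:
        cur = conn.cursor()
        cur.execute(event_lookup_sql(len(event_numbers)),
                    [run_number, *event_numbers])
        return cur.fetchall()
    finally:
        conn.close()
```

The SQL layer is what allows existing relational metadata-integration code to be reused, while the same HBase tables remain open to direct NoSQL access.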
Search for direct third-generation squark pair production in final states with missing transverse momentum and two b-jets in √s = 8 TeV pp collisions with the ATLAS detector
The results of a search for pair production of supersymmetric partners of the Standard Model third-generation quarks are reported. This search uses 20.1 fb⁻¹ of pp collisions at √s = 8 TeV collected by the ATLAS experiment at the Large Hadron Collider. The lightest bottom and top squarks (b̃₁ and t̃₁ respectively) are searched for in a final state with large missing transverse momentum and two jets identified as originating from b-quarks. No excess of events above the expected level of Standard Model background is found. The results are used to set upper limits on the visible cross section for processes beyond the Standard Model. Exclusion limits at the 95% confidence level on the masses of the third-generation squarks are derived in phenomenological supersymmetric R-parity-conserving models in which either the bottom or the top squark is the lightest squark. The b̃₁ is assumed to decay via b̃₁ → b χ̃₁⁰ and the t̃₁ via t̃₁ → b χ̃₁±, with undetectable products of the subsequent decay of the χ̃₁± due to the small mass splitting between the χ̃₁± and the χ̃₁⁰
Towards a new conditions data infrastructure in ATLAS
The ATLAS experiment is preparing a major change in the conditions data infrastructure in view of LHC Run 4. In this paper we describe the changes in the database architecture that have been implemented for Run 3, together with the motivations for and ongoing development of a new system (called CREST, for Conditions Representational State Transfer, as a reference to REST architectures). The main goal is to set up a parallel infrastructure for full-scale testing before the end of Run 3
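A REST-style conditions service is reached over plain HTTP, so a client needs no database driver. The sketch below illustrates that pattern; the base URL, resource path and query parameters are hypothetical, chosen to show the idea of querying intervals of validity (IOVs) by tag, not the real CREST API.

```python
import urllib.parse
import urllib.request

# Hypothetical service location; the real CREST deployment URL is not given here.
BASE_URL = "http://crest.example.cern.ch/api"

def iov_query_url(tag: str, since: int) -> str:
    """Build the URL querying the IOVs (intervals of validity) of one tag."""
    query = urllib.parse.urlencode({"tagname": tag, "since": since})
    return f"{BASE_URL}/iovs?{query}"

def fetch_iovs(tag: str, since: int) -> bytes:
    """GET the matching IOV list; plain HTTP replaces a database client."""
    with urllib.request.urlopen(iov_query_url(tag, since)) as resp:
        return resp.read()
```

Because every request is a stateless HTTP call, such an interface is straightforward to cache, load-balance and test in parallel with an existing database-backed system.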
The ATLAS EventIndex: a BigData catalogue for all ATLAS experiment events
The ATLAS EventIndex system comprises the catalogue of all events collected, processed or generated by the ATLAS experiment at the CERN LHC accelerator, and all associated software tools to collect, store and query this information. ATLAS records several billion particle interactions every year of operation, processes them for analysis and generates even larger simulated data samples; a global catalogue is needed to keep track of the location of each event record and be able to search and retrieve specific events for in-depth investigations. Each EventIndex record includes summary information on the event itself and the pointers to the files containing the full event. Most components of the EventIndex system are implemented using BigData open-source tools. This paper describes the architectural choices and their evolution in time, as well as the past, current and foreseen future implementations of all EventIndex components
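The record layout described above (event summary plus pointers to the files holding the full event) can be sketched as a small data structure. All field names here are illustrative assumptions; the abstract does not specify the actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EventIndexRecord:
    # Illustrative fields only: a record carries event summary information
    # plus a pointer to the file containing the full event.
    run_number: int
    event_number: int
    lumi_block: int                  # summary information about the event
    trigger_chains: tuple[str, ...]  # e.g. which trigger chains accepted it
    file_guid: str                   # pointer to the file holding the full event

def event_key(rec: EventIndexRecord) -> str:
    """Run and event number together identify an event for lookup."""
    return f"{rec.run_number}-{rec.event_number}"
```

Event picking then amounts to looking up the key, reading the file pointer, and retrieving the full event record from the referenced file.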
ATLAS Run 2 searches for electroweak production of supersymmetric particles interpreted within the pMSSM
A summary of the constraints from searches performed by the ATLAS collaboration for the electroweak production of charginos and neutralinos is presented. Results from eight separate ATLAS searches are considered, each using 140 fb⁻¹ of proton-proton data at a centre-of-mass energy of √s = 13 TeV collected at the Large Hadron Collider during its second data-taking run. The results are interpreted in the context of the 19-parameter phenomenological minimal supersymmetric standard model, where R-parity conservation is assumed and the lightest supersymmetric particle is assumed to be the lightest neutralino. Constraints from previous electroweak, flavour and dark matter related measurements are also considered. The results are presented in terms of constraints on supersymmetric particle masses and are compared with limits from simplified models. Also shown is the impact of ATLAS searches on parameters such as the dark matter relic density and the spin-dependent and spin-independent scattering cross-sections targeted by direct dark matter detection experiments. The Higgs boson and Z boson "funnel regions", where a low-mass neutralino would not oversaturate the dark matter relic abundance, are almost completely excluded by the considered constraints. Example spectra for non-excluded supersymmetric models with light charginos and neutralinos are also presented
Constraints on the Higgs boson self-coupling from single- and double-Higgs production with the ATLAS detector using pp collisions at √s = 13 TeV
Constraints on the Higgs boson self-coupling are set by combining double-Higgs boson analyses in the bb̄bb̄, bb̄τ⁺τ⁻ and bb̄γγ decay channels with single-Higgs boson analyses targeting the γγ, ZZ*, WW*, τ⁺τ⁻ and bb̄ decay channels. The data used in these analyses were recorded by the ATLAS detector at the LHC in proton-proton collisions at √s = 13 TeV and correspond to an integrated luminosity of 126-139 fb⁻¹. The combination of the double-Higgs analyses sets an upper limit of μHH < 2.4 at 95% confidence level on the double-Higgs production cross-section normalised to its Standard Model prediction. Combining the single-Higgs and double-Higgs analyses, with the assumption that new physics affects only the Higgs boson self-coupling (λHHH), values outside the interval -0.4 < κλ = (λHHH/λSMHHH) < 6.3 are excluded at 95% confidence level. The combined single-Higgs and double-Higgs analyses provide results with fewer assumptions, by adding to the fit more coupling modifiers introduced to account for the Higgs boson interactions with the other Standard Model particles. In this relaxed scenario, the constraint becomes -1.4 < κλ < 6.1 at 95% CL
Measurements of observables sensitive to colour reconnection in tt̄ events with the ATLAS detector at √s = 13 TeV
A measurement of observables sensitive to effects of colour reconnection in top-quark pair-production events is presented using 139 fb⁻¹ of 13 TeV proton-proton collision data collected by the ATLAS detector at the LHC. Events are selected by requiring exactly one isolated electron and one isolated muon with opposite charge and two or three jets, where exactly two jets are required to be b-tagged. For the selected events, measurements are presented for the charged-particle multiplicity, the scalar sum of the transverse momenta of the charged particles, and the same scalar sum in bins of charged-particle multiplicity. These observables are unfolded to the stable-particle level, thereby correcting for migration effects due to finite detector resolution, acceptance and efficiency effects. The particle-level measurements are compared with different colour reconnection models in Monte Carlo generators. These measurements disfavour some of the colour reconnection models and provide inputs to future optimisation of the parameters in Monte Carlo generators
ATLAS flavour-tagging algorithms for the LHC Run 2 pp collision dataset
The flavour-tagging algorithms developed by the ATLAS Collaboration and used to analyse its dataset of √s = 13 TeV pp collisions from Run 2 of the Large Hadron Collider are presented. These new tagging algorithms are based on recurrent and deep neural networks, and their performance is evaluated in simulated collision events. These developments yield considerable improvements over previous jet-flavour identification strategies. At the 77% b-jet identification efficiency operating point, light-jet (charm-jet) rejection factors of 170 (5) are achieved in a sample of simulated Standard Model tt̄ events; similarly, at a c-jet identification efficiency of 30%, a light-jet (b-jet) rejection factor of 70 (9) is obtained
Deployment and Operation of the ATLAS EventIndex for LHC Run 3
The ATLAS EventIndex is the global catalogue of all ATLAS real and simulated events. During the LHC long shutdown between Run 2 (2015-2018) and Run 3 (2022-2025) all its components were substantially revised and a new system was deployed for the start of Run 3 in Spring 2022. The new core storage system, based on HBase tables with a SQL interface provided by Phoenix, allows much faster data ingestion rates and scales much better than the old one to the data rates expected for the end of Run 3 and beyond. All user interfaces were also revised, and a new command-line interface and web services were deployed. The new system was initially populated with all existing data from the Run 1 and Run 2 datasets, and then put online to receive Run 3 data in real time. After extensive testing, the old system, which ran in parallel to the new one for a few months, was finally switched off in October 2022. This paper describes the new system, the move of all existing data from the old to the new storage schemas and the operational experience gathered so far