
    Modelling Z → ττ processes in ATLAS with τ-embedded Z → μμ data

    This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z→ττ decays. In Z→μμ events selected from proton-proton collision data recorded at √s=8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by τ leptons from simulated Z→ττ decays at the level of reconstructed tracks and calorimeter cells. The τ lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and τ leptons as well as the detector response to the τ decay products are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called τ-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z→ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples. In this paper, the relevant concepts are discussed based on the implementation used in the ATLAS Standard Model H→ττ analysis of the full dataset recorded during 2011 and 2012.
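
    The kinematic replacement at the heart of the method can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the ATLAS implementation: it keeps a reconstructed muon's momentum direction (the muon mass is negligible at these energies) and recomputes the energy with the τ mass. The function name and four-vector representation are hypothetical.

```python
import math

M_TAU = 1.77686  # GeV, tau lepton mass (PDG value)

def muon_to_tau(px, py, pz):
    """Rescale a reconstructed muon's kinematics to a tau lepton.

    Keeps the three-momentum direction and magnitude, and recomputes
    the energy with the tau mass, mimicking the kinematic replacement
    step of the embedding method (the simulated tau decay itself is a
    separate step, not shown here).
    """
    p2 = px * px + py * py + pz * pz
    e_tau = math.sqrt(p2 + M_TAU * M_TAU)
    return (px, py, pz, e_tau)

# Example: one ~45 GeV muon from a Z -> mumu decay at rest
px, py, pz, e = muon_to_tau(45.0, 0.0, 0.0)
print(f"tau candidate: E = {e:.3f} GeV")
```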

    Measurement of differential cross sections of isolated-photon plus heavy-flavour jet production in pp collisions at √s=8 TeV using the ATLAS detector

    This Letter presents the measurement of differential cross sections of isolated prompt photons produced in association with a b-jet or a c-jet. These final states provide sensitivity to the heavy-flavour content of the proton and aspects related to the modelling of heavy-flavour quarks in perturbative QCD. The measurement uses proton–proton collision data at a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the LHC in 2012, corresponding to an integrated luminosity of up to 20.2 fb−1. The differential cross sections are measured for each jet flavour with respect to the transverse energy of the leading photon in two photon pseudorapidity regions: |ηγ| < 1.37 and 1.56 < |ηγ| < 2.37. The measurement covers photon transverse energies 25 < EγT < 400 GeV and 25 < EγT < 350 GeV, respectively, for the two |ηγ| regions. For each jet flavour, the ratio of the cross sections in the two |ηγ| regions is also measured. The measurement is corrected for detector effects and compared to leading-order and next-to-leading-order perturbative QCD calculations, based on various treatments and assumptions about the heavy-flavour content of the proton. Overall, the predictions agree well with the measurement, but some deviations are observed at high photon transverse energies. The total uncertainty in the measurement ranges between 13% and 66%, while the central γ + b measurement exhibits the smallest uncertainty, ranging from 13% to 27%, which is comparable to the precision of the theoretical predictions.
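
    As a rough illustration of how a binned differential cross section is extracted, the sketch below divides background-subtracted yields by efficiency, integrated luminosity, and bin width. All numbers (yields, efficiencies, binning) are invented placeholders, and the real analysis additionally unfolds detector effects rather than applying a simple per-bin efficiency correction.

```python
import numpy as np

# Hypothetical binned inputs (placeholders, not measured values)
lumi = 20.2e3                                 # integrated luminosity in pb^-1 (20.2 fb^-1)
bin_edges = np.array([25., 45., 65., 400.])   # photon E_T bin edges in GeV
n_signal = np.array([12000., 5200., 900.])    # background-subtracted yields per bin
efficiency = np.array([0.72, 0.75, 0.78])     # per-bin selection efficiency

bin_widths = np.diff(bin_edges)

# Differential cross section dsigma/dE_T in pb/GeV
dsigma = n_signal / (efficiency * lumi * bin_widths)

# Statistical uncertainty from Poisson counting, propagated linearly
stat_unc = np.sqrt(n_signal) / (efficiency * lumi * bin_widths)

for lo, hi, xs, unc in zip(bin_edges[:-1], bin_edges[1:], dsigma, stat_unc):
    print(f"{lo:6.0f}-{hi:4.0f} GeV: {xs:.5f} +/- {unc:.5f} pb/GeV")
```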

    Identification and rejection of pile-up jets at high pseudorapidity with the ATLAS detector

    The rejection of forward jets originating from additional proton–proton interactions (pile-up) is crucial for a variety of physics analyses at the LHC, including Standard Model measurements and searches for physics beyond the Standard Model. The identification of such jets is challenging due to the lack of track and vertex information in the pseudorapidity range |η| > 2.5. This paper presents a novel strategy for forward pile-up jet tagging that exploits jet shapes and topological jet correlations in pile-up interactions. Measurements of the per-jet tagging efficiency are presented using a data set of 3.2 fb−1 of proton–proton collisions at a centre-of-mass energy of 13 TeV collected with the ATLAS detector. The fraction of pile-up jets rejected in the range 2.5 < |η| < 4.5 is estimated in simulated events with an average of 22 interactions per bunch-crossing. It increases with jet transverse momentum and, for jets with transverse momentum between 20 and 50 GeV, it ranges between 49% and 67% with an efficiency of 85% for selecting hard-scatter jets. A case study is performed in Higgs boson production via the vector-boson fusion process, showing that these techniques mitigate the background growth due to additional proton–proton interactions, thus enhancing the reach for such signatures.
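
    The per-jet efficiency and rejection figures quoted above can be illustrated with a toy calculation. The sketch below applies a one-sided cut to a generic discriminant and measures hard-scatter efficiency and pile-up rejection on truth-labelled jets; the Gaussian discriminant shapes are purely illustrative and not the ATLAS forward tagger.

```python
import numpy as np

def tag_performance(disc, is_pileup, cut):
    """Per-jet tagging performance for a one-sided discriminant cut.

    disc      : discriminant values (larger = more hard-scatter-like)
    is_pileup : boolean array, True for pile-up jets (simulation truth)
    cut       : jets with disc >= cut are kept as hard-scatter candidates
    """
    keep = disc >= cut
    hs_eff = np.mean(keep[~is_pileup])   # hard-scatter selection efficiency
    pu_rej = np.mean(~keep[is_pileup])   # fraction of pile-up jets rejected
    return hs_eff, pu_rej

# Toy inputs: overlapping discriminant distributions for the two classes
rng = np.random.default_rng(0)
disc = np.concatenate([rng.normal(1.0, 0.5, 10000),   # hard-scatter jets
                       rng.normal(0.0, 0.5, 10000)])  # pile-up jets
is_pileup = np.concatenate([np.zeros(10000, bool), np.ones(10000, bool)])

eff, rej = tag_performance(disc, is_pileup, cut=0.4)
print(f"hard-scatter efficiency: {eff:.2f}, pile-up rejection: {rej:.2f}")
```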

    ‘One Size Does Not Fit All’: A Roadmap of Purpose-Driven Mixed-Method Pathways for Sensitivity Analysis of Agent-Based Models

    Designing, implementing, and applying agent-based models (ABMs) requires a structured approach, part of which is a comprehensive analysis of output-to-input variability in the form of uncertainty and sensitivity analysis (SA). The objective of this paper is to assist in choosing, for a given ABM, the most appropriate methods of SA. We argue that no single SA method fits all ABMs and that different methods of SA should be used based on the overarching purpose of the model. For example, abstract exploratory models that focus on deeper understanding of the target system and its properties are fed with only the most critical data representing patterns or stylized facts. For them, simple SA methods may be sufficient to capture the dependencies between the output and input spaces. In contrast, applied models used in scenario and policy analysis are usually more complex and data-rich because a higher level of realism is required. Here, the choice of a more sophisticated SA method may be critical in establishing the robustness of the results before the model (or its results) can be passed on to end-users. Accordingly, we present a roadmap that guides ABM developers through the process of performing the SA that best fits the purpose of their ABM. This roadmap covers a wide range of ABM applications and advocates the routine use of global methods that capture input interactions and are therefore mandatory if scientists want to recognize all sensitivities. As part of this roadmap, we report on frontier SA methods that have emerged in recent years: a) handling temporal and spatial outputs, b) using the whole output distribution of a result rather than its variance, c) looking at topological relationships between input data points rather than their values, and d) looking into the ABM black box: finding behavioral primitives and using them to study complex system characteristics like regime shifts, tipping points, and condensation versus dissipation of collective system behavior.
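
    A variance-based global SA of the kind the roadmap advocates can be sketched with the SALib Python package, which estimates first- and total-order Sobol indices that capture input interactions. The parameter names and the stand-in model below are illustrative; in practice the model function would run the ABM once per sampled parameter set and return a summary output.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Illustrative ABM input parameters and ranges (hypothetical names)
problem = {
    "num_vars": 3,
    "names": ["birth_rate", "move_prob", "interaction_radius"],
    "bounds": [[0.0, 1.0], [0.0, 1.0], [1.0, 10.0]],
}

# Saltelli sampling covers the input space for Sobol index estimation
X = saltelli.sample(problem, 1024)

# Stand-in for running the ABM once per parameter set; the product
# term introduces an interaction, so total-order indices exceed
# first-order ones for the interacting inputs
def model(x):
    return x[0] + x[1] * x[2]

Y = np.apply_along_axis(model, 1, X)

Si = sobol.analyze(problem, Y)
print("first-order indices:", Si["S1"])
print("total-order indices:", Si["ST"])  # includes interaction effects
```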