
    Differentiable Simulation of a Liquid Argon Time Projection Chamber

    Liquid argon time projection chambers (LArTPCs) are widely used in particle detection for their tracking and calorimetric capabilities. The particle physics community actively builds and improves high-quality simulators for such detectors in order to develop physics analyses in a realistic setting. The fidelity of these simulators relative to real, measured data is limited by the modeling of the physical detectors used for data collection. This modeling can be improved by performing dedicated calibration measurements. Conventional approaches calibrate individual detector parameters or processes one at a time; however, because the effects of detector processes are entangled, such one-at-a-time calibration poorly describes the underlying physics. We introduce a differentiable simulator that enables gradient-based optimization, allowing for the first time a simultaneous calibration of all detector parameters. We describe the procedure of making a simulator differentiable, highlighting the challenges of retaining the physics quality of the standard, non-differentiable version while providing meaningful gradient information. We further discuss the advantages and drawbacks of using our differentiable simulator for calibration. Finally, we provide a starting point for extensions to our approach, including applications of the differentiable simulator to physics analysis pipelines.
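
    The simultaneous, gradient-based calibration described above can be sketched with a toy two-parameter "detector" (all names and numbers below are invented for illustration; a real differentiable LArTPC simulator has many more parameters and uses automatic differentiation rather than hand-written gradients):

```python
import math

# Toy "detector response": an energy deposit at drift distance x is
# read out as  y = gain * exp(-x / lifetime).  Gain and lifetime are
# entangled (both scale the signal), so we calibrate them jointly by
# gradient descent, mimicking what a differentiable simulator enables.

def response(gain, lifetime, x):
    return gain * math.exp(-x / lifetime)

# Synthetic "measured data" from the true (unknown) parameters.
TRUE_GAIN, TRUE_LIFETIME = 2.0, 3.0
xs = [0.5 * i for i in range(1, 11)]
data = [response(TRUE_GAIN, TRUE_LIFETIME, x) for x in xs]

def loss_and_grads(gain, lifetime):
    """Squared-error loss and its analytic gradients (the role that
    automatic differentiation plays in a real differentiable simulator)."""
    loss, dg, dl = 0.0, 0.0, 0.0
    for x, y in zip(xs, data):
        pred = response(gain, lifetime, x)
        r = pred - y
        loss += r * r
        dg += 2 * r * pred / gain                # d pred / d gain
        dl += 2 * r * pred * x / lifetime ** 2   # d pred / d lifetime
    return loss, dg, dl

gain, lifetime, lr = 1.0, 1.0, 0.05
for _ in range(5000):
    _, dg, dl = loss_and_grads(gain, lifetime)
    gain -= lr * dg
    lifetime -= lr * dl

# Both parameters converge toward the true values (2.0, 3.0) at once,
# rather than being calibrated one at a time.
print(round(gain, 2), round(lifetime, 2))
```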

    Implicit Neural Representation as a Differentiable Surrogate for Photon Propagation in a Monolithic Neutrino Detector

    Optical photons are used as signal in a wide variety of particle detectors. Modern neutrino experiments employ hundreds to tens of thousands of photon detectors to observe the signal from millions to billions of scintillation photons produced by the energy depositions of charged particles. These neutrino detectors are typically large, containing kilotons of target volume with varying optical properties. Modeling individual photon propagation in the form of a look-up table requires huge computational resources, and since the size of the table grows with detector volume at fixed resolution, this method scales poorly to future, larger detectors. Alternative approaches, such as fitting a polynomial to the model, can address the memory issue but result in poorer performance. Both the look-up table and the fitting approach are prone to discrepancies between the detector simulation and the data collected. We propose a new approach using SIREN, an implicit neural representation with periodic activation functions, to model the look-up table as a 3D scene and reproduce the acceptance map with high accuracy. The number of parameters in our SIREN model is orders of magnitude smaller than the number of voxels in the look-up table, and because it models an underlying functional shape, SIREN is scalable to larger detectors. Furthermore, SIREN can successfully learn the spatial gradients of the photon library, providing additional information for downstream applications. Finally, as SIREN is a neural network representation, it is differentiable with respect to its parameters and therefore tunable via gradient descent. We demonstrate the potential of optimizing SIREN directly on real data, which mitigates the concern of data vs. simulation discrepancies. We further present an application to data reconstruction in which SIREN is used to form a likelihood function for photon statistics.
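
    Two SIREN properties highlighted above, periodic (sine) activations and analytically available spatial gradients, can be sketched in one dimension (the architecture, sizes, and constants below are illustrative, not the paper's actual network):

```python
import math, random

# Minimal 1D SIREN-style function: one hidden layer of sine units,
#   f(x) = sum_j c_j * sin(OMEGA0 * (w_j * x + b_j)).
# Because sin is smooth, the spatial gradient df/dx exists in closed
# form, which is what makes such gradients available downstream.

random.seed(0)
OMEGA0 = 30.0   # frequency scaling typical of SIREN layers
HIDDEN = 16
w = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b = [random.uniform(-1, 1) for _ in range(HIDDEN)]
c = [random.uniform(-1, 1) / HIDDEN for _ in range(HIDDEN)]

def f(x):
    return sum(cj * math.sin(OMEGA0 * (wj * x + bj))
               for wj, bj, cj in zip(w, b, c))

def dfdx(x):
    # Exact spatial gradient via the chain rule.
    return sum(cj * OMEGA0 * wj * math.cos(OMEGA0 * (wj * x + bj))
               for wj, bj, cj in zip(w, b, c))

# Cross-check the analytic gradient against a finite difference.
x0, h = 0.3, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(dfdx(x0) - numeric) < 1e-3
```

    The same closed-form differentiability applies to the network's parameters, which is what makes the representation tunable by gradient descent.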

    Sanctions and Democratization in the Post-Cold War Era


    Searches for pair production of Higgs bosons in the $b\bar{b}b\bar{b}$ final state using the ATLAS detector, or: How I Learned to Stop Worrying and Love the QCD Background

    Thesis (Ph.D.)--University of Washington, 2021. This thesis discusses searches for pair production of Higgs bosons in the $b\bar{b}b\bar{b}$ final state using data recorded by the ATLAS detector from $\sqrt{s} = 13$ TeV proton-proton ($pp$) collisions during the full period of LHC Run 2. It develops two separate analysis strategies: one targeting resonant pair production of Higgs bosons, in which Beyond the Standard Model resonances are produced and subsequently decay to two Higgs bosons, and one targeting non-resonant pair production, which is sensitive to Standard Model $HH$ production as well as to variations of the Higgs trilinear self-coupling. In the resonant searches, no significant excesses are seen, and upper limits on the cross section are set for both spin-0 and spin-2 resonance hypotheses. These limits are competitive with other leading ATLAS full Run 2 searches and represent a significantly stronger statement than previous results, beating the early Run 2 combined ATLAS results above 350 GeV and leading the ATLAS full Run 2 sensitivity above 700 GeV. In the non-resonant search, no evidence of Standard Model $HH$ production is seen, and the upper limit on the cross section of $pp \rightarrow HH$ via gluon-gluon fusion is set at 4.4 (5.9) observed (expected) times the value predicted by the Standard Model. These limits represent a significant improvement in sensitivity over the early Run 2 $b\bar{b}b\bar{b}$ results, achieving a 30 (40)% additional gain in sensitivity beyond that predicted from a pure increase in dataset size, and are competitive with other leading ATLAS full Run 2 searches. Cross-section limits are additionally set for a range of values of the Higgs self-coupling, parametrized via its ratio to the value predicted by the Standard Model, $\kappa_{\lambda} = \lambda_{HHH}/\lambda_{HHH}^{SM}$. This ratio is restricted to $-4.9 \leq \kappa_{\lambda} \leq 14.4$ observed ($-3.9 \leq \kappa_{\lambda} \leq 10.9$ expected). An excess of data over background is seen for $\kappa_{\lambda} \geq 5$, with a maximum local significance of $3.8\sigma$ at $\kappa_{\lambda} = 6$. This excess is demonstrated to originate at low mass, where the $b\bar{b}b\bar{b}$ channel has limited sensitivity, and is not seen in more sensitive channels in this region. Results on the development of two methods for improving hadronic shower modeling in the ATLAS fast calorimeter simulation are also presented.
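
    The "observed times the Standard Model prediction" style of limit quoted above can be illustrated with a toy Poisson counting experiment (the event counts below are invented, and a real ATLAS analysis uses a full profile-likelihood CLs procedure rather than this simple scan):

```python
import math

# Toy cross-section upper limit: scan the signal strength mu (signal
# in units of the SM prediction) until the observed count becomes
# sufficiently unlikely under background + mu * signal.

def poisson_cdf(n, lam):
    """P(N <= n) for N ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(n + 1))

N_OBS = 8     # observed events (invented)
BKG = 6.0     # expected background events (invented)
S_SM = 2.0    # expected signal events at mu = 1 (invented)

def upper_limit(cl=0.95):
    # Smallest mu with P(N <= N_OBS | BKG + mu * S_SM) < 1 - cl,
    # found by a coarse scan.
    mu = 0.0
    while poisson_cdf(N_OBS, BKG + mu * S_SM) >= 1 - cl:
        mu += 0.01
    return mu

mu_up = upper_limit()
print(f"95% CL upper limit: mu < {mu_up:.2f} x SM")
```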

    Searches for Higgs boson pair production with the full LHC Run 2 dataset in ATLAS

    The latest results on the production of Higgs boson pairs ($HH$) in the ATLAS experiment are reported, with emphasis on searches based on the full LHC Run 2 dataset at 13 TeV. In the case of non-resonant $HH$ searches, results are interpreted both in terms of sensitivity to the Standard Model and as limits on $\kappa_{\lambda}$, a modifier of the Higgs boson self-coupling strength. Searches for new resonances decaying into pairs of Higgs bosons are also reported. Prospects for testing the Higgs boson self-coupling at the High-Luminosity LHC (HL-LHC) are also presented.

    Fast Calorimeter Simulation in ATLAS

    FastCaloSim poster for CHE

    Fast Calorimeter Simulation in ATLAS: FastCaloSimV2

    The ATLAS physics program relies on very large samples of GEANT4-simulated events, which provide a highly detailed and accurate simulation of the ATLAS detector. But this accuracy comes with a high price in CPU, predominantly caused by the calorimeter simulation. The sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the future. Therefore, sophisticated fast simulation tools are being developed. The calorimeter shower simulation of most samples in Run 3 will be based on a new parametrized description of longitudinal and lateral energy deposits (FastCaloSimV2). FastCaloSimV2 includes machine learning approaches to achieve a fast and accurate description and to ensure its applicability to a broad variety of physics cases. In this talk, we describe this new tool, focusing on the modelling of hadronic showers, and demonstrate its potential for physics applications.
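
    The idea of a parametrised longitudinal energy-deposit description can be sketched with the classic Gamma-function shower profile, a textbook parametrisation of longitudinal shower development (the constants and layer edges below are invented for illustration and are not FastCaloSimV2's actual parameters):

```python
import math

# Parametrised longitudinal profile: dE/dt ~ t**(a-1) * exp(-b*t),
# with t the depth in radiation lengths (X0).  Evaluating this cheap
# closed form per calorimeter layer stands in for a full Geant4 shower.

def gamma_profile(t, a=4.0, b=0.5):
    """Unnormalised longitudinal energy-deposit density at depth t."""
    return t ** (a - 1) * math.exp(-b * t)

def deposit_per_layer(total_energy, layer_edges, a=4.0, b=0.5):
    # Approximate the profile integral over each depth bin with a
    # midpoint rule, then share total_energy in proportion.
    weights = []
    for lo, hi in zip(layer_edges, layer_edges[1:]):
        mid = 0.5 * (lo + hi)
        weights.append(gamma_profile(mid, a, b) * (hi - lo))
    norm = sum(weights)
    return [total_energy * w / norm for w in weights]

edges = [0.0, 2.0, 4.0, 8.0, 16.0, 30.0]   # depth bins in X0 (invented)
deposits = deposit_per_layer(100.0, edges)

# Energy is conserved by construction, and the shower maximum at
# t = (a-1)/b = 6 X0 falls in the 4-8 X0 bin.
assert abs(sum(deposits) - 100.0) < 1e-9
assert max(deposits) == deposits[2]
```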

    Fast Calorimeter Simulation in ATLAS

    The ATLAS physics program at the LHC relies on very large samples of simulated events. Most of these samples are produced with Geant4, which provides a highly detailed and accurate simulation of the ATLAS detector. However, this accuracy comes with a high price in CPU, and the sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the future as datasets grow. To solve this problem, sophisticated fast simulation tools are being developed, and they will become the default tools in ATLAS production in Run 3 and beyond. The slowest component is the simulation of the calorimeter showers. These are replaced by a new parametrised description of the longitudinal and lateral energy deposits, including machine learning approaches, achieving a fast but accurate description. In this talk we describe the new tool for fast calorimeter simulation that has been developed by ATLAS, review its technical and physics performance, and demonstrate its potential to transform physics analyses.

    Fast Simulation in ATLAS

    The ATLAS physics program relies on very large samples of simulated events. Most of these samples are produced with GEANT4, which provides a highly detailed and accurate simulation of the ATLAS detector. However, this accuracy comes with a high price in CPU, and the sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the future as datasets grow. To solve this problem, sophisticated fast simulation tools are being developed, and they will become the default tools in ATLAS production in Run 3 and beyond. The slowest component is the simulation of the calorimeter showers. These are replaced by a new parametrised description of the longitudinal and lateral energy deposits, including machine learning approaches, achieving a fast but accurate description. Other fast simulation tools replace the inner detector simulation, as well as digitization and reconstruction algorithms, achieving up to two orders of magnitude improvement in speed. In this talk we describe the new tools for fast simulation that have been developed by ATLAS, review their technical and physics performance, and demonstrate their potential to transform physics analyses.