GAN-AE : An anomaly detection algorithm for New Physics search in LHC data
In recent years, interest has grown in alternative strategies for the search
for New Physics beyond the Standard Model. One envisaged solution lies in the
development of anomaly detection algorithms based on unsupervised machine
learning techniques. In this paper, we propose a new Generative Adversarial
Network-based auto-encoder model that allows both anomaly detection and
model-independent background modeling. This algorithm can be integrated with
other model-independent tools in a complete heavy resonance search strategy.
The proposed strategy has been tested on the LHC Olympics 2020 dataset with
promising results.
Comment: 10 pages, 8 figures
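The reconstruction-error idea at the heart of auto-encoder anomaly detection can be sketched with a minimal linear stand-in (an illustration of the general principle only, not the GAN-AE model; all data, dimensions, and numbers below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "background" events: two correlated features standing in for jet observables.
mix = np.array([[1.0, 0.8], [0.0, 0.6]])
bkg = rng.normal(size=(5000, 2)) @ mix
# Toy "signal" events: points lying off the background manifold.
sig = rng.normal(size=(100, 2)) + np.array([3.0, -3.0])

# Unsupervised step: fit a 1D linear auto-encoder (PCA) on background only.
mean = bkg.mean(axis=0)
_, _, vt = np.linalg.svd(bkg - mean, full_matrices=False)
axis = vt[0]  # leading principal direction acts as the 1D latent space

def anomaly_score(x):
    """Reconstruction error after encoding to and decoding from the latent space."""
    z = (x - mean) @ axis             # encode
    recon = mean + np.outer(z, axis)  # decode
    return np.linalg.norm(x - recon, axis=1)

# Events that the background-trained model reconstructs poorly are anomaly candidates.
print(anomaly_score(bkg).mean(), anomaly_score(sig).mean())
```

In an actual analysis the encoder and decoder would be trained neural networks and the score would feed a model-independent resonance search, but the anomaly score remains a reconstruction error.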
pyBumpHunter: A model independent bump hunting tool in Python for High Energy Physics analyses
The BumpHunter algorithm is widely used in the search for new particles in
High Energy Physics analysis. This algorithm offers the advantage of evaluating
the local and global p-values of a localized deviation in the observed data
without making any hypothesis on the supposed signal. The increasing popularity
of the Python programming language motivated the development of a new public
implementation of this algorithm in Python, called pyBumpHunter, together with
several improvements and additional features. It is the first public
implementation of the BumpHunter algorithm to be added to Scikit-HEP. This
paper presents in detail the BumpHunter algorithm as well as all the features
proposed in this implementation. All these features have been tested in order
to demonstrate their behaviour and performance.
Comment: 14 pages, 9 figures
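The core of the scan described above, sliding a signal window over a binned spectrum and computing the local Poisson p-value of the excess at each position, can be sketched in a few lines (a toy re-implementation for illustration, not the pyBumpHunter API; the bin contents and window width are invented):

```python
import math

def poisson_tail(n_obs, mu):
    """Local p-value of an excess: P(N >= n_obs) for N ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs))

def bump_scan(data, bkg, width):
    """Slide a fixed-width window over the binned spectrum and return the
    start bin and local p-value of the most significant excess."""
    best_start, best_p = None, 1.0
    for start in range(len(data) - width + 1):
        n = sum(data[start:start + width])
        mu = sum(bkg[start:start + width])
        p = poisson_tail(n, mu)
        if p < best_p:
            best_start, best_p = start, p
    return best_start, best_p

# Toy spectrum: flat background of 20 events per bin, bump injected in bins 5-6.
bkg = [20.0] * 12
data = [22, 19, 18, 21, 20, 45, 40, 19, 22, 18, 21, 20]
start, p_local = bump_scan(data, bkg, width=2)
print(start, p_local)
```

Scanning over several window widths works the same way. The global p-value is then obtained by repeating the scan on background-only pseudo-experiments and asking how often a local p-value at least as small appears anywhere, which corrects for the look-elsewhere effect; that step is omitted here for brevity.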
Energy Calibration of b-Quark Jets with Z->b-bbar Decays at the Tevatron Collider
The energy measurement of jets produced by b-quarks at hadron colliders
suffers from biases due to the peculiarities of the hadronization and decay of
the originating B hadron. The impact of these effects can be estimated by
reconstructing the mass of Z boson decays into pairs of b-quark jets. From a
sample of 584 pb-1 of data collected by the CDF experiment in 1.96 TeV
proton-antiproton collisions at the Tevatron collider, we show how the Z signal
can be identified and measured. Using the reconstructed mass of Z candidates we
determine a jet energy scale factor for b-quark jets with a precision better
than 2%. This measurement allows a reduction of one of the dominant sources of
uncertainty in analyses based on high-transverse-momentum b-quark jets. We also
determine, as a cross-check of our analysis, the Z boson cross section in
hadronic collisions using the b-bbar final state as sigma x B(Z->b-bbar) = 1578
+636 -410 pb.
Comment: 35 pages, 9 figures, submitted to Nuclear Instruments and Methods in Physics Research Section
Report from Working Group 3: Beyond the standard model physics at the HL-LHC and HE-LHC
This is the third out of five chapters of the final report [1] of the Workshop on Physics at HL-LHC, and perspectives on HE-LHC [2]. It is devoted to the study of the potential, in the search for Beyond the Standard Model (BSM) physics, of the High Luminosity (HL) phase of the LHC, defined as 3 ab-1 of data taken at a centre-of-mass energy of 14 TeV, and of a possible future upgrade, the High Energy (HE) LHC, defined as 15 ab-1 of data at a centre-of-mass energy of 27 TeV. We consider a large variety of new physics models, both in a simplified-model fashion and in a more model-dependent one. A long list of contributions from the theory and experimental (ATLAS, CMS, LHCb) communities has been collected and merged to give a complete, wide, and consistent view of future prospects for BSM physics at the considered colliders. On top of the usual standard candles, such as supersymmetric simplified models and resonances, considered for the evaluation of future collider potential, this report contains results on dark matter and dark sectors, long-lived particles, leptoquarks, sterile neutrinos, axion-like particles, heavy scalars, vector-like quarks, and more. Particular attention is placed, especially in the study of the HL-LHC prospects, on the detector upgrades, the assessment of the future systematic uncertainties, and new experimental techniques. The general conclusion is that the HL-LHC, on top of extending the present LHC mass and coupling reach in most new physics scenarios, will also be able to constrain, and potentially discover, new physics that is presently unconstrained. Moreover, compared to the HL-LHC, the reach in most observables will generally more than double at the HE-LHC, which may represent a good candidate future facility for a final test of TeV-scale new physics.
Analysis of the Cherenkov Telescope Array first Large Size Telescope real data using convolutional neural networks
The Cherenkov Telescope Array (CTA) is the future ground-based gamma-ray observatory and will be composed of two arrays of imaging atmospheric Cherenkov telescopes (IACTs) located in the Northern and Southern hemispheres, respectively. The first CTA prototype telescope built on-site, the Large-Sized Telescope (LST-1), is under commissioning in La Palma and has already taken data on numerous known sources. IACTs detect the faint flash of Cherenkov light indirectly produced after a very energetic gamma-ray photon has interacted with the atmosphere and generated an atmospheric shower. Reconstruction of the characteristics of the primary photon is usually done using a parameterization, up to the third order, of the light distribution of the images. To go beyond this classical method, new approaches are being developed using state-of-the-art methods based on convolutional neural networks (CNNs) to reconstruct the properties of each event (incoming direction, energy, and particle type) directly from the telescope images. While promising, these methods are notoriously difficult to apply to real data due to differences (such as different levels of night sky background) between the Monte Carlo (MC) data used to train the network and real data. The GammaLearn project, based on these CNN approaches, has already shown an increase in sensitivity on MC simulations for LST-1, as well as a lower energy threshold. This work applies the GammaLearn network to real data acquired by LST-1 and compares the results to the classical approach that uses random forests trained on extracted image parameters. The improvements in background rejection, event direction, and energy reconstruction are discussed in this contribution.
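The classical parameterization mentioned above reduces each camera image to a handful of moments of its light distribution. A minimal second-order sketch of the idea (a simplified Hillas-style computation on an invented toy image, not the actual LST analysis code):

```python
import numpy as np

def image_moments(x, y, q):
    """Second-order moments of a Cherenkov image: charge-weighted centroid and
    the length and width of the best-fit ellipse (Hillas-style parameters)."""
    w = q / q.sum()
    mx, my = w @ x, w @ y
    cxx = w @ (x - mx) ** 2
    cyy = w @ (y - my) ** 2
    cxy = w @ ((x - mx) * (y - my))
    # Eigenvalues of the 2x2 covariance matrix give the ellipse axes.
    eig = np.linalg.eigvalsh([[cxx, cxy], [cxy, cyy]])
    width, length = np.sqrt(eig)  # eigvalsh returns eigenvalues in ascending order
    return mx, my, length, width

# Toy elongated image: pixel positions spread along one axis, uniform charge.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)
y = rng.normal(0.0, 0.2, 500)
q = np.ones(500)
print(image_moments(x, y, q))
```

The CNN approaches described in the text bypass exactly this reduction and consume the pixel-wise image (and its timing) directly.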
Status and results of the prototype LST of CTA
The Large-Sized Telescopes (LSTs) of the Cherenkov Telescope Array (CTA) are designed for gamma-ray studies focusing on a low energy threshold, high flux sensitivity, rapid telescope repositioning speed, and a large field of view. Once the CTA array is complete, the LSTs will dominate the CTA performance between 20 GeV and 150 GeV. During most of the CTA Observatory construction phase, however, the LSTs will dominate the array performance up to several TeV. In this presentation we will report on the status of the LST-1 telescope, inaugurated in La Palma, Canary Islands, Spain, in 2018. We will show the progress of the telescope commissioning, compare the expectations with the achieved performance, and give a glimpse of the first physics results.
Development of an advanced SiPM camera for the Large Size Telescope of the Cherenkov Telescope Array Observatory
Silicon photomultipliers (SiPMs) have become the baseline choice for cameras of the small-sized telescopes (SSTs) of the Cherenkov Telescope Array (CTA).
On the other hand, SiPMs are relatively new to the field, and covering large surfaces and operating at high data rates remain challenges to overcome before they can outperform photomultipliers (PMTs). Their higher sensitivity in the near infrared and longer signals compared to PMTs result in a higher night-sky background rate for SiPMs. However, the robustness of SiPMs represents a unique opportunity to ensure long-term operation with low maintenance and a better duty cycle than PMTs. The proposed camera for large-sized telescopes will feature 0.05-degree pixels, low-power and fast front-end electronics, and a fully digital readout. In this work, we present the status of dedicated simulations and data analysis for the performance estimation. The design features and the different strategies identified so far to meet the demanding requirements and improve the performance are described.
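The interplay between pulse length and night-sky background noted above can be illustrated with a back-of-the-envelope estimate: the accidental NSB charge in a pixel grows with the integration window, so longer signals directly cost charge resolution. All rates and window lengths below are hypothetical, chosen only for illustration:

```python
import math

def nsb_noise(rate_hz, window_s):
    """Mean accidental NSB photoelectrons in a readout window, and their
    Poisson fluctuation (the noise they add to the extracted charge)."""
    mu = rate_hz * window_s
    return mu, math.sqrt(mu)

# A sensor with longer pulses must integrate a longer window, so the same
# NSB rate contributes more accidental charge (numbers are hypothetical).
for label, rate, window in [("short pulse", 2.0e8, 10e-9),
                            ("long pulse", 2.0e8, 30e-9)]:
    mu, sigma = nsb_noise(rate, window)
    print(f"{label}: {mu:.1f} +/- {sigma:.2f} p.e. of NSB per window")
```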
Reconstruction of extensive air shower images of the Large Size Telescope prototype of CTA using a novel likelihood technique
Ground-based gamma-ray astronomy aims at reconstructing the energy and direction of gamma rays from the extensive air showers they initiate in the atmosphere. Imaging Atmospheric Cherenkov Telescopes (IACTs) collect the Cherenkov light induced by secondary charged particles in extensive air showers (EAS), creating an image of the shower in a camera positioned in the focal plane of the optical system. This image is used to evaluate the type, energy, and arrival direction of the primary particle that initiated the shower. This contribution shows the results of a novel reconstruction method based on likelihood maximization. The novelty with respect to previous likelihood reconstruction methods lies in the definition of a likelihood per single camera pixel, accounting not only for the total measured charge but also for its development over time. This leads to a more precise reconstruction of shower images. The method is applied to observations of the Crab Nebula acquired with the Large Size Telescope prototype (LST-1) deployed at the northern site of the Cherenkov Telescope Array.
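A minimal sketch of a per-pixel likelihood that uses the time development of the signal, not just the integrated charge, might look as follows (a Gaussian pulse template, Gaussian noise, and a grid-search maximization are simplifying assumptions made here, not the actual method of the paper):

```python
import numpy as np

def pixel_log_likelihood(samples, times, amp, t0, sigma_t=3.0, noise=0.5):
    """Gaussian-noise log-likelihood of one pixel's waveform samples against a
    Gaussian pulse template of amplitude `amp` arriving at time `t0` (ns).
    Template shape and noise model are illustrative assumptions."""
    model = amp * np.exp(-0.5 * ((times - t0) / sigma_t) ** 2)
    return -0.5 * np.sum(((samples - model) / noise) ** 2)

# Simulated waveform: a pulse of amplitude 5 arriving at t = 12 ns.
rng = np.random.default_rng(2)
times = np.arange(0.0, 30.0, 1.0)
truth = 5.0 * np.exp(-0.5 * ((times - 12.0) / 3.0) ** 2)
samples = truth + rng.normal(0.0, 0.5, times.size)

# Maximize the likelihood over (amplitude, arrival time) by grid search.
amps = np.linspace(1.0, 10.0, 91)
t0s = np.linspace(5.0, 25.0, 201)
ll = np.array([[pixel_log_likelihood(samples, times, a, t) for t in t0s] for a in amps])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(amps[i], t0s[j])
```

In a full reconstruction the total log-likelihood is the sum over all camera pixels, and the maximization runs over the shower parameters (direction, energy, impact point) rather than per-pixel quantities.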
Commissioning of the camera of the first Large Size Telescope of the Cherenkov Telescope Array
The first Large Size Telescope (LST-1) of the Cherenkov Telescope Array has been operational since October 2018 at La Palma, Spain. We report on the results obtained during the camera commissioning. The noise level of the readout is determined to be at the 0.2 p.e. level. The gains of the PMTs are equalized to within a 2% variation using the calibration flash system. The effect of the night sky background on the signal readout noise, as well as on the PMT gain estimation, is also evaluated. Trigger thresholds are optimized for the lowest possible gamma-ray energy threshold, and the trigger distribution synchronization has been achieved with 1 ns precision. Automatic rate control achieves stable observation with a 1.5% rate variation over 3 hours. The performance of the novel DAQ system demonstrates less than 10% dead time at a 15 kHz trigger rate, even with sophisticated online data correction.
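As a rough consistency check of such dead-time figures, a simple non-paralyzable dead-time model (an assumption made here for illustration, not a description of the actual DAQ) relates the busy fraction to the trigger rate and the per-event processing time:

```python
def dead_time_fraction(rate_hz, tau_s):
    """Fraction of triggers lost in a non-paralyzable system that is busy for
    a time tau after each accepted event (simplified model only)."""
    return rate_hz * tau_s / (1.0 + rate_hz * tau_s)

# In this model, a ~10% dead time at 15 kHz corresponds to a per-event
# processing time of roughly 7.4 microseconds.
print(dead_time_fraction(15e3, 7.4e-6))
```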
First follow-up of transient events with the CTA Large Size Telescope prototype
When very-high-energy gamma rays interact high in the Earth's atmosphere, they produce cascades of particles that induce flashes of Cherenkov light. Imaging Atmospheric Cherenkov Telescopes (IACTs) detect these flashes and convert them into shower images that can be analyzed to extract the properties of the primary gamma ray. The dominant background for IACTs consists of air-shower images produced by cosmic hadrons, with typical background-to-signal ratios of several orders of magnitude. The standard technique adopted to differentiate between images initiated by gamma rays and those initiated by hadrons is based on classical machine learning algorithms, such as Random Forests, that operate on a set of handcrafted parameters extracted from the images. Likewise, the inference of the energy and the arrival direction of the primary gamma ray is performed using those parameters. State-of-the-art deep learning techniques based on convolutional neural networks (CNNs) have the potential to enhance the event reconstruction performance, since they are able to autonomously extract features from raw images, exploiting the pixel-wise information washed out during the parametrization process.
Here we present the results obtained by applying deep learning techniques to the reconstruction of Monte Carlo simulated events from a single, next-generation IACT, the Large-Sized Telescope (LST) of the Cherenkov Telescope Array (CTA). We use CNNs to separate gamma-ray-induced events from hadronic events and to reconstruct the properties of the former, comparing their performance to the standard reconstruction technique. Three independent implementations of CNN-based event reconstruction models have been used in this work, producing consistent results.
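The handcrafted-parameter baseline that such CNNs are compared against can be caricatured with a simple classifier on two invented image parameters (a logistic-regression stand-in for the Random Forest baseline described in the text; all distributions are synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic handcrafted parameters (length, width): gamma-like images are
# narrow and regular, hadron-like ones broader. Numbers are invented.
gammas = rng.normal([1.0, 0.2], [0.2, 0.05], size=(1000, 2))
hadrons = rng.normal([1.3, 0.5], [0.4, 0.15], size=(1000, 2))
X = np.vstack([gammas, hadrons])
y = np.array([1.0] * 1000 + [0.0] * 1000)

# Logistic regression trained by gradient descent on the two parameters.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"separation accuracy on the toy sample: {acc:.2f}")
```

A CNN replaces the two handcrafted inputs with features learned directly from the pixel-wise image, which is where the potential performance gain discussed above comes from.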