
    Application of Deep Learning techniques in the search for BSM Higgs bosons in the μμ final state in CMS

    The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particle masses. However, some aspects of the theory remain unresolved, suggesting the presence of new physics Beyond the Standard Model (BSM), with new particles produced at energy scales above the current experimental limits. Additional Higgs bosons are in fact predicted by theoretical extensions of the SM, including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons H±, two neutral scalars h and H, and one pseudoscalar A. The work presented in this thesis is dedicated to the search for neutral non-Standard-Model Higgs bosons decaying to two muons in the model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of 35.9 fb⁻¹. The search is sensitive to neutral Higgs bosons produced either via the gluon fusion process or in association with a bb̄ quark pair. The extensive use of Machine and Deep Learning techniques is a fundamental element of the discrimination between signal and background simulated events. A new network structure called a parameterised Neural Network (pNN) has been implemented, replacing a whole set of single neural networks, each trained at a specific mass hypothesis, with a single neural network able to generalise well and interpolate over the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95% confidence level expected upper limit on the production cross section times branching ratio for a generic φ boson decaying into a muon pair in the 130 to 1000 GeV range.
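
    A minimal sketch of the parameterised-network idea described above, assuming a Keras model and toy data; the layer sizes, feature count, and mass grid are illustrative and are not those used in the analysis:

    ```python
    # Illustrative sketch of a parameterised Neural Network (pNN): the signal mass
    # hypothesis is appended to the kinematic inputs, so a single network replaces
    # many networks trained at fixed masses. Layer sizes and data are toy choices.
    import numpy as np
    from tensorflow import keras

    n_features = 10                                   # assumed number of kinematic inputs
    inputs = keras.Input(shape=(n_features + 1,))     # features + mass hypothesis
    x = keras.layers.Dense(64, activation="relu")(inputs)
    x = keras.layers.Dense(64, activation="relu")(x)
    outputs = keras.layers.Dense(1, activation="sigmoid")(x)   # P(signal)
    pnn = keras.Model(inputs, outputs)
    pnn.compile(optimizer="adam", loss="binary_crossentropy")

    # Toy training set: in practice signal events carry their generated mass, while
    # background events are assigned a mass sampled from the same hypotheses, so the
    # mass input alone carries no discriminating power.
    masses = np.array([130., 200., 350., 500., 1000.])          # GeV, illustrative grid
    X = np.random.rand(2000, n_features)
    y = np.random.randint(0, 2, size=2000)
    m = np.random.choice(masses, size=2000)
    pnn.fit(np.column_stack([X, m / 1000.0]), y, epochs=2, verbose=0)

    # Evaluation at an arbitrary hypothesis, e.g. 400 GeV, interpolating between
    # the mass points seen in training.
    scores = pnn.predict(np.column_stack([X, np.full(len(X), 0.4)]), verbose=0)
    ```

    At evaluation time the same network is reused for any mass hypothesis simply by changing the value of the mass input, which is what allows interpolation across the full mass range.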

    CEPC Technical Design Report -- Accelerator (v2)

    The Circular Electron Positron Collider (CEPC) is a large scientific project initiated and hosted by China, fostered through extensive collaboration with international partners. The complex comprises four accelerators: a 30 GeV Linac, a 1.1 GeV Damping Ring, a Booster capable of achieving energies up to 180 GeV, and a Collider operating in several energy modes (Z, W, H, and ttbar). The Linac and Damping Ring are situated on the surface, while the Booster and Collider are housed in a 100 km circumference underground tunnel, strategically accommodating future expansion with provisions for a Super Proton Proton Collider (SPPC). The CEPC primarily serves as a Higgs factory. In its baseline design, with a synchrotron radiation (SR) power of 30 MW per beam, it can achieve a luminosity of 5×10³⁴ cm⁻² s⁻¹, resulting in an integrated luminosity of 13 ab⁻¹ for two interaction points over a decade and producing 2.6 million Higgs bosons. Increasing the SR power to 50 MW per beam expands the CEPC's capability to 4.3 million Higgs bosons, facilitating precise measurements of Higgs couplings at the sub-percent level, exceeding the precision expected from the HL-LHC by an order of magnitude. This Technical Design Report (TDR) follows the Preliminary Conceptual Design Report (Pre-CDR, 2015) and the Conceptual Design Report (CDR, 2018), comprehensively detailing the machine's layout and performance, physical design and analysis, technical systems design, R&D and prototyping efforts, and associated civil engineering aspects. Additionally, it includes a cost estimate and a preliminary construction timeline, establishing a framework for the forthcoming engineering design phase and site selection procedures. Construction is anticipated to begin around 2027-2028, pending government approval, with an estimated duration of 8 years, and experiments could potentially begin in the mid-2030s.
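
    As a rough consistency check of the quoted yield, and assuming an e+e- → ZH cross section of roughly 200 fb at the Higgs running point (an assumed round number, not a figure taken from the report), the integrated luminosity translates into the Higgs count as follows:

    ```python
    # Back-of-the-envelope check, not taken from the TDR: the number of Higgs bosons
    # produced is roughly the integrated luminosity times the ZH cross section.
    int_lumi_ab = 13.0                 # ab^-1 over a decade, two interaction points
    sigma_zh_fb = 200.0                # assumed e+e- -> ZH cross section (~200 fb)

    int_lumi_fb = int_lumi_ab * 1.0e3  # 1 ab^-1 = 1000 fb^-1
    n_higgs = int_lumi_fb * sigma_zh_fb
    print(f"{n_higgs:.2e} Higgs bosons")   # ~2.6e6, matching the quoted 2.6 million
    ```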

    Distributed processing of large remote sensing images using MapReduce - A case of Edge Detection

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
    Advances in sensor technology and the ever-increasing repositories of collected data are revolutionizing the mechanisms by which remotely sensed data are collected, stored and processed. This exponential growth of data archives and the increasing user demand for real- and near-real-time remote sensing data products have put pressure on remote sensing service providers to deliver the required services. The remote sensing community has recognized the challenge of processing large and complex satellite datasets to derive customized products. To address this high demand for computational resources, several efforts have been made in recent years to incorporate high-performance computing models into remote sensing data collection, management and analysis. This study adds impetus to these efforts by introducing a recent advancement in distributed computing, the MapReduce programming paradigm, to the field of remote sensing. The MapReduce model, developed by Google Inc., encapsulates the machinery of distributed computing in a single, highly simplified library. This simple but powerful programming model provides a distributed environment without requiring deep knowledge of parallel programming. This thesis presents MapReduce-based processing of large satellite images, with edge detection methods as the use case. Starting from the conceptual design of massive remote sensing image processing applications, a prototype of the edge detection methods was implemented on the MapReduce framework using its open-source implementation, the Apache Hadoop environment. The experiences of implementing MapReduce versions of the Sobel, Laplacian, and Canny edge detection methods are presented. This thesis also presents the results of evaluating the effect of MapReduce parallelization on the quality of the output, together with execution-time performance tests based on various performance metrics. The MapReduce algorithms were executed in a test environment on a heterogeneous cluster running the Apache Hadoop open-source software. The successful implementation of the MapReduce algorithms in a distributed environment demonstrates that MapReduce has great potential for scaling the processing of large remotely sensed images and for tackling more complex geospatial problems.
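
    A minimal, self-contained sketch of how edge detection can be phrased as a map step over image tiles and a reduce step that reassembles the result; it runs in-process with Python's built-in map rather than on Hadoop, and the tiling scheme, Sobel implementation, and names are illustrative rather than the thesis's actual code:

    ```python
    # Illustrative map/reduce decomposition of edge detection over image tiles.
    # Not the thesis's Hadoop implementation; tile ids and helpers are hypothetical.
    import numpy as np

    def sobel_magnitude(tile: np.ndarray) -> np.ndarray:
        """Apply 3x3 Sobel operators and return the gradient magnitude."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        h, w = tile.shape
        out = np.zeros_like(tile, dtype=float)
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                win = tile[i - 1:i + 2, j - 1:j + 2]
                out[i, j] = np.hypot(np.sum(win * kx), np.sum(win * ky))
        return out

    def map_phase(record):
        """Map: (tile_id, tile) -> (tile_id, edge_tile); independent per tile."""
        tile_id, tile = record
        return tile_id, sobel_magnitude(tile)

    def reduce_phase(mapped):
        """Reduce: collect edge tiles by id so they can be stitched into a mosaic."""
        return {tile_id: edges for tile_id, edges in mapped}

    # Usage: split a synthetic image into tiles, "map" each tile, then "reduce".
    image = np.random.rand(128, 128)
    tiles = [((r, c), image[r:r + 64, c:c + 64]) for r in (0, 64) for c in (0, 64)]
    edge_tiles = reduce_phase(map(map_phase, tiles))
    ```

    In a real Hadoop deployment, each map task would receive one tile (typically with a one-pixel halo so that edges at tile seams are computed correctly), and the reduce step would write the stitched mosaic back to the distributed file system.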

    Measurement of the Triple-Differential Cross-Section for the Production of Multijet Events using 139 fb⁻¹ of Proton-Proton Collision Data at √s = 13 TeV with the ATLAS Detector to Disentangle Quarks and Gluons at the Large Hadron Collider

    At hadron-hadron colliders, it is almost impossible to obtain pure samples of either quark- or gluon-initialized hadronic showers, as one always deals with a mixture of particle jets. The analysis presented in this dissertation aims to break this degeneracy by extracting the underlying fractions of (light) quarks and gluons through a measurement of the relative production rates of multijet events. A measurement of the triple-differential multijet cross section at a centre-of-mass energy of 13 TeV is presented, using an integrated luminosity of 139 fb⁻¹ of data collected with the ATLAS detector in proton-proton collisions at the Large Hadron Collider (LHC). The cross section is measured as a function of the transverse momentum pT, two categories of pseudorapidity η_rel defined by the relative orientation between the jets, and a Jet Sub-Structure (JSS) observable O_JSS, sensitive to the quark- or gluon-like nature of the hadronic shower, of the two leading-pT jets with 250 GeV < pT < 4.5 TeV and |η| < 2.1 in the event. The JSS variables studied within the context of this thesis can broadly be divided into two categories: one set of JSS observables is constructed by iteratively declustering and counting the jet's charged constituents; the second set is based on the output of Deep Neural Networks (DNNs) derived from the "deep sets" paradigm for permutation-invariant functions over sets, which are trained to discriminate between quark- and gluon-initialized showers in a supervised fashion. All JSS observables are measured using Inner Detector tracks with pT > 500 MeV and |η| < 2.5 to maintain strong correlations between detector- and particle-level objects. The reconstructed spectra are fully corrected for acceptance and detector effects, and the unfolded cross section is compared to various state-of-the-art parton shower Monte Carlo models. Several sources of systematic and statistical uncertainty are taken into account and fully propagated through the entire unfolding procedure onto the final cross section. The total uncertainty on the cross section varies between 5% and 20% depending on the region of phase space. The unfolded multi-differential cross sections are used to extract the underlying fractions and probability distributions of quark- and gluon-initialized jets in a solely data-driven, model-independent manner using a statistical demixing procedure ("jet topics"), originally developed as a tool for extracting emergent themes from an extensive corpus of text documents. The obtained fractions are model-independent and are based on an operational definition of quark and gluon jets that does not seek to assign a binary label on a jet-to-jet basis, but rather identifies quark- and gluon-related features at the level of individual distributions, avoiding common theoretical and conceptual pitfalls regarding the definition of quark and gluon jets. The total fraction of gluon-initialized jets in the multijet sample is (IRC-safely) measured to be 60.5 ± 0.4 (Stat) ⊕ 2.4 (Syst) % and 52.3 ± 0.4 (Stat) ⊕ 2.6 (Syst) % in the central and forward regions, respectively. Furthermore, the gluon fractions are extracted in several exclusive regions of transverse momentum.
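
    A minimal sketch of the statistical demixing ("jet topics") step referred to above, assuming two normalised histograms of a JSS observable from a more quark-enriched and a more gluon-enriched sample; the mutual-irreducibility assumption, the variable names, and the toy histograms are illustrative, not the analysis inputs:

    ```python
    # Illustrative jet-topics demixing: given two mixed, normalised histograms of a
    # JSS observable, the reducibility factors yield the underlying quark fractions
    # under the assumption that the pure quark and gluon distributions are mutually
    # irreducible. This is a sketch, not the analysis implementation.
    import numpy as np

    def reducibility(p, q, eps=1e-12):
        """kappa(p|q): largest factor of q that can be subtracted from p
        while keeping the remainder non-negative in every bin."""
        mask = q > eps
        return float(np.min(p[mask] / q[mask]))

    def quark_fractions(p1, p2):
        """Quark fractions (f1, f2) of two mixtures, with sample 1 assumed to be
        the more quark-enriched one (e.g. the forward-jet sample)."""
        k12 = reducibility(p1, p2)          # kappa(M1|M2) = (1 - f1) / (1 - f2)
        k21 = reducibility(p2, p1)          # kappa(M2|M1) = f2 / f1
        f1 = (1.0 - k12) / (1.0 - k12 * k21)
        return f1, k21 * f1

    # Toy example: build two mixtures from known "quark" and "gluon" shapes and
    # check that the input fractions are recovered. Gluon fraction = 1 - quark.
    bins = np.linspace(0.0, 1.0, 21)
    x = 0.5 * (bins[:-1] + bins[1:])
    quark = np.exp(-((x - 0.3) / 0.1) ** 2); quark /= quark.sum()
    gluon = np.exp(-((x - 0.7) / 0.1) ** 2); gluon /= gluon.sum()
    m1 = 0.8 * quark + 0.2 * gluon          # quark-enriched mixture
    m2 = 0.3 * quark + 0.7 * gluon          # gluon-enriched mixture
    print(quark_fractions(m1, m2))          # approximately (0.8, 0.3)
    ```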