A distributed analysis and monitoring framework for the Compact Muon Solenoid experiment and a pedestrian simulation
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The design of a parallel and distributed computing system is a very complicated task, requiring a detailed understanding of the design issues and of the theoretical and practical aspects of their solutions. Firstly, this thesis discusses in detail the major concepts and components required to make parallel and distributed computing a reality; a multithreaded and distributed framework capable of analysing the simulation data produced by pedestrian simulation software was developed. Secondly, this thesis discusses the origins and fundamentals of Grid computing and the motivations for its use in High Energy Physics. Access to the data produced by the Large Hadron Collider (LHC) has to be provided for more than five thousand scientists all over the world, and users who run analysis jobs on the Grid do not necessarily have expertise in Grid computing. Simple, user-friendly and reliable monitoring of analysis jobs is one of the key components of distributed analysis operations; reliable monitoring is a crucial component of the Worldwide LHC Computing Grid for providing the functionality and performance required by the LHC experiments. The CMS Dashboard Task Monitoring and the CMS Dashboard Job Summary monitoring applications were developed to serve the needs of the CMS community.
ReSCon '10, Research Student Conference: Book of Abstracts
The third SED Research Student Conference (ReSCon2010) was hosted over three days, 21-23 June 2010, in the Hamilton Centre at Brunel University. The conference consisted of oral and poster presentations showcasing the high quality and diversity of the research being conducted within the School of Engineering and Design. The abstracts and presentations were the result of ongoing research by postgraduate research students from the School. The conference is held annually, and ReSCon plays a key role in contributing to research and innovation within the School.
Application of Deep Learning techniques in the search for BSM Higgs bosons in the μμ final state in CMS
The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particles' masses. However, some aspects of the theory remain unresolved, suggesting the presence of new physics Beyond the Standard Model (BSM), with new particles produced at energy scales above the current experimental limits. The existence of additional Higgs bosons is in fact predicted by theoretical extensions of the SM, including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons H±, two neutral scalars h and H, and one pseudoscalar A. The work presented in this thesis is dedicated to the search for neutral non-Standard-Model Higgs bosons decaying to two muons in the model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of . This search is sensitive to neutral Higgs bosons produced either via the gluon fusion process or in association with a b quark pair. The extensive use of Machine and Deep Learning techniques is a fundamental element in the discrimination between simulated signal and background events. A new network structure called a parameterised Neural Network (pNN) has been implemented, replacing a whole set of single neural networks, each trained at a specific mass hypothesis, with a single neural network able to generalise well and interpolate across the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95% confidence level expected upper limit on the production cross section times branching ratio for a generic boson decaying into a muon pair in the 130 to 1000 GeV range.
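As an illustration of the pNN idea only (not the thesis implementation; the layer sizes, feature counts and sample sizes below are hypothetical placeholders), a minimal Keras sketch in which the mass hypothesis enters as one extra input feature, so a single network replaces the per-mass ensemble:

    # Sketch of a parameterised NN (pNN): the mass hypothesis is an extra
    # input feature, so one network covers the whole mass range.
    import numpy as np
    from tensorflow import keras

    n_features = 10  # hypothetical number of kinematic input variables

    inputs = keras.Input(shape=(n_features + 1,))  # features + mass hypothesis
    x = keras.layers.Dense(64, activation="relu")(inputs)
    x = keras.layers.Dense(64, activation="relu")(x)
    outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # P(signal)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Toy stand-ins for simulated events: signal events carry their generated
    # mass; background events are assigned a mass drawn at random from the
    # signal hypotheses, so the mass column alone cannot separate the classes.
    masses = np.array([130.0, 200.0, 400.0, 1000.0])
    X_sig = np.random.randn(4000, n_features)
    X_bkg = np.random.randn(4000, n_features)
    m_sig = np.random.choice(masses, size=(4000, 1))
    m_bkg = np.random.choice(masses, size=(4000, 1))
    X = np.vstack([np.hstack([X_sig, m_sig]), np.hstack([X_bkg, m_bkg])])
    y = np.concatenate([np.ones(4000), np.zeros(4000)])
    model.fit(X, y, epochs=5, batch_size=256, verbose=0)

    # At evaluation time the same network is scanned over any mass hypothesis,
    # including intermediate values it was never trained on:
    scores = model.predict(np.hstack([X_bkg, np.full((4000, 1), 300.0)]))

This is what enables the interpolation described above: mass hypotheses not seen in training still yield sensible discriminant shapes, rather than requiring one trained network per mass point.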
CEPC Technical Design Report -- Accelerator (v2)
The Circular Electron Positron Collider (CEPC) is a large scientific project initiated and hosted by China, fostered through extensive collaboration with international partners. The complex comprises four accelerators: a 30 GeV Linac, a 1.1 GeV Damping Ring, a Booster capable of achieving energies up to 180 GeV, and a Collider operating in varying energy modes (Z, W, H, and ttbar). The Linac and Damping Ring are situated on the surface, while the Booster and Collider are housed in a 100 km circumference underground tunnel, strategically accommodating future expansion with provisions for a Super Proton Proton Collider (SPPC). The CEPC primarily serves as a Higgs factory. In its baseline design, with a synchrotron radiation (SR) power of 30 MW per beam, it can achieve a luminosity of 5×10³⁴ cm⁻² s⁻¹, resulting in an integrated luminosity of 13 ab⁻¹ for two interaction points over a decade and the production of 2.6 million Higgs bosons. Increasing the SR power to 50 MW per beam expands the CEPC's capability to 4.3 million Higgs bosons, facilitating precise measurements of Higgs couplings at the sub-percent level and exceeding the precision expected from the HL-LHC by an order of magnitude. This Technical Design Report (TDR) follows the Preliminary Conceptual Design Report (Pre-CDR, 2015) and the Conceptual Design Report (CDR, 2018), comprehensively detailing the machine's layout and performance, physical design and analysis, technical systems design, R&D and prototyping efforts, and associated civil engineering aspects. Additionally, it includes a cost estimate and a preliminary construction timeline, establishing a framework for the forthcoming engineering design phase and site selection procedures. Construction is anticipated to begin around 2027-2028, pending government approval, with an estimated duration of eight years; experiments could then begin in the mid-2030s.
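As a rough consistency check of the quoted yields (the cross-section value is an assumption not stated in the abstract: roughly 200 fb is the commonly cited unpolarised e+e- → ZH cross section near 240 GeV), the Higgs count follows directly from the integrated luminosity:

    N_H \approx \sigma_{ZH} \, L_{\mathrm{int}} \approx 200\ \mathrm{fb} \times 13\,000\ \mathrm{fb}^{-1} \approx 2.6 \times 10^{6}

The 50 MW figure scales the same way through the correspondingly larger integrated luminosity.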
Distributed processing of large remote sensing images using MapReduce - A case of Edge Detection
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. Advances in sensor technology and the ever-increasing repositories of collected data are revolutionizing the mechanisms by which remotely sensed data are collected, stored and processed. This exponential growth of data archives and the increasing user demand for real- and near-real-time remote sensing data products have put pressure on remote sensing service providers to deliver the required services. The remote sensing community has recognized the challenge of processing large and complex satellite datasets to derive customized products, and several efforts have been made in the past few years towards incorporating high-performance computing models into remote sensing data collection, management and analysis. This study adds impetus to these efforts by introducing a recent advancement in distributed computing technology, the MapReduce programming paradigm, to the area of remote sensing. The MapReduce model, developed by Google Inc., encapsulates the machinery of distributed computing in a highly simplified single library; this simple but powerful programming model provides a distributed environment without requiring deep knowledge of parallel programming. This thesis presents MapReduce-based processing of large satellite images, with edge detection as a use-case scenario. A prototype of edge detection methods was implemented on the MapReduce framework using its open-source implementation, the Apache Hadoop environment, and the experiences of implementing the Sobel, Laplacian, and Canny edge detection methods in the MapReduce model are presented. This thesis also presents the results of evaluating the effect of MapReduce parallelization on the quality of the output, together with execution-time performance tests conducted using various performance metrics. The MapReduce algorithms were executed in a test environment on a heterogeneous cluster running the Apache Hadoop open-source software. The successful implementation of the MapReduce algorithms in a distributed environment demonstrates that MapReduce has great potential for scaling the processing of large remotely sensed images and tackling more complex geospatial problems.
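For illustration only (not code from the thesis; the function names and tile conventions are hypothetical), a minimal Python sketch of how a Sobel pass decomposes into map and reduce steps over image tiles:

    # Sketch of MapReduce edge detection: each mapper processes one image
    # tile independently; the reducer only stitches results back together.
    import numpy as np
    from scipy import ndimage

    def map_sobel(tile_id, tile):
        """tile: 2-D numpy array of pixel intensities for one image tile."""
        gx = ndimage.sobel(tile.astype(float), axis=0)  # gradient along rows
        gy = ndimage.sobel(tile.astype(float), axis=1)  # gradient along columns
        return tile_id, np.hypot(gx, gy)                # edge magnitude per pixel

    def reduce_stitch(tiles):
        """tiles: dict mapping (row, col) grid positions to processed tiles."""
        rows = 1 + max(r for r, _ in tiles)
        cols = 1 + max(c for _, c in tiles)
        return np.block([[tiles[(r, c)] for c in range(cols)] for r in range(rows)])

Because each pixel's gradient depends only on its immediate neighbours, the map step parallelizes cleanly; a full implementation would additionally overlap tiles by one pixel (a halo) so that edges at tile borders are computed correctly.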
Measurement of the Triple-Differential Cross-Section for the Production of Multijet Events using 139 fb⁻¹ of Proton-Proton Collision Data at √s = 13 TeV with the ATLAS Detector to Disentangle Quarks and Gluons at the Large Hadron Collider
At hadron-hadron colliders, it is almost impossible to obtain pure samples of either quark- or gluon-initialized hadronic showers, as one always deals with a mixture of particle jets. The analysis presented in this dissertation aims to break this degeneracy by extracting the underlying fractions of (light) quarks and gluons through a measurement of the relative production rates of multijet events.
A measurement of the triple-differential multijet cross section at a centre-of-mass energy of 13 TeV using an integrated luminosity of 139 fb⁻¹ of data collected with the ATLAS detector in proton-proton collisions at the Large Hadron Collider (LHC) is presented. The cross section is measured as a function of the transverse momentum p_T, two categories of pseudorapidity η_rel defined by the relative orientation between the jets, as well as a Jet Sub-Structure (JSS) observable O_JSS, sensitive to the quark- or gluon-like nature of the hadronic shower, of the two leading-p_T jets with 250 GeV < p_T < 4.5 TeV and |η| < 2.1 in the event.
The JSS variables, which have been studied within the context of this thesis, can broadly be divided into two categories: one set of JSS observables is constructed by iteratively declustering and counting the jet's charged constituents; the second set is based on the output predicted by Deep Neural Networks (DNNs) derived from the "deep sets" paradigm to implement permutation-invariant functions over sets, which are trained to discriminate between quark- and gluon-initialized showers in a supervised fashion.
All JSS observables are measured based on Inner Detector tracks with p_T > 500 MeV and |η| < 2.5 to maintain strong correlations between detector- and particle-level objects. The reconstructed spectra are fully corrected for acceptance and detector effects, and the unfolded cross section is compared to various state-of-the-art parton shower Monte Carlo models. Several sources of systematic and statistical uncertainty are taken into account and fully propagated through the entire unfolding procedure onto the final cross section. The total uncertainty on the cross section varies between 5% and 20%, depending on the region of phase space.
The unfolded multi-differential cross sections are used to extract the underlying fractions and probability distributions of quark- and gluon-initialized jets in a solely data-driven, model-independent manner using a statistical demixing procedure ("jet topics"), which was originally developed as a tool for extracting emergent themes in an extensive corpus of text-based documents. The obtained fractions are model-independent and rest on an operational definition of quark and gluon jets that does not seek to assign a binary label on a jet-to-jet basis, but rather identifies quark- and gluon-related features at the level of individual distributions, avoiding common theoretical and conceptual pitfalls in the definition of quark and gluon jets.
The total fraction of gluon-initialized jets in the multijet sample is (IRC-safely) measured to be 60.5 ± 0.4 (stat) ± 2.4 (syst) % and 52.3 ± 0.4 (stat) ± 2.6 (syst) % in the central and forward regions, respectively. Furthermore, the gluon fractions are extracted in several exclusive regions of transverse momentum.
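For illustration (not the dissertation's code), a minimal numpy sketch of the demixing step under the standard two-sample "jet topics" construction; p1 and p2 stand for the binned, normalised distributions of one JSS observable in two mixed samples (e.g. the two η_rel categories), and all names are hypothetical:

    # Jet topics: extract two "pure" distributions and mixture fractions from
    # two mixed samples, using only the minima of their bin-by-bin ratios.
    import numpy as np

    def jet_topics(p1, p2, eps=1e-12):
        """p1, p2: 1-D arrays of binned, normalised probability densities."""
        k12 = np.min(p1 / np.maximum(p2, eps))  # reducibility factor kappa(1|2)
        k21 = np.min(p2 / np.maximum(p1, eps))  # reducibility factor kappa(2|1)
        t1 = (p1 - k12 * p2) / (1.0 - k12)      # first topic distribution
        t2 = (p2 - k21 * p1) / (1.0 - k21)      # second topic distribution
        f1 = (1.0 - k12) / (1.0 - k12 * k21)    # fraction of topic 1 in sample 1
        f2 = k21 * f1                           # fraction of topic 1 in sample 2
        return t1, t2, f1, f2

Which topic corresponds to quark-like and which to gluon-like jets is assigned afterwards from physics expectations (e.g. the forward sample being more quark-enriched); the procedure itself never labels individual jets, consistent with the operational definition described above.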