
    Search for the Higgs Boson in H->WW Decays at the D0 Experiment and Precise Muon Tracking

    This thesis describes the search for the Higgs boson in H->WW(*) decays in proton-antiproton collisions with data taken at the D0 experiment at the Tevatron collider. The data set was taken between April 2002 and September 2003 and has an integrated luminosity of approximately 147 pb^-1. An analysis of the di-muon decay channel of the W pairs was developed that can be scaled to higher luminosities, up to the full data set to be taken at the Tevatron collider until 2009. The number of events observed in the current data set is consistent with expectations from standard model backgrounds. Since no excess is observed, cross-section limits at 95% confidence level for H->WW(*) production have been calculated, both standalone and in combination with other lepton decay channels. The production of W pairs is a major background in the search for H->WW(*) decays. Hence, a first measurement of the WW production cross-section with the D0 experiment is presented. Experience gained during this analysis has shown that precise track reconstruction is an essential tool for both measurements. This thesis closes with a contribution to precise tracking in the ATLAS experiment at the future Large Hadron Collider (LHC): an alignment system for ATLAS muon drift chambers at the cosmic ray measurement facility at LMU Munich is presented.

    PHYSLITE - A new reduced common data format for ATLAS

    The High Luminosity LHC (HL-LHC) era brings unprecedented computing challenges that call for novel approaches to reduce the amount of real and Monte Carlo-simulated data that is stored, while continuing to support the rich physics program of the ATLAS experiment. With the beginning of LHC Run 3, ATLAS introduced a new common data format, PHYS, which replaces most of the analysis-specific formats used in Run 2 and therefore significantly reduces disk storage. ATLAS also launched the prototype of another common format, PHYSLITE, which is about a third of the size of PHYS. PHYSLITE will be the main format for ATLAS at the HL-LHC and aims to serve 80% of all physics analyses. To simplify analysis workloads and further reduce disk usage, it is designed to largely replace user-defined analysis n-tuples and consequently contains pre-calibrated objects. Various forms of validation are in place to ensure correct functionality for users. Developments continue towards the HL-LHC to improve the PHYSLITE format further.
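    Because PHYSLITE files ship pre-calibrated objects, a lightweight way to look at their content is to open them directly with uproot. The sketch below is a minimal illustration under that assumption; the file name and the AnalysisElectrons branch names are illustrative placeholders, not something prescribed by the abstract.

    ```python
    # Minimal sketch: peek into a PHYSLITE file with uproot.
    # The file name and branch names are illustrative assumptions.
    import uproot

    with uproot.open("DAOD_PHYSLITE.example.root") as f:
        tree = f["CollectionTree"]  # main event tree in ATLAS DAODs
        # list branches belonging to the (assumed) AnalysisElectrons container
        print(tree.keys(filter_name="AnalysisElectrons*")[:10])
        # read the pre-calibrated electron transverse momenta for a few events
        pt = tree["AnalysisElectronsAuxDyn.pt"].array(entry_stop=100)
        print(pt)
    ```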

    ATLAS Data Analysis using a Parallel Workflow on Distributed Cloud-based Services with GPUs

    A new type of parallel workflow is developed for the ATLAS experiment at the Large Hadron Collider that makes use of distributed computing combined with a cloud-based infrastructure. It has been developed for a specific type of analysis using ATLAS data, popularly referred to as Simulation-Based Inference (SBI). The JAX library is used in parts of the workflow to compute gradients and to accelerate program execution through just-in-time compilation, which becomes essential in a full SBI analysis and can also offer significant speed-ups in more traditional types of analysis.
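    As a hedged illustration of the JAX pattern the abstract refers to (automatic differentiation combined with just-in-time compilation), the sketch below differentiates a toy loss with jax.grad and compiles it with jax.jit. It shows only the mechanism; the loss function is made up and is not the actual ATLAS SBI workflow.

    ```python
    # Toy example of the jax.grad + jax.jit pattern; the loss function is
    # illustrative only and not part of the ATLAS analysis.
    import jax
    import jax.numpy as jnp

    def loss(params, data):
        # negative log-likelihood of a Gaussian with learnable mean and width
        mu, log_sigma = params[0], params[1]
        sigma = jnp.exp(log_sigma)
        return jnp.mean(0.5 * ((data - mu) / sigma) ** 2 + log_sigma)

    grad_fn = jax.jit(jax.grad(loss))  # compile once, reuse on new data
    data = jnp.array([0.1, -0.3, 0.7, 1.2])
    print(grad_fn(jnp.array([0.0, 0.0]), data))  # gradient w.r.t. (mu, log_sigma)
    ```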

    Extending Rucio with modern cloud storage support

    Rucio is a software framework designed to facilitate scientific collaborations in efficiently organising, managing, and accessing extensive volumes of data through customizable policies. The framework enables data distribution across globally distributed locations and heterogeneous data centres, integrating various storage and network technologies into a unified federated entity. Rucio offers advanced features like distributed data recovery and adaptive replication, and it exhibits high scalability, modularity, and extensibility. Originally developed to meet the requirements of the high-energy physics experiment ATLAS, Rucio has been continuously expanded to support LHC experiments and diverse scientific communities. Recent R&D projects within these communities have evaluated the integration of both private and commercially provided cloud storage systems, leading to the development of additional functionalities for seamless integration within Rucio. Furthermore, the underlying systems, FTS and GFAL/Davix, have been extended to cater to specific use cases. This contribution focuses on the technical aspects of this work, particularly the challenges encountered in building a generic interface for self-hosted cloud storage, such as MinIO or CEPH S3 Gateway, and established providers like Google Cloud Storage and Amazon Simple Storage Service. Additionally, the integration of decentralised clouds like SEAL is explored. Key aspects, including authentication and authorisation, direct and remote access, throughput and cost estimation, are highlighted, along with shared experiences in daily operations.
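    To make the storage side concrete, the sketch below talks to an S3-compatible endpoint of the kind mentioned above (MinIO, CEPH S3 Gateway, Amazon S3) using boto3. The endpoint, bucket, object key and credentials are placeholders; this illustrates the generic S3 interface only, not Rucio's own integration code.

    ```python
    # Minimal sketch of the generic S3 interface exposed by MinIO, CEPH S3
    # Gateway or Amazon S3. Endpoint, credentials, bucket and key are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.org:9000",  # self-hosted or commercial endpoint
        aws_access_key_id="PLACEHOLDER_KEY",
        aws_secret_access_key="PLACEHOLDER_SECRET",
    )

    # upload one replica, then list what the bucket contains
    s3.upload_file("datafile.root", "example-bucket", "scope/datafile.root")
    for obj in s3.list_objects_v2(Bucket="example-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])
    ```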

    The ATLAS experiment software on ARM

    With the increased dataset obtained during Run 3 of the LHC at CERN, and the even larger expected increase of the dataset by more than one order of magnitude for the HL-LHC, the ATLAS experiment is reaching the limits of its current data processing model in terms of traditional CPU resources based on x86_64 architectures, and an extensive program of software upgrades towards the HL-LHC has been set up. The ARM architecture is becoming a competitive and energy-efficient alternative. Some surveys indicate its increased presence in HPCs and commercial clouds, and some WLCG sites have expressed their interest. Chip makers are also developing their next-generation solutions on ARM architectures, sometimes combining ARM and GPU processors in the same chip. Consequently, it is important that the ATLAS software embraces the change and is able to successfully exploit this architecture. We report on the successful porting to ARM of the Athena software framework, which is used by ATLAS for both online and offline computing operations. Furthermore, we report on the successful validation of simulation workflows running on ARM resources. For this we have set up an ATLAS Grid site using ARM-compatible middleware and containers on Amazon Web Services (AWS) ARM resources. The ARM version of Athena is fully integrated in the regular software build system and distributed in the same way as other software releases. In addition, the workflows have been integrated into the HEPscore benchmark suite, which is the planned WLCG-wide replacement for the HepSpec06 benchmark used for Grid site pledges. In the overall porting process we have used resources on AWS, Google Cloud Platform (GCP) and CERN. A performance comparison of different architectures and resources will be discussed.

    Search for direct pair production of the top squark in all-hadronic final states in proton-proton collisions at √s = 8 TeV with the ATLAS detector

    The results of a search for direct pair production of the scalar partner to the top quark using an integrated luminosity of 20.1 fb−1 of proton-proton collision data at √s = 8 TeV recorded with the ATLAS detector at the LHC are reported. The top squark is assumed to decay via t̃ → tχ̃₁⁰ or t̃ → bχ̃₁± → bW(*)χ̃₁⁰, where χ̃₁⁰ (χ̃₁±) denotes the lightest neutralino (chargino) in supersymmetric models. The search targets a fully-hadronic final state in events with four or more jets and large missing transverse momentum. No significant excess over the Standard Model background prediction is observed, and exclusion limits are reported in terms of the top squark and neutralino masses and as a function of the branching fraction of t̃ → tχ̃₁⁰. For a branching fraction of 100%, top squark masses in the range 270–645 GeV are excluded for χ̃₁⁰ masses below 30 GeV. For a branching fraction of 50% to either t̃ → tχ̃₁⁰ or t̃ → bχ̃₁±, and assuming the χ̃₁± mass to be twice the χ̃₁⁰ mass, top squark masses in the range 250–550 GeV are excluded for χ̃₁⁰ masses below 60 GeV.

    Accelerating science: The usage of commercial clouds in ATLAS Distributed Computing

    The ATLAS experiment at CERN is one of the largest scientific machines built to date and will have ever-growing computing needs as the Large Hadron Collider collects an increasingly larger volume of data over the next 20 years. ATLAS is conducting R&D projects on Amazon Web Services and Google Cloud as complementary resources for distributed computing, focusing on some of the key features of commercial clouds: lightweight operation, elasticity and availability of multiple chip architectures. The proof-of-concept phases have concluded with the cloud-native, vendor-agnostic integration with the experiment’s data and workload management frameworks. Google Cloud has been used to evaluate elastic batch computing, ramping up ephemeral clusters of up to O(100k) cores to process tasks requiring quick turnaround. Amazon Web Services has been exploited for the successful physics validation of the Athena simulation software on ARM processors. We have also set up an interactive facility for physics analysis allowing end-users to spin up private, on-demand clusters for parallel computing with up to 4,000 cores, or to run GPU-enabled notebooks and jobs for machine learning applications. The success of the proof-of-concept phases has led to the extension of the Google Cloud project, where ATLAS will study the total cost of ownership of a production cloud site during 15 months with 10k cores on average, fully integrated with distributed grid computing resources, and will continue the R&D projects.
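    The abstract does not name the toolkit behind these on-demand analysis clusters; as one hedged illustration of the pattern (an end-user attaching to a private cluster and fanning work out across it), the sketch below uses dask.distributed. The scheduler address and the per-chunk work are placeholders.

    ```python
    # Illustrative only: the scale-out toolkit is not specified in the abstract.
    # dask.distributed is one common choice for this pattern; the scheduler
    # address and the per-chunk computation are placeholders.
    from dask.distributed import Client

    client = Client("tcp://scheduler.analysis-facility.example:8786")

    def process_chunk(chunk_id):
        # stand-in for per-chunk event processing
        return sum(range(chunk_id * 1000, (chunk_id + 1) * 1000))

    futures = client.map(process_chunk, range(100))  # fan out across the cluster
    print(sum(client.gather(futures)))               # collect partial results
    client.close()
    ```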

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb−1 of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.

    Search for single production of vector-like quarks decaying into Wb in pp collisions at √s = 8 TeV with the ATLAS detector
