
    Scalable Declarative HEP Analysis Workflows for Containerised Compute Clouds

    We describe a novel approach for experimental High-Energy Physics (HEP) data analyses that is centred on the declarative rather than the imperative paradigm for describing analysis computational tasks. The analysis process can be structured as a Directed Acyclic Graph (DAG), where each graph vertex represents a unit of computation with its inputs and outputs, and the graph edges describe the interconnection of the various computational steps. We have developed REANA, a platform for reproducible data analyses, which supports several such DAG workflow specifications. The REANA platform parses the analysis workflow and dispatches its computational steps to various supported computing backends (Kubernetes, HTCondor, Slurm). The focus on declarative rather than imperative programming enables researchers to concentrate on the problem domain at hand without having to think about implementation details such as scalable job orchestration. The declarative programming approach is further exemplified by a multi-level job cascading paradigm implemented in the Yadage workflow specification language. We present two recent LHC particle physics analyses, ATLAS searches for dark matter and CMS jet energy correction pipelines, where the declarative approach was successfully applied. We argue that the declarative approach to data analyses, combined with recent advances in container technology, facilitates the portability of computational data analyses to various compute backends, enhancing the reproducibility and knowledge preservation of particle physics data analyses.
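
    To make the declarative DAG idea concrete, here is a minimal, self-contained Python sketch (our illustration, not the actual REANA or Yadage API): each step declares its container image, command, and dependencies, and a scheduler derives the execution order from the graph alone.

    ```python
    # Toy declarative workflow: steps declare *what* to run and what they
    # depend on; the platform decides *how* and *where* to run them.
    # All step names, images, and commands below are illustrative.
    from graphlib import TopologicalSorter

    steps = {
        "skim": {"image": "analysis-env:1.0", "cmd": "skim.py raw.root skim.root", "needs": []},
        "fit":  {"image": "analysis-env:1.0", "cmd": "fit.py skim.root fit.json",  "needs": ["skim"]},
        "plot": {"image": "analysis-env:1.0", "cmd": "plot.py fit.json plots.pdf", "needs": ["fit"]},
    }

    # Derive a valid execution order from the declared dependencies (the DAG edges).
    order = TopologicalSorter({name: set(s["needs"]) for name, s in steps.items()})
    for name in order.static_order():
        step = steps[name]
        # A real platform would dispatch this step to Kubernetes, HTCondor, or Slurm.
        print(f"dispatch '{name}' in {step['image']}: {step['cmd']}")
    ```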

    Reinterpretation of LHC Results for New Physics: Status and recommendations after Run 2

    We report on the status of efforts to improve the reinterpretation of searches and measurements at the LHC in terms of models for new physics, in the context of the LHC Reinterpretation Forum. We detail current experimental offerings in direct searches for new particles, measurements, technical implementations and Open Data, and provide a set of recommendations for further improving the presentation of LHC results in order to better enable reinterpretation in the future. We also provide a brief description of existing software reinterpretation frameworks and recent global analyses of new physics that make use of the current data.

    Calibration of SuperCDMS dark matter detectors for low-mass WIMPs

    Observational evidence suggests that the majority of mass in the universe takes the form of non-luminous "dark matter". The Super Cryogenic Dark Matter Search (SuperCDMS) is a direct-detection dark matter experiment that searches primarily for a well-motivated dark matter candidate known as the weakly interacting massive particle (WIMP). The experiment looks for an above-background excess of nuclear recoil events in cryogenic solid-state detectors that could be attributed to WIMP-nucleon collisions. The most recent SuperCDMS run at the Soudan underground laboratory set a world-leading limit on the spin-independent WIMP-nucleon cross section for WIMP masses as low as ~3 GeV/c², and the next installation of the experiment at SNOLAB aims to be sensitive to WIMP masses below 1 GeV/c². To better understand the response of solid-state germanium detectors to low-mass WIMPs, "photoneutron" calibration data were taken at the Soudan laboratory in Minnesota by passing quasi-monoenergetic neutrons through SuperCDMS detectors. Gamma rays used in the photoneutron production process create an overwhelmingly dominant background of electron recoil events in the detector. This gamma background is measured directly with regular "neutron-off" data-taking periods during which the neutron production mechanism is removed. We compare the observed electron and nuclear recoil spectra with Geant4-simulated spectra to obtain a model-dependent calibration of the nuclear recoil energy scale of the detectors. The calibration is performed using a negative log likelihood fit to a parameterized Lindhard ionization yield model. The fit includes a semi-analytical model of the gamma background component obtained from the neutron-off data.
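
    As a rough illustration of the fit ingredients described above, the sketch below combines the standard parameterized Lindhard ionization yield for germanium with a binned Poisson negative log likelihood. The templates, binning, and starting values are placeholders of our own, not the analysis's actual inputs.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    Z = 32  # germanium

    def lindhard_yield(e_r_kev, k):
        """Parameterized Lindhard ionization yield for nuclear recoils of
        energy e_r_kev (keV); k is the free yield parameter being fit."""
        eps = 11.5 * e_r_kev * Z ** (-7.0 / 3.0)
        g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
        return k * g / (1.0 + k * g)

    def nll(params, energies, counts, nr_template, gamma_template):
        """Binned Poisson negative log likelihood (up to a constant).
        The nuclear-recoil component is reweighted by the yield model;
        the gamma component is fixed in shape (as measured from the
        neutron-off data) and floats only in normalization."""
        k, s, b = params
        mu = s * nr_template * lindhard_yield(energies, k) + b * gamma_template
        mu = np.clip(mu, 1e-12, None)  # guard against log(0)
        return np.sum(mu - counts * np.log(mu))

    # Toy inputs (placeholders): bin centres in keV and made-up templates.
    energies = np.linspace(1.0, 20.0, 20)
    nr_template = np.exp(-energies / 5.0)
    gamma_template = np.full_like(energies, 0.5)
    rng = np.random.default_rng(0)
    counts = rng.poisson(10 * nr_template * lindhard_yield(energies, 0.157)
                         + 5 * gamma_template)

    fit = minimize(nll, x0=[0.15, 10.0, 5.0],
                   args=(energies, counts, nr_template, gamma_template),
                   method="Nelder-Mead")
    print("best-fit Lindhard k:", fit.x[0])
    ```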

    Search for Dark Matter Produced in pp Collisions with the ATLAS Detector

    Longstanding evidence from observational astronomy indicates that non-luminous "dark matter" constitutes the majority of all matter in the universe, yet this mysterious form of matter continues to elude experimental detection. This dissertation presents a search for dark matter at the Large Hadron Collider using 139 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of √s = 13 TeV, recorded with the ATLAS detector from 2015 to 2018. The search targets a final-state topology in which dark matter is produced from the proton-proton collisions in association with a pair of W bosons, one of which decays to a pair of quarks and the other to a lepton-neutrino pair. The dark matter is expected to pass invisibly through the detector, resulting in an imbalance of momentum in the plane transverse to the beam line. The search is optimized to test the Dark Higgs model, which predicts a signature of dark matter production in association with the emission of a hypothesized new particle referred to as the Dark Higgs boson. The Dark Higgs boson is predicted to decay to a W boson pair via a small mixing with the Standard Model Higgs boson discovered in 2012. Collisions that exhibit the targeted final-state topology are selected for the search, and an approximate mass of the hypothetical Dark Higgs boson is reconstructed from the particles in each collision. A search is performed by looking for a deviation between the distributions of the reconstructed Dark Higgs boson masses and the Standard Model predictions for the selected collisions. The data are found to be consistent with the Standard Model prediction, and the results are used to constrain the parameters of the Dark Higgs model. This search complements and extends the reach of existing searches for the Dark Higgs model by the ATLAS and CMS collaborations.
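
    For reference, the momentum imbalance mentioned above is quantified by the missing transverse momentum, schematically the negative vector sum of the transverse momenta of all reconstructed objects (a standard definition, written here in LaTeX):

    ```latex
    \vec{p}_{\mathrm{T}}^{\,\mathrm{miss}} \;=\; -\sum_{i\,\in\,\text{reconstructed objects}} \vec{p}_{\mathrm{T},i},
    \qquad
    E_{\mathrm{T}}^{\mathrm{miss}} \;=\; \bigl|\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}\bigr|
    ```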

    Searches for dark matter with the ATLAS detector

    The presence of a non-baryonic Dark Matter (DM) component in the Universe is inferred from the observation of its gravitational interaction. If Dark Matter interacts weakly with the Standard Model (SM), it could be produced at the LHC. The ATLAS experiment has developed a broad search program for DM candidates, including resonance searches for the mediator that would couple DM to the SM, searches with large missing transverse momentum produced in association with other particles (light and heavy quarks, photons, Z and H bosons), called mono-X searches, and searches where the Higgs boson provides a portal to Dark Matter, leading to invisible Higgs decays. The results of recent searches on 13 TeV pp data, their interplay and interpretation are presented.

    Alternative Fuels and Powertrains to Decarbonize Heavy Duty Trucking

    Amid mounting urgency to rapidly decarbonize the global economy in the coming decades, the trucking industry sits on the cusp of a dramatic transition to low-carbon alternative fuels and powertrains. Technological trends suggest that multiple solutions will emerge in the near term to fill different niches of the trucking market. Faced with a diverse and continually evolving space of alternatives, each with its own set of up-front costs and risks, industry stakeholders report decision paralysis when it comes to navigating the transition. In the coming years, a "valley of death" period is anticipated, during which the up-front costs of purchasing alternative vehicles and installing refueling infrastructure will be high and the availability of public infrastructure will be limited. Governments have a crucial role to play in providing the regulation and incentives needed to ensure that companies and customers are able and willing to pay higher costs and take on financial risk to bridge the valley of death. In addition, owing to their geographical flexibility and capacity to take on up-front costs and risk, there is an opportunity for large fleets to leverage first-mover advantages in the space and take the lead in piloting and adopting alternative fuels and powertrains. Informed by perspectives from industry members of the MIT Climate & Sustainability Consortium (MCSC), and insights shared by invited experts from academia and industry during a study panel hosted by the MCSC, we identify near-term priorities to support industry stakeholders in overcoming decision paralysis, navigating the valley of death, and positioning trucking fleets to thrive as the industry transitions to alternative fuels and powertrains.

    Cloudscheduler: a VM provisioning system for a distributed compute cloud

    We describe a high-throughput computing system for running jobs on public and private computing clouds using the HTCondor job scheduler and the cloudscheduler VM provisioning service. The distributed cloud computing system is designed to simultaneously use dedicated and opportunistic cloud resources at local and remote locations. It has been used for large-scale production particle physics workloads for many years, using thousands of cores on three continents. Cloudscheduler has been modernized to take advantage of new software designs, improved operating system capabilities and support packages. The result is a more resilient and scalable system with expanded capabilities.
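
    The core provisioning idea can be sketched as a simple control loop (a conceptual illustration of the architecture, not cloudscheduler's actual code): watch the HTCondor queue and boot or retire worker VMs on the configured clouds to match demand.

    ```python
    # Conceptual provisioning loop; the HTCondor query and cloud API calls
    # are stubbed out, and all names and thresholds are illustrative.
    import time

    POLL_SECONDS = 60
    BOOT_THRESHOLD = 1  # boot workers whenever at least this many jobs are idle

    def idle_job_count() -> int:
        """Stub for a query to the HTCondor schedd for idle (queued) jobs."""
        return 0

    def boot_worker(cloud: str) -> None:
        """Stub for a cloud API call that starts a VM which joins the
        HTCondor pool when it boots."""
        print(f"booting worker on {cloud}")

    def retire_idle_workers() -> None:
        """Stub for shutting down workers that have been idle too long."""
        pass

    # Dedicated resources first, opportunistic resources as overflow.
    clouds = ["local-openstack", "remote-commercial"]

    while True:
        if idle_job_count() >= BOOT_THRESHOLD:
            boot_worker(clouds[0])  # a real system would pick a cloud by policy
        retire_idle_workers()
        time.sleep(POLL_SECONDS)
    ```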

    Using Kubernetes as an ATLAS computing site

    In recent years containerization has revolutionized cloud environments, providing a secure, lightweight, standardized way to package and execute software. Solutions such as Kubernetes enable the orchestration of containers in a cluster, including for the purpose of job scheduling. Kubernetes is becoming a de facto standard, available at all major cloud computing providers, and is gaining increased attention from some WLCG sites. In particular, CERN IT has integrated Kubernetes into their cloud infrastructure by providing an interface to instantly create Kubernetes clusters, and the University of Victoria is pursuing an infrastructure-as-code approach to deploying Kubernetes as a flexible and resilient platform for running services and delivering resources. The ATLAS experiment at the LHC has partnered with CERN IT and the University of Victoria to explore and demonstrate the feasibility of running an ATLAS computing site directly on Kubernetes, replacing all grid computing services. We have interfaced ATLAS' workload submission engine PanDA with Kubernetes to directly submit and monitor the status of containerized jobs. We describe the integration and deployment details, and focus on the lessons learned from running a wide variety of ATLAS production payloads on Kubernetes using clusters of several thousand cores at CERN and the Tier 2 computing site in Victoria.
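
    As a hedged sketch of what directly submitting containerized jobs looks like, the snippet below creates a Kubernetes Job with the official Python client and polls its status. The image, namespace, and command are illustrative placeholders, and the real PanDA-to-Kubernetes integration is considerably more involved.

    ```python
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="atlas-payload-001"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="payload",
                            image="atlas/analysis-base:latest",  # illustrative image
                            command=["/bin/sh", "-c", "echo running payload"],
                        )
                    ],
                )
            )
        ),
    )

    batch = client.BatchV1Api()
    batch.create_namespaced_job(namespace="atlas", body=job)

    # The Job status can then be polled to monitor the containerized payload.
    status = batch.read_namespaced_job_status(name="atlas-payload-001",
                                              namespace="atlas")
    print(status.status.succeeded, status.status.failed)
    ```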

    Observation of WWW Production in pp Collisions at √s = 13 TeV with the ATLAS Detector

    This Letter reports the observation of WWW production and a measurement of its cross section using 139 fb⁻¹ of proton-proton collision data recorded at a center-of-mass energy of 13 TeV by the ATLAS detector at the Large Hadron Collider. Events with two same-sign leptons (electrons or muons) and at least two jets, as well as events with three charged leptons, are selected. A multivariate technique is then used to discriminate between signal and background events. Events from WWW production are observed with a significance of 8.0 standard deviations, where the expectation is 5.4 standard deviations. The inclusive WWW production cross section is measured to be 820 ± 100 (stat) ± 80 (syst) fb, approximately 2.6 standard deviations from the predicted cross section of 511 ± 18 fb calculated at next-to-leading-order QCD and leading-order electroweak accuracy.
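
    As a rough cross-check of the quoted tension (our arithmetic, not the Letter's): naively combining the statistical, systematic, and theory uncertainties in quadrature gives

    ```latex
    z \;\approx\; \frac{820 - 511}{\sqrt{100^2 + 80^2 + 18^2}}
      \;=\; \frac{309}{\sqrt{16724}} \;\approx\; 2.4,
    ```

    close to the quoted 2.6 standard deviations, which come from the full likelihood treatment.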