
    Dynamic Resource Extension for Data Intensive Computing with Specialized Software Environments on HPC Systems

    Modern High Energy Physics (HEP) requires large-scale processing of extensive amounts of scientific data. The needed computing resources are currently provided statically by HEP-specific computing centers. To increase the number of available resources, for example to cover peak loads, the HEP computing development team at KIT concentrates on the dynamic integration of additional computing resources into the HEP infrastructure. To this end, we developed ROCED, a tool to dynamically request and integrate computing resources, including resources at HPC centers and commercial cloud providers. Since these resources usually do not support HEP software natively, we rely on virtualization and container technologies, which allow us to run HEP workflows on these so-called opportunistic resources. Additionally, we study the efficient processing of huge amounts of data on a distributed infrastructure, where the data is usually stored at HEP-specific data centers and is accessed remotely over the WAN. To optimize the overall data throughput and to increase the CPU efficiency, we are currently developing an automated caching system for frequently used data that is transparently integrated into the distributed HEP computing infrastructure.
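
    The provisioning logic such a tool implements can be summarized as a simple control loop: compare batch-system demand against the currently available slots and boot or drain opportunistic resources accordingly. Below is a minimal Python sketch of this idea; all class and method names (OpportunisticSite, provisioning_cycle, and so on) are hypothetical illustrations, not ROCED's actual API.

        class OpportunisticSite:
            """Adapter for one resource provider (HPC centre or cloud)."""

            def __init__(self, name, max_slots):
                self.name = name
                self.max_slots = max_slots
                self.booted = 0

            def boot(self, n):
                # A real adapter would submit VM/container requests that join
                # the batch pool once their HEP software image has started.
                n = min(n, self.max_slots - self.booted)
                self.booted += n
                return n

            def drain(self, n):
                n = min(n, self.booted)
                self.booted -= n
                return n


        def provisioning_cycle(queued_jobs, idle_slots, site):
            """One cycle: scale the opportunistic site to match demand."""
            demand = queued_jobs - idle_slots
            if demand > 0:
                site.boot(demand)
            elif demand < 0:
                site.drain(-demand)


        site = OpportunisticSite("hpc-centre", max_slots=500)
        provisioning_cycle(queued_jobs=120, idle_slots=20, site=site)
        print(site.booted)  # 100 slots requested to cover the backlog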

    Advancing throughput of HEP analysis work-flows using caching concepts

    High throughput and short turnaround cycles are core requirements for efficient processing of data-intensive end-user analyses in High Energy Physics (HEP). Together with the tremendously increasing amount of data to be processed, this leads to enormous challenges for HEP storage systems, networks, and the data distribution to computing resources for end-user analyses. Bringing data close to the computing resource is a very promising approach to solve throughput limitations and improve the overall performance. However, achieving data locality by placing multiple conventional caches inside a distributed computing infrastructure leads to redundant data placement and inefficient usage of the limited cache volume. The solution is a coordinated placement of critical data on computing resources, which enables matching each process of an analysis work-flow to its most suitable worker node in terms of data locality and, thus, reduces the overall processing time. This coordinated distributed caching concept was realized at KIT by developing the coordination service NaviX, which connects an XRootD cache proxy infrastructure with an HTCondor batch system. We give an overview of the coordinated distributed caching concept and the experience collected with a prototype system based on NaviX.
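
    The core of such coordinated caching is a matchmaking step that ranks worker nodes by how much of a job's input data they already hold locally. The following Python sketch illustrates this idea under simplified assumptions; the function names and data structures are invented for illustration and do not reflect NaviX's actual interfaces.

        def cached_fraction(job_inputs, node_cache):
            """Fraction of the job's input bytes already cached on a node."""
            total = sum(job_inputs.values())
            hit = sum(size for f, size in job_inputs.items() if f in node_cache)
            return hit / total if total else 0.0


        def best_node(job_inputs, caches):
            """Pick the worker node with the highest cache overlap."""
            return max(caches, key=lambda n: cached_fraction(job_inputs, caches[n]))


        job = {"/store/file_a.root": 4_000, "/store/file_b.root": 6_000}  # bytes
        caches = {
            "wn01": {"/store/file_a.root"},
            "wn02": {"/store/file_a.root", "/store/file_b.root"},
        }
        print(best_node(job, caches))  # wn02: full locality, no remote reads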

    HEPScore: A new CPU benchmark for the WLCG

    HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark that is currently used by the WLCG for procurement, computing resource pledges, and performance studies. The development of the new benchmark, based on HEP applications and workloads, has involved many contributions from software developers, data analysts, experiment experts, representatives of several WLCG computing centres, and the WLCG HEPScore Deployment Task Force. In this contribution, we review the selection of workloads and the validation of the new HEPScore benchmark. (Paper submitted to the proceedings of the Computing in HEP Conference 2023, Norfolk.)
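
    A benchmark built from several independent workloads has to aggregate the per-workload measurements into one score; a geometric mean is the standard choice for combining such throughput ratios. The Python sketch below shows that aggregation step only; the example workload names and scores are invented, and the exact weighting and normalization used by HEPScore are defined by the task force and not reproduced here.

        import math


        def aggregate_score(workload_scores):
            """Geometric mean of per-workload throughput scores."""
            logs = [math.log(s) for s in workload_scores.values()]
            return math.exp(sum(logs) / len(logs))


        # Hypothetical per-workload scores (events processed per second).
        scores = {"gen-sim": 12.1, "digi-reco": 9.8, "analysis": 14.3}
        print(f"{aggregate_score(scores):.2f}")  # single combined score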

    Proceedings of the 4th bwHPC Symposium

    The bwHPC Symposium 2017 took place on October 4th, 2017, in the Alte Aula in Tübingen. It focused on the presentation of scientific computing projects as well as on the progress and success stories of the bwHPC realization concept. The event offered a unique opportunity for an active dialogue between scientific users, operators of bwHPC sites, and the bwHPC support team.

    Combined searches for the production of supersymmetric top quark partners in proton-proton collisions at √s = 13 TeV

    A combination of searches for top squark pair production using proton-proton collision data at a center-of-mass energy of 13 TeV at the CERN LHC, corresponding to an integrated luminosity of 137 fb⁻¹ collected by the CMS experiment, is presented. Signatures with at least 2 jets and large missing transverse momentum are categorized into events with 0, 1, or 2 leptons. New results for regions of parameter space where the kinematical properties of top squark pair production and top quark pair production are very similar are presented. Depending on the model, the combined result excludes a top squark mass up to 1325 GeV for a massless neutralino, and a neutralino mass up to 700 GeV for a top squark mass of 1150 GeV. Top squarks with masses from 145 to 295 GeV, for neutralino masses from 0 to 100 GeV, with a mass difference between the top squark and the neutralino in a window of 30 GeV around the mass of the top quark, are excluded for the first time with CMS data. The results of these searches are also interpreted in an alternative signal model of dark matter production via a spin-0 mediator in association with a top quark pair. Upper limits are set on the cross section for mediator particle masses of up to 420 GeV.

    Development and validation of HERWIG 7 tunes from CMS underlying-event measurements

    This paper presents new sets of parameters (“tunes”) for the underlying-event model of the HERWIG 7 event generator. These parameters control the description of multiple-parton interactions (MPI) and colour reconnection in HERWIG 7, and are obtained from a fit to minimum-bias data collected by the CMS experiment at √s = 0.9, 7, and 13 TeV. The tunes are based on the NNPDF 3.1 next-to-next-to-leading-order parton distribution function (PDF) set for the parton shower, and either a leading-order or next-to-next-to-leading-order PDF set for the simulation of MPI and the beam remnants. Predictions utilizing the tunes are produced for event shape observables in electron-positron collisions, and for minimum-bias, inclusive jet, top quark pair, and Z and W boson events in proton-proton collisions, and are compared with data. Each of the new tunes describes the data at a reasonable level, and the tunes using a leading-order PDF for the simulation of MPI provide the best description of the data.
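
    The tuning procedure itself amounts to adjusting generator parameters until predictions match the measured observables. The toy Python sketch below illustrates this with a simple chi-square minimization; the predictor function, parameter names, and all numbers are invented stand-ins for an actual generator run, and the real tunes rely on dedicated tuning machinery rather than this direct approach.

        import numpy as np
        from scipy.optimize import minimize

        measured = np.array([1.00, 1.35, 1.60])  # toy observable values
        errors = np.array([0.05, 0.06, 0.08])    # toy uncertainties


        def predict(params):
            """Stand-in for running the generator with MPI parameters."""
            p_t0, reco = params
            return np.array([1.0, 1.0 + 0.4 * reco, 1.0 + 0.3 * p_t0 + 0.3 * reco])


        def chi2(params):
            residuals = (predict(params) - measured) / errors
            return np.sum(residuals ** 2)


        best = minimize(chi2, x0=[1.0, 0.5], method="Nelder-Mead")
        print(best.x)  # fitted "tune" parameters of the toy model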

    Reconstruction of signal amplitudes in the CMS electromagnetic calorimeter in the presence of overlapping proton-proton interactions

    A template fitting technique for reconstructing the amplitude of signals produced by the lead tungstate crystals of the CMS electromagnetic calorimeter is described. This novel approach is designed to suppress the contribution to the signal of the increased number of out-of-time interactions per beam crossing following the reduction of the accelerator bunch spacing from 50 to 25 ns at the start of Run 2 of the LHC. Execution of the algorithm is sufficiently fast for it to be employed in the CMS high-level trigger. It is also used in the offline event reconstruction. Results obtained from simulations and from Run 2 collision data (2015-2018) demonstrate a substantial improvement in the energy resolution of the calorimeter over a range of energies extending from a few GeV to several tens of GeV.
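
    The essence of such a template fit is to model the digitized waveform as a non-negative combination of one in-time pulse template and time-shifted copies representing out-of-time interactions, and to read off the in-time coefficient as the signal amplitude. The Python sketch below demonstrates this with a toy pulse shape solved via non-negative least squares; the template values and sample layout are illustrative, not the actual ECAL pulse shapes.

        import numpy as np
        from scipy.optimize import nnls

        # Toy pulse template sampled at 10 consecutive bunch crossings (BX).
        template = np.array([0.0, 0.1, 0.6, 1.0, 0.8, 0.5, 0.3, 0.2, 0.1, 0.05])


        def shifted(template, shift):
            """Template displaced by `shift` bunch crossings, zero-padded."""
            out = np.zeros_like(template)
            if shift >= 0:
                out[shift:] = template[:len(template) - shift]
            else:
                out[:shift] = template[-shift:]
            return out


        # Design matrix: in-time pulse plus out-of-time pulses at -2, -1, +1 BX.
        shifts = [0, -2, -1, 1]
        A = np.column_stack([shifted(template, s) for s in shifts])

        # Simulated waveform: in-time amplitude 5 plus an out-of-time pulse.
        waveform = 5.0 * shifted(template, 0) + 2.0 * shifted(template, -1)
        waveform += np.random.default_rng(0).normal(0.0, 0.02, size=10)

        amplitudes, _ = nnls(A, waveform)
        print(f"in-time amplitude: {amplitudes[0]:.2f}")  # ~5, pileup removed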

    Observation of the Production of Three Massive Gauge Bosons at √s = 13 TeV

    The first observation is reported of the combined production of three massive gauge bosons (VVV with V = W, Z) in proton-proton collisions at a center-of-mass energy of 13 TeV. The analysis is based on a data sample recorded by the CMS experiment at the CERN LHC corresponding to an integrated luminosity of 137 fb⁻¹. The searches for individual WWW, WWZ, WZZ, and ZZZ production are performed in final states with three, four, five, and six leptons (electrons or muons), or with two same-sign leptons plus one or two jets. The observed (expected) significance of the combined VVV production signal is 5.7 (5.9) standard deviations, and the corresponding measured cross section relative to the standard model prediction is 1.02 +0.26/−0.23. The significances of the individual WWW and WWZ production are 3.3 and 3.4 standard deviations, respectively. Measured production cross sections for the individual triboson processes are also reported.