14 research outputs found

    Advancing throughput of HEP analysis work-flows using caching concepts

    High throughput and short turnaround cycles are core requirements for the efficient processing of data-intensive end-user analyses in High Energy Physics (HEP). Together with the tremendously increasing amount of data to be processed, this leads to enormous challenges for HEP storage systems, networks and the distribution of data to computing resources for end-user analyses. Bringing data close to the computing resource is a very promising approach to overcome throughput limitations and improve the overall performance. However, achieving data locality by placing multiple conventional caches inside a distributed computing infrastructure leads to redundant data placement and inefficient usage of the limited cache volume. The solution is a coordinated placement of critical data on computing resources, which enables matching each process of an analysis work-flow to its most suitable worker node in terms of data locality and thus reduces the overall processing time. This coordinated distributed caching concept was realized at KIT by developing the coordination service NaviX, which connects an XRootD cache proxy infrastructure with an HTCondor batch system. We give an overview of the coordinated distributed caching concept and of the experience collected with a prototype system based on NaviX.
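
    The core idea can be made concrete with a small sketch: rank the available worker nodes for a job by the fraction of the job's input data that each node's local cache already holds, and prefer the best-ranked node during matchmaking. The following Python snippet is a minimal illustration of this concept, not NaviX code; the file names, node names and the simple byte-fraction score are assumptions made for the example. In the actual system this coordination is carried out by NaviX together with the HTCondor batch system and the XRootD caches.

    # Illustrative sketch only (not NaviX code): locality-aware ranking of worker nodes.

    def cached_fraction(job_inputs, node_cache):
        """Fraction of the job's input volume already present in a node's cache."""
        total = sum(job_inputs.values())
        cached = sum(size for name, size in job_inputs.items() if name in node_cache)
        return cached / total if total else 0.0

    def rank_nodes(job_inputs, caches):
        """Return worker nodes ordered from best to worst data locality."""
        return sorted(caches, key=lambda node: cached_fraction(job_inputs, caches[node]), reverse=True)

    # Hypothetical job inputs (file -> size in GB) and per-node cache contents.
    job = {"/store/data/fileA.root": 4.0, "/store/data/fileB.root": 2.0}
    caches = {
        "worker01": {"/store/data/fileA.root"},
        "worker02": {"/store/data/fileA.root", "/store/data/fileB.root"},
        "worker03": set(),
    }
    print(rank_nodes(job, caches))  # ['worker02', 'worker01', 'worker03']

    Ranking by cached data volume rather than by file count avoids preferring nodes that only hold a job's smallest input files.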

    25th International Conference on Computing in High Energy & Nuclear Physics

    The inclusion of opportunistic resources, for example from High Performance Computing (HPC) centers or cloud providers, is an important contribution to bridging the gap between existing resources and the future needs of the LHC collaborations, especially for the HL-LHC era. However, the integration of these resources poses new challenges and often needs to happen in a highly dynamic manner. To enable an effective and lightweight integration of these resources, the tools COBalD and TARDIS are being developed at KIT. In this contribution we report on the infrastructure we use to dynamically offer opportunistic resources to collaborations in the Worldwide LHC Computing Grid (WLCG). The core components are COBalD/TARDIS, HTCondor, CVMFS and modern virtualization technology. The challenging task of managing the opportunistic resources is performed by COBalD/TARDIS. We showcase the challenges, the employed solutions and the experience gained with the provisioning of opportunistic resources from several resource providers such as university clusters, HPC centers and cloud setups in a multi-VO environment. This work can serve as a blueprint for approaching the provisioning of resources from other resource providers.
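
    To make the provisioning task more tangible, the sketch below models the life cycle of an opportunistically provisioned worker (a "drone" in TARDIS terminology) as a simple state machine. This is a conceptual illustration under simplifying assumptions, not the COBalD/TARDIS API; the state names and transitions are chosen for the example only. In the real setup, demand regulation, the provider-specific adapters and the integration into the HTCondor overlay pool are handled by COBalD/TARDIS.

    # Conceptual sketch only (not the COBalD/TARDIS API): a simplified drone life cycle.
    from enum import Enum, auto

    class DroneState(Enum):
        REQUESTED = auto()    # resource requested at the provider (HPC job, cloud VM, ...)
        BOOTING = auto()      # resource starts up, CVMFS/container environment is prepared
        INTEGRATING = auto()  # worker joins the HTCondor overlay pool
        AVAILABLE = auto()    # drone accepts payload jobs of the supported VOs
        DRAINING = auto()     # no new payloads, running jobs may finish
        RELEASED = auto()     # resource handed back to the provider

    # Allowed transitions in this simplified model.
    TRANSITIONS = {
        DroneState.REQUESTED: {DroneState.BOOTING, DroneState.RELEASED},
        DroneState.BOOTING: {DroneState.INTEGRATING, DroneState.RELEASED},
        DroneState.INTEGRATING: {DroneState.AVAILABLE, DroneState.DRAINING},
        DroneState.AVAILABLE: {DroneState.DRAINING},
        DroneState.DRAINING: {DroneState.RELEASED},
        DroneState.RELEASED: set(),
    }

    def advance(state: DroneState, new_state: DroneState) -> DroneState:
        """Move a drone to a new state, rejecting transitions the model forbids."""
        if new_state not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition {state.name} -> {new_state.name}")
        return new_state

    state = DroneState.REQUESTED
    for step in (DroneState.BOOTING, DroneState.INTEGRATING,
                 DroneState.AVAILABLE, DroneState.DRAINING, DroneState.RELEASED):
        state = advance(state, step)
        print(state.name)

    The important property for lightweight integration is that every state has a well-defined path to RELEASED, so borrowed resources can always be handed back to their provider cleanly.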

    MatterMiners/tardis: ErUM Data Cloud Workshop

    Release for the ErUM Data Cloud Workshop.

    3. Helmholtz Open Science Forum Forschungssoftware. Helmholtz Open Science Briefing

    On 24 November 2022, the Helmholtz Forum Forschungssoftware held an information event on current developments in the area of research software within Helmholtz. The Helmholtz Forum Forschungssoftware is jointly run by the Task Group Forschungssoftware of the AK Open Science and the HIFIS Software Cluster. The event was organized by the Helmholtz Open Science Office under the title "3. Helmholtz Open Science Forum Forschungssoftware". A first event of the Helmholtz Forum Forschungssoftware took place in May 2021 and a second in April 2022. This report documents the successful event, in which around 90 Helmholtz staff members participated.

    Measurement of the production cross section for a W boson in association with a charm quark in proton-proton collisions at $\sqrt{s}$ = 13 TeV

    The strange quark content of the proton is probed through the measurement of the production cross section for a W boson and a charm (c) quark in proton-proton collisions at a center-of-mass energy of 13 TeV. The analysis uses a data sample corresponding to a total integrated luminosity of 138 fb$^{-1}$ collected with the CMS detector at the LHC. The W bosons are identified through their leptonic decays to an electron or a muon, and a neutrino. Charm jets are tagged using the presence of a muon or a secondary vertex inside the jet. The W+c production cross section and the cross section ratio $R^{\pm}_\text{c} = \sigma(\text{W}^{+}+\bar{\text{c}})/\sigma(\text{W}^{-}+\text{c})$ are measured inclusively and differentially as functions of the transverse momentum and the pseudorapidity of the lepton originating from the W boson decay. The precision of the measurements is improved with respect to previous studies, reaching 1% in $R^{\pm}_\text{c}$. The measurements are compared with theoretical predictions up to next-to-next-to-leading order in perturbative quantum chromodynamics.
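
    For reference, the measured ratio written out in clean notation, together with the leading-order partonic processes that make W+c production sensitive to the strange quark content (standard background, not taken from this abstract; Cabibbo-suppressed contributions from down quarks are omitted):

    R^{\pm}_\text{c} = \frac{\sigma(\text{W}^{+} + \bar{\text{c}})}{\sigma(\text{W}^{-} + \text{c})},
    \qquad
    \bar{\text{s}} + \text{g} \to \text{W}^{+} + \bar{\text{c}},
    \qquad
    \text{s} + \text{g} \to \text{W}^{-} + \text{c}.

    In this leading-order picture, a deviation of $R^{\pm}_\text{c}$ from unity directly reflects an asymmetry between the strange and antistrange quark distributions of the proton.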
