
    Schweizerisches Wirtschaftsarchiv, Basel

    Overview of the holdings of the Schweizerisches Wirtschaftsarchiv in Basel

    CPU Performance Study for HEP Workloads with Respect to the Number of Single-core Slots

    Many common CPU architectures provide simultaneous multithreading (SMT). The operating system sees multiple logical CPU cores per physical CPU core and can schedule several processes to one physical CPU core. This overbooking of physical cores enables better usage of parallel pipelines and duplicated components within a CPU core. On systems with several applications running in parallel, such as batch jobs on worker nodes, the use of SMT can increase the overall performance. In high energy physics (HEP), batch/Grid jobs are accounted for in units of single-core jobs. One single-core job is designed to fully utilize one logical CPU core. As a result, Grid sites often configure their worker nodes to provide as many single-core slots as physical or logical CPU cores. However, due to memory and disk space constraints, not all logical CPU cores can be used. Therefore, it can be useful to configure more single-core slots than physical CPU cores but fewer than logical CPU cores per worker node. We have extensively used and studied this strategy at the GridKa Tier 1 center. In this contribution, we show benchmark results for different overbooking factors of physical cores on various CPU models for different HEP workflows and benchmarks.
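
    The slot-counting logic described above can be sketched as follows. This is an illustrative example, not the actual GridKa configuration; the SMT factor, memory figures, and function name are assumptions chosen for the sketch.

    ```python
    def single_core_slots(physical_cores, overbooking_factor,
                          node_memory_gib, memory_per_job_gib,
                          smt_factor=2):
        """Single-core slots per worker node: overbook the physical
        cores by a configurable factor, but never exceed the number
        of logical cores or the number of jobs that fit into memory.
        All parameters are hypothetical site-configuration values."""
        logical_cores = physical_cores * smt_factor
        cpu_slots = int(physical_cores * overbooking_factor)
        memory_slots = node_memory_gib // memory_per_job_gib
        return min(cpu_slots, logical_cores, int(memory_slots))

    # e.g. 32 physical cores, 1.5x overbooking, 96 GiB RAM, 2 GiB per job
    print(single_core_slots(32, 1.5, 96, 2))  # → 48
    ```

    With a higher overbooking factor the memory term becomes the binding constraint, which is exactly the trade-off the study above benchmarks.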

    Dynamic Resource Extension for Data Intensive Computing with Specialized Software Environments on HPC Systems

    Modern High Energy Physics (HEP) requires large-scale processing of extensive amounts of scientific data. The needed computing resources are currently provided statically by HEP-specific computing centers. To increase the number of available resources, for example to cover peak loads, the HEP computing development team at KIT concentrates on the dynamic integration of additional computing resources into the HEP infrastructure. To this end, we developed ROCED, a tool to dynamically request and integrate computing resources, including resources at HPC centers and commercial cloud providers. Since these resources usually do not support HEP software natively, we rely on virtualization and container technologies, which allow us to run HEP workflows on these so-called opportunistic resources. Additionally, we study the efficient processing of large amounts of data on a distributed infrastructure, where the data is usually stored at HEP-specific data centers and accessed remotely over the WAN. To optimize the overall data throughput and to increase the CPU efficiency, we are currently developing an automated caching system for frequently used data that is transparently integrated into the distributed HEP computing infrastructure.
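
    The core decision in such dynamic provisioning, i.e. how many additional opportunistic resources to request given current demand, can be sketched in a few lines. The function name and parameters below are hypothetical and do not reflect ROCED's actual interface.

    ```python
    import math

    def resources_to_request(pending_jobs, running_nodes,
                             jobs_per_node, max_nodes):
        """Return how many extra opportunistic nodes to request so
        that pending demand is covered, bounded by a provider quota
        (max_nodes). All names here are illustrative assumptions."""
        needed = math.ceil(pending_jobs / jobs_per_node)
        return max(0, min(needed, max_nodes - running_nodes))

    # 100 pending jobs, 2 nodes running, 8 job slots per node,
    # quota of 10 nodes → request 8 more nodes
    print(resources_to_request(100, 2, 8, 10))  # → 8
    ```

    A real provisioning tool would add hysteresis and boot-time handling on top of such a demand estimate, but the quota-bounded ceiling division captures the basic idea.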

    Transparent Integration of Opportunistic Resources into the WLCG Compute Infrastructure

    The inclusion of opportunistic resources, for example from High Performance Computing (HPC) centers or cloud providers, is an important contribution to bridging the gap between existing resources and the future needs of the LHC collaborations, especially for the HL-LHC era. However, the integration of these resources poses new challenges and often needs to happen in a highly dynamic manner. To enable an effective and lightweight integration of these resources, the tools COBalD and TARDIS are developed at KIT. In this contribution, we report on the infrastructure we use to dynamically offer opportunistic resources to collaborations in the Worldwide LHC Computing Grid (WLCG). The core components are COBalD/TARDIS, HTCondor, CVMFS and modern virtualization technology. The challenging task of managing the opportunistic resources is performed by COBalD/TARDIS. We showcase the challenges, employed solutions and experiences gained with the provisioning of opportunistic resources from several resource providers, such as university clusters, HPC centers and cloud setups, in a multi-VO environment. This work can serve as a blueprint for approaching the provisioning of resources from other resource providers.
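
    Managing opportunistic resources as described above amounts to driving each resource through a lifecycle (requested, available, draining, gone). The following is a deliberately simplified sketch of such a state machine; the state names and transition table are assumptions for illustration and not COBalD/TARDIS's actual model or API.

    ```python
    from enum import Enum, auto

    class DroneState(Enum):
        """Hypothetical lifecycle states of one opportunistic resource."""
        BOOTING = auto()
        AVAILABLE = auto()
        DRAINING = auto()
        DOWN = auto()

    # Allowed transitions in this simplified lifecycle: a resource may
    # fail (go DOWN) from any live state, and must drain before shutdown.
    TRANSITIONS = {
        DroneState.BOOTING:   {DroneState.AVAILABLE, DroneState.DOWN},
        DroneState.AVAILABLE: {DroneState.DRAINING, DroneState.DOWN},
        DroneState.DRAINING:  {DroneState.DOWN},
        DroneState.DOWN:      set(),
    }

    def advance(state, target):
        """Move a resource to `target`, rejecting illegal transitions."""
        if target not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition {state} -> {target}")
        return target
    ```

    Enforcing an explicit transition table is one way a meta-scheduler can keep its view of dynamically appearing and disappearing resources consistent with the batch system's.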

    Advancing throughput of HEP analysis work-flows using caching concepts

    High throughput and short turnaround cycles are core requirements for efficient processing of data-intensive end-user analyses in High Energy Physics (HEP). Together with the tremendously increasing amount of data to be processed, this leads to enormous challenges for HEP storage systems, networks and the distribution of data to computing resources for end-user analyses. Bringing data close to the computing resource is a very promising approach to overcoming throughput limitations and improving the overall performance. However, achieving data locality by placing multiple conventional caches inside a distributed computing infrastructure leads to redundant data placement and inefficient usage of the limited cache volume. The solution is a coordinated placement of critical data on computing resources, which enables matching each process of an analysis work-flow to its most suitable worker node in terms of data locality and thus reduces the overall processing time. This coordinated distributed caching concept was realized at KIT by developing the coordination service NaviX, which connects an XRootD cache proxy infrastructure with an HTCondor batch system. We give an overview of the coordinated distributed caching concept and the experiences collected with a prototype system based on NaviX.
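
    The matchmaking step described above, i.e. sending each process to the worker node that already caches most of its input data, can be sketched as a simple overlap maximization. This is an illustrative toy, not NaviX's actual algorithm; the function name and data layout are assumptions.

    ```python
    def best_node(job_files, node_caches):
        """Pick the worker node whose local cache overlaps most with
        the job's input files (ties broken by node name). `node_caches`
        maps node name -> set of cached file paths; all names are
        hypothetical."""
        job_files = set(job_files)
        return max(sorted(node_caches),
                   key=lambda node: len(job_files & node_caches[node]))

    caches = {"wn1": {"/store/a", "/store/b"},
              "wn2": {"/store/b", "/store/c", "/store/d"}}
    print(best_node(["/store/b", "/store/c"], caches))  # → wn2
    ```

    In a real batch system this preference would be expressed as a rank or requirement expression rather than a standalone function, but the coordination problem is the same: route work to cached data instead of re-fetching it over the WAN.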

    HEPScore: A new CPU benchmark for the WLCG

    HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark that is currently used by the WLCG for procurement, computing resource pledges and performance studies. The development of the new benchmark, based on HEP applications or workloads, has involved many contributions from software developers, data analysts, experts of the experiments, representatives of several WLCG computing centres, as well as the WLCG HEPScore Deployment Task Force. In this contribution, we review the selection of workloads and the validation of the new HEPScore benchmark. Comment: Paper submitted to the proceedings of the Computing in HEP Conference 2023, Norfolk.
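
    A benchmark composed of several workloads needs a rule for combining the per-workload scores into one number; a geometric mean is the standard choice for aggregating performance ratios, since it is insensitive to the choice of reference machine. The sketch below illustrates that aggregation in general; whether and how HEPScore weights its workloads is not taken from the abstract above.

    ```python
    import math

    def composite_score(workload_scores):
        """Combine per-workload benchmark scores into one figure via
        the geometric mean (illustrative aggregation, not HEPScore's
        official formula)."""
        logs = [math.log(s) for s in workload_scores]
        return math.exp(sum(logs) / len(logs))

    # Two workloads scoring 2.0 and 8.0 relative to a reference machine
    print(round(composite_score([2.0, 8.0]), 3))  # → 4.0
    ```

    Unlike an arithmetic mean, the geometric mean gives the same composite ranking regardless of which machine the individual scores are normalized to, which is why benchmark suites of this kind favor it.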

    Proceedings of the 4th bwHPC Symposium

    The bwHPC Symposium 2017 took place on October 4th, 2017, at the Alte Aula in Tübingen. It focused on the presentation of scientific computing projects as well as on the progress and the success stories of the bwHPC realization concept. The event offered a unique opportunity for an active dialogue between scientific users, operators of the bwHPC sites, and the bwHPC support team.