
    Computing models in high energy physics

    High Energy Physics experiments (HEP experiments in the following) have been at the forefront of technology for at least the last three decades, in aspects such as detector design and construction, number of collaborators, and complexity of data analyses. Unlike in earlier particle physics experiments, the computing and data handling aspects have not been marginal in their design and operations; the cost of the IT-related components, from software development to storage systems and distributed e-Infrastructures, has risen to a level which requires proper understanding and planning from the first moments in the lifetime of an experiment. In the following sections we first explore the computing and software solutions developed and operated in the most relevant past and present experiments, with a focus on the technologies deployed; a technology tracking section is then presented in order to pave the way to possible solutions for the experiments of the next decade and beyond. While the focus of this review is on offline computing models, the distinction is a blurred one, and some experiments have already seen cross-contamination between trigger selection and offline workflows; it is anticipated that this trend will continue in the future.

    Explorations of the viability of ARM and Xeon Phi for physics processing

    We report on our investigations into the viability of the ARM processor and the Intel Xeon Phi co-processor for scientific computing. We describe our experience porting software to these processors and running benchmarks using real physics applications to explore the potential of these processors for production physics processing. Comment: Submitted to proceedings of the 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP13), Amsterdam.
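
    As an illustration of the methodology, the sketch below shows a minimal throughput harness of the kind such cross-architecture comparisons rely on: run the same fixed workload on each host and compare events per second. The kernel and event count are hypothetical stand-ins for the real physics applications used in the study.

```python
import time

def process_event(seed: int) -> float:
    """Hypothetical stand-in for one event's worth of physics processing."""
    x = float(seed % 1000) + 1.0
    # A simple floating-point kernel so the loop does real work.
    for _ in range(10_000):
        x = (x * 1.000001) % 997.0 + 1.0
    return x

def throughput(n_events: int = 1000) -> float:
    """Return events processed per second of wall-clock time."""
    start = time.perf_counter()
    for i in range(n_events):
        process_event(i)
    return n_events / (time.perf_counter() - start)

if __name__ == "__main__":
    # Run the same harness on x86_64, ARM, or Xeon Phi hosts and compare.
    print(f"{throughput():.1f} events/s")
```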

    Any Data, Any Time, Anywhere: Global Data Access for Science

    Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and keeping careful track of data location. We present an alternate approach: the Any Data, Any Time, Anywhere (AAA) infrastructure. Combining several existing software products, AAA presents a global, unified view of storage systems - a "data federation" - a global filesystem for software delivery, and a workflow management system. We present how one HEP experiment, the Compact Muon Solenoid (CMS), is utilizing the AAA infrastructure, along with some simple performance metrics. Comment: 9 pages, 6 figures, submitted to 2nd IEEE/ACM International Symposium on Big Data Computing (BDC) 2015.
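
    The CMS data federation is built on XRootD, so a client can open a file through a global redirector without knowing which site stores it. The sketch below shows this access pattern via PyROOT; the /store path is a hypothetical placeholder rather than a real dataset, and the redirector address is the one commonly used by CMS.

```python
import ROOT

# Hypothetical logical file name; the global redirector resolves it to
# whichever federated site actually hosts a copy of the file.
url = "root://cms-xrd-global.cern.ch//store/data/example/file.root"

f = ROOT.TFile.Open(url)
if f and not f.IsZombie():
    f.ls()      # inspect the contents without knowing where the data lives
    f.Close()
```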

    ICSC: The Italian National Research Centre on HPC, Big Data and Quantum computing

    ICSC (“Italian Center for SuperComputing”) is one of the five Italian National Centres created within the framework of the NextGenerationEU funding by the European Commission. The aim of ICSC, designed and approved through 2022 and eventually started in September 2022, is to create the national digital infrastructure for research and innovation, leveraging existing HPC, HTC and Big Data infrastructures and evolving towards a cloud data-lake model. It will be available to the scientific and industrial communities through flexible and uniform cloud web interfaces, backed by a high-level support team; as such, it will form a globally attractive ecosystem based on strategic public-private partnerships to fully exploit top-level digital infrastructure for scientific and technical computing and to promote the development of new computing technologies. The ICSC IT infrastructure is built upon the existing scientific digital infrastructures provided by the major national players: GARR, the Italian NREN, provides the network infrastructure, whose capacity will be upgraded to multiples of Tbps; CINECA hosts Leonardo, one of the world's largest HPC systems, with a performance of over 250 Pflops, to be further increased and complemented with a quantum computer; INFN contributes its distributed Big Data cloud infrastructure, built over the last decades to respond to the needs of the HEP community. On top of the IT infrastructure, several thematic activities will be funded, focusing on the development of tools and applications in several research domains. Of particular relevance to this audience are the activities on "Fundamental Research and Space Economy" and "Astrophysics and Cosmos Observations", strictly aligned with the INFN and HEP core activities. Finally, two technological research activities will foster research on "Future HPC and Big Data" and "Quantum Computing".

    Enabling INFN–T1 to support heterogeneous computing architectures

    The INFN–CNAF Tier-1, located in Bologna (Italy), is a center of the WLCG e-Infrastructure, providing computing power to the four major LHC collaborations and also supporting the computing needs of about fifty more groups, including some from non-HEP research domains. The CNAF Tier-1 center has historically been very active in the integration of computing resources, proposing and prototyping solutions both for extension through cloud resources, public and private, and through remotely owned sites, as well as developing an integrated HTC+HPC system with the PRACE CINECA supercomputer center located 8 km from the Tier-1. In order to meet the requirements for the new Tecnopolo center, where the CNAF Tier-1 will be hosted, the resource integration activities keep progressing. In particular, this contribution details the challenges that have recently been addressed in providing opportunistic access to non-standard CPU architectures, such as PowerPC, and to hardware accelerators (GPUs). We explain the approach adopted both to transparently provision x86_64, ppc64le and NVIDIA V100 GPU resources from the Marconi 100 HPC cluster managed by CINECA and to access data from the Tier-1 storage system at CNAF. The solution adopted is general enough to enable seamless integration of other computing architectures from different providers at the same time, such as ARM CPUs from the TEXTAROSSA project, and we report on the integration of these within the computing model of the CMS experiment. Finally, we discuss the results of this early experience.
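
    Assuming an HTCondor-based batch entry point, as is common at WLCG sites, transparent provisioning of a non-x86 node with a GPU might look like the sketch below, written with the HTCondor Python bindings. The attribute values and the wrapper script name are illustrative assumptions; the exact ClassAd expressions advertised for Marconi 100 nodes are site-specific and not spelled out in the abstract.

```python
import htcondor

# Describe a job that targets a ppc64le node and requests one GPU.
sub = htcondor.Submit({
    "executable": "run_payload.sh",        # hypothetical job wrapper
    "requirements": 'Arch == "ppc64le"',   # illustrative architecture constraint
    "request_gpus": "1",                   # e.g. one V100 on the HPC partition
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
})

# Hand the job to the local scheduler daemon.
htcondor.Schedd().submit(sub)
```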

    HEPScore: A new CPU benchmark for the WLCG

    HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark that is currently used by the WLCG for procurement, computing resource pledges and performance studies. The development of the new benchmark, based on HEP applications or workloads, has involved many contributions from software developers, data analysts, experts of the experiments, and representatives of several WLCG computing centres, as well as the WLCG HEPScore Deployment Task Force. In this contribution, we review the selection of workloads and the validation of the new HEPScore benchmark. Comment: Paper submitted to the proceedings of the Computing in HEP Conference 2023, Norfolk.
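
    A benchmark of this kind aggregates per-workload results into a single figure; a geometric mean of throughputs normalized to a reference machine captures the general structure of that aggregation. The sketch below uses made-up workload names and numbers, not the official HEPScore configuration.

```python
import math

def hepscore_like(scores: dict[str, float], reference: dict[str, float]) -> float:
    """Geometric mean of per-workload throughputs normalized to a reference.

    Illustrative only; the official benchmark defines its own workload
    set, normalization, and weighting.
    """
    ratios = [scores[w] / reference[w] for w in reference]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical throughputs (events/s) for three benchmark workloads.
machine   = {"gen-sim": 12.0, "reco": 8.5, "digi-reco": 20.0}
reference = {"gen-sim": 10.0, "reco": 10.0, "digi-reco": 10.0}
print(f"score: {hepscore_like(machine, reference):.3f}")
```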

    The 2003 Tracker Inner Barrel Beam Test

    Before starting the CMS Silicon Strip Tracker (SST) mass production, during which quality control tests can only be done on single components, an extensive set of activities aimed at validating the functionality of the tracker system was performed. In this framework, a final component prototype of the Inner Barrel part (TIB) of the SST was assembled and tested in the INFN laboratories and then moved to CERN to check its behaviour in a 25 ns LHC-like particle beam. A set of preproduction single-sided silicon microstrip modules was mounted on a mechanical structure very similar to a sector of the third layer of the TIB and read out using a system functionally identical to the final one. In this note the system setup configuration is fully described, and the results of the test, concerning both detector performance and system characteristics, are presented and discussed.

    HEP Community White Paper on Software trigger and event reconstruction

    Realizing the physics programs of the planned and upgraded high-energy physics (HEP) experiments over the next 10 years will require the HEP community to address a number of challenges in the area of software and computing. For this reason, the HEP software community has engaged in a planning process over the past two years, with the objective of identifying and prioritizing the research and development required to enable the next generation of HEP detectors to fulfill their full physics potential. The aim is to produce a Community White Paper which will describe the community strategy and a roadmap for software and computing research and development in HEP for the 2020s. The topics of event reconstruction and software triggers were considered by a joint working group and are summarized together in this document. Comment: Editors Vladimir Vava Gligorov and David Lange.

    Migrating the INFN-CNAF datacenter to the Bologna Tecnopolo: A status update

    The INFN Tier-1 data center is currently located on the premises of the Physics Department of the University of Bologna, where CNAF is also located. During 2023 it will be moved to the “Tecnopolo”, the new facility for research, innovation, and technological development in the same city area; the same location also hosts Leonardo, the pre-exascale supercomputing machine managed by CINECA, co-financed as part of the EuroHPC Joint Undertaking and ranked 4th in the November 2022 Top500 list. The construction of the new CNAF data center consists of two phases, corresponding to the computing requirements of the LHC: Phase 1 involves an IT power of 3 MW, and Phase 2, starting from 2025, involves an IT power of up to 10 MW. The new data center is designed to cope with the computing requirements of the data taking of the HL-LHC experiments in the period spanning from 2026 to 2040, and will at the same time provide computing services for several other INFN experiments and projects, not only in the HEP domain. The co-location with Leonardo opens wider possibilities to integrate HTC and HPC resources, and the new CNAF data center will be tightly coupled with the supercomputer, allowing access from a single entry point to resources located at CNAF and resources provided by the supercomputer. Data access from both infrastructures will be transparent to users. In this presentation we describe the new data center design, provide a status update on the migration, and focus on the Leonardo integration, showing the results of preliminary tests to access it from the CNAF access points.

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade. (Peer reviewed)