23 research outputs found

    Enabling INFN–T1 to support heterogeneous computing architectures

    The INFN–CNAF Tier-1, located in Bologna (Italy), is a center of the WLCG e-Infrastructure that provides computing power to the four major LHC collaborations and also supports the computing needs of about fifty other groups, including several from non-HEP research domains. The CNAF Tier-1 has historically been very active in the integration of computing resources, proposing and prototyping solutions both for extension through public and private Cloud resources and through remotely owned sites, as well as developing an integrated HTC+HPC system with the PRACE CINECA supercomputer center, located 8 km from the CNAF Tier-1. To meet the requirements of the new Tecnopolo center, where the CNAF Tier-1 will be hosted, the resource integration activities keep progressing. In particular, this contribution details the challenges recently addressed in providing opportunistic access to non-standard CPU architectures, such as PowerPC, and to hardware accelerators (GPUs). We explain the approach adopted both to transparently provision x86_64, ppc64le and NVIDIA V100 GPU resources from the Marconi 100 HPC cluster managed by CINECA and to access data from the Tier-1 storage system at CNAF. The solution adopted is general enough to enable the seamless integration of other computing architectures from different providers at the same time, such as ARM CPUs from the TEXTAROSSA project, and we report on the integration of these resources within the computing model of the CMS experiment. Finally, we discuss the results of this early experience.
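
    The abstract does not give the concrete submission interface; as a hedged sketch of what requesting a non-x86 slot can look like from the user side, the hypothetical Python below uses the HTCondor Python bindings (one common HTC batch interface) to ask for a ppc64le slot with one GPU. The requirements expression, resource requests and file names are illustrative assumptions, not the configuration described in the paper.

    # Illustrative sketch only: submit a test job to an HTCondor pool that
    # exposes heterogeneous slots (x86_64, ppc64le, GPU nodes). Pool, file
    # names and resource values are made-up examples; requires HTCondor >= 9.
    import htcondor

    schedd = htcondor.Schedd()                       # contact the local schedd

    job = htcondor.Submit({
        "executable": "/usr/bin/uname",
        "arguments": "-m",                           # print the machine architecture
        "requirements": 'Arch == "ppc64le"',         # ask for a PowerPC slot
        "request_gpus": "1",                         # and one GPU (e.g. a V100)
        "request_cpus": "1",
        "request_memory": "2GB",
        "output": "arch_test.out",
        "error": "arch_test.err",
        "log": "arch_test.log",
    })

    result = schedd.submit(job, count=1)             # queue one instance
    print("submitted cluster", result.cluster())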

    Analysis of the evolution of hardware technologies for scientific computing

    In the coming decade, HEP experiments will need computing tools of enormous power, which current technologies will not be able to satisfy. This work is an analysis of the evolution of CPU, storage and network technologies, aimed at evaluating some of the indicators (performance, power consumption) on which the design of a computing center valid for the next 10 years can be based. Technology forecasts over such a time span are easily subject to evaluation errors, even macroscopic ones: roadmaps that hold for only three or four years, unexpected technological leaps and unpredictable shifts in market trends can easily change, or even overturn, predictions projected so far into the future, and invalidate even the most accurate extrapolations. Nevertheless, an analysis of the current situation and of what can be said today about the evolution of these technologies is a starting point without which it would not be possible to make any kind of hypothesis.

    Photodisintegration of ⁴He into p+t

    The two-body photodisintegration of ⁴He into a proton and a triton has been studied using the CEBAF Large Acceptance Spectrometer (CLAS) at Jefferson Laboratory. Real photons produced with the Hall-B bremsstrahlung-tagging system in the energy range from 0.35 to 1.55 GeV were incident on a liquid ⁴He target. This is the first measurement of the photodisintegration of ⁴He above 0.4 GeV. The differential cross sections for the γ⁴He → pt reaction have been measured as a function of photon-beam energy and proton-scattering angle, and are compared with the latest model calculations by J.-M. Laget. At 0.6-1.2 GeV, our data are in good agreement only with the calculations that include three-body mechanisms, thus confirming their importance. These results reinforce the conclusion of our previous study of the three-body breakup of ³He, which demonstrated the great importance of three-body mechanisms in the energy region 0.5-0.8 GeV. Comment: 13 pages, submitted as one tgz file containing 2 TeX files and 22 PostScript figures

    Exclusive Photoproduction of the Cascade (Xi) Hyperons

    We report on the first measurement of exclusive Ξ⁻(1321) hyperon photoproduction in γp → K⁺K⁺Ξ⁻ for 3.2 < E(γ) < 3.9 GeV. The final state is identified by the missing mass in p(γ, K⁺K⁺)X measured with the CLAS detector at Jefferson Laboratory. We have detected a significant number of the ground-state Ξ⁻(1321)1/2⁺ and have estimated the total cross section for its production. We have also observed the first excited state Ξ⁻(1530)3/2⁺. Photoproduction provides a copious source of Ξ's. We discuss the possibilities of a search for the recently proposed Ξ₅⁻⁻ and Ξ₅⁺ pentaquarks. Comment: submitted to Phys. Rev.
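
    The missing mass quoted above is the standard invariant built from the measured four-momenta (the definition is not spelled out in the abstract, but it is the conventional one): writing $p_\gamma$, $p_p$, $p_{K^+_1}$ and $p_{K^+_2}$ for the beam photon, the target proton and the two detected kaons,

    $M_X^2 = \left(p_\gamma + p_p - p_{K^+_1} - p_{K^+_2}\right)^2$,

    so a peak in $M_X$ near 1.321 GeV tags the ground-state Ξ⁻ and one near 1.530 GeV the excited state.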

    Migrating the INFN-CNAF datacenter to the Bologna Tecnopolo: A status update

    The INFN Tier-1 data center is currently located in the premises of the Physics Department of the University of Bologna, where CNAF is also located. During 2023 it will be moved to the “Tecnopolo”, the new facility for research, innovation and technological development in the same city area; the same location also hosts Leonardo, the pre-exascale supercomputing machine managed by CINECA, co-financed as part of the EuroHPC Joint Undertaking and ranked 4th in the November 2022 Top500 list. The construction of the new CNAF data center consists of two phases, corresponding to the computing requirements of the LHC: Phase 1 involves an IT power of 3 MW, and Phase 2, starting from 2025, involves an IT power of up to 10 MW. The new data center is designed to cope with the computing requirements of the data taking of the HL-LHC experiments in the period from 2026 to 2040, and will at the same time provide computing services for several other INFN experiments and projects, not only from the HEP domain. The co-location with Leonardo opens up wider possibilities for integrating HTC and HPC resources, and the new CNAF data center will be tightly coupled with the supercomputer, allowing access from a single entry point to resources located at CNAF and resources provided by the supercomputer. Data access from both infrastructures will be transparent to users. In this presentation we describe the new data center design, provide a status update on the migration, and focus on the Leonardo integration, showing the results of preliminary tests to access it from the CNAF access points.

    HSM and backup services at INFN-CNAF

    IBM Spectrum Protect (ISP), one of the leading solutions in data protection, contributes to the data management infrastructure operated at CNAF, the central computing and storage facility of INFN (Istituto Nazionale di Fisica Nucleare, the Italian National Institute for Nuclear Physics). It is used to manage about 55 Petabytes of scientific data produced by the LHC (Large Hadron Collider at CERN) and other experiments in which INFN is involved, stored on tape as the highest-latency storage tier within an HSM (Hierarchical Space Management) environment. To accomplish this task, ISP works together with IBM Spectrum Scale (formerly GPFS, the General Parallel File System) and GEMSS (Grid Enabled Mass Storage System), an in-house developed software layer that manages the migration and recall queues. Moreover, we perform backup/archive operations for the main IT services running at CNAF, such as mail servers, configurations, repositories, documents, logs, etc. In this paper we present the current configuration of the HSM infrastructure and of the backup and recovery service, with particular attention to the issues related to the increasing amount of scientific data expected in the next years.
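
    GEMSS itself is in-house software whose interfaces are not described here, but the idea of a recall queue between a disk file system and tape can be illustrated with a small, purely hypothetical Python sketch: pending recalls are grouped by tape so that each cartridge is mounted only once and read in order, which is the usual motivation for queueing recalls in an HSM layer. Paths and tape labels are invented examples.

    # Hypothetical illustration of an HSM recall queue (not the GEMSS API):
    # recall requests for files on tape are grouped by tape label, so that
    # each cartridge is mounted once and its files are read in offset order.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class RecallRequest:
        path: str        # file path on the disk file system
        tape: str        # label of the tape holding the file copy
        offset: int      # position on tape, used to order reads

    def plan_recalls(requests):
        """Group pending recalls by tape and sort each group by tape offset."""
        by_tape = defaultdict(list)
        for req in requests:
            by_tape[req.tape].append(req)
        return [(tape, sorted(reqs, key=lambda r: r.offset))
                for tape, reqs in by_tape.items()]

    # Example: three recalls touching two tapes become two mount operations.
    pending = [
        RecallRequest("/gpfs/exp/run1.dat", "TAPE01", 120),
        RecallRequest("/gpfs/exp/run2.dat", "TAPE02", 10),
        RecallRequest("/gpfs/exp/run3.dat", "TAPE01", 40),
    ]
    for tape, batch in plan_recalls(pending):
        print(tape, [r.path for r in batch])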

    Recent evolutions in the LTDP CDF project

    Get PDF
    In recent years, CNAF (the national center of the Italian Institute for Nuclear Physics, INFN, dedicated to research and development on Information and Communication Technologies) has been working on the Long Term Data Preservation (LTDP) project for the CDF experiment, active at Fermilab from 1990 to 2011. The main aims of the project are to protect the most relevant part of the CDF Run 2 data, collected between 2001 and 2011 and already stored on tape at CNAF (4 PB), and to ensure the availability of, and access to, the analysis facility for those data over time. Lately, the CDF database, hosting information about CDF datasets such as their structure, file locations and metadata, has been imported from Fermilab to CNAF. Also, the Sequential Access via Metadata (SAM) station, the data handling tool for CDF data management that manages data transfers and retrieves information from the CDF database, has been properly installed and configured at CNAF. This was a fundamental step towards the complete decommissioning of the CDF services on the Fermilab side. An access system has been designed and tested to submit CDF analysis jobs, using CDF software distributed via the CERN Virtual Machine File System (CVMFS) and requesting the delivery of CDF files stored on CNAF tapes, as well as of data present only in the Fermilab storage archive. Moreover, the availability and the correctness of all CDF data stored on CNAF tapes have been verified. This paper describes all these recent evolutions in detail and presents the future plans for the LTDP project at CNAF.
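
    The abstract does not spell out how the tape verification was performed; a minimal sketch of the general idea, recomputing the checksum of each file read back from tape and comparing it with the value recorded in a catalogue, could look like the hypothetical Python below. The file list, catalogue format and checksum algorithm are assumptions, not details from the project.

    # Hypothetical sketch of a data-integrity check: every file listed in a
    # catalogue (path -> expected checksum) is read back and its checksum
    # recomputed; mismatching and missing files are reported.
    import hashlib
    import os

    def file_checksum(path, algo="md5", chunk=8 * 1024 * 1024):
        """Compute the checksum of a file, reading it in chunks."""
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def verify(catalogue):
        """catalogue: dict mapping file path to the expected checksum."""
        bad, missing = [], []
        for path, expected in catalogue.items():
            if not os.path.exists(path):
                missing.append(path)
            elif file_checksum(path) != expected:
                bad.append(path)
        return bad, missing

    # Example usage with a tiny, made-up catalogue entry:
    # bad, missing = verify({"/storage/cdf/run2/file001.root": "d41d8cd9..."})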

    A lightweight high availability strategy for Atlas LCG File Catalogs

    The LCG File Catalog (LFC) is a key component of the LHC Computing Grid middleware [1], as it contains the mapping between Logical File Names and Physical File Names on the Grid. The ATLAS computing model foresees multiple local LFCs, hosted at each Tier-1 and at the Tier-0, containing all the information about the files stored in the regional cloud. As the local LFC contents are presently not replicated anywhere, this results in a dangerous single point of failure for each of the ATLAS regional clouds. In order to solve this problem we propose a novel solution for the high availability (HA) of Oracle-based Grid services, obtained by combining an Oracle Data Guard deployment with a series of application-level scripts. This approach has the advantage of being very easy to deploy and maintain, and represents a good candidate solution for Tier-2s, which are usually small centres with little manpower dedicated to service operations. We also present the results of a wide range of functionality and performance tests run on a test-bed with characteristics similar to those required for production. The test-bed consists of a failover deployment between the Italian LHC Tier-1 (INFN-CNAF) and an ATLAS Tier-2 located at INFN-Roma1. Moreover, we explain how the proposed strategy can be deployed on the present Grid infrastructure without requiring any change to the middleware, in a way that is totally transparent to end users and applications.
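
    The application-level scripts are not listed in the abstract; as a purely illustrative sketch, the hypothetical Python below shows the kind of health-check-and-switch logic such a script might implement: probe the primary database endpoint and, if it stays unreachable, repoint a service alias to the standby. Host names, ports, thresholds and the alias mechanism are invented for the example and do not describe the actual deployment.

    # Hypothetical failover watchdog (illustration only, not the actual scripts):
    # if the primary database endpoint stops answering for FAILURES_ALLOWED
    # consecutive probes, the service alias is switched to the standby.
    import socket
    import time

    PRIMARY = ("lfc-db-primary.example.org", 1521)   # invented host/port
    STANDBY = ("lfc-db-standby.example.org", 1521)   # invented host/port
    FAILURES_ALLOWED = 3
    PROBE_INTERVAL = 30  # seconds

    def reachable(host, port, timeout=5):
        """Return True if a TCP connection to (host, port) succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def switch_alias_to(host):
        """Placeholder: repoint the service alias (DNS update, config push, etc.)."""
        print(f"switching LFC alias to {host}")

    failures = 0
    while True:
        if reachable(*PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_ALLOWED and reachable(*STANDBY):
                switch_alias_to(STANDBY[0])
                break
        time.sleep(PROBE_INTERVAL)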

    EOS deployment on Ceph RBD/CephFS with K8s

    This activity focused on the integration of different storage systems (EOS [1] and Ceph [2]), with the aim of combining the high scalability and stability of the EOS services with the reliability and redundancy features provided by Ceph. The work has been carried out as part of the collaboration between CNAF, the national center of INFN (Italian Institute for Nuclear Physics) dedicated to research and development on Information and Communication Technologies, and the Conseil Européen pour la Recherche Nucléaire (CERN), with the goal of evaluating and testing different technologies for next-generation storage challenges. This work leverages the well-known open-source container orchestration system Kubernetes [3] for managing the file system services. The results obtained by measuring the performance of the different combined technologies, comparing for instance block device and file system as the backend options provided by a Ceph cluster deployed on physical machines, are shown and discussed in the manuscript.
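
    As a hedged illustration of how a Kubernetes-managed EOS component might be given Ceph-backed storage, the sketch below uses the official Kubernetes Python client to create a PersistentVolumeClaim against a CephFS StorageClass. The namespace, claim name, size and StorageClass name ("cephfs") are assumptions for the example, not the configuration used in the paper.

    # Illustrative only: request a CephFS-backed volume for an EOS pod via the
    # Kubernetes API. StorageClass, namespace and size are made-up examples.
    from kubernetes import client, config

    config.load_kube_config()            # use the local kubeconfig credentials
    core = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="eos-fst-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],          # CephFS allows shared access
            storage_class_name="cephfs",             # assumed StorageClass name
            resources=client.V1ResourceRequirements(
                requests={"storage": "500Gi"}
            ),
        ),
    )

    core.create_namespaced_persistent_volume_claim(namespace="eos", body=pvc)
    print("PVC eos-fst-data requested in namespace 'eos'")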