    CERN Infrastructure Evolution

    The CERN Computer Centre is reviewing strategies for optimising the use of its existing infrastructure in the future, in the likely scenario that any extension will be remote from CERN, and in the light of the way other large facilities are operated today. Over the past six months, CERN has been investigating modern and widely used tools and procedures for virtualisation, clouds and fabric management, in order to reduce operational effort, increase agility and support unattended remote computer centres. This presentation will give the details of the project’s motivations, current status and areas for future investigation.

    LHC experimental data: from today's data challenges to the promise of tomorrow

    The LHC experiments constitute a challenge in several disciplines of both High Energy Physics and Information Technology. This is definitely the case for data acquisition, processing and analysis. The challenge has been addressed by many years of R&D activity, during which prototypes of components or subsystems were developed. This prototyping phase is now culminating in an evaluation of the prototypes in large-scale tests (appropriately called "Data Challenges"). In a period of restricted funding, the expectation is to realize the LHC data acquisition and computing infrastructures by making extensive use of standard and commodity components. The lectures will start with a brief overview of the requirements of the LHC experiments in terms of data acquisition and computing. The different tasks of the experimental data chain will also be explained: data acquisition, selection, storage, processing and analysis. The major trends of the computing and networking industries will then be indicated, with particular attention to their influence on LHC data acquisition and computing. Finally, the status and results of the "Data Challenges" performed by the LHC experiments and the IT division will be presented and commented on. A vision of the data acquisition and processing system for the LHC era, and its promise for tomorrow, will conclude the series.
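
    The data chain named above (acquisition, selection, storage, processing, analysis) can be pictured as a staged pipeline. The following Python sketch is purely illustrative: the event structure, rate and trigger threshold are invented for the example and are not drawn from the lectures.

        # Toy sketch of the experimental data chain: acquisition -> selection
        # (trigger) -> storage -> analysis. All names and numbers are hypothetical.
        import random

        def acquire(n_events):
            """Stand-in for detector read-out: yields raw 'events'."""
            for i in range(n_events):
                yield {"id": i, "energy": random.expovariate(1 / 50.0)}  # toy GeV scale

        def select(events, threshold=100.0):
            """Toy trigger: keep only events above an energy threshold."""
            return (e for e in events if e["energy"] > threshold)

        archive = []
        for event in select(acquire(10_000)):
            archive.append(event)  # stand-in for mass storage
        print(f"accepted {len(archive)} of 10000 events")  # analysis starts here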

    Netbench - large-scale network device testing with real-life traffic patterns

    Network performance is key to the correct operation of any modern data centre infrastructure or data acquisition (DAQ) system. Hence, it is crucial to ensure that the devices employed in the network are carefully selected to meet the required needs. Specialized commercial testers implement standardized tests [1, 2], which benchmark the performance of network devices under reproducible, yet artificial, conditions. Netbench is a network-testing framework, relying on commodity servers and NICs, that enables the evaluation of network device performance in handling traffic patterns that closely resemble real-life usage, at a reasonably affordable price. We will present the architecture of the Netbench framework and its capabilities, and how they complement the use of specialized commercial testers (e.g. competing TCP flows that create temporary congestion provide a good benchmark of buffering capabilities in real-life scenarios). Last but not least, we will describe how CERN used Netbench to perform large-scale tests with partial-mesh and full-mesh TCP flows [3], an essential validation point in its most recent call for tender for high-end routers.
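
    As an illustration of the mesh-style testing the abstract mentions, the sketch below builds full-mesh and partial-mesh flow schedules for a set of hosts and prints iperf3 commands as a stand-in traffic generator. The host names are hypothetical, and this is not the actual Netbench tooling, which the abstract does not describe.

        # Illustrative flow-schedule generator for mesh TCP tests; not Netbench itself.
        from itertools import permutations

        HOSTS = ["srv01", "srv02", "srv03", "srv04"]  # hypothetical test servers

        def full_mesh(hosts):
            """Every host sends to every other host: n*(n-1) directed flows."""
            return list(permutations(hosts, 2))

        def partial_mesh(hosts, stride=1):
            """Each host sends to the host `stride` positions ahead (a ring when stride=1)."""
            n = len(hosts)
            return [(hosts[i], hosts[(i + stride) % n]) for i in range(n)]

        for src, dst in full_mesh(HOSTS):
            # Competing flows through the device under test create the transient
            # congestion that exercises its buffering, as described above.
            print(f"ssh {src} iperf3 -c {dst} -t 60 -P 4")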

    Seeking an alternative to tape-based custodial storage

    In November 2018, the KISTI Tier-1 centre started a project to design, develop and deploy a disk-based custodial storage system with an error rate and reliability comparable to tape-based storage. The project has been conducted in collaboration between KISTI and CERN; in particular, the initial design was laid out in intensive discussions with CERN IT and ALICE. The system design of the disk-based custodial storage consists of high-density JBOD enclosures and erasure-coding data protection, implemented in EOS, the open-source storage management system developed at CERN. In order to balance system reliability, data security and I/O performance, we investigated the possible SAS connections of the JBOD enclosures to the front-end nodes managed by EOS, and the technology constraints of the interconnections in terms of throughput, to accommodate the large number of disks foreseen in the storage. The project will be completed and enter production before the start of LHC Run 3 in 2021. In this paper we present a detailed description of the initial system design, brief results from the testing of equipment for the procurement, the deployment of the system, and further plans for the project.
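
    The interconnect throughput constraint can be made concrete with a back-of-envelope comparison of aggregate disk bandwidth against the capacity of a SAS wide port. All figures in the sketch below are assumptions chosen for illustration, not the project's actual numbers.

        # Back-of-envelope check of the JBOD uplink constraint; all numbers assumed.
        DISKS_PER_JBOD = 84     # assumed high-density enclosure
        DISK_MBPS = 200         # assumed sustained throughput per disk, MB/s
        SAS_LANES = 4           # a SAS wide port aggregates 4 lanes
        SAS_LANE_GBPS = 12      # 12 Gbps per lane, ignoring encoding overhead

        disk_total = DISKS_PER_JBOD * DISK_MBPS / 1000   # GB/s offered by the disks
        link_total = SAS_LANES * SAS_LANE_GBPS / 8       # GB/s through one wide port

        print(f"disks ~{disk_total:.1f} GB/s vs one SAS wide port ~{link_total:.1f} GB/s")
        # When disk_total far exceeds link_total, the enclosure uplink, not the
        # disks, bounds throughput; hence the attention to SAS paths above.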

    A Disk-based Archival Storage System Using the EOS Erasure Coding Implementation for the ALICE Experiment at the CERN LHC

    The Korea Institute of Science and Technology Information (KISTI) is a Worldwide LHC Computing Grid (WLCG) Tier-1 center mandated to preserve raw data produced by A Large Ion Collider Experiment (ALICE) using the world’s largest particle accelerator, the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). The physical medium most widely used for long-term data preservation is tape, thanks to its reliability and its lowest price per capacity compared to other media such as optical disks, hard disks, and solid-state disks. However, the decreasing number of manufacturers of both tape drives and cartridges, and patent disputes among them, have escalated the market risk. As an alternative to the tape-based data preservation strategy, we proposed a disk-only erasure-coded archival storage system, Custodial Disk Storage (CDS), powered by Exascale Open Storage (EOS), an open-source storage management software developed by CERN. The CDS system consists of 18 high-density Just a Bunch Of Disks (JBOD) enclosures attached to 9 servers through 12 Gbps Serial Attached SCSI (SAS) Host Bus Adapter (HBA) interfaces via multiple paths for redundancy and multiplexing. For data protection, we introduced a Reed-Solomon (RS) (16, 4) Erasure Coding (EC) layout, with 12 data and 4 parity blocks respectively, which gives an annual data loss probability of 5×10⁻¹⁴. In this paper, we discuss the CDS system design based on JBOD products, its performance limitations, and the data protection strategy built on the EOS EC implementation. We present CDS operations for the ALICE experiment and long-term power consumption measurements.
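
    A rough sense of how the RS(12+4) layout reaches such a small loss probability comes from a toy binomial model: a stripe is lost only if more than four of its sixteen blocks fail before repair completes. The per-disk failure rate and repair window below are illustrative assumptions, not the paper's inputs.

        # Toy reliability model for the 12+4 erasure-coding layout; numbers assumed.
        from math import comb

        AFR = 0.01                     # assumed annual failure rate per disk
        REPAIR_DAYS = 3                # assumed window to rebuild a failed block
        p = AFR * REPAIR_DAYS / 365    # failure probability within one repair window

        N, PARITY = 16, 4
        stripe_loss = sum(comb(N, k) * p**k * (1 - p)**(N - k)
                          for k in range(PARITY + 1, N + 1))
        print(f"per-window stripe loss probability ~ {stripe_loss:.2e}")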

    Trends in computing technologies and markets: The HEPiX TechWatch WG

    Driven by the need to carefully plan and optimise the resources for the next data-taking periods of Big Science projects, such as CERN’s Large Hadron Collider and others, sites started a common activity, the HEPiX Technology Watch Working Group, tasked with tracking the evolution of technologies and markets of concern to the data centres. The talk will give an overview of general and semiconductor markets, server markets, CPUs and accelerators, memories, storage and networks; it will highlight important areas of uncertainty and risk.
