
    A Subset of the CERN Virtual Machine File System: Fast Delivering of Complex Software Stacks for Supercomputing Resources

    Delivering a reproducible environment along with complex and up-to-date software stacks on thousands of distributed and heterogeneous worker nodes is a critical task. The CernVM File System (CVMFS) has been designed to help various communities deploy software on worldwide distributed computing infrastructures by decoupling the software from the operating system. However, installing this file system requires collaboration with the system administrators of the remote resources, as well as HTTP connectivity to fetch dependencies from external sources. Supercomputers, which offer tremendous computing power, generally have more restrictive policies than grid sites and do not easily provide the conditions required to exploit CVMFS. Different solutions have been developed to tackle the issue, but they are often specific to a single scientific community and do not address the problem as a whole. In this paper, we provide a generic utility to assist any community in installing complex software dependencies on supercomputers with no external connectivity. The approach consists of capturing the dependencies of the applications of interest, building a subset containing those dependencies, testing it in a given environment, and deploying it to a remote computing resource. We demonstrate this approach on a real use case by exporting Gauss, a Monte Carlo simulation program from the LHCb experiment, to Mare Nostrum, one of the most powerful supercomputers in the world. We provide steps to encapsulate the minimum required files and deliver a light and easy-to-update subset of CVMFS: 12.4 Gigabytes instead of 5.2 Terabytes for the whole LHCb repository.
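
    A minimal sketch of the capture-and-subset idea described above (not the paper's actual tooling): it assumes a trace file, here called accessed_files.txt (a hypothetical name), that lists absolute paths under the mounted repository, one per line, as could be collected with a system-call tracer while running the application.

    # build_subset.py - copy only the files an application actually touched
    # from a full /cvmfs mount into a small tree that can be shipped offline.
    # Paths and file names below are illustrative assumptions, not the paper's tool.
    import os
    import shutil

    SOURCE_ROOT = "/cvmfs/lhcb.cern.ch"            # full repository mount (assumed)
    SUBSET_ROOT = "./cvmfs-subset/lhcb.cern.ch"    # trimmed tree to deploy

    def build_subset(trace_file: str) -> None:
        with open(trace_file) as fh:
            for line in fh:
                path = line.strip()
                # keep only regular files that really live under the repository
                if not path.startswith(SOURCE_ROOT + "/") or not os.path.isfile(path):
                    continue
                relative = os.path.relpath(path, SOURCE_ROOT)
                target = os.path.join(SUBSET_ROOT, relative)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.copy2(path, target)         # preserve metadata where possible

    if __name__ == "__main__":
        build_subset("accessed_files.txt")

    The resulting directory can then be archived, tested in a clean environment, and transferred to the HPC site, mirroring the capture, build, test and deploy steps outlined in the abstract.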

    Extending the distributed computing infrastructure of the CMS experiment with HPC resources

    Particle accelerators are an important tool for studying the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments. Recently the LHC experiments were encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware used, HPC installations can differ greatly in their setup. In order to integrate HPC resources into the highly automated processing setups of the CMS experiment, a number of challenges need to be addressed. Processing requires access to primary data and metadata as well as to the software. At Grid sites all of this is provided through a number of services run by each center. At HPC sites, however, many of these capabilities cannot be easily provided and have to be enabled in user space or by other means. HPC centers also often restrict network access to remote services, which is a further severe limitation. The paper discusses a number of solutions and recent experiences of the CMS experiment in including HPC resources in processing campaigns.

    Deploying a Top-100 Supercomputer for Large Parallel Workloads: the Niagara Supercomputer

    Niagara is currently the fastest supercomputer accessible to academics in Canada. It was deployed at the beginning of 2018 and has been serving the research community ever since. This homogeneous 60,000-core cluster, owned by the University of Toronto and operated by SciNet, was intended to enable large parallel jobs and has a measured performance of 3.02 petaflops, debuting at #53 on the June 2018 TOP500 list. It was designed to optimize the throughput of a range of scientific codes running at scale, energy efficiency, and network and storage performance and capacity. It replaced two systems that SciNet had operated for over 8 years, the Tightly Coupled System (TCS) and the General Purpose Cluster (GPC). In this paper we describe the transition from these two systems, the procurement and deployment processes, and the unique features that make Niagara a one-of-a-kind machine in Canada. (PEARC'19: "Practice and Experience in Advanced Research Computing", July 28-August 1, 2019, Chicago, IL, US)

    Proceedings of the 5th bwHPC Symposium

    In modern science, the demand for more powerful and integrated research infrastructures is growing constantly to address computational challenges in data analysis, modeling and simulation. The bwHPC initiative, founded by the Ministry of Science, Research and the Arts and the universities in Baden-Württemberg, is a state-wide federated approach aimed at assisting scientists in mastering these challenges. At the 5th bwHPC Symposium in September 2018, scientific users, technical operators and government representatives came together for two days at the University of Freiburg. The symposium provided an opportunity to present scientific results obtained with the help of bwHPC resources. It also served as a platform for discussing and exchanging ideas about the use of these large scientific infrastructures and their further development.

    DUNE Offline Computing Conceptual Design Report

    This document describes the conceptual design for the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE). The goals of the experiment include 1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, 2) studying astrophysical neutrino sources and rare processes, and 3) understanding the physics of neutrino interactions in matter. We describe the development of the computing infrastructure needed to achieve the physics goals of the experiment by storing, cataloging, reconstructing, simulating, and analyzing approximately 30 PB of data per year from DUNE and its prototypes. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions and advanced algorithms as HEP computing evolves. We describe the physics objectives, organization, use cases, and proposed technical solutions.