48 research outputs found

    Apptainer Without Setuid

    Apptainer (formerly known as Singularity) has, since its beginning, implemented many of its container features with the assistance of a setuid-root program. It still supports that mode, but as of version 1.1.0 it no longer uses setuid by default. This is feasible because it can now mount squashfs filesystems, ext3 filesystems, and overlay filesystems using unprivileged user namespaces and FUSE. It also now enables unprivileged users to build containers, without requiring system administrators to configure /etc/subuid and /etc/subgid as other "rootless" container systems do. As a result, all of the unprivileged functions can be used nested inside another container, even if the container runtime prevents any elevated privileges. As of version 1.2.0, Apptainer also supports completely unprivileged encryption of Singularity Image Format (SIF) container files. Performance on a particularly challenging HEP benchmark using the FUSE-based mounts, both with and without encryption, is essentially identical to that of the previous methods, which required elevated privileges to use the Linux kernel-based counterparts.
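
    As an illustration only (not code from the paper), a minimal Python sketch of the unprivileged workflow described above, driving the Apptainer command line via subprocess to build a SIF image and run a command in it; the definition-file and image names are placeholders.

        # Minimal sketch: build and run a SIF container as an unprivileged user.
        # Paths ("recipe.def", "image.sif") and the command run inside the
        # container are illustrative placeholders, not taken from the paper.
        import subprocess

        def build_and_run(definition="recipe.def", image="image.sif"):
            # Build a SIF container without a setuid helper (the default
            # non-setuid mode in Apptainer >= 1.1.0, per the abstract).
            subprocess.run(["apptainer", "build", image, definition], check=True)
            # Execute a command inside the container; in non-setuid mode the
            # SIF is mounted through FUSE in an unprivileged user namespace.
            result = subprocess.run(
                ["apptainer", "exec", image, "echo", "hello from the container"],
                capture_output=True, text=True, check=True,
            )
            return result.stdout

        if __name__ == "__main__":
            print(build_and_run())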

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployable for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies of executing workflows both in the cloud and on dedicated resources. Comment: 15 pages, 9 figures.
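
    As an illustration of provisioning the kind of cloud virtual machines mentioned above (not the HEPCloud provisioning code), a hedged Python sketch that requests EC2 instances with boto3; the AMI ID, instance type, and counts are placeholders.

        # Illustrative sketch: request worker-node virtual machines from the
        # EC2 API using boto3. The AMI ID, instance type, region, and counts
        # are placeholders, not values from the investigation.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder worker-node image
            InstanceType="m4.xlarge",          # placeholder instance type
            MinCount=1,
            MaxCount=10,                       # request up to 10 VMs
        )

        for instance in response["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])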

    Using Kerberized Lustre Over the WAN for High Energy Physics Data

    This paper reports the design and implementation of a secure, wide-area-network, distributed filesystem by the ExTENCI project (Extending Science Through Enhanced National Cyber Infrastructure), based on Lustre. The filesystem is used for remote access to analysis data from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC), and from the Lattice Quantum ChromoDynamics (LQCD) project. Security is provided by Kerberos and reinforced with additional fine-grained control using Lustre ACLs and quotas. We show the impact of using Kerberized Lustre on the I/O rates of CMS and LQCD applications on client nodes, both real and virtual. Preconfigured images of Lustre virtual clients containing the complete software stack ease the difficulty of managing these systems.
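
    As a rough illustration of the kind of I/O-rate measurement discussed above (not the benchmark used in the paper), a Python sketch that times sequential writes and reads on a client mount point; the mount-point path and transfer sizes are placeholders.

        # Minimal sketch: time sequential write/read throughput on a mounted
        # filesystem, e.g. a Kerberized Lustre client mount. The mount point
        # and transfer size are placeholders, not values from the paper.
        import os
        import time

        MOUNT_POINT = "/mnt/lustre"          # placeholder client mount point
        TEST_FILE = os.path.join(MOUNT_POINT, "io_rate_test.dat")
        BLOCK = b"\0" * (4 * 1024 * 1024)    # 4 MiB per write
        N_BLOCKS = 256                       # 1 GiB total

        def write_rate():
            start = time.monotonic()
            with open(TEST_FILE, "wb") as f:
                for _ in range(N_BLOCKS):
                    f.write(BLOCK)
                f.flush()
                os.fsync(f.fileno())         # include the flush to the server
            return len(BLOCK) * N_BLOCKS / (time.monotonic() - start)

        def read_rate():
            start = time.monotonic()
            with open(TEST_FILE, "rb") as f:
                while f.read(len(BLOCK)):
                    pass
            return len(BLOCK) * N_BLOCKS / (time.monotonic() - start)

        if __name__ == "__main__":
            print(f"write: {write_rate() / 1e6:.1f} MB/s")
            print(f"read:  {read_rate() / 1e6:.1f} MB/s")
            os.remove(TEST_FILE)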

    2016 Research & Innovation Day Program

    A one-day showcase of applied research, social innovation, scholarship projects and activities.

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of the software to acquire, manage, process, and analyse the sheer volume of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade. Peer reviewed.

    CernVM Users Workshop

    25th International Conference on Computing in High Energy & Nuclear Physics

    The WLCG is modernizing its security infrastructure, replacing X.509 client authentication with the newer industry standard of JSON Web Tokens (JWTs) obtained through the OpenID Connect (OIDC) protocol. There is a wide variety of software available that uses these standards, but most of it is for Web browser-based applications and does not adapt well to the command-line-based software used heavily in High Throughput Computing (HTC). OIDC command-line client software did exist, but it did not meet our requirements for security and convenience. This paper discusses a command-line solution we have built on the popular existing secrets-management software from HashiCorp called Vault. We made a package called htvault-config to easily configure a Vault service and another called htgettoken to be the Vault client. In addition, we have integrated use of the tools into the HTCondor workload management system, although they also work well independently of HTCondor. All of the software is open source, under active development, and ready for use.
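
    As an illustration of the client side of this token flow (not code from htgettoken itself), a Python sketch that reads a bearer token from a file and presents it on an HTTPS request; the token-file locations follow the common WLCG discovery convention and, like the endpoint URL, should be treated as assumptions.

        # Minimal sketch: consume a bearer token that has already been obtained
        # (for example with htgettoken) and stored in a file. The discovery
        # locations below are an assumption based on the usual WLCG convention,
        # and the storage URL is a placeholder.
        import os
        import requests

        def find_token():
            path = os.environ.get("BEARER_TOKEN_FILE")
            if not path:
                runtime_dir = os.environ.get("XDG_RUNTIME_DIR", "/tmp")
                path = os.path.join(runtime_dir, f"bt_u{os.getuid()}")
            with open(path) as f:
                return f.read().strip()

        token = find_token()
        resp = requests.get(
            "https://storage.example.org/data/file.root",  # placeholder endpoint
            headers={"Authorization": f"Bearer {token}"},
        )
        resp.raise_for_status()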

    Comparison of the Frontier Distributed Database Caching System to NoSQL

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but adds high scalability and the ability to be distributed over a wide area for an important subset of applications. This paper compares the major characteristics of the two approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
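
    As an illustration of the two access styles being compared (the schema, names, and connection details are hypothetical, not taken from the paper), a Python sketch that performs the same conditions lookup first against a relational table and then against a MongoDB collection.

        # Illustrative comparison of the two access patterns: a relational
        # query of the kind Frontier caches and distributes over HTTP, versus
        # a NoSQL document lookup. Schema and connection details are made up.
        import sqlite3
        from pymongo import MongoClient

        # Relational style: a normalized table keyed by tag and run number.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE conditions (tag TEXT, run INTEGER, payload BLOB)")
        conn.execute("INSERT INTO conditions VALUES ('alignment_v1', 100000, x'00')")
        row = conn.execute(
            "SELECT payload FROM conditions WHERE tag = ? AND run <= ? "
            "ORDER BY run DESC LIMIT 1",
            ("alignment_v1", 123456),
        ).fetchone()

        # Document style: the equivalent lookup against a MongoDB collection
        # (requires a running MongoDB server; the URI is a placeholder).
        client = MongoClient("mongodb://localhost:27017")
        doc = client.conditions_db.conditions.find_one(
            {"tag": "alignment_v1", "run": {"$lte": 123456}},
            sort=[("run", -1)],
        )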