21 research outputs found

    CernVM Users Workshop

    IceCube is a cubic-kilometer neutrino detector located at the South Pole. CVMFS is a key component of IceCube's Distributed High Throughput Computing analytics workflow, sharing 500 GB of software across datacenters worldwide. Building the IceCube software suite across multiple platforms and deploying it into CVMFS has until recently been a manual, time-consuming task that does not fit well within an agile continuous-delivery framework. Within the last two years, a plethora of tooling around microservices has created an opportunity to upgrade the IceCube software build and deploy pipeline. We present a framework that uses Kubernetes to deploy Buildbot. The Buildbot pipeline is a set of pods (Docker containers) in the Kubernetes cluster that builds the IceCube software across multiple platforms, tests the new software for critical errors, syncs the software to a containerized CVMFS server, and finally executes a publish. The time from code commit to CVMFS publish has been greatly reduced, enabling nightly builds to be published to CVMFS.

    CVMFS: Stratum 0 in Kubernetes

    IceCube is a cubic-kilometer neutrino detector located at the South Pole. CVMFS is a key component of IceCube's Distributed High Throughput Computing analytics workflow, sharing 500 GB of software across datacenters worldwide. Building the IceCube software suite across multiple platforms and deploying it into CVMFS has until recently been a manual, time-consuming task that does not fit well within an agile continuous-delivery framework. Within the last two years, a plethora of tooling around microservices has created an opportunity to upgrade the IceCube software build and deploy pipeline. We present a framework that uses Kubernetes to deploy Buildbot. The Buildbot pipeline is a set of pods (Docker containers) in the Kubernetes cluster that builds the IceCube software across multiple platforms, tests the new software for critical errors, syncs the software to a containerized CVMFS server, and finally executes a publish. The time from code commit to CVMFS publish has been greatly reduced, enabling nightly builds to be published to CVMFS.
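    To make the pipeline shape concrete, the following is a minimal, illustrative fragment of a Buildbot master.cfg for a build/test/sync/publish sequence of this kind. The builder and worker names, repository URL, build scripts, publisher hostname, and CVMFS paths are assumptions for the sketch, not the actual IceCube configuration.

        # Illustrative Buildbot master.cfg fragment (not the actual IceCube setup).
        from buildbot.plugins import steps, util

        c = BuildmasterConfig = {}          # standard master.cfg convention
        c['builders'] = []

        factory = util.BuildFactory()
        # 1. check out the software suite (hypothetical repository)
        factory.addStep(steps.Git(repourl='https://github.com/WIPACrepo/example-software',
                                  mode='incremental'))
        # 2. build and test on this platform (hypothetical scripts)
        factory.addStep(steps.ShellCommand(name='build', command=['./build.sh']))
        factory.addStep(steps.ShellCommand(name='test', command=['./run_tests.sh']))
        # 3. open a CVMFS transaction on the containerized stratum 0, sync, publish
        factory.addStep(steps.ShellCommand(
            name='cvmfs transaction',
            command=['ssh', 'cvmfs-stratum0', 'cvmfs_server', 'transaction',
                     'icecube.opensciencegrid.org']))
        factory.addStep(steps.ShellCommand(
            name='sync to stratum 0',
            command=['rsync', '-a', 'build/',
                     'cvmfs-stratum0:/cvmfs/icecube.opensciencegrid.org/nightly/']))
        factory.addStep(steps.ShellCommand(
            name='cvmfs publish',
            command=['ssh', 'cvmfs-stratum0', 'cvmfs_server', 'publish',
                     'icecube.opensciencegrid.org']))

        c['builders'].append(util.BuilderConfig(name='build-el7',
                                                workernames=['k8s-pod-el7'],
                                                factory=factory))

    In this arrangement each platform would get its own builder and Kubernetes worker pod, so a single commit fans out to every supported platform before the sync and publish steps run.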

    IceCube File Catalog

    IceCube is a cubic-kilometer neutrino detector located at the South Pole. Metadata for files in IceCube has traditionally been handled on an application-by-application basis, with no user-facing access. There has been no unified view of data files, and users often query the filesystem to locate files. Recently, effort has been put into creating a unified view in a central metadata catalog. Opting for a simple solution, we created a user-facing REST API backed by a NoSQL database. All major data producers add their metadata to this central catalog. Schema generation is identified as an important aspect of multi-application metadata services.
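    A minimal sketch of how a data producer might register file metadata with a user-facing REST catalog like the one described, and how a user might query it instead of walking the filesystem. The endpoint URL, token handling, metadata fields, and query parameters are illustrative assumptions, not the actual File Catalog API.

        # Hypothetical client for a REST metadata catalog backed by a NoSQL store.
        import requests

        CATALOG_URL = 'https://file-catalog.example.org/api/files'   # hypothetical endpoint
        HEADERS = {'Authorization': 'Bearer REPLACE_WITH_TOKEN'}      # hypothetical auth scheme

        # Producer side: register one file's metadata in the central catalog.
        metadata = {
            'logical_name': '/data/exp/IceCube/2019/filtered/level2/example.i3.zst',  # hypothetical
            'file_size': 123456789,
            'checksum': {'sha512': 'deadbeef'},                                        # placeholder value
            'locations': [{'site': 'WIPAC', 'path': '/mnt/data/example.i3.zst'}],      # hypothetical
        }
        resp = requests.post(CATALOG_URL, json=metadata, headers=HEADERS, timeout=30)
        resp.raise_for_status()

        # User side: locate files at a given site through the same API.
        resp = requests.get(CATALOG_URL,
                            params={'query': '{"locations.site": "WIPAC"}', 'limit': 10},
                            headers=HEADERS, timeout=30)
        print(resp.json())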

    WIPACrepo/iceprod: v2.5.3

    Support for SuperComputing demo. This is the code that was used during the demo on 2019-11-16. Plenty of features and fixes along the way.

    Features:
    - 2eb6c19: use ceph s3 for logs
    - 838a212: site tracking when processing tasks
    - 30406b0: display task stats on website
    - 74a2af1: allow gzipped iceprod logs
    - 9d1bf56: parse config file before submission for non-pilot submitters

    Bugfixes:
    - b4803b5: run job completion on truncated datasets too
    - d4fd003: for manual running of jobs, register a pilot so it isn't reset
    - 38719f7: properly propagate cmd error in loader.sh
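    The "use ceph s3 for logs" item refers to storing task logs in S3-compatible object storage served by Ceph (RADOS Gateway). A generic sketch of that technique with boto3 follows; the endpoint, bucket, object key, and credentials are assumptions, not iceprod's actual code.

        # Generic illustration of pushing a task log to S3-compatible (Ceph RGW) storage.
        import boto3

        s3 = boto3.client(
            's3',
            endpoint_url='https://rgw.example.org',       # hypothetical Ceph RGW endpoint
            aws_access_key_id='ACCESS_KEY',               # placeholder credentials
            aws_secret_access_key='SECRET_KEY',
        )
        # local file, bucket, object key (all hypothetical)
        s3.upload_file('iceprod_task.log', 'iceprod-logs',
                       'dataset_1234/task_5678/iceprod_task.log')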

    WIPACrepo/iceprod: v2.5.1

    Cleanups from 2.5.0, with minor niceties. Also, CircleCI support and auto-building docs.

    Features:
    - 4d43aa2: make every ldap login a user by default
    - b702a97, 54e1250: task file API
    - acb3861: add new scheduled task for cleaning up bad pilots

    Bugfixes:
    - f3449fd: fix PYGLIDEIN_TIME_TO_LIVE not an int - use condor classads
    - 0cc7f23, bd00be6: fix k8s gpu hashes
    - b3dee01: fix task stdout/stderr collection

    WIPACrepo/iceprod: v2.5.4

    Several minor features for usability, and support for SuperComputing demo v2. This is the code that was used during the demo on 2020-02-04.

    Features:
    - c882ff8: Add a 'configs' module option, to write out a json config file into the module working directory
    - aa23eaa: Allow website access to past 10 logs instead of only one
    - 6a63061: Allow filtering and projection for /pilots, and use it in grid commands

    Bugfixes:
    - 26b4165: Fix stdout/stderr not being recorded
    - 4d86e21: No more duplicate pilot ids, so we don't delete a pilot every cycle

    WIPACrepo/iceprod: v2.5.6

    Release for new basic submit script.

    Features:
    - 90c0913: Rotate err and out files so they don't grow forever
    - #291: Basic submit script for simple input/output operation

    Bugfixes:
    - 525f039: Pass through temporary directory env variable
    - ad9992c: disallow nonsensical cpu auto-resize requests
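    The "Rotate err and out files" item is about keeping task stdout/stderr files from growing without bound. One common way to bound such files in Python, shown here only as an illustration of the technique and not as iceprod's actual implementation, is the standard library's RotatingFileHandler.

        # Generic illustration of bounded log files using the Python standard library.
        import logging
        from logging.handlers import RotatingFileHandler

        logger = logging.getLogger('task')
        # cap each file at ~50 MB and keep 3 rotated copies (values are illustrative)
        handler = RotatingFileHandler('task.out', maxBytes=50 * 1024 * 1024, backupCount=3)
        handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)

        logger.info('task output is rotated instead of growing forever')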

    OSG and GPUs: A tale of two use cases

    With the increasing power and decreasing cost of GPU-accelerated processors, a corresponding interest in their use in the scientific domain has grown. OSG users are no different, and they have shown interest in accessing GPU resources via their usual workload infrastructures. Grid sites that have these kinds of resources also want to make them available on the grid. In this talk, we discuss the software and infrastructure challenges and limitations of the OSG implementations that make GPUs widely accessible over the grid. Two use cases are considered. First, IceCube: a large VO with a well-curated software stack that takes advantage of GPUs with OpenCL. Second, a more general approach to supporting industry- and academia-maintained machine learning libraries such as TensorFlow and Keras on the grid using Singularity.
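    Both use cases reduce to the same practical question for a grid job: can it actually see a GPU? The probe below is a generic illustration of the two paths, assuming pyopencl and tensorflow are available in the job environment or Singularity image; it is not OSG or IceCube tooling.

        # Illustrative GPU probe for the two use cases described above.
        import pyopencl as cl
        import tensorflow as tf

        # Case 1: OpenCL-capable GPUs, as in the IceCube use case.
        for platform in cl.get_platforms():
            for device in platform.get_devices():
                if device.type & cl.device_type.GPU:
                    print('OpenCL GPU:', platform.name, '/', device.name)

        # Case 2: GPUs visible to TensorFlow/Keras inside a Singularity image,
        # e.g. one started with: singularity exec --nv tensorflow.sif python probe.py
        print('TensorFlow GPUs:', tf.config.list_physical_devices('GPU'))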