
    DataTAG Contributing to LCG-0 Pilot Startup

    The DataTAG project has contributed to the creation of the middleware distribution constituting the base of the LCG-0 pilot. This distribution has demonstrated the possibility of building an EDG release based on iVDGL/VDT, integrating the GLUE schema and early components of the EDG middleware.

    The Platform-as-a-Service paradigm meets ATLAS: developing an automated analysis workflow on the newly established INFN Cloud

    The Worldwide LHC Computing Grid (WLCG) is a large-scale collaboration which gathers the computing resources of around 170 computing centres from more than 40 countries. The grid paradigm, unique to the realm of high energy physics, has successfully supported a broad variety of scientific achievements. To fulfil the requirements of new applications and to improve the long-term sustainability of the grid middleware, more versatile solutions are being investigated. Cloud computing is becoming increasingly popular among open-source and commercial players, and the HEP community has also recognized the benefits of integrating cloud technologies into the legacy grid-based workflows. Since March 2021, INFN has entered the field of cloud computing by establishing the INFN Cloud infrastructure. Large data centers of the INFN National Computing Center, connected to a nation-wide backbone maintained by the GARR Consortium, are gathered into a redundant and federated infrastructure. This cloud service supports scientific computing, software development and training, and serves as an extension of local computing and storage resources. Among the available services, INFN Cloud administrators can create virtual machines, Docker-based deployments or Kubernetes clusters. These options allow the creation of customized environments, both for individual users and for scientific collaborations. This study investigates the feasibility of an automated, cloud-based data analysis workflow for the ATLAS experiment using INFN Cloud resources. The concept is designed as a Platform-as-a-Service (PaaS) solution, based on a CentOS 7 Docker image. The customized image is responsible for the provisioning of CERN's CVMFS and EOS shared filesystems, from which a standardized ATLAS environment can be loaded. The end user's only responsibility is to provide a working application capable of retrieving and analysing data, and of exporting the results to persistent storage. The analysis code can be sourced either from remote git repositories or from a local Docker bind mount. As a final step in the automation workflow, a Kubernetes cluster will be configured within the INFN Cloud infrastructure to allow dynamic resource allocation, and the interoperability with batch systems such as HTCondor will be investigated.
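    As a minimal sketch of the container-launch step described in this abstract (assuming Docker is available on the host), the following Python snippet starts a container from the customized image, tells it via an environment variable where to fetch the user's analysis code, and bind-mounts a persistent output area. The image name, repository URL, environment variable and output path are hypothetical placeholders, since the abstract does not specify them.

        #!/usr/bin/env python3
        """Illustrative sketch only: launch an analysis container that sources its
        code from a git repository and writes results to a bind-mounted directory."""
        import subprocess

        IMAGE = "example.org/atlas-paas/analysis-base:centos7"   # hypothetical image name
        REPO = "https://github.com/example/atlas-analysis.git"   # hypothetical analysis repository
        RESULTS_DIR = "/data/results"                             # persistent storage on the host


        def run_analysis(image: str, repo: str, results_dir: str) -> int:
            """Run the container; CVMFS/EOS are assumed to be provisioned by the
            image's own startup scripts, as described in the abstract."""
            cmd = [
                "docker", "run", "--rm",
                "-v", f"{results_dir}:/results",        # expose the persistent output area
                "-e", f"ANALYSIS_REPO={repo}",          # hypothetical variable read by the entry point
                image,
            ]
            return subprocess.run(cmd, check=True).returncode


        if __name__ == "__main__":
            run_analysis(IMAGE, REPO, RESULTS_DIR)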

    The Platform-as-a-Service paradigm meets ATLAS: developing an automated analysis workflow on the newly established INFN CLOUD

    The Worldwide LHC Computing Grid (WLCG) is a large-scale collaboration which gathers computing resources from more than 170 computing centers worldwide. To fulfill the requirements of new applications and to improve the long-term sustainability of the grid middleware, newly available solutions are being investigated. Like open-source and commercial players, the HEP community has also recognized the benefits of integrating cloud technologies into the legacy, grid-based workflows. Since March 2021, INFN has entered the field of cloud computing by establishing the INFN CLOUD infrastructure. This platform supports scientific computing, software development and training, and serves as an extension of local resources. Among the available services, virtual machines, Docker-based deployments, HTCondor (deployed on Kubernetes) or general-purpose Kubernetes clusters can be deployed. An ongoing R&D activity within the ATLAS experiment has the long-term objective of defining an operation model which is efficient, versatile and scalable in terms of costs and computing power. As part of this larger effort, this study investigates the feasibility of an automated, cloud-based data analysis workflow for the ATLAS experiment using INFN CLOUD resources. The scope of this research has been defined in a new INFN R&D project: the INfn Cloud based Atlas aNalysis facility, or INCANT. The long-term objective of INCANT is to provide a cloud-based system to support data preparation and data analysis. As a first project milestone, a proof of concept has been developed. A Kubernetes cluster equipped with 7 nodes (28 vCPUs, 56 GB of RAM and 700 GB of non-shared block storage in total) hosts an HTCondor cluster, federated with INFN's IAM authentication platform, running in specialized Kubernetes pods. HTCondor worker nodes have direct access to CVMFS and EOS (via XRootD) for provisioning software and data, respectively. They are also connected to an NFS shared drive which can optionally be backed by an S3-compatible 2 TB storage. Jobs are submitted to the HTCondor cluster from a satellite, Dockerized submit node, which is also federated with INFN's IAM and connected to the same data and software resources. This proof of concept is being tested with actual analysis workflows.
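    As an illustration of the submission step described above, the following is a minimal sketch using the HTCondor Python bindings (the htcondor package), as they could be used from the Dockerized submit node. The executable name, input URL and resource requests are hypothetical assumptions, not the actual INCANT job description.

        #!/usr/bin/env python3
        """Illustrative sketch only: submit one analysis job to the local schedd."""
        import htcondor

        # Describe a job that loads its software from CVMFS and reads data via XRootD/EOS.
        job = htcondor.Submit({
            "executable": "run_analysis.sh",                              # hypothetical wrapper script
            "arguments": "root://eos.example.org//path/to/dataset.root",  # hypothetical input
            "request_cpus": "4",
            "request_memory": "8GB",
            "output": "analysis.out",
            "error": "analysis.err",
            "log": "analysis.log",
        })

        # Contact the schedd visible from the submit node and queue the job.
        schedd = htcondor.Schedd()
        result = schedd.submit(job)
        print(f"Submitted job cluster {result.cluster()}")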

    The WorldGrid transatlantic testbed: a successful example of Grid interoperability across EU and U.S. domains

    The European DataTAG project has taken a major step towards making the concept of a worldwide computing Grid a reality. In collaboration with the companion U.S. project iVDGL, DataTAG has realized an intercontinental testbed spanning Europe and the U.S., integrating architecturally different Grid implementations based on the Globus toolkit. The WorldGrid testbed was successfully demonstrated at SuperComputing 2002 and IST2002, where real HEP application jobs were transparently submitted from the U.S. and Europe using native mechanisms and run wherever resources were available, independently of their location. In this paper we describe the architecture of the WorldGrid testbed, the problems encountered and the solutions adopted in realizing such a testbed. With our work we present an important step towards interoperability of Grid middleware developed and deployed in Europe and the U.S. Some of the solutions developed in WorldGrid will be adopted by the LHC Computing Grid first service. To the best of our knowledge, this is the first large-scale testbed that combines middleware components and makes them work together.

    Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing of computing, storage and networking resources among different research areas (the largest being those composing the Milan WLCG Tier-2 centre, tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options is available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit well with the objectives listed in the European Horizon 2020 framework for research and development.
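    As a rough illustration of the monitoring role of the global master described above, the following sketch uses the HTCondor Python bindings to query each pool's collector and count execute slots per state. The collector host names are hypothetical, since the abstract does not give the actual Milan configuration.

        #!/usr/bin/env python3
        """Illustrative sketch only: summarise slot states across independent pools."""
        import htcondor

        POOLS = ["tier2-pool.example.it", "theory-pool.example.it"]  # hypothetical collector hosts


        def summarise_pool(collector_host: str) -> dict:
            """Return a {state: count} summary of the slots advertised in one pool."""
            collector = htcondor.Collector(collector_host)
            slots = collector.query(htcondor.AdTypes.Startd, projection=["Name", "State"])
            summary: dict = {}
            for ad in slots:
                state = ad.get("State", "Unknown")
                summary[state] = summary.get(state, 0) + 1
            return summary


        if __name__ == "__main__":
            for host in POOLS:
                print(host, summarise_pool(host))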

    apel/apel: APEL 1.10.0-0.1.rc1

    What's Changed
    - Update to use except..as from commas and fix print by @tofu-rocketry in https://github.com/apel/apel/pull/316
    - Refactor factory by @tofu-rocketry in https://github.com/apel/apel/pull/317
    - Refactor records by @tofu-rocketry in https://github.com/apel/apel/pull/318
    - Update pre-commit config by @tofu-rocketry in https://github.com/apel/apel/pull/329
    - Dbunloader refactor by @tofu-rocketry in https://github.com/apel/apel/pull/332
    - Fix mysql tests by @tofu-rocketry in https://github.com/apel/apel/pull/333
    - Verbosity fix by @RedProkofiev in https://github.com/apel/apel/pull/334
    - Refactor load_from_msg by @tofu-rocketry in https://github.com/apel/apel/pull/335
    - Fix indentation in norm sum test so all cases run by @tofu-rocketry in https://github.com/apel/apel/pull/336

    Changes to CI and GitHub Actions
    - Bump actions/checkout from 3 to 4 by @dependabot in https://github.com/apel/apel/pull/320
    - Bump actions/upload-artifact from 3.1.2 to 3.1.3 by @dependabot in https://github.com/apel/apel/pull/321

    New Contributors
    - @RedProkofiev made their first contribution in https://github.com/apel/apel/pull/334

    Full Changelog: https://github.com/apel/apel/compare/1.9.2-1...1.10.0-0.1.rc1

    Overview of the contributions of the LHC experiments in INFN GRID for bringing the GRID to production quality

    The Italian groups participating in the LHC experiments with INFN funding have made very substantial contributions to bringing the European Grid Infrastructure to the high level of reliability and efficiency that has finally been reached. This research and development work was performed by the members of the experiments' computing groups, who coordinated themselves within the INFN GRID project. In this paper we present an overview of the Data Challenges the experiments have performed on the Grid Infrastructure, with increasing complexity and involvement of sites and resources, and highlight the results achieved in steering the grid middleware and operations to real maturity for production use. The crucial role played by INFN GRID members in the experiments is also illustrated, together with the working of the bodies that have allowed feedback from the computing activity of the LHC experiments to be efficiently incorporated into the grid infrastructure, and the main steps and results thus achieved.