
    Implementing advanced data flow and storage management solutions within a multi-VO grid site

    No full text
    Driven by the evolution of the Large Hadron Collider (LHC) at CERN, the infrastructure of the Worldwide LHC Computing Grid is undergoing rapid change. The need to store and analyze experimental data at an ever-growing rate has led to major architecture and software improvements for optimizing data flow and access. This article presents, as a case study, the technological solutions recently implemented to increase the efficiency of data transfers and storage management at a Tier-2 grid site that provides resources and services to the ALICE, ATLAS and LHCb experiments at the LHC.

    HP-SEE User Forum 2012

    No full text
    This book is a collection of carefully reviewed papers presented at the HP-SEE User Forum, the meeting of the High-Performance Computing Infrastructure for South East Europe's (HP-SEE) research communities, held October 17-19, 2012, in Belgrade, Serbia. HP-SEE aims at supporting and integrating regional HPC infrastructures, implementing solutions for HPC in the region, and making HPC resources available to research communities in the SEE region, which work in a number of scientific fields with specific needs for massively parallel execution on powerful computing resources. HP-SEE brings together research communities and HPC operators from 14 different countries and enables them to share HPC facilities, software, tools, data and research results, thus fostering collaboration and strengthening regional and national human networks; the project specifically supports research groups in the areas of computational physics, computational chemistry and the life sciences. The contributions presented in this book are organized in four main sections: computational physics; computational chemistry; the life sciences; and scientific computing and HPC operations.

    Evaluation of multi-source downloads for FTS

    No full text
    Data transfer in the Grid at CERN (the European Organization for Nuclear Research) has seen constant improvement, both through optimization of the existing tools, GridFTP and GFAL2, and through the addition of new tools such as XRootd. Unfortunately, all of these have reached a throughput limit: they are constrained not by the network infrastructure, but by the fact that they use a single source per transfer, despite the existence of multiple replicas. In this paper we evaluate the effect of using multiple sources on throughput by comparing the download speed of the tools mentioned above with Aria2 and a development version of XRootd, both of which support multiple sources.
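    The throughput argument in the abstract above can be illustrated with a back-of-the-envelope model (all numbers and the function name are hypothetical, not taken from the paper): a single-source transfer is capped by the bandwidth of the one replica it reads from, while a multi-source client such as Aria2 can split a file across replicas and aggregate their bandwidth, up to the limit of the client's own network link.

    ```python
    def effective_throughput(replica_bw_mbps, client_cap_mbps, multi_source):
        """Toy model (illustrative only): a single-source transfer is bound
        by the best single replica; a multi-source transfer aggregates the
        bandwidth of all replicas, capped by the client's link capacity."""
        if multi_source:
            return min(sum(replica_bw_mbps), client_cap_mbps)
        return min(max(replica_bw_mbps), client_cap_mbps)

    # Three replicas offering 300, 400 and 500 Mbps; client link is 1 Gbps.
    replicas = [300, 400, 500]
    print(effective_throughput(replicas, 1000, multi_source=False))  # 500
    print(effective_throughput(replicas, 1000, multi_source=True))   # 1000
    ```

    In this sketch the single-source download tops out at the fastest replica (500 Mbps), while the multi-source download saturates the client link (1000 Mbps), which is the effect the paper sets out to measure with real tools.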