
    Distributed Production Environment for Physics Data Processing

    The mission of the Worldwide LHC Computing Grid (LCG) project is to build and maintain a data storage and analysis infrastructure for the entire high energy physics community that will use the LHC.

    CMS Software Distribution on the LCG and OSG Grids

    The efficient exploitation of the worldwide distributed storage and computing resources available on the grids requires a robust, transparent and fast deployment of experiment-specific software. The approach followed by the CMS experiment at CERN to enable Monte Carlo simulation, data analysis and software development in an international collaboration is presented. The current status and future improvement plans are described.
    Comment: 4 pages, 1 figure, LaTeX with hyperref
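
    As a rough illustration of the "transparent deployment" idea (not the actual CMS tooling; the software-area layout, marker-file convention and installer callable below are assumptions), a job wrapper can check whether the required release is already present in the site's shared software area and trigger an installation only when it is missing:

        # Hypothetical sketch: install a release on demand before a job uses it.
        # The marker-file convention and the `install` callable are illustrative only.
        from pathlib import Path

        def ensure_release(sw_area: str, release: str, install) -> Path:
            """Return the release directory, installing it first if it is absent."""
            release_dir = Path(sw_area) / release
            marker = release_dir / ".installed_ok"
            if not marker.exists():
                release_dir.mkdir(parents=True, exist_ok=True)
                install(release, release_dir)   # experiment-specific installer (stand-in)
                marker.touch()                  # mark the release as usable by jobs
            return release_dir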

    Scalable Database Access Technologies for ATLAS Distributed Computing

    ATLAS event data processing requires access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are crucial for the event data reconstruction processing steps and are often required for user analysis. A main focus of ATLAS database operations is the worldwide distribution of the Conditions DB data, which are necessary for every ATLAS data processing job. Since Conditions DB access is critical for operations with real data, we have developed a system in which a different technology can be used as a redundant backup. This redundant database operations infrastructure fully satisfies the requirements of ATLAS reprocessing, which has been proven on a scale of one billion database queries during two reprocessing campaigns of 0.5 PB of single-beam and cosmics data on the Grid. To collect experience and provide input for a best choice of technologies, several promising options for efficient database access in user analysis were evaluated successfully. We present ATLAS experience with scalable database access technologies and describe our approach to preventing database access bottlenecks in a Grid computing environment.
    Comment: 6 pages, 7 figures. To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072
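
    A minimal sketch of the redundant-access pattern described here (endpoint URLs and the connect callable are assumptions for illustration, not the ATLAS implementation): try the primary conditions-database technology first and fall back to the backup only if it is unreachable.

        # Failover sketch: return a connection from the first endpoint that answers.
        import logging
        from typing import Callable, Sequence

        def open_with_fallback(endpoints: Sequence[str],
                               connect: Callable[[str], object]):
            last_error = None
            for url in endpoints:
                try:
                    conn = connect(url)          # e.g. a DB-API or SQLAlchemy connect
                    logging.info("using conditions endpoint %s", url)
                    return conn
                except Exception as err:         # any failure: try the next (backup) endpoint
                    logging.warning("endpoint %s unavailable: %s", url, err)
                    last_error = err
            raise RuntimeError("no conditions endpoint reachable") from last_error

        # Usage with hypothetical endpoints:
        # conn = open_with_fallback(
        #     ["oracle://conddb.example.org/COND", "sqlite:///local_replica.db"],
        #     connect=my_connect)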

    ATLAS Data Challenge 1

    In 2002 the ATLAS experiment started a series of Data Challenges (DC) whose goals are the validation of the Computing Model, of the complete software suite and of the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a worldwide distributed activity. The first phase of DC1 was run during summer 2002 and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single-particle events were fully simulated. Over a period of about 40 calendar days, 71,000 CPU-days were used, producing 30 TB of data in about 35,000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~4,000 processors used in parallel). The basic elements of the ATLAS Monte Carlo production system are described. We also present how the software suite was validated and how the participating sites were certified. These productions were already partly performed using different flavours of Grid middleware at ~20 sites.
    Comment: 10 pages; 3 figures; CHEP03 Conference, San Diego; Reference MOCT00
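
    (For scale, the quoted figures imply an average of roughly 71,000 CPU-days / 40 days ≈ 1,800 CPUs in use during the first phase, and an average partition size of roughly 30 TB / 35,000 ≈ 0.9 GB.)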

    EU-IndiaGrid2 sustainable e-Infrastructures across Europe and India

    EU-IndiaGrid2 (Sustainable e-Infrastructures across Europe and India) is a project funded by the European Commission under the Research Infrastructure Programme of the Information Society Directorate General, with the specific aim of promoting international interoperation between European and Indian e-Infrastructures. 2010 was an eventful year for e-Infrastructures across Europe and India, with a number of important achievements. EU-IndiaGrid2, building on the achievements of the previous EU-IndiaGrid project and on the active role of its partners, was at the core of all these events, which contributed significantly to the project's progress towards its objectives. The present article reviews the most recent e-Infrastructure developments in India and their relationship with Europe and the Asia-Pacific area.

    The LHC Computing Grid (LCG)


    Four Decades of Computing in Subnuclear Physics - from Bubble Chamber to LHC

    This manuscript addresses selected aspects of computing for the reconstruction and simulation of particle interactions in subnuclear physics. Based on personal experience with experiments at DESY and at CERN, I cover the evolution of computing hardware and software from the era of track chambers, where interactions were recorded on photographic film, up to the LHC experiments with their millions of electronic channels.

    Running a Production Grid Site at the London e-Science Centre

    This paper describes how the London e-Science Centre cluster MARS, a production 400+ Opteron CPU cluster, was integrated into the production Large Hadron Collider Computing Grid. It describes the practical issues that we encountered when deploying and maintaining this system and details the techniques that were applied to resolve them. Finally, we provide a set of recommendations, based on our experiences, for grid software development in general that we believe would make the technology more accessible.

    CMS software deployment on OSG

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, which mainly target deployment on the OSG, provide instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid Security Infrastructure login mechanism. We have performed over 500 installations and found the tools reliable and adaptable to problems arising from changes in the Grid computing environment and in the software releases. We present the design of the tools, statistics gathered during their operation, and our experience with CMS software deployment in the OSG Grid computing environment.
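
    The "corrective resubmission" behaviour can be pictured with the following sketch (retry policy, function names and back-off are assumptions, not the actual CMS tools): a failed installation job is resubmitted until the installation verifies or the retry budget is exhausted.

        # Hypothetical resubmission loop for an installation job.
        import time

        MAX_ATTEMPTS = 3

        def deploy_release(site: str, release: str,
                           submit_install, verify_install) -> bool:
            """Submit an installation job to `site`; resubmit on failure, verify on success."""
            for attempt in range(1, MAX_ATTEMPTS + 1):
                job_ok = submit_install(site, release)        # grid job submission (stand-in)
                if job_ok and verify_install(site, release):  # e.g. check the published release
                    return True
                time.sleep(60 * attempt)                      # simple back-off before resubmission
            return False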