
    CASTOR status and evolution

    In January 1999, CERN began to develop CASTOR (the "CERN Advanced STORage manager"). This Hierarchical Storage Manager, targeted at HEP applications, has been in full production at CERN since May 2001. It now holds more than two petabytes of data in roughly 9 million files. In 2002, 350 terabytes of data were stored for COMPASS at 45 MB/s, and a Data Challenge run for ALICE in preparation for the LHC startup in 2007 sustained a data transfer to tape of 300 MB/s for one week (180 TB). The major functionality improvements were support for files larger than 2 GB (in collaboration with IN2P3) and the development of Grid interfaces to CASTOR: GridFTP and SRM ("Storage Resource Manager"). An ongoing effort is under way to copy the existing data from obsolete media such as 9940A tapes to more cost-effective offerings. CASTOR has also been deployed at several HEP sites with little effort. In 2003, we plan to continue working on Grid interfaces and to improve performance not only for Central Data Recording but also for Data Analysis applications, where thousands of processes may access the same hot data. This could imply the selection of another filesystem or the use of replication (hardware or software).
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003. 2 pages, PDF. PSN TUDT00
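    As a quick sanity check on the quoted figures (a back-of-envelope sketch, not CASTOR code), the sustained tape rate and the one-week duration are indeed consistent with the 180 TB total:

        # Back-of-envelope check: sustained throughput over one week.
        SECONDS_PER_WEEK = 7 * 24 * 3600                     # 604,800 s
        rate_mb_per_s = 300                                  # quoted sustained rate to tape
        total_tb = rate_mb_per_s * SECONDS_PER_WEEK / 1e6    # MB -> TB (decimal units)
        print(f"{total_tb:.0f} TB")                          # ~181 TB, matching the quoted 180 TB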

    CMS Software Distribution on the LCG and OSG Grids

    The efficient exploitation of the worldwide distributed storage and computing resources available on the grids requires robust, transparent and fast deployment of experiment-specific software. The approach followed by the CMS experiment at CERN to enable Monte Carlo simulation, data analysis and software development in an international collaboration is presented. The current status and future improvement plans are described.
    Comment: 4 pages, 1 figure, LaTeX with hyperref

    LCG MCDB -- a Knowledgebase of Monte Carlo Simulated Events

    In this paper we report on the LCG Monte Carlo Data Base (MCDB) and the software developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, modern Monte Carlo simulation of physical processes requires expert knowledge of Monte Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make such sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.
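    The abstract does not spell out how a sample is described in MCDB; purely as an illustration of the kind of metadata such a knowledgebase needs to carry (all field names are hypothetical assumptions, not the actual MCDB schema), a catalogue record might look like this:

        # Hypothetical record for a simulated-event knowledgebase; field names
        # are illustrative assumptions, not the real LCG MCDB schema.
        from dataclasses import dataclass

        @dataclass
        class EventSampleRecord:
            sample_id: int          # unique identifier within the knowledgebase
            generator: str          # MC generator used by the expert, e.g. "PYTHIA"
            process: str            # physics process that was simulated
            n_events: int           # number of events in the sample
            cpu_hours: float        # CPU cost of producing the sample
            doc_url: str            # expert-written documentation for the sample
            storage_path: str       # where the event file itself is stored

        # Example (invented values, for illustration only):
        sample = EventSampleRecord(42, "PYTHIA", "pp -> Z + jets", 1_000_000,
                                   2500.0, "https://example.org/mcdb/42",
                                   "/store/mcdb/42.lhe")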

    Massive data processing for the ATLAS combined test beam


    The COMPASS Experiment at CERN

    The COMPASS experiment makes use of the CERN SPS high-intensity muon and hadron beams for the investigation of the nucleon spin structure and the spectroscopy of hadrons. One or more outgoing particles are detected in coincidence with the incoming muon or hadron. A large polarized target inside a superconducting solenoid is used for the measurements with the muon beam. Outgoing particles are detected by a two-stage spectrometer covering large angles and a large momentum range. The setup is built from several types of tracking detectors, chosen according to the expected incident rate, the required space resolution and the solid angle to be covered. Particle identification is achieved using a RICH counter and both hadron and electromagnetic calorimeters. The setup has been successfully operated from 2002 onwards using a muon beam. Data with a hadron beam were also collected in 2004. This article describes the main features and performance of the spectrometer in 2004; a short summary of the 2006 upgrade is also given.
    Comment: 84 pages, 74 figures

    A Grid architectural approach applied for backward compatibility to a production system for events simulation.

    The distributed-systems paradigm has gained in popularity over the last 15 years, thanks in part to the broad diffusion of distributed frameworks proposed for the Internet platform. In the late '90s a new concept started to play a leading role in the field of distributed computing: the Grid. This thesis presents a study of the integration between the framework of BaBar, an experiment in the field of High Energy Physics, and a grid system like the one implemented by the Italian National Institute for Nuclear Physics (INFN), the INFNGrid project, which provides support for several research domains. The main goal was to adapt an already well-established system, the one implemented in the BaBar pipeline and based on local centres not interconnected with one another, to a technology that was not available when the experiment's framework was designed. Although this new approach concerned only one aspect of the experiment, the production of simulated events using Monte Carlo methods, the effort described here is an example of how an older experiment can bridge the gap to Grid computing, even by adopting solutions designed for more recent projects. The complete evolution of this integration is explained from the earliest stages to the current development, presenting results that are comparable to the production rates achieved with the conventional BaBar approach, in order to examine the potential benefits and drawbacks in a concrete case study.

    The COMPASS Setup for Physics with Hadron Beams

    The main characteristics of the COMPASS experimental setup for physics with hadron beams are described. This setup was designed to perform exclusive measurements of processes with several charged and/or neutral particles in the final state. Making use of a large part of the apparatus previously built for spin structure studies with a muon beam, it also features a new target system as well as new or upgraded detectors. The hadron setup is able to operate at the high incident hadron flux available at CERN. It is characterised by large angular and momentum coverage, large and nearly flat acceptance, and good two- and three-particle mass resolution. In 2008 and 2009 it was successfully used with positive and negative hadron beams and with liquid hydrogen and solid nuclear targets. This paper describes the new and upgraded detectors and auxiliary equipment, outlines the reconstruction procedures used, and summarises the general performance of the setup.
    Peer Reviewed