51 research outputs found

    LHCb inner tracker: Technical Design Report

    LHCb RICH: Technical Design Report

    LHCb muon system: Technical Design Report

    LHCb calorimeters: Technical Design Report

    LHCb magnet: Technical Design Report

    CMS Physics Technical Design Report: Addendum on High Density QCD with Heavy Ions

    Software Agents in Data and Workflow Management

    CMS currently uses a number of tools to transfer data which, taken together, form the basis of a heterogeneous datagrid. The range of tools used, and the directed, rather than optimized, nature of CMS' recent large-scale data challenge, required the creation of a simple infrastructure that allowed a range of tools to operate in a complementary way. The system created comprises a hierarchy of simple processes (named agents) that propagate files through a number of transfer states. File locations and some application metadata were stored in POOL file catalogues, with LCG LRC or MySQL back-ends. Agents were assigned limited responsibilities, and were restricted to communicating state in a well-defined, indirect fashion through a central transfer management database. In this way, the task of distributing data was easily divided between different groups for implementation. The prototype system was developed rapidly, and achieved the required sustained transfer rate of ~10 MBps, with O(10^6) files distributed to 6 sites from CERN. Experience with the system during the data challenge raised issues with the underlying technology (MSS write/read, stability of the LRC, maintenance of file catalogues, synchronization of filespaces), all of which were successfully identified and handled. The development of this prototype infrastructure allows us to plan the evolution of the backbone CMS data distribution from a simple hierarchy to a more autonomous, scalable model drawing on emerging agent and grid technology.
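
    The sketch below illustrates the agent pattern described in this abstract: a simple process that polls a central transfer-management database for files in one state, performs its single limited task, and advances them to the next state, communicating only through the database rather than with other agents. The table name, state names, and SQLite back-end are hypothetical simplifications for illustration; the system described used POOL file catalogues with LCG LRC or MySQL back-ends.

```python
# Minimal sketch of a transfer agent, assuming a hypothetical `transfer_states`
# table with columns (file_id, pfn, state). Not the actual CMS implementation.
import sqlite3
import time

# Assumed chain of transfer states a file moves through.
STATES = ["exported", "transferring", "transferred", "archived"]

class TransferAgent:
    """Polls the central database for files in `from_state`, applies its one
    limited action, then advances them to `to_state`."""

    def __init__(self, db_path, from_state, to_state, action):
        self.db = sqlite3.connect(db_path)
        self.from_state, self.to_state, self.action = from_state, to_state, action

    def run_once(self):
        cur = self.db.execute(
            "SELECT file_id, pfn FROM transfer_states WHERE state = ?",
            (self.from_state,))
        for file_id, pfn in cur.fetchall():
            if self.action(pfn):  # e.g. invoke an external copy or stage tool
                # State changes are communicated only through the central
                # database, never directly between agents.
                self.db.execute(
                    "UPDATE transfer_states SET state = ? WHERE file_id = ?",
                    (self.to_state, file_id))
        self.db.commit()

    def run_forever(self, poll_seconds=30):
        while True:
            self.run_once()
            time.sleep(poll_seconds)

# Example (hypothetical): an agent whose only job is to confirm tape migration.
# agent = TransferAgent("transfers.db", "transferred", "archived", action=verify_on_mss)
# agent.run_forever()
```

    Because each agent owns one narrow step and shares state only through the central database, different groups can implement and operate different agents independently, which is the division of labour the abstract describes.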

    Role of Tier-0, Tier-1 and Tier-2 Regional Centers in CMS DC04

    The CMS 2004 Data Challenge (DC04) was devised to test several key aspects of the CMS Computing Model in three ways: by trying to sustain a 25 Hz reconstruction rate at the Tier-0; by distributing the reconstructed data to six Tier-1 Regional Centers (CNAF in Italy, FNAL in the US, GridKA in Germany, IN2P3 in France, PIC in Spain, RAL in the UK) and handling catalogue issues; and by redistributing data to Tier-2 centers for analysis. Simulated events, up to the digitization step, were produced prior to the data challenge as input for the reconstruction in the Pre-Challenge Production (PCP04). In this paper, the model of the Tier-0 implementation used in DC04 is described, as well as the experience gained in using the newly developed data distribution management layer, which allowed CMS to successfully direct the distribution of data from Tier-0 to Tier-1 sites by loosely integrating a number of available Grid components. While developing and testing this system, CMS explored the overall functionality and limits of each component, in each of the different implementations deployed within DC04. The role of the Tier-1s is presented and discussed, from the import of reconstructed data from the Tier-0, to the archiving onto the local mass storage system (MSS), and the data distribution management to Tier-2s for analysis. Participating Tier-1s differed in available resources, set-up and configuration: a critical evaluation of the results and performance achieved adopting different strategies in the organization and management of each Tier-1 center to support CMS DC04 is presented.
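
    The following sketch models the Tier-0 to Tier-1 to Tier-2 flow described in this abstract: the Tier-0 directs reconstructed data to the six Tier-1 sites, each Tier-1 archives what it receives onto its local MSS, and selected subsets are redistributed to Tier-2s for analysis. The Tier-1 site names come from the abstract; the routing function, class layout, and dataset names are purely illustrative assumptions, not the DC04 implementation.

```python
# Toy model of the tiered distribution flow (hypothetical routing and data).
from dataclasses import dataclass, field

TIER1_SITES = ["CNAF", "FNAL", "GridKA", "IN2P3", "PIC", "RAL"]

@dataclass
class Tier1:
    name: str
    mss_archive: list = field(default_factory=list)   # stands in for the local MSS
    tier2_queues: dict = field(default_factory=dict)  # Tier-2 name -> files for analysis

    def import_from_tier0(self, datasets):
        # Archive everything received from the Tier-0 onto local mass storage.
        self.mss_archive.extend(datasets)

    def serve_tier2(self, tier2_name, selection):
        # Redistribute a selected subset to an attached Tier-2 for analysis.
        chosen = [d for d in self.mss_archive if selection(d)]
        self.tier2_queues.setdefault(tier2_name, []).extend(chosen)
        return chosen

def tier0_distribute(reconstructed, tier1s, route):
    """Direct each reconstructed dataset to the Tier-1(s) chosen by `route`."""
    for dataset in reconstructed:
        for site in route(dataset):
            tier1s[site].import_from_tier0([dataset])

# Example: spread reconstructed datasets over the Tier-1s (purely illustrative).
tier1s = {name: Tier1(name) for name in TIER1_SITES}
datasets = [f"reco_run{i:04d}" for i in range(12)]
tier0_distribute(datasets, tier1s,
                 route=lambda d: [TIER1_SITES[hash(d) % len(TIER1_SITES)]])
```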