
    Grid Interoperation with ARC Middleware for the CMS Experiment

    The Compact Muon Solenoid (CMS) is one of the general-purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). The Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both the ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem, and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid-level interoperability, which allows CMS to use ARC resources without modifying the CMS application software. The second solution is based on developing specific ARC plugins in the CMS software.
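
    The plugin approach can be pictured as one middleware-agnostic job-submission interface with gLite- and ARC-specific back ends. The Python sketch below is illustrative only: every class and method name is hypothetical, since the actual plugin APIs of the CMS tools (CRAB, ProdAgent) are not described in the abstract.

        # Minimal sketch of the plugin idea; all names are hypothetical.
        from abc import ABC, abstractmethod

        class SubmissionPlugin(ABC):
            """Middleware-agnostic job-submission interface."""

            @abstractmethod
            def submit(self, job_description: dict) -> str:
                """Submit a job; return a middleware-specific job identifier."""

        class GLitePlugin(SubmissionPlugin):
            def submit(self, job_description: dict) -> str:
                # A real back end would render the description as JDL and
                # hand it to the gLite workload management system.
                return "glite:job-0001"  # placeholder identifier

        class ARCPlugin(SubmissionPlugin):
            def submit(self, job_description: dict) -> str:
                # A real back end would render the description as xRSL and
                # submit it to an ARC compute element.
                return "arc:job-0001"  # placeholder identifier

    With such an interface the application code stays unchanged while the back end varies, which is the essence of the second solution described above.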

    CMS Analysis Operations

    During normal data taking CMS expects to support potentially as many as 2000 analysis users. Since the beginning of 2008 more than 800 individuals have submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 CMS Tier-2 centres. Supporting a globally distributed community of users on a globally distributed set of computing clusters is a task that requires reconsidering the normal methods of user support for Analysis Operations. In 2008 CMS formed an Analysis Support Task Force in preparation for large-scale physics analysis activities. The charge of the task force was to evaluate the available support tools, the user support techniques, and direct feedback from users, with the goal of improving the success rate and user experience when utilizing the distributed computing environment. The task force identified the tools needed to assess and reduce the number of applications submitted through the grid interfaces that exit with non-zero codes, and worked with the CMS experiment dashboard developers to obtain the information needed to quickly and proactively identify issues with user jobs and with data sets hosted at various sites. Results of the analysis group surveys were compiled. Reference platforms for testing and debugging problems were established in various geographic regions. The task force also assessed the resources needed to make the transition to a permanent Analysis Operations task. In this presentation the results of the task force are discussed, as well as the CMS Analysis Operations plans for the start of data taking.
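
    The exit-code bookkeeping described above amounts to aggregating job outcomes per site. A minimal sketch, assuming an invented record format of (site, exit_code) pairs rather than the real dashboard schema:

        from collections import Counter

        def failing_sites(job_records, threshold=0.10):
            """Return sites whose fraction of jobs with non-zero exit codes
            exceeds the given threshold."""
            totals, failures = Counter(), Counter()
            for site, exit_code in job_records:
                totals[site] += 1
                if exit_code != 0:
                    failures[site] += 1
            return {site: failures[site] / totals[site]
                    for site in totals
                    if failures[site] / totals[site] > threshold}

        # Hypothetical example: the first site is flagged (2 of 3 jobs failed).
        records = [("T2_XX_Example", 0), ("T2_XX_Example", 1),
                   ("T2_XX_Example", 2), ("T2_YY_Other", 0)]
        print(failing_sites(records))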

    The CMS data transfer test environment in preparation for LHC data taking

    The CMS experiment is preparing for LHC data taking through several computing preparation activities. For distributed data transfer tests, a traffic load generator infrastructure was designed and deployed in early 2007 to equip the WLCG Tiers that support the CMS Virtual Organization with a means of debugging, load-testing and commissioning data transfer routes among CMS Computing Centres. The LoadTest is based upon PhEDEx as a reliable, scalable dataset replication system. In addition, a Debugging Data Transfers (DDT) Task Force was created to coordinate the debugging of data transfer links in the preparation period and during the Computing Software and Analysis challenge in 2007 (CSA07). The task force aimed to commission the most crucial transfer routes among CMS tiers by designing and enforcing a clear procedure for debugging problematic links. This procedure moves a link from a debugging phase in a separate, independent environment to the production environment once a set of agreed conditions is achieved for that link. The goal was to deliver working transfer routes, one by one, to Data Operations. The experiences with the overall test transfer infrastructure within computing challenges - as in the WLCG Common-VO Computing Readiness Challenge (CCRC'08) - as well as in daily testing and debugging activities are reviewed and discussed, and plans for the future are presented.
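
    The promotion rule at the heart of the DDT procedure can be sketched as a predicate over recent transfer performance. The thresholds below are invented for illustration; the task force defined its own agreed conditions:

        def link_ready_for_production(daily_rates_mbps, min_rate=20.0, min_days=5):
            """True if the link sustained at least min_rate MB/s on at least
            min_days consecutive days of debug transfers (illustrative rule)."""
            streak = 0
            for rate in daily_rates_mbps:
                streak = streak + 1 if rate >= min_rate else 0
                if streak >= min_days:
                    return True
            return False

        # Example: the link qualifies after five consecutive good days.
        print(link_ready_for_production([25, 30, 5, 22, 24, 26, 21, 23]))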

    Distributed Analysis in CMS

    The CMS experiment expects to manage several petabytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations in preparation for CMS distributed analysis are presented, followed by the user experience in current analysis activities.

    Transverse momentum and pseudorapidity distributions of charged hadrons in pp collisions at \sqrt{s} = 0.9 and 2.36 TeV

    Measurements of inclusive charged-hadron transverse-momentum and pseudorapidity distributions are presented for proton-proton collisions at sqrt(s) = 0.9 and 2.36 TeV. The data were collected with the CMS detector during the LHC commissioning in December 2009. For non-single-diffractive interactions, the average charged-hadron transverse momentum is measured to be 0.46 +/- 0.01 (stat.) +/- 0.01 (syst.) GeV/c at 0.9 TeV and 0.50 +/- 0.01 (stat.) +/- 0.01 (syst.) GeV/c at 2.36 TeV, for pseudorapidities between -2.4 and +2.4. At these energies, the measured pseudorapidity densities in the central region, dN(charged)/d(eta) for |eta| < 0.5, are 3.48 +/- 0.02 (stat.) +/- 0.13 (syst.) and 4.47 +/- 0.04 (stat.) +/- 0.16 (syst.), respectively. The results at 0.9 TeV are in agreement with previous measurements and confirm the expectation of near-equal hadron production in p-pbar and pp collisions. The results at 2.36 TeV represent the highest-energy measurements at a particle collider to date.

    Transverse-momentum and pseudorapidity distributions of charged hadrons in pp collisions at \sqrt{s} = 7 TeV

    Charged-hadron transverse-momentum and pseudorapidity distributions in proton-proton collisions at sqrt(s) = 7 TeV are measured with the inner tracking system of the CMS detector at the LHC. The charged-hadron yield is obtained by counting the number of reconstructed hits, hit-pairs, and fully reconstructed charged-particle tracks. The combination of the three methods gives a charged-particle multiplicity per unit of pseudorapidity, dN(charged)/d(eta), for |eta| < 0.5, of 5.78 +/- 0.01 (stat) +/- 0.23 (syst) for non-single-diffractive events, higher than predicted by commonly used models. The relative increase in charged-particle multiplicity from sqrt(s) = 0.9 to 7 TeV is 66.1% +/- 1.0% (stat) +/- 4.2% (syst). The mean transverse momentum is measured to be 0.545 +/- 0.005 (stat) +/- 0.015 (syst) GeV/c. The results are compared with similar measurements at lower energies.
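
    The quoted relative increase follows from the central values reported here and in the 0.9 TeV measurement above:

        \frac{5.78 - 3.48}{3.48} \approx 0.661,

    consistent with the stated 66.1%.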

    Measurement of the charge ratio of atmospheric muons with the CMS detector

    We present a measurement of the ratio of positive to negative muon fluxes from cosmic ray interactions in the atmosphere, using data collected by the CMS detector both at ground level and in the underground experimental cavern at the CERN LHC. Muons were detected in the momentum range from 5 GeV/c to 1 TeV/c. The surface flux ratio is measured to be 1.2766 +/- 0.0032 (stat.) +/- 0.0032 (syst.), independent of the muon momentum, below 100 GeV/c. This is the most precise measurement to date. At higher momenta the data are consistent with an increase of the charge ratio, in agreement with cosmic ray shower models and compatible with previous measurements by deep-underground experiments.
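
    Assuming the statistical and systematic uncertainties are independent and combine in quadrature (a standard convention, not stated in the abstract), the total uncertainty on the surface flux ratio is

        \sigma_{\mathrm{tot}} = \sqrt{0.0032^2 + 0.0032^2} \approx 0.0045,

    corresponding to a relative precision of about 0.35% on the measured value of 1.2766.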