
    Running a Production Grid Site at the London e-Science Centre

    This paper describes how MARS, the London e-Science Centre's production cluster of more than 400 Opteron CPUs, was integrated into the production Large Hadron Collider Computing Grid. It describes the practical issues we encountered when deploying and maintaining this system, and details the techniques applied to resolve them. Finally, we provide a set of recommendations, based on our experience, for grid software development in general that we believe would make the technology more accessible. © 2006 IEEE

    Scalability tests of R-GMA-based grid job monitoring system for CMS Monte Carlo data production

    High-energy physics experiments, such as the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC), have large-scale data-processing requirements, and the grid has been chosen as the solution. One important challenge when using the grid for large-scale data processing is the ability to monitor the large numbers of jobs being executed simultaneously at multiple remote sites. The Relational Grid Monitoring Architecture (R-GMA) is a monitoring and information management service for distributed resources, based on the Grid Monitoring Architecture (GMA) of the Global Grid Forum. We report on the first measurements of R-GMA as part of a monitoring architecture to be used for batch submission of multiple Monte Carlo simulation jobs running on a CMS-specific LHC Computing Grid test bed. Monitoring information was transferred in real time from remote execution nodes back to the submitting host and stored in a database. In scalability tests, the job submission rates supported by successive releases of R-GMA improved significantly, approaching those expected in full-scale production. © 2004 IEEE
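
    The producer/consumer pattern that underlies the GMA (and hence R-GMA) can be illustrated with a minimal sketch. The code below is plain Python against an in-memory SQLite database, not the actual R-GMA API; the class names and the JobStatus table are hypothetical stand-ins for a published relational view and the archiving consumer described above.

        import sqlite3
        import time

        class ArchiverConsumer:
            """Stores every tuple it receives in a database, as the archiving node does."""
            def __init__(self, db_path=":memory:"):
                self.db = sqlite3.connect(db_path)
                self.db.execute(
                    "CREATE TABLE JobStatus(jobId TEXT, host TEXT, status TEXT, ts REAL)")

            def consume(self, tup):
                self.db.execute("INSERT INTO JobStatus VALUES (?, ?, ?, ?)", tup)
                self.db.commit()

        class JobProducer:
            """Publishes monitoring tuples on behalf of one running job."""
            def __init__(self, job_id, host, consumer):
                self.job_id, self.host, self.consumer = job_id, host, consumer

            def publish(self, status):
                # In R-GMA this would be an insert into a published relational view,
                # routed to interested consumers via a registry; here the tuple is
                # handed straight to a single archiving consumer.
                self.consumer.consume((self.job_id, self.host, status, time.time()))

        archiver = ArchiverConsumer()
        for j in range(3):                      # stand-in for many simultaneous jobs
            producer = JobProducer(f"job-{j}", "worker-node-01", archiver)
            for status in ("submitted", "running", "done"):
                producer.publish(status)

        count = archiver.db.execute("SELECT COUNT(*) FROM JobStatus").fetchone()[0]
        print(count, "tuples archived")         # 9 tuples archived

    The value of the relational model is that the submitting host can then follow all of its jobs at once with ordinary SQL queries against the archive.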

    Performance of R-GMA for monitoring grid jobs for CMS data production

    High-energy physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through grid computing. Furthermore, the production of large quantities of Monte Carlo simulated data provides an ideal test bed for grid technologies and will drive their development. One important challenge when using the grid for data analysis is the ability to transparently monitor the large number of jobs being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. We have previously developed a system that allows us to test its performance under a heavy load while using few real grid resources. We present the latest results from this system running on the LCG 2 grid test bed using the LCG 2.6.0 middleware release. For a sustained load equivalent to 7 generations of 1000 simultaneous jobs, R-GMA was able to transfer all published messages and store them in a database for 98% of the individual jobs. The failures experienced occurred at the remote sites, rather than at the archiver's MON box as had been expected.
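
    The 98% figure above is a per-job completeness measure: a job counts as successful only if every message it published was found in the archive database. The sketch below shows such a check in plain Python; the archive table JobMessages(jobId, seqNo) and the number of expected messages per job are assumptions for illustration, since the real schema and message counts are not given in the abstract.

        import sqlite3

        EXPECTED_PER_JOB = 25   # assumed number of messages each job publishes

        def fraction_of_complete_jobs(db):
            # A job is complete only if all of its expected messages reached the archive.
            rows = db.execute(
                "SELECT jobId, COUNT(DISTINCT seqNo) FROM JobMessages GROUP BY jobId"
            ).fetchall()
            complete = sum(1 for _, n in rows if n >= EXPECTED_PER_JOB)
            return complete / len(rows) if rows else 0.0

        # Toy demonstration: 10 jobs, one of which loses a message in transit.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE JobMessages(jobId TEXT, seqNo INTEGER)")
        for j in range(10):
            lost = 1 if j == 0 else 0
            db.executemany("INSERT INTO JobMessages VALUES (?, ?)",
                           [(f"job-{j}", s) for s in range(EXPECTED_PER_JOB - lost)])
        print(fraction_of_complete_jobs(db))    # 0.9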

    Frequentist Analysis of the Parameter Space of Minimal Supergravity

    We make a frequentist analysis of the parameter space of minimal supergravity (mSUGRA), in which, as well as the gaugino and scalar soft supersymmetry-breaking parameters being universal, there is a specific relation between the trilinear, bilinear and scalar supersymmetry-breaking parameters, A_0 = B_0 + m_0, and the gravitino mass is fixed by m_{3/2} = m_0. We also consider a more general model in which the gravitino mass constraint is relaxed (the VCMSSM). We combine in the global likelihood function the experimental constraints from low-energy electroweak precision data, the anomalous magnetic moment of the muon, the lightest Higgs boson mass M_h, B physics and the astrophysical cold dark matter density, assuming that the lightest supersymmetric particle (LSP) is a neutralino. In the VCMSSM, we find a preference for values of m_{1/2} and m_0 similar to those found previously in frequentist analyses of the constrained MSSM (CMSSM) and a model with common non-universal Higgs masses (NUHM1). On the other hand, in mSUGRA we find two preferred regions: one with larger values of both m_{1/2} and m_0 than in the VCMSSM, and one with large m_0 but small m_{1/2}. We compare the probabilities of the frequentist fits in mSUGRA, the VCMSSM, the CMSSM and the NUHM1: the probability that mSUGRA is consistent with the present data is significantly less than in the other models. We also discuss the mSUGRA and VCMSSM predictions for sparticle masses and other observables, identifying potential signatures at the LHC and elsewhere. (Comment: 18 pages, 27 figures.)
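
    The global likelihood function referred to here is, schematically, the standard frequentist chi-squared combination of the listed constraints. The form below is a generic sketch under that assumption, not the paper's exact definition, which may treat some constraints (for example the M_h limit) in a non-Gaussian way:

        \chi^2(\theta) = \sum_i \frac{\bigl(O_i^{\mathrm{th}}(\theta) - O_i^{\mathrm{exp}}\bigr)^2}{\sigma_i^2},

    where \theta denotes the model's free parameters, O_i^{\mathrm{th}} are the model predictions for observables such as (g-2)_\mu, M_h, the B-physics branching ratios and the cold dark matter density, and O_i^{\mathrm{exp}} \pm \sigma_i are the corresponding measurements. Preferred regions of parameter space are then those that minimise \chi^2.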