
    Dynamic extensions of batch systems with cloud resources

    Compute clusters use Portable Batch Systems (PBS) to distribute workload among individual cluster machines. To extend standard batch systems to Cloud infrastructures, a new service monitors the number of queued jobs and keeps track of the price of available resources. This meta-scheduler dynamically adapts the number of Cloud worker nodes to the requirement profile. Two different worker node topologies are presented and tested on the Amazon EC2 Cloud service.
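    The adaptation loop described above can be sketched as follows. This is a minimal illustration, not the service's actual interface: the function name, thresholds, and parameters (`jobs_per_node`, `max_price`, `max_nodes`) are hypothetical stand-ins for the real PBS queue query and EC2 node-management calls.

    ```python
    # Sketch of a meta-scheduler decision step: scale the number of cloud
    # worker nodes with queue demand, but drain them when the spot price
    # exceeds a budget cap. All names and defaults are illustrative.

    def plan_scaling(queued_jobs, active_nodes, spot_price,
                     jobs_per_node=4, max_price=0.10, max_nodes=20):
        """Return how many cloud nodes to start (>0) or stop (<0)."""
        if spot_price > max_price:
            return -active_nodes  # price too high: release all cloud nodes
        # ceil(queued_jobs / jobs_per_node), capped at the node budget
        wanted = min(max_nodes, -(-queued_jobs // jobs_per_node))
        return wanted - active_nodes
    ```

    A real service would run this periodically, feeding it the PBS queue length and the current EC2 spot price, and translate the returned delta into node start/stop requests.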

    CMS Software Distribution on the LCG and OSG Grids

    The efficient exploitation of the worldwide distributed storage and computing resources available in the grids requires a robust, transparent and fast deployment of experiment-specific software. The approach followed by the CMS experiment at CERN to enable Monte Carlo simulations, data analysis and software development in an international collaboration is presented. The current status and future improvement plans are described.
    Comment: 4 pages, 1 figure, LaTeX with hyperref

    Combination of electroweak and QCD corrections to single W production at the Fermilab Tevatron and the CERN LHC

    Precision studies of the production of a high-transverse-momentum lepton in association with missing energy at hadron colliders require that electroweak and QCD higher-order contributions are simultaneously taken into account in theoretical predictions and data analysis. Here we present a detailed phenomenological study of the impact of electroweak and strong contributions, as well as of their combination, on all the observables relevant for the various facets of the $p\bar{p}/pp \to \mathrm{lepton} + X$ physics programme at hadron colliders, including luminosity monitoring and Parton Distribution Function constraints, $W$ precision physics and searches for new physics signals. We provide a theoretical recipe to carefully combine electroweak and strong corrections, which is mandatory in view of the challenging experimental accuracy already reached at the Fermilab Tevatron and aimed at for the CERN LHC, and discuss the uncertainty inherent in the combination. We conclude that the theoretical accuracy of our calculation can be conservatively estimated to be about 2% for standard event selections at the Tevatron and the LHC, and about 5% in the very high $W$ transverse mass / lepton transverse momentum tails. We also provide arguments for a more aggressive error estimate (about 1% and 3%, respectively) and conclude that in order to attain one per cent accuracy: 1) exact mixed ${\cal O}(\alpha \alpha_s)$ corrections should be computed in addition to the already available NNLO QCD contributions and two-loop electroweak Sudakov logarithms; 2) QCD and electroweak corrections should be coherently included in a single event generator.
    Comment: One reference added. Final version to appear in JHEP
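    The kind of combination the abstract discusses can be illustrated schematically. A widely used additive prescription (shown here as a sketch, not necessarily the exact formula of this paper) adds the QCD-corrected and electroweak-corrected cross sections and subtracts the lowest-order one, so that the Born contribution is not counted twice:

    ```latex
    d\sigma_{\mathrm{QCD}\oplus\mathrm{EW}}
      \;=\; d\sigma_{\mathrm{QCD}}
      \;+\; \bigl( d\sigma_{\mathrm{EW}} - d\sigma_{\mathrm{LO}} \bigr)
    ```

    This counts the ${\cal O}(\alpha_s)$ and ${\cal O}(\alpha)$ corrections once each; the mixed ${\cal O}(\alpha\alpha_s)$ terms are absent by construction, which is precisely why the abstract lists their exact computation as a prerequisite for per-cent-level accuracy.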