    Vertex reconstruction framework and its implementation for CMS

    The class framework developed for vertex reconstruction in CMS is described. We emphasize how we proceeded to develop a flexible, efficient, and reliable piece of reconstruction software. We describe the decomposition of the algorithms into logical parts, the mathematical toolkit, and the way vertex reconstruction integrates into the CMS reconstruction project ORCA. We discuss the tools that we have developed for algorithm evaluation and optimization and for code release. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 4 pages, LaTeX, no figures. PSN TULT01.
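
    To make the decomposition concrete, here is a minimal sketch in Python of the kind of fitting step such a framework might factor into its mathematical toolkit: a linearized least-squares vertex fit over straight-line track approximations. The function name and track representation are invented for illustration and are not the ORCA classes.

        import numpy as np

        def fit_vertex(points, directions):
            """Least-squares vertex: the point minimizing the summed squared
            distances to straight-line tracks, each given by a point on the
            track and a direction vector."""
            A = np.zeros((3, 3))
            b = np.zeros(3)
            for p, d in zip(np.asarray(points, float), np.asarray(directions, float)):
                d = d / np.linalg.norm(d)
                P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
                A += P
                b += P @ p
            return np.linalg.solve(A, b)

        # Two tracks that cross at the origin: the z axis and the x axis.
        print(fit_vertex([[0, 0, -1], [1, 0, 0]], [[0, 0, 1], [1, 0, 0]]))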

    Docker experience at INFN-Pisa Grid Data Center

    Clouds and virtualization offer typical answers to the needs of large-scale computing centers serving diverse user communities in terms of architecture, OS, etc. On the other hand, solutions like Docker seem to emerge as a way to rely on Linux kernel capabilities to package only the applications and the development environment the users need, thus solving several resource management issues related to cloud-like solutions. In this paper we present an exploratory (though well advanced) test done at a major Italian Tier2 center, INFN-Pisa, where a considerable fraction of the resources and services has been moved to Docker. The results obtained are definitely encouraging, and Pisa is transitioning all of its Worker Nodes and services to Docker containers. Work is currently being expanded into the preparation of suitable images for a completely virtualized Tier2, with no dependency on local configurations.
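
    As an illustration of the packaging idea described above, the following Python sketch builds and runs a throwaway worker-node-style image through the Docker command line. The base image, package list, and image name are illustrative assumptions, not the INFN-Pisa configuration, and a local Docker daemon is required.

        import subprocess
        import tempfile
        import textwrap
        from pathlib import Path

        # Illustrative worker-node image: the OS and the tools a job needs
        # are baked into the image, so the host only has to provide a
        # container runtime. Base image and packages are assumptions.
        DOCKERFILE = textwrap.dedent("""\
            FROM centos:7
            RUN yum install -y openssh-clients wget tar && yum clean all
            RUN useradd -m gridwn
            USER gridwn
            CMD ["/bin/bash"]
        """)

        with tempfile.TemporaryDirectory() as ctx:
            (Path(ctx) / "Dockerfile").write_text(DOCKERFILE)
            subprocess.run(["docker", "build", "-t", "demo-wn", ctx], check=True)
            # One container per job slot is how a batch system would use it.
            subprocess.run(["docker", "run", "--rm", "demo-wn", "id"], check=True)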

    Extending the distributed computing infrastructure of the CMS experiment with HPC resources

    Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of Petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 computing centers around the world and is used by a number of particle physics experiments. Recently the LHC experiments were encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware used, HPC installations can differ greatly in their setup. In order to integrate HPC resources into the highly automated processing setups of the CMS experiment, a number of challenges need to be addressed. For processing, access to primary data and metadata, as well as access to the software, is required. At Grid sites all of this is achieved via a number of services provided by each center. At HPC sites, however, many of these capabilities cannot be easily provided and have to be enabled in user space or by other means. HPC centers also often restrict network access to remote services, which is a further severe limitation. The paper discusses a number of solutions and recent experiences of the CMS experiment with including HPC resources in processing campaigns.
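
    A minimal sketch, in Python, of the user-space fallback pattern for software access mentioned above: prefer the CVMFS mount that Grid worker nodes provide (/cvmfs/cms.cern.ch is the standard CMS software mount), and fall back to a copy pre-staged in the user's home directory on HPC nodes without it. The fallback location is an assumption for illustration.

        from pathlib import Path

        CVMFS_AREA = Path("/cvmfs/cms.cern.ch")       # standard CMS software mount
        FALLBACK_AREA = Path.home() / "cms-software"  # assumed pre-staged unpack

        def software_area() -> Path:
            """Root of the CMS software installation usable on this node."""
            if CVMFS_AREA.is_dir():
                return CVMFS_AREA          # Grid-style node with CVMFS mounted
            if FALLBACK_AREA.is_dir():
                return FALLBACK_AREA       # HPC node: user-space copy
            raise RuntimeError("no CMS software area available on this node")

        if __name__ == "__main__":
            print(f"using software from {software_area()}")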

    Optimization of Italian CMS Computing Centers via MIUR-Funded Research Projects

    In 2012, 14 Italian institutions participating in the LHC experiments (10 in CMS) won a grant from the Italian Ministry of Research (MIUR) to optimize analysis activities and, more generally, the Tier2/Tier3 infrastructure. A wide range of activities is being carried out: they cover data distribution over the WAN, dynamic provisioning for both scheduled and interactive processing, design and development of tools for distributed data analysis, and tests on porting the CMS software stack to new high-performance, low-power architectures.

    Search for anomalous t t-bar production in the highly-boosted all-hadronic final state

    A search is presented for a massive particle, generically referred to as a Z', decaying into a t t-bar pair. The search focuses on Z' resonances that are sufficiently massive to produce highly Lorentz-boosted top quarks, which yield collimated decay products that are partially or fully merged into single jets. The analysis uses new methods to analyze jet substructure, providing suppression of the non-top multijet backgrounds. The analysis is based on a data sample of proton-proton collisions at a center-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 5 inverse femtobarns. Upper limits in the range of 1 pb are set on the product of the production cross section and branching fraction for a topcolor Z' modeled for several widths, as well as for a Randall-Sundrum Kaluza-Klein gluon. In addition, the results constrain any enhancement in t t-bar production beyond expectations of the standard model for t t-bar invariant masses larger than 1 TeV. Comment: Submitted to the Journal of High Energy Physics; this version includes a minor typo correction that will be submitted as an erratum.
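
    For scale, a short worked example in Python of what the quoted numbers imply via the standard relation N = sigma x B x L x efficiency; the efficiency below is an illustrative assumption, not a value from the paper.

        sigma_times_br_pb = 1.0   # excluded sigma * BR from the abstract, in pb
        luminosity_ifb = 5.0      # integrated luminosity from the abstract, in fb^-1
        efficiency = 0.05         # assumed overall selection efficiency (illustrative)

        produced = sigma_times_br_pb * 1000.0 * luminosity_ifb  # 1 pb = 1000 fb
        selected = produced * efficiency
        print(f"Z' -> t t-bar events produced: {produced:.0f}")   # 5000
        print(f"expected selected events:      {selected:.0f}")   # 250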

    Search for the standard model Higgs boson in the H to ZZ to 2l 2nu channel in pp collisions at sqrt(s) = 7 TeV

    A search for the standard model Higgs boson in the H to ZZ to 2l 2nu decay channel, where l = e or mu, in pp collisions at a center-of-mass energy of 7 TeV is presented. The data were collected at the LHC, with the CMS detector, and correspond to an integrated luminosity of 4.6 inverse femtobarns. No significant excess is observed above the background expectation, and upper limits are set on the Higgs boson production cross section. The presence of the standard model Higgs boson with a mass in the 270-440 GeV range is excluded at 95% confidence level. Comment: Submitted to JHEP.