
    Jet Production at HERA and Measurements of the Strong Coupling Constant

    Measurements at HERA that explore parton dynamics at low Bjorken x are presented, together with precise determinations of the strong coupling constant alpha_s. Next-to-leading-order calculations using the DGLAP evolution fail to describe the data at low x and forward jet pseudorapidities. The alpha_s measurements at HERA are in agreement with the world average and have very competitive errors. Comment: 7 pages, 9 figures. Proceedings of the Lake Louise Winter Institute 2005, Alberta, Canada

    Results from the Commissioning Run of the CMS Silicon Strip Tracker

    Performance results of the CMS Silicon Strip Tracker are presented, as obtained in the setups where the tracker is being commissioned. Comment: Proceedings of the 10th ICATPP Conference on Astroparticle, Particle, Space Physics, Detectors and Medical Physics Applications. 6 pages, 5 figures

    Determination of the Discovery Potential for Higgs Bosons in MSSM

    The CMS and ATLAS collaborations have performed detailed studies of the discovery potential for the Higgs boson in the MSSM. Different benchmark scenarios have been studied, both for the CP-conserving case and for the CP-violating one. Results are presented for the discovery potential in the MSSM parameter space. Comment: Submitted for the SUSY07 proceedings, 4 pages, LaTeX, 8 eps figures

    Event processing time prediction at the CMS experiment of the Large Hadron Collider

    Physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies with event complexity. Measurements were made to quantify this correlation, providing the means to predict it from the data-taking conditions of the input samples. Currently the data processing system splits tasks into groups with the same number of collisions and does not account for variations in processing time. These variations can be large and can considerably increase the time it takes for CMS workflows to finish. The goal of this study was to use processing-time estimates to split workflows into jobs more efficiently. By considering the CPU time needed for each job, the spread of the job-length distribution within a workflow is reduced.
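The time-aware splitting described above can be sketched in a few lines: instead of giving every job a fixed number of events, events are grouped until their estimated CPU time reaches a target. This is an illustrative sketch only; the function and parameter names are hypothetical and not the actual CMS workflow-management API.

```python
def split_by_cpu_time(event_times, target_job_seconds):
    """Group event indices into jobs whose estimated total CPU time is
    close to a target, rather than assigning a fixed event count per job."""
    jobs, current, current_time = [], [], 0.0
    for i, t in enumerate(event_times):
        current.append(i)
        current_time += t
        if current_time >= target_job_seconds:
            jobs.append(current)
            current, current_time = [], 0.0
    if current:  # leftover events form one final, shorter job
        jobs.append(current)
    return jobs

# Per-event time estimates (seconds); complex (e.g. high-pileup) events
# take longer than simple ones, so jobs get different event counts.
estimates = [2.0, 2.0, 2.0, 8.0, 8.0, 2.0, 2.0]
print(split_by_cpu_time(estimates, target_job_seconds=8.0))
# → [[0, 1, 2, 3], [4], [5, 6]]
```

With a fixed event count, the job containing the two 8-second events would run roughly four times longer than the others; time-based grouping narrows that spread.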

    Machine Learning in High Energy Physics Community White Paper

    Machine learning is an important applied research area in particle physics, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas in machine learning in particle physics, with a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training of the particle physics community in data science. The main objective of the document is to connect and motivate these areas of research and development with the physics drivers of the High-Luminosity Large Hadron Collider and future neutrino experiments, and to identify the resource needs for their implementation. Additionally, we identify areas where collaboration with external communities will be of great benefit.

    Hadoop distributed file system for the Grid

    Data distribution, storage and access are essential to CPU-intensive and data-intensive high-performance Grid computing. A newly emerged file system, the Hadoop Distributed File System (HDFS), is deployed and tested within the Open Science Grid (OSG) middleware stack. Efforts have been made to integrate HDFS with other Grid tools to build a complete service framework for the Storage Element (SE). Scalability tests show that sustained high inter-DataNode data transfer can be achieved when the cluster is fully loaded with data-processing jobs. WAN transfer to HDFS, supported by BeStMan and tuned GridFTP servers, demonstrates the scalability and robustness of the system. The Hadoop client can be deployed on interactive machines to support remote data access. The ability to automatically replicate precious data is especially important for computing sites, as demonstrated at the Large Hadron Collider (LHC) computing centers. The operational simplicity of an HDFS-based SE significantly reduces the cost of ownership of petabyte-scale data storage compared with alternative solutions.
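The automatic replication mentioned above is a standard HDFS feature controlled through its site configuration. A minimal sketch of an hdfs-site.xml fragment follows; the replication value is illustrative and not the actual setting of the OSG deployment described here.

```xml
<configuration>
  <!-- Number of copies HDFS keeps of each block across DataNodes.
       A value above 1 protects precious data against single-node loss. -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```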

    CMS distributed computing workflow experience

    The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail, along with the operational optimization of resource usage. In particular, the variation of different workflows during the data-taking period of 2010, their efficiencies and latencies, and their impact on the delivery of physics results are discussed, and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data-taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures used to optimize the usage of available resources and describe the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in central production operation.

    Tracker Operation and Performance at the Magnet Test and Cosmic Challenge

    During summer 2006 a fraction of the CMS silicon strip tracker was operated in a comprehensive slice test called the Magnet Test and Cosmic Challenge (MTCC). At the MTCC, cosmic rays detected in the muon chambers were used to trigger the readout of all CMS sub-detectors in the general data acquisition system, in the presence of the 4 T magnetic field produced by the CMS superconducting solenoid. This document describes the operation of the Tracker hardware and software prior to, during, and after data taking. The performance of the detector resulting from the MTCC data analysis is also presented.