
    Bayesian Analysis

    After making some general remarks, I consider two examples that illustrate the use of Bayesian probability theory. The first is a simple one, the physicist's favorite "toy," which provides a forum for discussing the key conceptual issue of Bayesian analysis: the assignment of prior probabilities. The other example illustrates the use of Bayesian ideas in the real world of experimental physics.

    Comment: 14 pages, 5 figures, Workshop on Confidence Limits, CERN, 17-18 January 2000
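    How the choice of prior shifts the inference in such a "toy" problem can be seen in a minimal counting-experiment sketch (Python; the counts, background, and choice of priors below are hypothetical illustrations of the general idea, not the paper's own example):

```python
import numpy as np
from scipy.stats import poisson

# Toy counting experiment: n observed counts, known mean background b,
# unknown signal rate s.  Posterior: p(s | n) ~ Poisson(n | s + b) * prior(s).
# All numbers here are hypothetical.
n_obs, b = 7, 3.0
s_grid = np.linspace(0.0, 20.0, 2001)          # grid over the signal rate s

likelihood = poisson.pmf(n_obs, s_grid + b)

priors = {
    "flat": np.ones_like(s_grid),              # uniform prior on s >= 0
    "Jeffreys": 1.0 / np.sqrt(s_grid + b),     # Jeffreys prior for a Poisson mean
}

for name, prior in priors.items():
    post = likelihood * prior
    post /= post.sum()                         # normalize on the grid
    print(f"{name} prior: posterior mean s = {(s_grid * post).sum():.2f}")
```

    Running the loop for both priors makes the conceptual point concrete: the same data yield different posterior means depending on the prior assignment.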

    Multivariate discriminants


    Strategy for discovering a low-mass Higgs boson at the Fermilab Tevatron

    We have studied the potential of the CDF and DZero experiments to discover a low-mass Standard Model Higgs boson, during Run II, via the processes $p\bar{p} \to WH \to \ell\nu b\bar{b}$, $p\bar{p} \to ZH \to \ell^{+}\ell^{-} b\bar{b}$ and $p\bar{p} \to ZH \to \nu\bar{\nu} b\bar{b}$. We show that a multivariate analysis using neural networks, which exploits all the information contained within a set of event variables, leads to a significant reduction, with respect to \emph{any} equivalent conventional analysis, in the integrated luminosity required to find a Standard Model Higgs boson in the mass range $90\ \mathrm{GeV}/c^2 < M_H < 130\ \mathrm{GeV}/c^2$. The luminosity reduction is sufficient to bring the discovery of the Higgs boson within reach of the Tevatron experiments, given the anticipated integrated luminosities of Run II, whose scope has recently been expanded.

    Comment: 26 pages, 8 figures, 7 tables, to appear in Physical Review D. Minor fixes and revision
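    The gain from a multivariate discriminant over one-dimensional cuts can be sketched with a toy two-variable example (Python with scikit-learn; the variables, distributions, and cut values are invented for illustration and do not reproduce the paper's analysis):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy "event variables" (think dijet mass and missing ET): signal and
# background drawn from overlapping Gaussians.  Purely illustrative.
n = 5000
sig = rng.normal(loc=[115.0, 60.0], scale=[15.0, 20.0], size=(n, 2))
bkg = rng.normal(loc=[90.0, 40.0], scale=[30.0, 25.0], size=(n, 2))

X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Small feed-forward network used as a multivariate discriminant.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)

# Compare a cut on the network output with a simple rectangular cut.
d = net.predict_proba(X)[:, 1]
selections = {
    "NN": d > 0.5,
    "box": (X[:, 0] > 100.0) & (X[:, 1] > 50.0),
}
for name, sel in selections.items():
    eff = sel[y == 1].mean()          # signal efficiency
    rej = 1.0 - sel[y == 0].mean()    # background rejection
    print(f"{name}: signal eff = {eff:.2f}, background rejection = {rej:.2f}")
```

    At comparable signal efficiency the network typically rejects more background, which is the mechanism behind the quoted luminosity reduction.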

    Analysis Description Languages for the LHC

    An analysis description language is a domain-specific language capable of describing the contents of an LHC analysis in a standard and unambiguous way, independent of any computing framework. It is designed for use by anyone with an interest in, and knowledge of, LHC physics, i.e., experimentalists, phenomenologists and other enthusiasts. Adopting analysis description languages would bring numerous benefits to the LHC experimental and phenomenological communities, ranging from analysis preservation beyond the lifetimes of experiments or analysis software to facilitating the abstraction, design, visualization, validation, combination, reproduction, interpretation and overall communication of analysis contents. Here, we introduce the analysis description language concept and summarize the ongoing efforts to develop such languages, and the tools to use them, in LHC analyses.

    Comment: Accepted contribution to the proceedings of The 8th Annual Conference on Large Hadron Collider Physics, LHCP2020, 25-30 May 2020, online
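    The kind of content such a language captures, object definitions, event selections, and regions, can be illustrated with a hypothetical framework-independent description written as plain Python data (the field names and cuts below are invented; this is not the syntax of any existing ADL):

```python
# A hypothetical, framework-independent description of a toy analysis,
# expressed as plain Python data.  It shows the sort of information an
# analysis description language records: object definitions, selections,
# and signal/control regions.  All names and cuts are invented.
analysis = {
    "objects": {
        "good_jets":    {"take": "jets",    "select": ["pt > 30", "abs(eta) < 2.4"]},
        "good_leptons": {"take": "leptons", "select": ["pt > 20", "isolated"]},
    },
    "regions": {
        "signal":  {"cuts": ["n(good_leptons) == 1", "n(good_jets) >= 4", "met > 30"]},
        "control": {"cuts": ["n(good_leptons) == 2", "n(good_jets) >= 2"]},
    },
}

# Any framework, experimental or phenomenological, could interpret the same
# description, which is the preservation and reinterpretation argument.
for region, spec in analysis["regions"].items():
    print(region, "->", " AND ".join(spec["cuts"]))
```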

    CMS distributed computing workflow experience

    The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The seven Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN, and provide access to all recorded and simulated data for the Tier-2 sites via wide-area network (WAN) transfers. All central data-processing workflows are executed at the Tier-1 level; these comprise re-reconstruction and skimming of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail, along with the operational optimization of resource usage. In particular, the variation of the different workflows during the 2010 data-taking period, their efficiencies and latencies, and their impact on the delivery of physics results are discussed, and lessons are drawn from this experience.

    The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the CMS Tier-2 sites are reserved for central Monte Carlo (MC) production, while the other half are available for user analysis. This paper summarizes the large throughput of the MC production operation during the 2010 data-taking period and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures used to optimize the usage of available resources, and we discuss the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in central production operations.
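    The efficiency and latency figures such operations reports rest on are simple aggregates over job records; a minimal sketch of that bookkeeping follows (Python; the record fields and numbers are invented for illustration and are not CMS monitoring data):

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical job records for a few Tier-1 workflow types.  The fields
# and numbers are invented to illustrate how per-workflow CPU efficiency
# and latency summaries can be computed.
@dataclass
class Job:
    workflow: str      # e.g. "re-reconstruction", "skimming", "MC reprocessing"
    cpu_hours: float   # CPU time actually used
    wall_hours: float  # wall-clock time the job occupied its slot
    wait_hours: float  # time spent queued before the job started

jobs = [
    Job("re-reconstruction", 10.5, 12.0, 1.0),
    Job("re-reconstruction", 9.8, 11.0, 0.5),
    Job("skimming", 2.0, 4.0, 3.0),
    Job("MC reprocessing", 8.0, 8.5, 0.2),
]

for wf in sorted({j.workflow for j in jobs}):
    sel = [j for j in jobs if j.workflow == wf]
    cpu_eff = sum(j.cpu_hours for j in sel) / sum(j.wall_hours for j in sel)
    latency = mean(j.wait_hours + j.wall_hours for j in sel)
    print(f"{wf}: CPU efficiency = {cpu_eff:.2f}, mean latency = {latency:.1f} h")
```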