27 research outputs found

    Apprentice for Event Generator Tuning

    Apprentice is a tool developed for event generator tuning. It contains a range of conceptual improvements and extensions over the tuning tool Professor. Its core functionality remains the construction of a multivariate analytic surrogate model of computationally expensive Monte Carlo event generator predictions. The surrogate model is used for numerical optimization in chi-square minimization and likelihood evaluation. Apprentice also introduces algorithms to automate the selection of observable weights to minimize the effect of mis-modeling in the event generators. We illustrate our improvements for the tasks of MC-generator tuning and limit setting. Comment: 9 pages, 2 figures, submitted to the 25th International Conference on Computing in High-Energy and Nuclear Physics.
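The surrogate-then-minimize workflow the abstract describes can be sketched in a toy one-dimensional setting. This is not Apprentice's actual API; all names and numbers below are illustrative assumptions, with a per-bin polynomial standing in for the expensive generator:

```python
import numpy as np
from scipy.optimize import minimize

# Toy setting: one generator parameter p, three observable bins.
# MC predictions at a few anchor points in parameter space
# (in practice each anchor means a full, expensive generator run).
p_anchors = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mc_bins = np.array([[1.0 + 0.3 * p + 0.1 * p**2,
                     2.0 - 0.2 * p,
                     0.5 + 0.4 * p] for p in p_anchors])

# Fit a quadratic surrogate per bin: a cheap analytic stand-in
# for the generator response as a function of the parameter.
coeffs = [np.polyfit(p_anchors, mc_bins[:, b], deg=2) for b in range(3)]
surrogate = lambda p: np.array([np.polyval(c, p) for c in coeffs])

# Reference data with uncertainties; the chi-square calls the
# surrogate, not the generator, so minimization is fast.
data = np.array([1.40, 1.85, 0.95])
sigma = np.array([0.05, 0.05, 0.05])

def chi2(p):
    return float(np.sum(((surrogate(p[0]) - data) / sigma) ** 2))

best = minimize(chi2, x0=[0.5])
print(round(float(best.x[0]), 2))  # tuned parameter, near 1.0
```

Observable weights, which the paper automates, would enter here as per-bin multipliers inside the chi-square sum.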

    CMS physics technical design report : Addendum on high density QCD with heavy ions

    Peer reviewed.

    Measurement of the mass difference m(D_s^+) - m(D^+) at CDF II

    We present a measurement of the mass difference m(D_s^+) - m(D^+), where both the D_s^+ and D^+ are reconstructed in the phi pi^+ decay channel. This measurement uses 11.6 pb^-1 of data collected by CDF II using the new displaced-track trigger. The mass difference is found to be m(D_s^+) - m(D^+) = 99.41 +/- 0.38(stat) +/- 0.21(syst) MeV/c^2.
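The result quotes separate statistical and systematic uncertainties. When the two sources are independent, the conventional total uncertainty is their sum in quadrature, which for these numbers works out as:

```python
import math

# Combine the quoted statistical and systematic uncertainties
# in quadrature (standard when the sources are independent).
delta_m = 99.41          # central value, MeV/c^2
stat, syst = 0.38, 0.21  # MeV/c^2

total = math.sqrt(stat**2 + syst**2)
print(f"{delta_m} +/- {total:.2f} MeV/c^2")  # 99.41 +/- 0.43 MeV/c^2
```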

    PandAna: A Python Analysis Framework for Scalable High Performance Computing in High Energy Physics

    Modern experiments in high energy physics analyze millions of events recorded in particle detectors to select the events of interest and make measurements of physics parameters. These data can often be stored as tabular data in files with detector information and reconstructed quantities. Most current techniques for event selection in these files lack the scalability needed for high performance computing environments. We describe our work to develop a high energy physics analysis framework suitable for high performance computing. This new framework utilizes modern tools for reading files and implicit data parallelism. Framework users analyze tabular data using standard, easy-to-use data analysis techniques in Python while the framework handles the file manipulations and parallelism without the user needing advanced experience in parallel programming. In future versions, we hope to provide a framework that can be utilized on a personal computer or a high performance computing cluster with little change to the user code.
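The user-facing style the abstract describes, where event selection is written as column operations on tabular data rather than an explicit event loop, can be sketched with pandas. This is a hypothetical illustration of the idiom, not PandAna's actual API; the column names and cuts are invented:

```python
import numpy as np
import pandas as pd

# Toy "event table": one row per reconstructed event.
rng = np.random.default_rng(0)
events = pd.DataFrame({
    "run": rng.integers(1, 5, size=1000),
    "n_tracks": rng.integers(0, 10, size=1000),
    "energy": rng.normal(50.0, 10.0, size=1000),
})

# Event selection expressed as a boolean mask over columns.
# A framework can parallelize this implicitly across file chunks,
# since the mask is a pure columnwise operation.
selected = events[(events["n_tracks"] >= 2) & (events["energy"] > 45.0)]

# A simple "measurement": mean energy of the selected events.
print(len(selected), round(float(selected["energy"].mean()), 1))
```

Because no per-event Python loop appears, the same user code can run unchanged whether the table lives in one file on a laptop or is partitioned across the ranks of an HPC job.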

    Geant4 Computing Performance Benchmarking and Monitoring

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, includes FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded applications is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
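The final scalability metrics, event throughput and memory gain versus thread count, reduce to simple ratios over the measured wall-clock time and resident memory. A minimal sketch of that reduction, with all timings and memory figures invented for illustration:

```python
# Hypothetical benchmark measurements per thread count:
# threads -> (wall-clock seconds for 1000 events, resident memory in MB).
measurements = {
    1: (100.0, 500.0),
    2: (52.0, 560.0),
    4: (27.0, 680.0),
    8: (15.0, 920.0),
}

events = 1000
base_time, base_mem = measurements[1]

for n, (t, mem) in sorted(measurements.items()):
    throughput = events / t        # events per second
    speedup = base_time / t        # CPU-time scaling vs. 1 thread
    mem_per_thread = mem / n       # memory cost amortized over threads
    print(f"{n:2d} threads: {throughput:6.1f} ev/s, "
          f"speedup {speedup:4.2f}x, {mem_per_thread:6.1f} MB/thread")
```

Ideal scaling would double the throughput per doubling of threads at constant memory per thread; the gap between the measured and ideal curves is what the monitoring is meant to expose across releases.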

    Lattice QCD workflows: A case study

    This paper discusses the application of existing workflow management systems to a real-world science application, lattice QCD (LQCD). Typical workflows and the execution environment used in production are described, and requirements for the LQCD production system are discussed. The workflow management systems Askalon and Swift were tested by implementing the LQCD workflows and evaluated against the requirements. We report our findings and future work. Keywords: workflow; workflow management; lattice QCD.

    The novel Mechanical Ventilator Milano for the COVID-19 pandemic

    This paper presents the Mechanical Ventilator Milano (MVM), a novel intensive therapy mechanical ventilator designed for rapid, large-scale, low-cost production for the COVID-19 pandemic. Free of moving mechanical parts and requiring only a source of compressed oxygen and medical air to operate, the MVM is designed to support the long-term invasive ventilation often required for COVID-19 patients and operates in pressure-regulated ventilation modes, which minimize the risk of furthering lung trauma. The MVM was extensively tested against ISO standards in the laboratory using a breathing simulator, with good agreement between input and measured breathing parameters, and performed correctly in response to fault conditions and stability tests. The MVM has obtained Emergency Use Authorization from the U.S. Food and Drug Administration (FDA) for use in healthcare settings during the COVID-19 pandemic, and Health Canada Medical Device Authorization for Importation or Sale under the Interim Order for Use in Relation to COVID-19. Following these certifications, mass production is ongoing and distribution is under way in several countries. The MVM was designed, tested, prepared for certification, and mass produced in the space of a few months by a unique collaboration of respiratory healthcare professionals and experimental physicists, working with industrial partners, and is an excellent ventilator candidate for this pandemic anywhere in the world.