    Optimizing CMS build infrastructure via Apache Mesos

    The offline software of the CMS experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient in the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks; it can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Mesos-enabled cluster, and how this resulted in better resource usage, higher peak performance, and lower latency thanks to the dynamic scheduling capabilities of Mesos. Comment: Submitted to proceedings of the 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa, Japan.
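
    The dynamic scheduling rests on Mesos offering spare resources to a framework, which accepts them by launching tasks. A minimal sketch of such a framework using the classic (v0) libmesos C++ scheduler API follows; the task name, build command, and resource figures are illustrative assumptions, not the actual CMS setup.

    #include <mesos/scheduler.hpp>
    #include <string>
    #include <vector>

    // Sketch of a CI build scheduler: claim each resource offer and run
    // one (hypothetical) build task on it.
    class BuildScheduler : public mesos::Scheduler {
    public:
      // Called whenever the Mesos master has spare resources; this is where
      // the dynamic scheduling happens: idle nodes are claimed immediately.
      void resourceOffers(mesos::SchedulerDriver* driver,
                          const std::vector<mesos::Offer>& offers) override {
        for (const mesos::Offer& offer : offers) {
          mesos::TaskInfo task;
          task.set_name("ci-build");
          task.mutable_task_id()->set_value("build-" + offer.id().value());
          task.mutable_slave_id()->MergeFrom(offer.slave_id());
          // Hypothetical build command; the real system takes job
          // definitions from the CI queue (e.g. Jenkins).
          task.mutable_command()->set_value("make -j4");
          mesos::Resource* cpus = task.add_resources();
          cpus->set_name("cpus");
          cpus->set_type(mesos::Value::SCALAR);
          cpus->mutable_scalar()->set_value(4);
          driver->launchTasks(offer.id(), {task});
        }
      }

      // The remaining callbacks are not interesting for this sketch.
      void registered(mesos::SchedulerDriver*, const mesos::FrameworkID&,
                      const mesos::MasterInfo&) override {}
      void reregistered(mesos::SchedulerDriver*, const mesos::MasterInfo&) override {}
      void disconnected(mesos::SchedulerDriver*) override {}
      void offerRescinded(mesos::SchedulerDriver*, const mesos::OfferID&) override {}
      void statusUpdate(mesos::SchedulerDriver*, const mesos::TaskStatus&) override {}
      void frameworkMessage(mesos::SchedulerDriver*, const mesos::ExecutorID&,
                            const mesos::SlaveID&, const std::string&) override {}
      void slaveLost(mesos::SchedulerDriver*, const mesos::SlaveID&) override {}
      void executorLost(mesos::SchedulerDriver*, const mesos::ExecutorID&,
                        const mesos::SlaveID&, int) override {}
      void error(mesos::SchedulerDriver*, const std::string&) override {}
    };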

    IGUANA Architecture, Framework and Toolkit for Interactive Graphics

    IGUANA is a generic interactive visualisation framework based on a C++ component model. It provides powerful user interface and visualisation primitives in a way that is not tied to any particular physics experiment or detector design. The article describes interactive visualisation tools built using IGUANA for the CMS and D0 experiments, as well as generic GEANT4 and GEANT3 applications. It covers features of the graphical user interfaces, 3D and 2D graphics, high-quality vector graphics output for print media, various textual, tabular and hierarchical data views, and integration with the application through control panels, a command line and different multi-threading models. Comment: Presented at the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003. 6 pages LaTeX, 4 EPS figures, PSN MOLT008. More and higher-resolution figures at http://iguana.web.cern.ch/iguana/snapshot/main/gallery.htm
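
    Purely as an illustration of the component-model idea (the names below are hypothetical, not IGUANA's actual interfaces), such a framework typically pairs an abstract component interface with a name-keyed factory registry, so experiment-specific views can be loaded without the shell knowing their concrete classes.

    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    // Hypothetical component interface; IGUANA's real interfaces differ.
    class IgComponent {
    public:
      virtual ~IgComponent() = default;
      virtual void attach() = 0;  // hook this view into the running GUI
    };

    // Name-keyed factory registry: the shell instantiates a "3D view" or
    // "event tree" by name, with no compile-time link to the concrete class.
    class IgRegistry {
    public:
      using Factory = std::function<std::unique_ptr<IgComponent>()>;
      static IgRegistry& instance() { static IgRegistry r; return r; }
      void add(const std::string& name, Factory f) { factories_[name] = std::move(f); }
      std::unique_ptr<IgComponent> create(const std::string& name) const {
        auto it = factories_.find(name);
        if (it == factories_.end()) return nullptr;
        return it->second();
      }
    private:
      std::map<std::string, Factory> factories_;
    };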

    Optimization of the CMS software build and distribution system

    CMS software consists of over two million lines of code actively developed by hundreds of developers from all around the world. Optimal build, release, and distribution of such a large-scale system for production and analysis activities at hundreds of sites and on multiple platforms is quite a challenge. Its dependence on more than one hundred external tools makes its build and distribution even more complex. We describe how parallel building of the software and minimizing the size of the distribution dramatically reduced the time gap between software build and installation on remote sites, and how producing a few large binary products, instead of thousands of small ones, helped uncover integration and runtime issues in the software.
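
    As a toy illustration of the two optimizations only (not the actual CMS build tooling, which is based on SCRAM; file names here are invented), independent compilation units can be dispatched concurrently and the resulting objects linked into one large shared library rather than thousands of small ones.

    #include <cstdlib>
    #include <string>
    #include <thread>
    #include <vector>

    // Toy parallel build: compile independent units concurrently. A real
    // build system derives the unit list and ordering from the dependency
    // graph and bounds concurrency by the available cores.
    int main() {
      const std::vector<std::string> units = {"a.cc", "b.cc", "c.cc", "d.cc"};
      std::vector<std::thread> workers;
      for (const std::string& u : units)
        workers.emplace_back([u] { std::system(("g++ -O2 -c " + u).c_str()); });
      for (std::thread& w : workers) w.join();
      // Second optimization: one big binary product instead of many small ones.
      return std::system("g++ -shared -o libbig.so a.o b.o c.o d.o");
    }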

    HEP C++ meets reality

    In 2007 the CMS experiment first reported initial findings on the impedance mismatch between HEP use of C++ and the current generation of compilers and CPUs. Since then we have continued our analysis of the CMS experiment code base, including the external packages we use. We have found that large amounts of C++ code have been written largely ignoring the physical reality of the resulting machine code and run-time execution costs, including and especially software developed by experts. We report on a wide range of issues affecting typical high energy physics code, in the form of coding pattern - impact - lesson - improvement.
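
    In the same pattern - impact - lesson - improvement spirit, here is a representative example (invented for illustration, not taken from the paper): chained string concatenation looks cheap in source code but maps to several heap allocations per call in machine code.

    #include <string>

    // Pattern: operator+ chains allocate a temporary per '+'.
    // Impact: hidden allocator traffic in hot loops.
    std::string labelSlow(int run, int event) {
      return std::string("run") + std::to_string(run) +
             "/evt" + std::to_string(event);
    }

    // Lesson: think about what the machine actually executes.
    // Improvement: reserve once, append in place; same result, far fewer
    // allocations when called millions of times per job.
    std::string labelFast(int run, int event) {
      std::string s;
      s.reserve(32);
      s += "run";  s += std::to_string(run);
      s += "/evt"; s += std::to_string(event);
      return s;
    }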

    Explorations of the viability of ARM and Xeon Phi for physics processing

    We report on our investigations into the viability of the ARM processor and the Intel Xeon Phi co-processor for scientific computing. We describe our experience porting software to these processors and running benchmarks using real physics applications to explore their potential for production physics processing. Comment: Submitted to proceedings of the 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP13), Amsterdam.
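
    A minimal, portable microbenchmark of the kind such porting studies rely on (illustrative, not one of the paper's actual benchmarks): the same source compiles unchanged for x86-64, ARM, or Xeon Phi and reports the throughput of a vectorizable kernel.

    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
      const std::size_t n = 1 << 24;
      std::vector<float> x(n, 1.0f), y(n, 2.0f);
      auto t0 = std::chrono::steady_clock::now();
      for (std::size_t i = 0; i < n; ++i)
        y[i] = 2.5f * x[i] + y[i];  // axpy-style kernel, auto-vectorizable
      auto t1 = std::chrono::steady_clock::now();
      double s = std::chrono::duration<double>(t1 - t0).count();
      std::printf("%.1f Melem/s\n", n / s / 1e6);
      return 0;
    }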

    CMS Monte Carlo production in the WLCG computing Grid

    Monte Carlo production in CMS has received a major boost in performance and scale since the CHEP06 conference. The production system has been re-engineered to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system, and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented, together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, and data transfer and placement systems. Compared to the previous production system, automation, reliability, and performance have been considerably improved. More efficient use of computing resources and better handling of the inherent Grid unreliability have increased the production scale by about an order of magnitude: the system can run on the order of ten thousand jobs in parallel, yielding more than two million events per day.
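
    The automated chain can be pictured as a per-job state machine; the sketch below is an invented simplification for illustration, not the production system's actual code.

    #include <string>

    // Simplified per-job life cycle: submit -> run -> merge small outputs ->
    // register into the bookkeeping, location, and transfer systems.
    // Failures feed back into resubmission, which is how the inherent Grid
    // unreliability is absorbed.
    enum class JobState { Created, Queued, Running, Merging, Registered, Failed };

    struct Job {
      std::string id;
      JobState state = JobState::Created;
    };

    JobState advance(JobState s, bool stepSucceeded) {
      if (!stepSucceeded) return JobState::Failed;  // candidate for automatic resubmission
      switch (s) {
        case JobState::Created: return JobState::Queued;      // job submission
        case JobState::Queued:  return JobState::Running;     // matched to a free resource
        case JobState::Running: return JobState::Merging;     // data merging
        case JobState::Merging: return JobState::Registered;  // bookkeeping + placement
        default:                return s;
      }
    }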