
    Micro-CernVM: Slashing the Cost of Building and Deploying Virtual Machines

    The traditional virtual machine building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM File System has been established in order to decouple the distribution of the experiment software from the building and distribution of the VM hard disk images. We show how to do away with such pre-built hard disk images altogether. Due to the high requirements on POSIX compliance imposed by HEP application software, CernVM-FS can also be used to host and boot a Linux operating system. This allows the use of a tiny bootable CD image that comprises only a Linux kernel, while the rest of the operating system is provided on demand by CernVM-FS. This approach shortens the initial instantiation time and reduces virtual machine image sizes by an order of magnitude. Furthermore, security updates can be distributed instantaneously through CernVM-FS. By leveraging the fact that CernVM-FS is a versioning file system, a historic analysis environment can easily be re-spawned by selecting the corresponding CernVM-FS file system snapshot.
    Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP) Conference, Amsterdam
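    A minimal sketch of how a historic environment could be pinned on a CernVM-FS client, assuming the standard client configuration layout and the CVMFS_REPOSITORY_TAG parameter for mounting a named snapshot; the repository name and tag below are illustrative, not taken from the paper.

```python
# Pin a CernVM-FS client to a named snapshot so that a historic analysis
# environment can be re-spawned (assumes the standard client config layout
# and the CVMFS_REPOSITORY_TAG parameter; names are placeholders).
from pathlib import Path

REPO = "cernvm-prod.cern.ch"   # hypothetical repository name
TAG = "snapshot-2013-10-14"    # hypothetical snapshot tag

config = Path(f"/etc/cvmfs/config.d/{REPO}.local")
config.write_text(f"CVMFS_REPOSITORY_TAG={TAG}\n")  # mount this snapshot instead of the latest
print(f"Wrote {config}; remount {REPO} for the tag to take effect.")
```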

    CernVM Online and Cloud Gateway: a uniform interface for CernVM contextualization and deployment

    In a virtualized environment, contextualization is the process of configuring a VM instance for the needs of various deployment use cases. Contextualization in CernVM can be done by passing a handwritten context to the user-data field of cloud APIs when running CernVM in a cloud, or by using the CernVM web interface when running the VM locally. CernVM Online is a publicly accessible web interface that unifies these two procedures. A user is able to define, store and share CernVM contexts using CernVM Online and then apply them either in a cloud by using CernVM Cloud Gateway or on a local VM with the single-step pairing mechanism. CernVM Cloud Gateway is a distributed system that provides a single interface to multiple, different clouds (by location or type, private or public). The Cloud Gateway has so far been integrated with the OpenNebula, CloudStack and EC2 tools interfaces. A user with access to a number of clouds can run CernVM cloud agents that communicate with these clouds using their interfaces, and then use one single interface to deploy and scale CernVM clusters. CernVM clusters are defined in CernVM Online and consist of a set of CernVM instances that are contextualized and can communicate with each other.
    Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP) Conference, Amsterdam
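    As an illustration of the first contextualization path mentioned above, the sketch below passes a context through the EC2 user-data field using boto3; the AMI ID, region, instance type and context body are placeholders, not an actual CernVM Online export.

```python
# Launch a CernVM instance on an EC2-compatible cloud, supplying the
# contextualization payload via the user-data field (illustrative values).
import boto3

context = "\n".join([
    "[amiconfig]",
    "plugins: cernvm",
    "[cernvm]",
    "repositories: atlas,atlas-condb,grid",
])  # hypothetical context body

ec2 = boto3.client("ec2", region_name="eu-west-1")
ec2.run_instances(
    ImageId="ami-00000000",    # placeholder CernVM image ID
    InstanceType="m1.small",   # placeholder instance type
    MinCount=1,
    MaxCount=1,
    UserData=context,          # the contextualization payload
)
```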

    Performance of the Gas Gain Monitoring system of the CMS RPC muon detector and effective working point fine tuning

    The Gas Gain Monitoring (GGM) system of the Resistive Plate Chamber (RPC) muon detector in the Compact Muon Solenoid (CMS) experiment provides fast and accurate monitoring of the stability of the working point conditions against gas mixture changes in the closed-loop recirculation system. In 2011 the GGM began to operate with a feedback algorithm that controls the applied voltage in order to keep the GGM response insensitive to variations in environmental temperature and atmospheric pressure. Recent results are presented on the feedback method used and on alternative algorithms.
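    A brief sketch of the kind of environmental correction such a feedback can apply, assuming the widely used pressure/temperature scaling of the RPC applied voltage; this is not necessarily the exact algorithm adopted by the GGM system, and the reference values are illustrative.

```python
# Compute the applied HV that keeps the effective HV (the working point)
# constant under temperature and pressure variations, using the common
# P/T scaling; reference conditions below are placeholders.
T0 = 293.0   # reference temperature [K]
P0 = 965.0   # reference pressure [mbar]

def applied_hv(hv_eff_target: float, temperature_k: float, pressure_mbar: float) -> float:
    """Applied voltage needed to hold the effective voltage at hv_eff_target."""
    return hv_eff_target * (pressure_mbar / P0) * (T0 / temperature_k)

# Example: a ~2% pressure rise calls for a ~2% higher applied voltage.
print(applied_hv(9600.0, 293.0, 984.3))
```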

    The Upgrade of the CMS RPC System during the First LHC Long Shutdown

    The CMS muon system includes Resistive Plate Chambers (RPC) in both the barrel and endcap regions. They mainly serve as trigger detectors and also improve the reconstruction of muon parameters. The instantaneous luminosity of the Large Hadron Collider has been gradually increasing over the years. During LHC Phase 1 (roughly the first 10 years of operation) an ultimate luminosity above the design value of 10^34 cm^-2 s^-1 at 14 TeV is expected. To prepare the machine and the experiments for this, two long shutdown periods are scheduled for 2013-2014 and 2018-2019. The CMS Collaboration is planning several detector upgrades during these long shutdowns. In particular, the muon detection system should be able to maintain a low-pT threshold for an efficient Level-1 Muon Trigger at high particle rates. One of the measures to ensure this is to extend the present RPC system with the addition of a 4th layer in both endcap regions. During the first long shutdown, these two new stations will be equipped in the region |eta|<1.6 with 144 High Pressure Laminate (HPL) double-gap RPCs operating in avalanche mode, with a design similar to that of the existing CMS endcap chambers. Here, we present the upgrade plans for the CMS RPC system for the first long shutdown, including trigger simulation studies for the extended system, and details on the new HPL production, the chamber assembly and the quality control procedures.
    Comment: 9 pages, 6 figures, presented by M. Tytgat at the XI Workshop on Resistive Plate Chambers and Related Detectors (RPC2012), INFN - Laboratori Nazionali di Frascati, February 5-10, 2012