    Implementation and use of a highly available and innovative IaaS solution: the Cloud Area Padovana

    While in the business world the cloud paradigm is typically implemented by purchasing resources and services from third-party providers (e.g. Amazon), in the scientific environment there is usually a need for on-premises IaaS infrastructures that allow efficient usage of hardware distributed among (and owned by) different scientific administrative domains. In addition, the requirement of open-source adoption has led many organizations to choose products like OpenStack. We describe a use case of the Italian National Institute for Nuclear Physics (INFN) which resulted in the implementation of a unique cloud service, called ‘Cloud Area Padovana’, which encompasses resources spread over two different sites: the INFN Legnaro National Laboratories and the INFN Padova division. We describe how this IaaS has been implemented, which technologies have been adopted, and how services have been configured in high-availability (HA) mode. We also discuss how identity and authorization management were implemented, adopting a widely accepted standard architecture based on SAML2 and OpenID: by leveraging the versatility of those standards, integration with authentication federations like IDEM was implemented. We also discuss other innovative developments, such as a pluggable scheduler, implemented as an extension of the native OpenStack scheduler, which allocates resources according to a fair-share model and provides a persistent queuing mechanism for handling user requests that cannot be served immediately. The tools, technologies, and procedures used to install, configure, monitor, and operate this cloud service are also discussed. Finally, we present some examples that show how this IaaS infrastructure is being used.
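
    As a rough illustration of the fair-share idea described above, the following minimal Python sketch orders queued requests by a priority that grows with a project's configured share and shrinks with its recent usage. All names, the priority formula, and the in-memory queue are illustrative assumptions; the actual scheduler extends OpenStack and persists its queue.

        import heapq
        import itertools

        # Minimal fair-share queue sketch (illustrative; not the Cloud Area
        # Padovana scheduler itself). Each project has a configured share; its
        # effective priority drops as its recent usage grows, so under-served
        # projects are dequeued first. Persistence (e.g. a database-backed
        # queue) is omitted, and priorities are frozen at submit time for
        # simplicity.

        class FairShareQueue:
            def __init__(self, shares):
                self.shares = shares                    # project -> share
                self.usage = {p: 0.0 for p in shares}   # project -> usage
                self._heap = []
                self._counter = itertools.count()       # FIFO tie-breaker

            def _priority(self, project):
                # Higher share and lower recent usage => higher priority.
                return self.shares[project] / (1.0 + self.usage[project])

            def submit(self, project, request):
                # heapq is a min-heap, so store the negated priority.
                heapq.heappush(self._heap,
                               (-self._priority(project),
                                next(self._counter), project, request))

            def dispatch(self, cost=1.0):
                # Pop the highest-priority request; charge the project.
                if not self._heap:
                    return None
                _, _, project, request = heapq.heappop(self._heap)
                self.usage[project] += cost
                return project, request

        q = FairShareQueue({"cms": 0.6, "theory": 0.4})
        q.submit("cms", "vm-1"); q.submit("theory", "vm-2"); q.submit("cms", "vm-3")
        while (job := q.dispatch()) is not None:
            print(job)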

    The Run Control and Monitoring System of the CMS Experiment

    The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control, and monitor the experiment during data-taking, the Run Control and Monitoring System (RCMS) was developed. This paper describes the architecture and the technology used to implement the RCMS, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.
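
    As a rough illustration of the run-control idea (the states and the propagation scheme below are simplified assumptions, not the actual RCMS design), a controller can drive the experiment by propagating each command down a tree of sub-system controllers that all run the same finite state machine:

        # Toy hierarchical run-control sketch (states and structure are
        # simplified assumptions). A parent node forwards each command to its
        # children first and only changes its own state once all children have.

        class Node:
            TRANSITIONS = {("Halted", "configure"): "Configured",
                           ("Configured", "start"): "Running",
                           ("Running", "stop"): "Configured"}

            def __init__(self, name, children=()):
                self.name, self.children, self.state = name, list(children), "Halted"

            def command(self, cmd):
                for child in self.children:        # depth-first: leaves first
                    child.command(cmd)
                new = self.TRANSITIONS.get((self.state, cmd))
                if new is None:
                    raise RuntimeError(f"{self.name}: illegal '{cmd}' in {self.state}")
                self.state = new
                print(f"{self.name}: {new}")

        top = Node("cms", [Node("tracker"), Node("daq", [Node("eventbuilder")])])
        top.command("configure")
        top.command("start")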

    High Level Trigger Configuration and Handling of Trigger Tables in the CMS Filter Farm

    The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data in 2008. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read out and assembled by an Event Builder. The High Level Trigger (HLT) employs a set of sophisticated software algorithms to analyze the complete event information and further reduce the accepted event rate for permanent storage and analysis. This paper describes the design and implementation of the HLT Configuration Management system. First experiences with the commissioning of the HLT system are also reported.
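
    The configuration-management problem can be pictured with a small sketch. The names and structures below are illustrative assumptions, not the actual system: each trigger table is an immutable, versioned mapping from trigger paths to module sequences, and every run records the version it ran with, so any event can later be traced back to the exact selection used.

        # Sketch of trigger-table bookkeeping (illustrative assumptions only;
        # the real system manages HLT configurations in a database).

        hlt_tables = {
            "v1": {"HLT_Mu": ["unpack", "muonReco", "muFilter"],
                   "HLT_Jet": ["unpack", "jetReco", "jetFilter"]},
        }

        def new_version(base, changes):
            """Derive a new immutable trigger table from an existing one."""
            table = {path: list(seq) for path, seq in hlt_tables[base].items()}
            table.update(changes)
            version = f"v{len(hlt_tables) + 1}"
            hlt_tables[version] = table
            return version

        runs = {}
        runs[1001] = "v1"                        # run 1001 took data with v1
        v2 = new_version("v1", {"HLT_Jet": ["unpack", "jetReco", "tightJetFilter"]})
        runs[1002] = v2                          # later run uses the new table
        print(runs, hlt_tables[v2]["HLT_Jet"])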

    Study of the Internal Alignment of the CMS Muon Barrel Drift Chambers Using Cosmic Ray Tracks

    This note describes the alignment studies performed on 21 Muon Barrel Drift Tube chambers for the CMS experiment, assembled in the INFN production center at the Legnaro National Laboratories. Data were collected using the cosmic ray test facility which was set up in Legnaro to test the chamber behaviour before the chambers were moved to CERN. An alignment procedure using cosmic ray tracks has been developed, allowing a measurement of the internal misalignment of the wire layers inside a chamber "superlayer" (SL) and of the relative misalignment of the two SLs in the r-phi bending plane. The analysis shows that the wire layers are positioned with an rms precision of about 45 microns; the relative positioning of the two SLs in the r-phi bending plane is measured to have a distribution with an average value of ~50 microns and an rms of 200 microns.
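
    The residual method underlying such an alignment can be sketched as follows (the geometry, units, and resolutions are illustrative assumptions): fit a straight track to the hits in all layers except one, and take the mean residual in the excluded layer, averaged over many cosmic tracks, as an estimate of that layer's misalignment.

        # Minimal residual-method sketch (illustrative; not the actual
        # procedure of the note). Layer 2 is given a 45 um offset; tracks are
        # fitted on the other three layers and the mean residual in layer 2
        # recovers the offset despite the much larger single-hit resolution.

        import numpy as np

        rng = np.random.default_rng(0)
        z = np.array([0.0, 1.3, 2.6, 3.9])   # layer positions (cm, assumed)
        true_offset = 0.0045                  # 45 um misalignment (cm)

        residuals = []
        for _ in range(2000):                 # simulated cosmic tracks
            slope, intercept = rng.normal(0, 0.3), rng.normal(0, 1.0)
            x = slope * z + intercept + rng.normal(0, 0.025, size=z.size)
            x[2] += true_offset
            ref = [0, 1, 3]                   # fit on the reference layers only
            a, b = np.polyfit(z[ref], x[ref], 1)
            residuals.append(x[2] - (a * z[2] + b))

        print(f"estimated offset: {np.mean(residuals) * 1e4:.0f} um")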

    Flexible custom designs for CMS DAQ

    The CMS central DAQ system is built using commercial hardware (PCs and networking equipment), except for two components: the Front-end Readout Link (FRL) and the Fast Merger Module (FMM). The FRL interfaces the sub-detector-specific front-end electronics to the central DAQ system in a uniform way. The FRL is a CompactPCI module with an additional 64-bit PCI connector to host a Network Interface Card (NIC). On the sub-detector side, data are written to the link using a FIFO-like protocol (SLINK64). The link uses Low-Voltage Differential Signaling (LVDS) technology to transfer data with a throughput of up to 400 MBytes/s. The FMM modules collect status signals from the front-end electronics of the sub-detectors, merge and monitor them, and provide the resulting signals with low latency to the first-level trigger electronics. In particular, the throttling signals allow the trigger to avoid buffer overflows and data corruption in the front-end electronics when the data produced in the front-ends exceed the capacity of the DAQ system. Both cards are CompactPCI cards with a 6U form factor. They are implemented with FPGAs. The main FPGA implements the processing logic of the card and the interfaces to the variety of buses on the card. Another FPGA contains a custom CompactPCI interface for configuration, control, and monitoring. The chosen technology provides the flexibility to implement new features if required.
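
    The merging role of the FMM can be pictured with a toy model (the state names and their ordering below are assumptions; the real logic lives in FPGA firmware): each enabled input reports a throttle state, and the output forwarded to the trigger is the most restrictive state present on any input.

        # Toy model of FMM throttle-signal merging (assumption-level sketch,
        # not the firmware). Inputs can be masked out via an enable list.

        from enum import IntEnum

        class TTS(IntEnum):   # ordered least to most restrictive (assumed)
            READY = 0
            WARNING = 1
            BUSY = 2
            ERROR = 3

        def merge(states, enabled=None):
            """Merge per-input throttle states into one output state."""
            if enabled is not None:
                states = [s for s, on in zip(states, enabled) if on]
            return max(states, default=TTS.READY)

        inputs = [TTS.READY, TTS.WARNING, TTS.READY, TTS.BUSY]
        print(merge(inputs))                          # TTS.BUSY
        print(merge(inputs, enabled=[1, 1, 1, 0]))    # TTS.WARNING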

    The terabit/s super-fragment builder and trigger throttling system for the Compact Muon Solenoid experiment at CERN

    The Data Acquisition System of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kilobytes from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event building is performed by the Super-Fragment Builder, employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. By providing fast feedback from any of the front-ends to the trigger, the Trigger Throttling System prevents buffer overflows in the front-end electronics due to variations in the size and rate of events or due to back-pressure from the downstream event building and processing. This paper reports on new performance measurements and on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major sub-detectors. The ongoing commissioning of the full-scale system is discussed.
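
    The "terabit/s" figure in the title follows directly from the numbers quoted above, as this back-of-the-envelope check shows:

        # Aggregate input rate implied by the abstract: 650 front-ends,
        # 2 kB average fragment size, up to 100 kHz event rate.
        fragments = 650
        fragment_size = 2e3        # bytes
        rate = 100e3               # events per second

        throughput = fragments * fragment_size * rate    # bytes/s
        print(f"{throughput / 1e9:.0f} GB/s = {throughput * 8 / 1e12:.2f} Tb/s")
        # -> 130 GB/s = 1.04 Tb/s, i.e. the terabit/s of the title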