
    An Assessment of a Model for Error Processing in the CMS Data Acquisition System

    The CMS Data Acquisition System consists of O(20000) interdependent services. A system providing exception and application-specific monitoring data is essential for the operation of such a cluster. Due to the number of involved services, the amount of monitoring data is higher than a human operator can handle efficiently. Moving the expert knowledge for error analysis from the operator to a dedicated system is therefore a natural choice: it reduces the number of notifications to the operator, simplifies visualization, and provides meaningful descriptions of error causes together with suggestions for possible countermeasures. This paper discusses the architecture of a workflow-based, hierarchical error analysis system for the CMS Data Acquisition System built on Guardians, which provide a common interface for error analysis of a specific service or subsystem. To provide effective and complete error analysis, the requirements regarding information sources, monitoring and configuration are analyzed. Formats for common notification types are defined, and a generic Guardian based on Event-Condition-Action rules is presented as a proof of concept.
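
    To make the Event-Condition-Action idea concrete: a rule pairs a condition (a predicate over an incoming notification) with an action (an operator message or a suggested countermeasure), and a Guardian simply evaluates every notification against its rule set. The C++ below is a minimal sketch of that pattern only; all type and member names are hypothetical and do not correspond to the CMS/XDAQ interfaces.

    // Minimal Event-Condition-Action rule engine illustrating the Guardian idea.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Notification {            // the "event": one monitoring or error message
        std::string source;          // reporting service
        std::string type;            // e.g. "exception", "alarm"
        int severity;                // 0 = info ... 3 = fatal
        std::string text;
    };

    struct Rule {                    // condition + action pair
        std::function<bool(const Notification&)> condition;
        std::function<void(const Notification&)> action;
    };

    class Guardian {                 // evaluates each notification against its rules
    public:
        void addRule(Rule r) { rules_.push_back(std::move(r)); }
        void onNotification(const Notification& n) const {
            for (const auto& rule : rules_)
                if (rule.condition(n)) rule.action(n);
        }
    private:
        std::vector<Rule> rules_;
    };

    int main() {
        Guardian guardian;
        // Rule: escalate severe exceptions and attach a suggested countermeasure.
        guardian.addRule({
            [](const Notification& n) { return n.type == "exception" && n.severity >= 3; },
            [](const Notification& n) {
                std::cout << "ALERT from " << n.source << ": " << n.text
                          << " -> suggested action: restart the service\n";
            }});
        guardian.onNotification({"eventbuilder-12", "exception", 3, "buffer overflow"});
    }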

    Ice Detection and Mitigation Device

    A method for deicing an aerostructure includes driving a sensing current through a heater element coated onto the aerostructure, the heater element having a resistance that is temperature dependent. The resistance of the heater element is monitored, and the monitored resistance is used to determine whether there is icing at the heater element. A melting current is driven through the heater element when it is determined that icing is present.
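
    The claimed method amounts to a simple measure-decide-act loop: drive a small sensing current, monitor the temperature-dependent resistance of the heater element, and switch to a large melting current when the reading indicates ice. The sketch below illustrates only that loop; the hardware access functions, the detection criterion (a plain threshold on the measured resistance) and all numeric values are hypothetical placeholders, not the patented design.

    // Illustrative sketch of the resistance-based detection loop.
    #include <iostream>

    // Simulated hardware: a real system would command a current driver and read
    // the element resistance back through an ADC (voltage/current across it).
    static double g_driveCurrentA = 0.0;
    void setHeaterCurrent(double amps) { g_driveCurrentA = amps; }
    double readHeaterResistanceOhms() {
        // Fake reading for the demo only.
        return g_driveCurrentA > 0.0 ? 10.1 : 0.0;
    }

    int main() {
        const double senseCurrentA  = 0.05;  // low current: measure without heating
        const double meltCurrentA   = 5.0;   // high current: melt accreted ice
        const double iceFreeMinOhms = 10.5;  // assumed ice-free lower bound

        // 1. Drive the sensing current and monitor the element's resistance.
        setHeaterCurrent(senseCurrentA);
        const double r = readHeaterResistanceOhms();

        // 2. Decide from the temperature-dependent resistance whether ice is present.
        if (r < iceFreeMinOhms) {
            // 3. Drive the melting current through the same heater element.
            std::cout << "Icing detected (R = " << r << " ohm), heating\n";
            setHeaterCurrent(meltCurrentA);
        } else {
            std::cout << "No icing (R = " << r << " ohm)\n";
            setHeaterCurrent(0.0);
        }
    }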

    Consulting Project 2018/19: Manufacturing process of superconducting magnets: Analysis of manufacturing chain technologies for market-oriented industries. Report

    An international consortium of more than 150 organisations worldwide is studying the feasibility of future particle collider scenarios to expand our understanding of the inner workings of the Universe. The core of this Future Circular Collider (FCC) study, hosted by CERN, an international organisation near Geneva (Switzerland), is a 100 km long circular particle collider infrastructure that extends CERN's current accelerator complex. As a first step, an intensity-frontier electron-positron collider is assumed. The ultimate goal is to build a proton collider with an energy seven times larger than that of the Large Hadron Collider (LHC). Such a machine has to be built with novel superconducting magnet technology. Since it takes decades for such technology to reach industrial maturity, R&D has already started. The superconducting magnet system is considered the major cost driver for the construction of such a proton collider, and a good cost-benefit balance for industrial suppliers is considered an important factor for the funding of such a project.
    Aim: The aim of this investigation was to identify the industrial impact potential of the key processes needed for the manufacturing of novel high-field superconducting magnets and to find innovative additional applications for these technologies outside the particle-accelerator domain. Suppliers and manufacturing partners of CERN would benefit if the know-how could be used for other markets and to improve their internal efficiency and competitiveness on the world market. Eventually, being more cost-effective in manufacturing and being able to leverage further markets on a long time scale will also reduce the cost of each step in the manufacturing chain and ultimately lead to lower costs for the superconducting magnet system of a future high-energy particle collider.
    Method: The project was carried out by means of the Technology Competence Leveraging method, pioneered by the Vienna University of Economics and Business in Austria. It aims to find new application fields for the three most promising technologies required to manufacture novel high-field superconducting magnets. This is achieved by gathering information from user communities, conducting interviews with experts in different industries and brainstorming for new out-of-the-box ideas. The most valuable application fields were evaluated according to their Benefit Relevance and Strategic Fit. During the process, 71 interviews with experts were carried out, through which 38 new application fields with credible impacts beyond particle-accelerator projects were found. They relate to the manufacturing of superconducting Rutherford cables (15), thermal treatment (10) and vacuum impregnation with novel epoxy (13).
    Results: A short description of all application fields that were classified as "high potential" follows.
    Superconducting Rutherford cable:
    * Aircraft charging: Commercial airplanes only spend around 45 minutes on the ground at a time to load and unload passengers. For future electric aircraft this time window would be too small to charge using conventional cables. The superconducting Rutherford cable could charge an electric plane quickly and efficiently.
    * Electricity distribution in hybrid-electric aircraft: On a shorter time scale, hybrid-electric aircraft are an appealing ecological technology with economic advantages. In this case, electricity for the electric engines is produced by a generator, and cables with high current densities are needed inside the aircraft to distribute the energy. The superconducting Rutherford cable could be a candidate for this task.
    * Compact and efficient electricity generators: Using the superconducting Rutherford cable, small and light engines and generators can be constructed. One end-use example is the generation of electricity with highly efficient wind turbines.
    Thermal treatment: Heat treatment is needed during the production of superconducting magnet coils. In this processing step, the raw materials are reacted to form the superconductor; it is used for certain low-temperature superconductors as well as for certain high-temperature superconductors.
    * Scrap metal recycling: Using a large-scale oven with very accurate temperature stabilisation over long time periods, the melting points of different metals can be selected. This leads to more efficient recycling of scrap metal and permits a higher degree of process automation and quality management.
    * Thermal treatment of aluminium: Thermal treatment of aluminium comprises technologies such as tempering and hardening. The goal of this technique is to change the characteristics of aluminium and of alloys containing aluminium. End-use applications include, for instance, the automotive and aerospace industries, where such exact treatment is necessary.
    Vacuum impregnation:
    * Waste treatment: Waste-treatment companies currently face challenges because new legislation requires more leak-tight containers. The novel epoxy resin developed for superconducting magnets in particle colliders also has to withstand high radiation levels, so this technology can be useful in the management of highly activated radioactive waste.

    Using XDAQ in Application Scenarios of the CMS Experiment

    XDAQ is a generic data acquisition software environment that emerged from a rich set of use-cases encountered in the CMS experiment. They cover not only the deployment for multiple sub-detectors and the operation of different processing and networking equipment, but also distributed collaboration of users with different needs. The use of the software in various application scenarios demonstrated the viability of the approach. We discuss two applications, the tracker local DAQ system for front-end commissioning and the muon chamber validation system. The description is completed by a brief overview of XDAQ. Comment: Conference CHEP 2003 (Computing in High Energy and Nuclear Physics, La Jolla, CA).

    The CMS Event Builder

    The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling, are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from the fragments of 8 data sources. The second stage combines the 64 super-fragments into full events. This architecture allows installation of the second stage of the event builder in steps, with the overall throughput scaling linearly with the number of switches in the second stage. Possible implementations of the components of the event builder are discussed and the expected performance of the full event builder is outlined. Comment: Conference CHEP0
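
    The two-stage scheme can be illustrated with a short sketch: each of the 64 first-stage switches concentrates the fragments of 8 data sources into a super-fragment, and the second stage concatenates the 64 super-fragments of an event into a full event. The C++ below only illustrates that data flow under simplified assumptions (fragments as byte vectors of fixed size); it is not the CMS event builder code.

    // Two-stage event building sketch: fragments -> super-fragments -> event.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    using Fragment      = std::vector<uint8_t>;  // data from one front-end source
    using SuperFragment = std::vector<uint8_t>;  // concatenation of 8 fragments
    using Event         = std::vector<uint8_t>;  // concatenation of 64 super-fragments

    // Stage 1: one first-stage switch concentrates the fragments of 8 sources.
    SuperFragment buildSuperFragment(const std::vector<Fragment>& fragments) {
        SuperFragment sf;
        for (const auto& f : fragments) sf.insert(sf.end(), f.begin(), f.end());
        return sf;
    }

    // Stage 2: assemble the 64 super-fragments belonging to one event.
    Event buildEvent(const std::vector<SuperFragment>& superFragments) {
        Event ev;
        for (const auto& sf : superFragments) ev.insert(ev.end(), sf.begin(), sf.end());
        return ev;
    }

    int main() {
        std::vector<SuperFragment> superFragments;
        for (int sw = 0; sw < 64; ++sw) {                       // 64 first-stage switches
            std::vector<Fragment> fragments(8, Fragment(2048, 0)); // 8 fragments of 2 kB each
            superFragments.push_back(buildSuperFragment(fragments));
        }
        Event ev = buildEvent(superFragments);                  // 64 x 8 = 512 sources
        std::cout << "built event of " << ev.size() << " bytes from "
                  << 64 * 8 << " fragments\n";
    }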

    The CMS event builder demonstrator based on Myrinet

    The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large and high-performance event building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator which has been set up to study a small-scale (8×8) event builder based on a Myrinet switch. Measurements are presented on throughput, overhead and scaling for various traffic conditions. Results are shown on event building with a push architecture.

    A software approach for readout and data acquisition in CMS

    Traditional systems dominated by performance constraints tend to neglect other qualities such as maintainability and configurability. Object-orientation allows one to encapsulate the technology differences in communication sub-systems and to provide a uniform view of the data transport layer to the systems engineer. We applied this paradigm to the design and implementation of intelligent data servers in the Compact Muon Solenoid (CMS) data acquisition system at CERN, in order to easily exploit the physical communication resources of the available equipment. CMS is a high-energy physics experiment under study that incorporates a highly distributed data acquisition system. This paper outlines the architecture of one part, the so-called Readout Unit, and shows how the advantages of objects can be exploited in systems with specific data rate requirements. A C++ streams communication layer with zero-copy functionality has been established for UDP, TCP, DLPI and specific Myrinet and VME bus communication on the VxWorks real-time operating system. This software provides performance close to that of the hardware channel and hides communication details from the application programmers.
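
    The design principle described here, encapsulating transport-technology differences behind one uniform interface, can be sketched as follows. The class and method names are illustrative assumptions, not the actual Readout Unit or XDAQ classes; a real implementation would add peer addressing, receive paths and zero-copy buffer handling.

    // Sketch of a uniform transport abstraction hiding the concrete technology.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Uniform view of the data transport layer presented to the application.
    class Transport {
    public:
        virtual ~Transport() = default;
        virtual std::size_t send(const void* buf, std::size_t len) = 0;
    };

    // One concrete technology; others (Myrinet, VME, DLPI, ...) would subclass
    // Transport in the same way and be selected by configuration.
    class UdpTransport : public Transport {
    public:
        std::size_t send(const void* /*buf*/, std::size_t len) override {
            // A real implementation would hand the buffer to the socket layer,
            // ideally without copying it.
            std::cout << "UDP: sent " << len << " bytes\n";
            return len;
        }
    };

    // Application code only sees the Transport interface.
    void publishFragment(Transport& t, const std::vector<char>& fragment) {
        t.send(fragment.data(), fragment.size());
    }

    int main() {
        UdpTransport udp;
        publishFragment(udp, std::vector<char>(1024, 0));
    }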