
    Using XDAQ in Application Scenarios of the CMS Experiment

    XDAQ is a generic data acquisition software environment that emerged from a rich set of use-cases encountered in the CMS experiment. These cover the deployment for multiple sub-detectors, the operation of different processing and networking equipment, and a distributed collaboration of users with different needs. The use of the software in various application scenarios has demonstrated the viability of the approach. We discuss two applications: the tracker local DAQ system for front-end commissioning and the muon chamber validation system. The description is completed by a brief overview of XDAQ.
    Comment: Conference CHEP 2003 (Computing in High Energy and Nuclear Physics), La Jolla, CA

    The CMS Event Builder

    The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements of throughput and scaling, are presented. The architecture of the baseline CMS event builder is also outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragments of 8 data sources. The second stage combines the 64 super-fragments into full events. This architecture allows installation of the second stage of the event builder in steps, with the overall throughput scaling linearly with the number of switches in the second stage. Possible implementations of the components of the event builder are discussed and the expected performance of the full event builder is outlined.
    Comment: Conference CHEP0
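    The two-stage structure described above can be sketched as a simple data-flow model: groups of 8 source fragments are concatenated into super-fragments, which are then merged into a full event. This is an illustrative sketch only; the names, fragment sizes, and the use of plain concatenation are our assumptions, not the CMS implementation.

```python
# Illustrative sketch (not CMS code) of two-stage event building:
# stage 1 builds super-fragments from groups of 8 sources,
# stage 2 merges the 64 super-fragments into a full event.

from typing import List

N_SOURCES = 512          # ~500 data sources, rounded to 64 x 8 for the sketch
GROUP_SIZE = 8           # sources merged per first-stage switch
N_SUPERFRAGS = N_SOURCES // GROUP_SIZE  # 64 super-fragments per event

def build_super_fragments(fragments: List[bytes]) -> List[bytes]:
    """First stage: concatenate each group of 8 source fragments."""
    assert len(fragments) == N_SOURCES
    return [b"".join(fragments[i:i + GROUP_SIZE])
            for i in range(0, N_SOURCES, GROUP_SIZE)]

def build_event(super_fragments: List[bytes]) -> bytes:
    """Second stage: merge the 64 super-fragments into one full event."""
    assert len(super_fragments) == N_SUPERFRAGS
    return b"".join(super_fragments)

# Toy fragments of 4 bytes each, one per source.
fragments = [bytes([i % 256]) * 4 for i in range(N_SOURCES)]
event = build_event(build_super_fragments(fragments))
print(len(event))  # 512 sources x 4 bytes = 2048
```

The staged layout is what makes the incremental installation mentioned in the abstract possible: each second-stage switch only ever sees 64 super-fragment streams, regardless of how many such switches are deployed.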

    Commissioning of the CMS High Level Trigger

    The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic-ray data-taking campaigns are discussed in detail. We conclude with a description of the HLT operations with the first circulating LHC beams before the incident that occurred on 19 September 2008.
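    The two-level architecture above can be sketched as a filter chain: a fast Level-1 decision, followed by software HLT paths run only on L1-accepted events. The event fields, thresholds, and trigger paths below are purely illustrative assumptions, not the actual CMS menus.

```python
# Minimal sketch (assumed structure, not CMS code) of a two-level trigger:
# a cheap Level-1 predicate gates which events reach the software HLT paths.

from typing import Callable, Dict, List

Event = Dict[str, float]

def level1_accept(event: Event) -> bool:
    # Stand-in for the custom-electronics L1 decision: a fast threshold cut.
    return event["et_sum"] > 20.0

def run_hlt(event: Event, paths: List[Callable[[Event], bool]]) -> bool:
    # The HLT accepts an event if any of its trigger paths fires.
    return any(path(event) for path in paths)

# Hypothetical HLT paths: a single-muon path and a high-energy-sum path.
hlt_paths = [lambda e: e["muon_pt"] > 10.0, lambda e: e["et_sum"] > 100.0]

events = [{"et_sum": 5.0, "muon_pt": 0.0},    # rejected at L1
          {"et_sum": 30.0, "muon_pt": 12.0},  # L1 pass, muon path fires
          {"et_sum": 150.0, "muon_pt": 1.0}]  # L1 pass, energy path fires

accepted = [e for e in events if level1_accept(e) and run_hlt(e, hlt_paths)]
print(len(accepted))  # 2
```

The key design point this illustrates is that the expensive software stage only runs on the small fraction of the 40 MHz bunch-crossing rate that the Level-1 hardware accepts.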

    CMS DAQ Event Builder Based on Gigabit Ethernet

    The CMS Data Acquisition system is designed to build and filter events originating from approximately 500 data sources from the detector at a maximum Level 1 trigger rate of 100 kHz and with an aggregate throughput of 100 GByte/s. For this purpose different architectures and switch technologies have been evaluated. Events will be built in two stages: the first stage, the FED Builder, will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The next stage, the Readout Builder, will perform the building of full events. Each Readout Builder is required to build events at 12.5 kHz, with an average event size of 16 kBytes, from 64 sources. In this paper we present the prospects of a Readout Builder based on TCP/IP over Gigabit Ethernet. The various Readout Builder architectures under consideration are discussed. The results of throughput measurements and scaling performance are outlined, as well as preliminary estimates of the final performance. All these studies have been carried out at our test-bed farms, which comprise a total of 130 dual-Xeon PCs interconnected with Myrinet and Gigabit Ethernet networking and switching technologies.
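    The quoted numbers can be cross-checked with a short back-of-the-envelope calculation: 64 sources of 16 kB fragments give 1 MB events, and 8 Readout Builder slices at 12.5 kHz each cover the full 100 kHz Level-1 rate. This is a simple arithmetic sketch of the figures in the abstract, not a description of the actual system layout.

```python
# Back-of-the-envelope check of the Readout Builder numbers quoted above.

FRAGMENT_SIZE_KB = 16    # average fragment size per source
N_SOURCES = 64           # sources per Readout Builder
RB_RATE_KHZ = 12.5       # event rate per Readout Builder
L1_RATE_KHZ = 100        # maximum Level-1 trigger rate

event_size_kb = FRAGMENT_SIZE_KB * N_SOURCES            # ~1 MB per event
rb_throughput_gb_s = event_size_kb * RB_RATE_KHZ / 1e3  # per Readout Builder
n_slices = L1_RATE_KHZ / RB_RATE_KHZ                    # slices for full rate
total_gb_s = rb_throughput_gb_s * n_slices

print(f"{event_size_kb} kB, {rb_throughput_gb_s:.1f} GB/s, {total_gb_s:.1f} GB/s")
# 1024 kB, 12.8 GB/s, 102.4 GB/s
```

The result, roughly 100 GB/s in total, is consistent with the aggregate throughput figure stated at the start of the abstract.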

    Precise measurement of the W-boson mass with the CDF II detector

    We have measured the W-boson mass MW using data corresponding to 2.2/fb of integrated luminosity collected in proton-antiproton collisions at 1.96 TeV with the CDF II detector at the Fermilab Tevatron collider. Samples consisting of 470126 W->enu candidates and 624708 W->munu candidates yield the measurement MW = 80387 +- 12 (stat) +- 15 (syst) MeV = 80387 +- 19 MeV. This is the most precise measurement of the W-boson mass to date and significantly exceeds the precision of all previous measurements combined.
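    The total uncertainty quoted above follows from adding the statistical and systematic uncertainties in quadrature, which can be verified directly:

```python
# Quadrature combination of the quoted W-mass uncertainties:
# sqrt(12^2 + 15^2) MeV, rounded to the nearest MeV.

import math

stat_mev = 12.0
syst_mev = 15.0
total_mev = math.hypot(stat_mev, syst_mev)  # sqrt(144 + 225) = sqrt(369)

print(round(total_mev))  # 19
```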

    Transverse Beam Envelope Measurements and the Limitations of the 3-Screen Emittance Method for Space-Charge Dominated Beams

    In its normal mode of operation the Argonne Wakefield Accelerator Facility uses a high charge (10-100 nC), short pulse (3-5 psec) drive bunch to excite high-gradient accelerating fields in various slow-wave structures. To generate this bunch, we designed a 1.5 cell, L-band, rf photocathode gun with an emittance compensating solenoid to give optimal performance at high charge; it has recently completed commissioning. More recently, we have begun to investigate the possibility of using this gun in a high-brightness, low-charge operating mode, with charge of approximately 1 nC, for high-precision measurements of wakefields. Two related measurements are reported in this paper: (1) measurements of the transverse beam envelope are compared to predictions from the beam dynamics code PARMELA; and (2) investigations into the use of a modified 3-screen emittance measurement method that uses a beam envelope model including both space-charge and emittance effects. Both measurements were made for the 1 nC, 8 MeV beam in the drift region directly following the rf photocathode gun.
    Comment: 19 pages, 7 figures
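    The envelope model with both space-charge and emittance terms referred to above is commonly written, for a round beam of edge radius $R$ drifting without external focusing, in a form like the following (a standard textbook form given here for orientation; the abstract does not specify the exact model or conventions used):

```latex
R''(z) \;=\; \underbrace{\frac{\epsilon^{2}}{R^{3}(z)}}_{\text{emittance}}
\;+\; \underbrace{\frac{K}{R(z)}}_{\text{space charge}},
\qquad
K \;=\; \frac{2I}{I_{A}\,(\beta\gamma)^{3}},
```

where $\epsilon$ is the (edge) emittance, $I$ the beam current, $I_{A}$ the Alfv\'en current, and $K$ the generalized perveance. The limitation of the plain 3-screen method is that it assumes the emittance term dominates; when the space-charge term is comparable or larger, as for the beams studied here, the modified method must fit both terms.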

    The 2 Tbps "Data to Surface" System of the CMS Data Acquisition

    The Data Acquisition system of the CMS experiment, at the CERN LHC collider, is designed to build 1 MB events at a sustained rate of 100 kHz and to provide sufficient computing power to filter the events by a factor of 1000. The Data to Surface (D2S) system is the first layer of the Data Acquisition, interfacing the underground subdetector readout electronics to the surface Event Builder. It collects the 100 GB/s input data from a large number (650) of front-end cards, implements a first stage of event building by combining multiple sources into larger-size data fragments, and transports them to the surface for the full event building. The Data to Surface system can operate at a maximum rate of 2 Tbps. This paper describes the layout, reconfigurability and production validation of the D2S system, which is to be installed by December 2005.
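    The headroom implied by the numbers above is easy to check: a 2 Tbps maximum rate corresponds to 250 GB/s, a factor of 2.5 above the 100 GB/s input stream. This is a simple unit-conversion sketch of the quoted figures, not a statement about the measured performance margin.

```python
# Unit-conversion check of the D2S figures quoted above
# (decimal units: 1 Tbps = 1000 Gbps, 1 GB/s = 8 Gbps).

INPUT_GB_S = 100       # aggregate front-end input, GB/s
CAPACITY_TBPS = 2      # quoted maximum D2S rate, Tbps

capacity_gb_s = CAPACITY_TBPS * 1000 / 8   # 2 Tbps = 250 GB/s
headroom = capacity_gb_s / INPUT_GB_S      # capacity over required input

print(capacity_gb_s, headroom)  # 250.0 2.5
```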