Using XDAQ in Application Scenarios of the CMS Experiment
XDAQ is a generic data acquisition software environment that emerged from a
rich set of use-cases encountered in the CMS experiment. These use-cases cover
the deployment for multiple sub-detectors, the operation of different
processing and networking equipment, and a distributed collaboration of users
with different needs. The use of the software in various application scenarios
demonstrated the viability of the approach. We discuss two applications, the
tracker local DAQ system for front-end commissioning and the muon chamber
validation system. The description is completed by a brief overview of XDAQ.
Comment: Conference CHEP 2003 (Computing in High Energy and Nuclear Physics),
La Jolla, CA
The CMS Event Builder
The data acquisition system of the CMS experiment at the Large Hadron
Collider will employ an event builder which will combine data from about 500
data sources into full events at an aggregate throughput of 100 GByte/s.
Several architectures and switch technologies have been evaluated for the DAQ
Technical Design Report by measurements with test benches and by simulation.
This paper describes studies of an EVB test-bench based on 64 PCs acting as
data sources and data consumers and employing both Gigabit Ethernet and Myrinet
technologies as the interconnect. In the case of Ethernet, protocols based on
Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies,
including measurements of throughput and scaling, are presented.
The architecture of the baseline CMS event builder will be outlined. The
event builder is organised into two stages with intelligent buffers in between.
The first stage contains 64 switches performing a first level of data
concentration by building super-fragments from fragments of 8 data sources. The
second stage combines the 64 super-fragments into full events. This
architecture allows installation of the second stage of the event builder in
steps, with the overall throughput scaling linearly with the number of switches
in the second stage. Possible implementations of the components of the event
builder are discussed and the expected performance of the full event builder is
outlined.
Comment: Conference CHEP0
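The two-stage structure described above can be sketched as a toy model. This is not CMS or XDAQ code; it only illustrates the data flow, assuming 8 fragments per super-fragment and 64 first-stage units (64 × 8 = 512 sources, close to the "about 500" quoted above). The function names are illustrative.

```python
# Toy model of the two-stage CMS event builder described in the abstract.
# Stage 1: 64 units each merge fragments from 8 data sources into a
# super-fragment. Stage 2: 64 super-fragments are merged into a full event.
# Names and payloads are invented for illustration only.

def build_super_fragment(fragments):
    """First stage: concatenate the fragments from 8 data sources."""
    assert len(fragments) == 8
    return b"".join(fragments)

def build_event(super_fragments):
    """Second stage: combine 64 super-fragments into one full event."""
    assert len(super_fragments) == 64
    return b"".join(super_fragments)

# 512 sources, each contributing a 4-byte dummy fragment.
sources = [bytes([i % 256]) * 4 for i in range(512)]
supers = [build_super_fragment(sources[i * 8:(i + 1) * 8]) for i in range(64)]
event = build_event(supers)
print(len(event))  # 512 fragments * 4 bytes = 2048 bytes
```

The staged layout is what gives the linear scaling mentioned above: adding a second-stage switch adds event-building capacity without touching the first stage.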
LHC Communication Infrastructure: Recommendations from the working group
The LHC Working Group for Communication Infrastructure (CIWG) was established in May 1999 with members from the accelerator sector, the LHC physics experiments, the general communication services, the technical services and other LHC working groups. It has spent a year collecting user requirements and at the same time explored and evaluated possible solutions appropriate to the LHC. A number of technical recommendations were agreed, and areas where more work is required were identified. The working group also put forward proposals for organizational changes needed to allow the design project to continue and to prepare for the installation and commissioning phase of the LHC communication infrastructure. This paper reports on the work done and explains the motivation behind the recommendations.
The CMS event builder demonstrator based on Myrinet
The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large and high-performance event building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator which has been set up to study a small-scale (8×8) event builder based on a Myrinet switch. Measurements are presented on throughput, overhead and scaling for various traffic conditions. Results are shown on event building with a push architecture.
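In a push architecture, each source sends its fragment to the builder unit assigned to that event without waiting for a request. A minimal sketch of this traffic pattern on an 8×8 network, assuming simple round-robin event assignment (the demonstrator's actual assignment scheme is not specified here):

```python
# Illustrative push-style event building on an N x N network (not the
# demonstrator's code). Each of N sources pushes its fragment for event
# `evt` to builder unit `evt % N`; round-robin assignment is an assumption.
from collections import defaultdict

N = 8  # 8 sources and 8 builder units, as in the 8x8 demonstrator

def push_event_building(num_events):
    builders = defaultdict(list)
    for evt in range(num_events):
        dest = evt % N                       # builder unit for this event
        for src in range(N):
            builders[dest].append((evt, src))  # every source pushes a fragment
    return builders

builders = push_event_building(16)
# Each builder unit ends up with 2 complete events of 8 fragments each.
```

With uniform assignment the load is balanced across builder units, which is why scaling measurements under various traffic conditions matter: correlated or bursty traffic can break this balance.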
A software approach for readout and data acquisition in CMS
Traditional systems dominated by performance constraints tend to neglect other qualities such as maintainability and configurability. Object-orientation allows one to encapsulate the technology differences in communication sub-systems and to provide a uniform view of the data transport layer to the systems engineer. We applied this paradigm to the design and implementation of intelligent data servers in the Compact Muon Solenoid (CMS) data acquisition system at CERN, to easily exploit the physical communication resources of the available equipment. CMS is a high-energy physics experiment under study that incorporates a highly distributed data acquisition system. This paper outlines the architecture of one part, the so-called Readout Unit, and shows how the object-oriented approach can be exploited for systems with specific data rate requirements. A C++ streams communication layer with zero-copying functionality has been established for UDP, TCP, DLPI and specific Myrinet and VME bus communication on the VxWorks real-time operating system. This software provides performance close to the hardware channel and hides communication details from the application programmers.
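The zero-copy idea mentioned above can be illustrated in a few lines: instead of duplicating a buffer for each consumer, consumers receive views onto the same memory. This is only a conceptual sketch in Python using `memoryview`; the paper's layer is a C++ streams implementation over UDP, TCP, DLPI, Myrinet and VME.

```python
# Conceptual illustration of zero-copy buffer handling (not the paper's C++
# layer). A memoryview slice is a window onto the original buffer: no bytes
# are copied, and changes to the buffer are visible through the view.

buf = bytearray(b"event-fragment-payload")
view = memoryview(buf)           # no copy: shares the underlying memory
payload = view[6:14]             # the "fragment" bytes, without duplication

buf[6:14] = b"FRAGMENT"          # mutate the buffer in place...
print(bytes(payload).decode())   # ...the view sees the change: FRAGMENT
```

Avoiding copies is what lets such a layer stay "close to the hardware channel": at hundreds of MB/s per node, an extra memcpy per fragment would dominate the CPU budget.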
Commissioning of the CMS High Level Trigger
The CMS experiment will collect data from the proton-proton collisions
delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to
14 TeV. The CMS trigger system is designed to cope with unprecedented
luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger
architecture only employs two trigger levels. The Level-1 trigger is
implemented using custom electronics, while the High Level Trigger (HLT) is
based on software algorithms running on a large cluster of commercial
processors, the Event Filter Farm. We present the major functionalities of the
CMS High Level Trigger system as of the start of LHC beam operations in
September 2008. The validation of the HLT system in the online environment with
Monte Carlo simulated data and its commissioning during cosmic-ray data-taking
campaigns are discussed in detail. We conclude with a description of HLT
operations with the first circulating LHC beams, before the incident that
occurred on 19 September 2008.
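The two-level structure described above can be sketched as a pair of successively tighter filters. The thresholds and event fields below are invented for illustration; they are not CMS trigger algorithms.

```python
# Toy sketch of a two-level trigger chain: a fast Level-1 cut on coarse
# quantities, then a more detailed software (HLT-like) selection. All
# thresholds and field names are hypothetical.

def level1_accept(event):
    """Models the custom-electronics stage: a cheap threshold cut."""
    return event["et_coarse"] > 20.0

def hlt_accept(event):
    """Models the software stage: a more detailed, more expensive selection."""
    return event["et_fine"] > 25.0 and event["n_tracks"] >= 2

events = [
    {"et_coarse": 30.0, "et_fine": 28.0, "n_tracks": 3},  # passes both
    {"et_coarse": 10.0, "et_fine": 40.0, "n_tracks": 5},  # rejected at L1
    {"et_coarse": 25.0, "et_fine": 22.0, "n_tracks": 4},  # rejected at HLT
]
accepted = [e for e in events if level1_accept(e) and hlt_accept(e)]
print(len(accepted))  # 1
```

The point of the two-level design is that the cheap first cut reduces the 40 MHz input rate enough that the expensive software selection only ever sees events Level-1 has already accepted.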
Search for the standard model Higgs boson in the H to ZZ to 2l 2nu channel in pp collisions at sqrt(s) = 7 TeV
A search for the standard model Higgs boson in the H to ZZ to 2l 2nu decay
channel, where l = e or mu, in pp collisions at a center-of-mass energy of 7
TeV is presented. The data were collected at the LHC, with the CMS detector,
and correspond to an integrated luminosity of 4.6 inverse femtobarns. No
significant excess is observed above the background expectation, and upper
limits are set on the Higgs boson production cross section. The presence of the
standard model Higgs boson with a mass in the 270-440 GeV range is excluded at
95% confidence level.
Comment: Submitted to JHEP
Measurement of the t t-bar production cross section in the dilepton channel in pp collisions at sqrt(s) = 7 TeV
The t t-bar production cross section (sigma[t t-bar]) is measured in
proton-proton collisions at sqrt(s) = 7 TeV in data collected by the CMS
experiment, corresponding to an integrated luminosity of 2.3 inverse
femtobarns. The measurement is performed in events with two leptons (electrons
or muons) in the final state, at least two jets identified as jets originating
from b quarks, and the presence of an imbalance in transverse momentum. The
measured value of sigma[t t-bar] for a top-quark mass of 172.5 GeV is 161.9 +/-
2.5 (stat.) +5.1/-5.0 (syst.) +/- 3.6(lumi.) pb, consistent with the prediction
of the standard model.
Comment: Replaced with published version. Included journal reference and DOI.
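For a single total uncertainty on the quoted result, the standard practice is to add the components in quadrature, assuming they are uncorrelated; here the asymmetric +5.1/-5.0 pb systematic is symmetrized to 5.05 pb for the sketch:

```python
# Quadrature combination of the quoted uncertainties on sigma[t t-bar].
# Assumes uncorrelated components and symmetrizes the +5.1/-5.0 systematic.
import math

stat, syst, lumi = 2.5, 5.05, 3.6    # pb, from the abstract
total = math.sqrt(stat**2 + syst**2 + lumi**2)
print(round(total, 1))               # total uncertainty of about 6.7 pb
```

So the measurement reads roughly 161.9 ± 6.7 pb overall, i.e. a relative precision of about 4%, dominated by the systematic component.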