Using XDAQ in Application Scenarios of the CMS Experiment
XDAQ is a generic data acquisition software environment that emerged from a
rich set of use cases encountered in the CMS experiment. These cover not only
the deployment for multiple sub-detectors and the operation of different
processing and networking equipment, but also a distributed collaboration of
users with different needs. The use of the software in various application scenarios
demonstrated the viability of the approach. We discuss two applications, the
tracker local DAQ system for front-end commissioning and the muon chamber
validation system. The description is completed by a brief overview of XDAQ.
Comment: Conference CHEP 2003 (Computing in High Energy and Nuclear Physics), La Jolla, CA
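As a rough illustration of the pattern such an environment follows, the sketch below shows pluggable applications wired to a uniform message bus. This is a hedged sketch only: all class and method names (MessageBus, DaqApplication, TrackerReadout) are invented for illustration and do not reproduce the actual XDAQ API.

```cpp
// Hedged sketch: these names are invented and do NOT reproduce the XDAQ API.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Stands in for the framework's uniform communication layer: routes named
// messages between registered applications.
class MessageBus {
public:
    using Handler = std::function<void(const std::string&)>;
    void subscribe(const std::string& topic, Handler h) {
        handlers_[topic].push_back(std::move(h));
    }
    void publish(const std::string& topic, const std::string& payload) {
        for (auto& h : handlers_[topic]) h(payload);
    }
private:
    std::map<std::string, std::vector<Handler>> handlers_;
};

// Base class for pluggable applications: each sub-detector or user group
// supplies its own component behind the common interface.
class DaqApplication {
public:
    explicit DaqApplication(MessageBus& bus) : bus_(bus) {}
    virtual ~DaqApplication() = default;
    virtual void configure() = 0;
protected:
    MessageBus& bus_;
};

// Example component: a front-end readout application for one sub-detector.
class TrackerReadout : public DaqApplication {
public:
    using DaqApplication::DaqApplication;
    void configure() override {
        bus_.subscribe("start_run", [](const std::string& run) {
            std::cout << "tracker readout: starting run " << run << "\n";
        });
    }
};

int main() {
    MessageBus bus;
    TrackerReadout tracker(bus);
    tracker.configure();             // deployment-time wiring
    bus.publish("start_run", "42");  // run-control command
}
```

The point of the pattern is that applications such as the tracker local DAQ and the muon chamber validation system can be distinct components built over one shared infrastructure.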
The CMS Event Builder
The data acquisition system of the CMS experiment at the Large Hadron
Collider will employ an event builder which will combine data from about 500
data sources into full events at an aggregate throughput of 100 GByte/s.
Several architectures and switch technologies have been evaluated for the DAQ
Technical Design Report by measurements with test benches and by simulation.
This paper describes studies of an EVB test-bench based on 64 PCs acting as
data sources and data consumers and employing both Gigabit Ethernet and Myrinet
technologies as the interconnect. In the case of Ethernet, protocols based on
Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies,
including measurements of throughput and scaling, are presented.
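For flavour, here is a minimal sketch of the kind of point-to-point TCP throughput measurement such a test bench performs between a data source and a data consumer. The port, fragment size, and repetition count are illustrative values, not those used in the paper.

```cpp
// Run with no arguments on the consumer node, with "<server-ip>" on the
// source node. Error handling is omitted for brevity.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <iostream>
#include <vector>

constexpr int kPort = 9000;              // illustrative
constexpr size_t kFragment = 16 * 1024;  // bytes per send, illustrative
constexpr size_t kCount = 100000;        // fragments per run, illustrative

int main(int argc, char** argv) {
    bool server = argc < 2;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(kPort);
    std::vector<char> buf(kFragment);

    if (server) {  // data consumer: receive everything and time it
        addr.sin_addr.s_addr = INADDR_ANY;
        bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        listen(fd, 1);
        int conn = accept(fd, nullptr, nullptr);
        size_t total = 0;
        auto t0 = std::chrono::steady_clock::now();
        ssize_t n;
        while ((n = read(conn, buf.data(), buf.size())) > 0) total += n;
        double s = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();
        std::cout << total / s / 1e6 << " MB/s\n";
        close(conn);
    } else {       // data source: stream fixed-size fragments
        inet_pton(AF_INET, argv[1], &addr.sin_addr);
        connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        for (size_t i = 0; i < kCount; ++i)
            write(fd, buf.data(), buf.size());
    }
    close(fd);
}
```

A Layer-2-frame counterpart would swap the stream socket for raw Ethernet frames; only the transport underneath the measurement loop changes.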
The architecture of the baseline CMS event builder will be outlined. The
event builder is organised into two stages with intelligent buffers in between.
The first stage contains 64 switches performing a first level of data
concentration by building super-fragments from fragments of 8 data sources. The
second stage combines the 64 super-fragments into full events. This
architecture allows installation of the second stage of the event builder in
steps, with the overall throughput scaling linearly with the number of switches
in the second stage. Possible implementations of the components of the event
builder are discussed and the expected performance of the full event builder is
outlined.
Comment: Conference CHEP 2003
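The two-stage concentration can be captured in a few lines. The sketch below shows only the logical fragment-to-super-fragment-to-event assembly, with the switch fabric and intelligent buffers abstracted into function calls; all names are invented for illustration.

```cpp
// Toy model of the two-stage CMS event builder described above: 64 first-
// stage switches each concentrate 8 sources (512, i.e. "about 500"), and a
// second stage combines the 64 super-fragments into a full event.
#include <cstdint>
#include <iostream>
#include <vector>

using Fragment = std::vector<uint8_t>;

constexpr int kSourcesPerSuperFragment = 8;   // first-stage concentration
constexpr int kSuperFragmentsPerEvent = 64;   // second-stage combination

// Stage 1: one first-stage switch concatenates the fragments of 8 data
// sources into a super-fragment held in the intermediate buffer.
Fragment buildSuperFragment(const std::vector<Fragment>& fragments) {
    Fragment sf;
    for (const auto& f : fragments)
        sf.insert(sf.end(), f.begin(), f.end());
    return sf;
}

// Stage 2: a builder unit combines the 64 super-fragments of one event.
Fragment buildEvent(const std::vector<Fragment>& superFragments) {
    return buildSuperFragment(superFragments);  // same concatenation pattern
}

int main() {
    std::vector<Fragment> superFragments;
    for (int sw = 0; sw < kSuperFragmentsPerEvent; ++sw) {
        // 8 toy sources per switch, each contributing a 2-byte fragment.
        std::vector<Fragment> fragments(kSourcesPerSuperFragment,
                                        Fragment{0xCA, 0xFE});
        superFragments.push_back(buildSuperFragment(fragments));
    }
    Fragment event = buildEvent(superFragments);
    std::cout << "full event size: " << event.size() << " bytes\n";  // 1024
}
```

The staging is what allows incremental installation: each additional second-stage switch handles its share of events independently, so throughput grows linearly with the number of switches deployed.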
LHC Communication Infrastructure: Recommendations from the working group
The LHC Working Group for Communication Infrastructure (CIWG) was established in May 1999 with members from the accelerator sector, the LHC physics experiments, the general communication services, the technical services and other LHC working groups. It spent a year collecting user requirements while exploring and evaluating possible solutions appropriate to the LHC. A number of technical recommendations were agreed upon, and areas where more work is required were identified. The working group also put forward proposals for the organizational changes needed to allow the design project to continue and to prepare for the installation and commissioning phase of the LHC communication infrastructure. This paper reports on the work done and explains the motivation behind the recommendations.
The CMS event builder demonstrator based on Myrinet
The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large, high-performance event-building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator which has been set up to study a small-scale (8×8) event builder based on a Myrinet switch. Measurements are presented on throughput, overhead and scaling for various traffic conditions. Results are shown on event building with a push architecture. (6 refs)
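A push architecture means sources transmit fragments unsolicited, with no request/grant handshake, and the builder declares an event complete once fragments from all sources have arrived. The hedged sketch below illustrates this pattern at the demonstrator's 8-source scale; the classes are invented stand-ins, with direct calls replacing the Myrinet transport.

```cpp
// Illustrative push event builder, not the demonstrator code.
#include <cstddef>
#include <iostream>
#include <map>
#include <vector>

constexpr size_t kNumSources = 8;  // the 8x8 demonstrator scale

struct Fragment {
    unsigned eventId;
    unsigned sourceId;
    std::vector<char> data;
};

class PushBuilder {
public:
    // Called by a source as soon as it has data: no request from the builder.
    void push(const Fragment& f) {
        auto& parts = pending_[f.eventId];
        parts.push_back(f);
        if (parts.size() == kNumSources) {  // all fragments arrived
            std::cout << "event " << f.eventId << " built from "
                      << parts.size() << " fragments\n";
            pending_.erase(f.eventId);
        }
    }
private:
    // Partially built events, keyed by event number.
    std::map<unsigned, std::vector<Fragment>> pending_;
};

int main() {
    PushBuilder builder;
    for (unsigned ev = 0; ev < 3; ++ev)
        for (unsigned src = 0; src < kNumSources; ++src)
            builder.push({ev, src, std::vector<char>(1024)});
}
```

Because nothing throttles the senders, the switch itself absorbs the resulting load, which is why the demonstrator's measurements under various traffic conditions are the interesting part.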
A software approach for readout and data acquisition in CMS
Traditional systems dominated by performance constraints tend to neglect other qualities such as maintainability and configurability. Object orientation allows one to encapsulate the technology differences in communication sub-systems and to provide a uniform view of the data transport layer to the systems engineer. We applied this paradigm to the design and implementation of intelligent data servers in the Compact Muon Solenoid (CMS) data acquisition system at CERN in order to exploit easily the physical communication resources of the available equipment. CMS is a high-energy physics experiment under study that incorporates a highly distributed data acquisition system. This paper outlines the architecture of one part, the so-called Readout Unit, and shows how we can exploit the object advantage for systems with specific data rate requirements. A C++ streams communication layer with zero-copy functionality has been established for UDP, TCP, DLPI and specific Myrinet and VME bus communication on the VxWorks real-time operating system. This software provides performance close to that of the hardware channel and hides communication details from the application programmers. (28 refs)
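The encapsulation idea can be sketched as a uniform transport interface exposing a scatter/gather send, behind which a concrete technology is selected at configuration time. This is a hedged illustration using POSIX sendmsg for the UDP case; the interface names are hypothetical and this is not the CMS Readout Unit code.

```cpp
// Illustrative only: a minimal transport abstraction in the spirit of the
// paper's uniform C++ communication layer.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>
#include <memory>

// The application programs against this interface; the concrete technology
// (UDP, TCP, Myrinet, VME, ...) is an implementation detail chosen at
// configuration time.
class Transport {
public:
    virtual ~Transport() = default;
    // Scatter/gather send: fragments are handed over as iovecs so the
    // transport can transmit them without assembling an intermediate copy,
    // which is the application-level sense of "zero copy".
    virtual ssize_t send(const iovec* iov, int iovcnt) = 0;
};

class UdpTransport : public Transport {
public:
    UdpTransport(const char* host, uint16_t port)
        : fd_(socket(AF_INET, SOCK_DGRAM, 0)) {
        std::memset(&peer_, 0, sizeof peer_);
        peer_.sin_family = AF_INET;
        peer_.sin_port = htons(port);
        inet_pton(AF_INET, host, &peer_.sin_addr);
    }
    ~UdpTransport() override { close(fd_); }
    ssize_t send(const iovec* iov, int iovcnt) override {
        msghdr msg{};
        msg.msg_name = &peer_;
        msg.msg_namelen = sizeof peer_;
        msg.msg_iov = const_cast<iovec*>(iov);
        msg.msg_iovlen = iovcnt;
        return sendmsg(fd_, &msg, 0);  // the kernel gathers the pieces
    }
private:
    int fd_;
    sockaddr_in peer_;
};

int main() {
    std::unique_ptr<Transport> link =
        std::make_unique<UdpTransport>("127.0.0.1", 9001);
    char header[16] = {}, payload[1024] = {};
    iovec iov[2] = {{header, sizeof header}, {payload, sizeof payload}};
    link->send(iov, 2);  // one datagram, no application-side concatenation
}
```

The design choice is the one the abstract describes: callers see one send interface regardless of the wire technology, so swapping Myrinet for Ethernet touches only the Transport subclass, not the readout logic.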
Commissioning of the CMS High Level Trigger
The CMS experiment will collect data from the proton-proton collisions
delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to
14 TeV. The CMS trigger system is designed to cope with unprecedented
luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger
architecture only employs two trigger levels. The Level-1 trigger is
implemented using custom electronics, while the High Level Trigger (HLT) is
based on software algorithms running on a large cluster of commercial
processors, the Event Filter Farm. We present the major functionalities of the
CMS High Level Trigger system as of the start of LHC beam operations in
September 2008. The validation of the HLT system in the online environment with
Monte Carlo simulated data and its commissioning during cosmic-ray data-taking
campaigns are discussed in detail. We conclude with a description of the HLT
operations with the first circulating LHC beams, before the incident that
occurred on 19 September 2008.
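The two-level structure implies that the HLT is, in essence, a chain of software filter steps run per event on each farm node, rejecting as early as possible. The hedged sketch below shows that early-rejection pattern; the filters and thresholds are invented placeholders, not actual CMS HLT paths.

```cpp
// Illustrative filter-chain pattern for a software trigger.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Event {
    double leadingMuonPt;   // GeV, toy reconstruction output
    double caloEnergySum;   // GeV, toy reconstruction output
};

struct FilterStep {
    std::string name;
    std::function<bool(const Event&)> pass;
};

// Run the steps in order and stop at the first failure, so cheap
// selections shield the expensive reconstruction behind them.
bool runTriggerPath(const Event& ev, const std::vector<FilterStep>& path) {
    for (const auto& step : path)
        if (!step.pass(ev)) {
            std::cout << "rejected by " << step.name << "\n";
            return false;
        }
    return true;
}

int main() {
    std::vector<FilterStep> muonPath = {
        {"caloActivity", [](const Event& e) { return e.caloEnergySum > 5.0; }},
        {"muonPtCut",    [](const Event& e) { return e.leadingMuonPt > 3.0; }},
    };
    Event cosmic{4.2, 7.5};  // toy values
    if (runTriggerPath(cosmic, muonPath))
        std::cout << "event accepted for storage\n";
}
```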
Effects of Veliparib on Microglial Activation and Functional Outcomes after Traumatic Brain Injury in the Rat and Pig.
The inflammatory response induced by brain trauma can impair recovery. This response requires several hours to develop fully and thus provides a clinically relevant therapeutic window of opportunity. Poly(ADP-ribose) polymerase inhibitors suppress inflammatory responses, including brain microglial activation. We evaluated delayed treatment with veliparib, a poly(ADP-ribose) polymerase inhibitor currently in clinical trials as a cancer therapeutic, in rats and pigs subjected to controlled cortical impact (CCI). In rats, CCI induced a robust inflammatory response at the lesion margins, scattered cell death in the dentate gyrus, and a delayed, progressive loss of corpus callosum axons. Pre-determined measures of cognitive and motor function showed evidence of attentional deficits that resolved after three weeks and motor deficits that recovered only partially over eight weeks. Veliparib was administered beginning 2 or 24 h after CCI and continued for up to 12 days. Veliparib suppressed CCI-induced microglial activation at doses of 3 mg/kg or higher and reduced reactive astrocytosis and cell death in the dentate gyrus, but had no significant effect on delayed axonal loss or functional recovery. In pigs, CCI similarly induced a perilesional microglial activation that was attenuated by veliparib. CCI in the pig did not, however, induce detectable persisting cognitive or motor impairment. Our results showed veliparib suppression of CCI-induced microglial activation with a delay-to-treatment interval of at least 24 h in both rats and pigs, but with no associated functional improvement. The lack of improvement in long-term recovery underscores the complexities in translating anti-inflammatory effects to clinically relevant outcomes.