Performance of the Fully Digital FPGA-based Front-End Electronics for the GALILEO Array
In this work we present the architecture and results of a fully digital Front
End Electronics (FEE) read out system developed for the GALILEO array. The FEE
system, developed in collaboration with the Advanced Gamma Tracking Array
(AGATA) collaboration, is composed of three main blocks: preamplifiers,
digitizers and preprocessing electronics. The slow control system contains a
custom Linux driver, a dynamic library and a server implementing network
services. The digital processing of the data from the GALILEO germanium
detectors has demonstrated the capability to achieve an energy resolution of
1.53 per mil at an energy of 1.33 MeV.
Comment: 5 pages, 6 figures, preprint version of IEEE Transactions on Nuclear Science paper submitted for the 19th IEEE Real Time Conference
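As a quick check of the quoted figure, a relative resolution of 1.53 per mil at 1.33 MeV corresponds to an absolute width of roughly 2 keV. A minimal sketch (variable names are illustrative, not from the paper):

```python
# Convert the quoted relative energy resolution into an absolute width.
# Numbers are taken from the abstract; variable names are illustrative.
relative_resolution = 1.53e-3   # 1.53 per mil
energy_kev = 1330.0             # 1.33 MeV expressed in keV

width_kev = relative_resolution * energy_kev
print(f"resolution at 1.33 MeV: {width_kev:.2f} keV")  # ~2.03 keV
```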
A multi-technique approach for investigating the colours of the “Coptic” textiles at the Museo Egizio (Torino, Italy)
Using XDAQ in Application Scenarios of the CMS Experiment
XDAQ is a generic data acquisition software environment that emerged from a
rich set of use-cases encountered in the CMS experiment. These cover not only
the deployment for multiple sub-detectors and the operation of different
processing and networking equipment, but also a distributed collaboration of
users with different needs. The use of the software in various application
scenarios has demonstrated the viability of the approach. We discuss two
applications, the tracker local DAQ system for front-end commissioning and the
muon chamber validation system. The description is completed by a brief
overview of XDAQ.
Comment: Conference CHEP 2003 (Computing in High Energy and Nuclear Physics, La Jolla, CA)
The CMS Event Builder
The data acquisition system of the CMS experiment at the Large Hadron
Collider will employ an event builder which will combine data from about 500
data sources into full events at an aggregate throughput of 100 GByte/s.
Several architectures and switch technologies have been evaluated for the DAQ
Technical Design Report by measurements with test benches and by simulation.
This paper describes studies of an EVB test-bench based on 64 PCs acting as
data sources and data consumers and employing both Gigabit Ethernet and Myrinet
technologies as the interconnect. In the case of Ethernet, protocols based on
Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies,
including measurements on throughput and scaling are presented.
The architecture of the baseline CMS event builder will be outlined. The
event builder is organised into two stages with intelligent buffers in between.
The first stage contains 64 switches performing a first level of data
concentration by building super-fragments from fragments of 8 data sources. The
second stage combines the 64 super-fragments into full events. This
architecture allows installation of the second stage of the event builder in
steps, with the overall throughput scaling linearly with the number of switches
in the second stage. Possible implementations of the components of the event
builder are discussed and the expected performance of the full event builder is
outlined.
Comment: Conference CHEP0
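The two-stage scheme described in this abstract can be illustrated with a toy sketch: a first stage merges fragments from 8 data sources into a super-fragment, and a second stage combines 64 super-fragments into a full event. This is a schematic illustration only, not CMS software; all function and variable names, and the dummy fragment contents, are assumptions.

```python
# Toy sketch of the two-stage event-builder scheme: 8 fragments per
# super-fragment (stage 1), 64 super-fragments per full event (stage 2).
# Illustrative only; not CMS code.

def build_super_fragment(fragments):
    """Stage 1: merge the fragments of 8 data sources."""
    assert len(fragments) == 8
    return b"".join(fragments)

def build_event(super_fragments):
    """Stage 2: combine 64 super-fragments into one full event."""
    assert len(super_fragments) == 64
    return b"".join(super_fragments)

# 512 data sources -> 64 groups of 8 -> one event
sources = [bytes([i % 256]) * 4 for i in range(512)]  # dummy 4-byte fragments
supers = [build_super_fragment(sources[i * 8:(i + 1) * 8]) for i in range(64)]
event = build_event(supers)
print(len(event))  # 512 fragments x 4 bytes = 2048 bytes
```

The point of the staging is visible even in the toy: the second stage only ever sees 64 inputs regardless of the number of underlying data sources, so it can be installed in steps as the abstract describes.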
Performance of the CMS Cathode Strip Chambers with Cosmic Rays
The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device
in the CMS endcaps. Their performance has been evaluated using data taken
during a cosmic ray run in fall 2008. Measured noise levels are low, with the
number of noisy channels well below 1%. The coordinate resolution was measured
for all types of chambers and falls in the range of 47 microns to 243 microns.
The efficiencies for local charged-track triggers, for hit reconstruction and
for segment reconstruction were measured, and are above 99%. The timing
resolution per layer is approximately 5 ns.
Infrastructures and Installation of the Compact Muon Solenoid Data Acquisition at CERN
At the time of this paper, all hardware elements of the CMS Data Acquisition System have been installed and commissioned, both in the underground and surface areas. This paper describes in detail the infrastructures and the different steps that were necessary, from the very beginning, when the underground control rooms and the surface building were still construction sites, to a working system collecting data fragments from ~650 sources and sending them to the surface for assembly and analysis.
Performance and Operation of the CMS Electromagnetic Calorimeter
The operation and general performance of the CMS electromagnetic calorimeter
using cosmic-ray muons are described. These muons were recorded after the
closure of the CMS detector in late 2008. The calorimeter is made of lead
tungstate crystals and the overall status of the 75848 channels corresponding
to the barrel and endcap detectors is reported. The stability of crucial
operational parameters, such as high voltage, temperature and electronic noise,
is summarised, and the performance of the light monitoring system is presented.
The 2 Tbps "Data to Surface" System of the CMS Data Acquisition
The Data Acquisition system of the CMS experiment, at the CERN LHC collider, is designed to build 1 MB events at a sustained rate of 100 kHz and to provide sufficient computing power to filter the events by a factor of 1000. The Data to Surface (D2S) system is the first layer of the Data Acquisition, interfacing the underground subdetector readout electronics to the surface Event Builder. It collects the 100 GB/s input data from a large number (650) of front-end cards, implements a first-stage event building by combining multiple sources into larger-size data fragments, and transports them to the surface for the full event building. The Data to Surface system can operate at a maximum rate of 2 Tbps. This paper describes the layout, reconfigurability and production validation of the D2S system, which is to be installed by December 2005.
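The figures quoted in this abstract can be checked with simple arithmetic: 1 MB events at 100 kHz give 100 GB/s, i.e. 800 Gb/s, which sits comfortably within the 2 Tbps D2S capacity, and a filter factor of 1000 reduces the accepted rate to 100 Hz. A back-of-envelope sketch (variable names are illustrative):

```python
# Back-of-envelope check of the DAQ rates quoted in the abstract.
# Values are taken from the text; variable names are illustrative.
event_size_bytes = 1e6    # 1 MB events
input_rate_hz = 100e3     # 100 kHz sustained build rate
filter_factor = 1000      # event filtering reduction

throughput_gbps = event_size_bytes * input_rate_hz * 8 / 1e9
output_rate_hz = input_rate_hz / filter_factor
print(f"input throughput: {throughput_gbps:.0f} Gb/s")  # 800 Gb/s < 2 Tbps
print(f"filtered event rate: {output_rate_hz:.0f} Hz")  # 100 Hz
```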
