
    Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV

    The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which detect neutral particles close to beam rapidity. The measured cross sections of single and mutual electromagnetic dissociation of Pb nuclei at √sNN = 2.76 TeV with neutron emission are σ_single EMD = 187.2 ± 0.2 (stat.) +13.8/−12.0 (syst.) b and σ_mutual EMD = 6.2 ± 0.1 (stat.) ± 0.4 (syst.) b, respectively. The experimental results are compared to the predictions of a relativistic electromagnetic dissociation model.

    The ALICE TPC, a large 3-dimensional tracking device with fast readout for ultra-high multiplicity events

    The design, construction, and commissioning of the ALICE Time-Projection Chamber (TPC) are described. It is the main device for pattern recognition, tracking, and identification of charged particles in the ALICE experiment at the CERN LHC. The TPC is cylindrical in shape, with a volume close to 90 m³, and is operated in a 0.5 T solenoidal magnetic field parallel to its axis. In this paper we describe in detail the design considerations for this detector for operation in the extreme multiplicity environment of central Pb–Pb collisions at LHC energy. The implementation of the resulting requirements into hardware (field cage, read-out chambers, electronics), infrastructure (gas and cooling systems, laser-calibration system), and software led to many technical innovations, which are described along with a presentation of all the major components of the detector as currently realized. We also report on the performance achieved after completion of the first round of stand-alone calibration runs and demonstrate results close to those specified in the TPC Technical Design Report.

    Monitoring and calibration of the ALICE time projection chamber

    The aim of the A Large Ion Collider Experiment (ALICE) at CERN is to study the properties of the Quark–Gluon Plasma (QGP). With energies up to 5.5A TeV for Pb+Pb collisions, the Large Hadron Collider (LHC) sets a new benchmark for heavy-ion collisions and opens the door to a so far unexplored energy domain. A closer look at some of the physics topics of ALICE is given in Chapter 1. ALICE consists of several sub-detectors and other sub-systems. The various sub-detectors are designed for exploring different aspects of the particle production of a heavy-ion collision. Chapter 2 gives some insight into the design. The main tracking detector is the Time Projection Chamber (TPC). It has more than half a million read-out channels, divided into 216 Read-out Partitions (RPs). Each RP is a separate Front-End Electronics (FEE) entity, as described in Chapter 3. A complex Detector Control System (DCS) is needed for configuration, monitoring, and control. Its heart on the RP side is a small embedded computer running the FeeServer software, which provides a means for remote configuration and continuous monitoring of the FEE. Chapter 4 gives details of the implementation of this software and also presents performance measurements. In Chapter 5, potential improvements to the FeeServer class factorisation are discussed. Converting the electronics signals, as measured by the sub-detectors, into useful physics data is a complicated process, called calibration. Every sub-detector has its unique set of calibration tasks and challenges. Chapter 6 looks into some aspects of calibrating the electron drift of the TPC. This discussion is continued in Chapter 7, where the concrete AliRoot framework for some of the TPC calibration tasks is described. Chapter 8 dwells on the specifics of the TPC drift velocity calibration. Finally, the status of the effort is given in Chapter 9.
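
    To see why the drift velocity deserves its own calibration chapter: the reconstructed z-position of a TPC cluster follows directly from its measured drift time, so any error in the assumed drift velocity or time offset shifts every space point. A minimal C++ sketch of this conversion, with illustrative names and numbers rather than actual AliRoot code:

        #include <iostream>

        // Illustrative drift calibration constants; the real values change
        // with gas composition, temperature, and pressure, which is why
        // continuous calibration is needed.
        struct DriftCalibration {
            double vDrift;  // electron drift velocity [cm/us] (assumed value)
            double t0;      // common time offset [us] from trigger/electronics
        };

        // z-position [cm] of a cluster measured at drift time t [us], for a
        // readout plane at zReadout with electrons drifting toward it.
        double clusterZ(const DriftCalibration& cal, double t, double zReadout) {
            return zReadout - cal.vDrift * (t - cal.t0);
        }

        int main() {
            DriftCalibration cal{2.65, 0.2};  // hypothetical numbers
            std::cout << clusterZ(cal, 50.0, 250.0) << " cm\n";
        }

    Even a relative error of 10^-4 on vDrift displaces clusters at a 250 cm drift length by a quarter of a millimetre, comparable to the intrinsic point resolution.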

    Virtualised data production infrastructure for NA61/SHINE based on CernVM

    Traditionally, the NA61/SHINE data production is performed by manually submitting jobs to the CERN batch system. An effort is now under way to migrate the data production to an automatic system, on top of a virtualised platform based on CernVM. This will make it easier both to initiate new data productions and to utilise computing resources available outside CERN. In addition, there is a data preservation perspective. CernVM is a Linux distribution created by CERN specifically for the needs of virtual machines. Data production software and calibration data are distributed globally via the HTTP-based CernVM file system. The NA61/SHINE data production software has been adapted to run under CernVM through the CernVM file system. Databases are used to keep track of the data; this will allow the system to present lists of both raw and produced data. If a new data production is needed, the privileged user may choose the data, software versions, and calibrations to be used. Finished jobs will be scanned for errors and automatically resubmitted for processing if needed. A web-based, graphical user interface for the data production will be available. Finally, the relevant databases will be updated to reflect the freshly produced data.
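
    The scan-and-resubmit step can be pictured as a loop over the job records kept in the database. A hypothetical C++ sketch of that logic (the Job record, the failure flag, and submit() are illustrative stand-ins, not NA61/SHINE code):

        #include <iostream>
        #include <string>
        #include <vector>

        // Hypothetical job record, mirroring what the production database
        // would track for each batch job.
        struct Job {
            std::string id;
            bool failed;    // set after scanning the job's logs and exit code
            int attempts;
        };

        // Placeholder for handing a job back to the batch system.
        void submit(const Job& job) {
            std::cout << "resubmitting job " << job.id << "\n";
        }

        // Resubmit failed jobs, bounded so a persistently broken job
        // cannot be resubmitted forever.
        void resubmitFailed(std::vector<Job>& jobs, int maxAttempts) {
            for (auto& job : jobs) {
                if (job.failed && job.attempts < maxAttempts) {
                    ++job.attempts;
                    submit(job);
                }
            }
        }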

    Long-term preservation of analysis software environment

    Long-term preservation of scientific data represents a challenge to experiments, especially regarding the analysis software. Preserving data is not enough; the full software and hardware environment is needed. Virtual machines (VMs) make it possible to preserve hardware in software. A complete infrastructure package has been developed for easy deployment and management of VMs, based on the CERN virtual machine (CernVM). Further, an HTTP-based file system, the CernVM file system (CVMFS), is used for the distribution of the software. It is possible to process data with any given software version and a matching, regenerated VM version. A point-and-click web user interface is being developed for setting up the complete processing chain, including VM and software versions, number and type of processing nodes, and the particular type of analysis and data. This paradigm also allows for distributed cloud computing on private and public clouds, for both legacy and contemporary experiments.
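
    The central pairing idea, processing data with a given software version inside a matching, regenerated VM version, can be sketched as a simple lookup. A C++ illustration with entirely hypothetical version and image names:

        #include <map>
        #include <stdexcept>
        #include <string>

        // Map each preserved software release to the regenerated VM image it
        // was validated against. Both sides are made-up names for illustration.
        std::string vmImageFor(const std::string& softwareVersion) {
            static const std::map<std::string, std::string> pairing = {
                {"legacy-2009", "cernvm-image-A"},  // old release, regenerated VM
                {"prod-2013",   "cernvm-image-B"},  // current release, current VM
            };
            auto it = pairing.find(softwareVersion);
            if (it == pairing.end())
                throw std::runtime_error("no preserved VM for " + softwareVersion);
            return it->second;  // image to boot before mounting the software via CVMFS
        }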

    The Small Acceptance Vertex Detector of NA61/SHINE

    Charmonium production in heavy-ion collisions is considered an important diagnostic probe for studying the phase diagram of strongly interacting matter and its potential phase transitions. The interpretation of existing data from the CERN SPS is hampered by a lack of knowledge of the properties of open charm particle production in the fireball. Moreover, open charm production in heavy-ion collisions is by itself poorly understood. To overcome this obstacle, the NA61/SHINE experiment was equipped with a Small Acceptance Vertex Detector (SAVD), which is predicted to make the experiment sensitive to open charm mesons produced in A–A collisions at the SPS top energy. This paper will introduce the concept and the hardware of the SAVD. Moreover, first running experience, obtained in a commissioning run with a 150A GeV/c Pb+Pb collision system, will be reported.

    Online data compression in the ALICE O2 facility

    The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects of the data handling concept are partial reconstruction of raw data organized in so-called time frames and, based on that information, reduction of the data rate without significant loss of physics information. A production solution for data compression has been in operation for the ALICE Time Projection Chamber (TPC) in the ALICE High Level Trigger online system since 2011. The solution is based on the reconstruction of space points from raw data. These so-called clusters are the input for the reconstruction of particle trajectories. The clusters are stored instead of the raw data, after transformation of the required parameters into an optimized format and subsequent lossless data compression. With this approach, an average reduction factor of 4.4 has been achieved. For Run 3, not only a significantly higher reduction is required but also improvements in the implementation of the actual algorithms. The innermost operations of the processing loop effectively need to be called up to O(10^11) times per second to cope with the data rate. This can only be achieved in a parallel scheme, which makes these operations candidates for optimization. The potential of template programming and static dispatch in a polymorphic implementation has been studied as an alternative to the commonly used dynamic dispatch at runtime. In this contribution we report on the development of a specific programming technique to efficiently combine the compile-time and runtime domains and present results for the speedup of the algorithm.
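
    The dispatch trade-off can be made concrete in a few lines: with dynamic dispatch the hot loop pays an indirect virtual call per cluster, while a template parameter lets the compiler resolve and inline the call, with a single runtime branch selecting among the instantiations. A minimal C++ sketch; the transforms are made-up stand-ins, not the actual O2 compression algorithms:

        #include <cstddef>

        // A made-up per-cluster transformation used as the compile-time policy.
        struct ScaleTransform {
            float scale;
            float operator()(float raw) const { return raw * scale; }
        };

        // Static dispatch: Transform is fixed per instantiation, so t(in[i])
        // is a direct, inlinable call inside the innermost loop.
        template <typename Transform>
        void transformClusters(const float* in, float* out, std::size_t n,
                               Transform t) {
            for (std::size_t i = 0; i < n; ++i)
                out[i] = t(in[i]);
        }

        // Bridging the runtime and compile-time domains: one runtime branch
        // selects a fully specialized instantiation, keeping the inner loop
        // free of indirect calls.
        enum class Mode { Identity, Scaled };

        void transformDispatch(Mode mode, const float* in, float* out,
                               std::size_t n) {
            switch (mode) {
            case Mode::Scaled:
                transformClusters(in, out, n, ScaleTransform{0.5f});
                break;
            case Mode::Identity:
                transformClusters(in, out, n, [](float x) { return x; });
                break;
            }
        }

    The cost is one instantiation of the loop per transform, traded for a virtual call removed from a loop body executed on the order of 10^11 times per second.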