    Data Filtering in the readout of the CMS Electromagnetic Calorimeter

    For efficient data taking, the Electromagnetic Calorimeter (ECAL) data of the CMS experiment must be limited to 10% of the full event size (1 MB). Further requirements limit the average data size to 2 kB per data acquisition link. Together, these conditions imply a reduction factor of close to twenty on the collected data. The data filtering in the readout of the ECAL detector is discussed. Test beam data are used to study the digital filtering applied in the readout channels, and a full detector simulation is used to estimate the energy thresholds needed to achieve the desired data suppression factor.
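
    As a back-of-the-envelope illustration of the reduction factor quoted above, the sketch below estimates the unsuppressed ECAL payload against the 10% budget. The event size, budget fraction and target factor come from the abstract; the channel count, samples per channel and bytes per sample are round numbers assumed for illustration, not figures from the paper.

        // Rough, illustrative estimate of the ECAL data-suppression factor.
        // Channel count, samples per channel and bytes per sample are assumed
        // round numbers for illustration, not figures from the paper.
        #include <cstdio>

        int main() {
            const double full_event_size  = 1e6;                      // full CMS event: ~1 MB
            const double ecal_budget      = 0.10 * full_event_size;   // 10% -> ~100 kB

            const double n_channels       = 76000;  // ~76k ECAL crystals (approx.)
            const double samples_per_chan = 10;     // time samples per channel (assumed)
            const double bytes_per_sample = 2.5;    // packed ADC word size (assumed)

            const double unsuppressed = n_channels * samples_per_chan * bytes_per_sample;
            std::printf("unsuppressed ECAL payload: %.0f kB\n", unsuppressed / 1e3);
            std::printf("required suppression factor: ~%.0f\n", unsuppressed / ecal_budget);
            return 0;
        }

    With these assumed round numbers the unsuppressed payload comes to about 1.9 MB, giving a factor of roughly nineteen, consistent with the "close to twenty" quoted in the abstract.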

    The CMS Electromagnetic Calorimeter Data Acquisition System at the 2006 Test Beam

    The Electromagnetic Calorimeter of the CMS experiment at the CERN LHC is a homogeneous calorimeter made of about 80000 lead tungstate crystals. From June to November 2006, eleven barrel Supermodules (1700 crystals each) were exposed to beam at the CERN SPS, both stand-alone and in association with portions of the Hadron Calorimeter. We present a description of the system used to configure and read out the calorimeter during this period. The full set of final readout electronics boards was employed, together with the pre-series version of the data acquisition software. During this test beam, the hardware and software concepts for the final system were validated and the successful operation of all ten supermodules was ensured.

    CMS physics technical design report: Addendum on high density QCD with heavy ions

    Peer reviewed

    Investigation of High-Level Synthesis tools’ applicability to data acquisition systems design based on the CMS ECAL Data Concentrator Card example

    High-Level Synthesis (HLS) for Field-Programmable Gate Array (FPGA) programming is becoming a practical alternative to the well-established VHDL and Verilog languages. This paper describes a case study in the use of HLS tools to design FPGA-based data acquisition (DAQ) systems. We present the implementation of the CERN CMS detector ECAL Data Concentrator Card (DCC) functionality in HLS and the lessons learned from using the HLS design flow. The DCC functionality and a definition of the initial system-level performance requirements (latency, bandwidth, and throughput) are presented. We describe how its packet-processing, control-centric algorithm was implemented in the VHDL and Verilog languages. We then show how the HLS flow can speed up design-space exploration by providing loose coupling between a function's interface design and its algorithm implementation. We conclude with results of real-life hardware tests performed on the HLS-generated design with a DCC Tester system.
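
    The loose coupling mentioned above can be pictured with a generic Vivado/Vitis-HLS-style C++ sketch. This is not code from the paper: the function name, register layout and threshold filter are invented for illustration; only the pragma directives are standard HLS syntax. The point is that the interface directives can be swapped (e.g. to a different bus protocol) without touching the algorithm body.

        // Illustrative HLS-style zero-suppression filter: the INTERFACE pragmas
        // describe how the ports are synthesised (AXI-Stream data, AXI-Lite
        // control), while the loop below is the algorithm itself. Changing the
        // interface directives re-targets the same body to a different bus,
        // which is the "loose coupling" between interface and algorithm.
        #include <cstdint>

        constexpr int N = 1024; // packet length in words (illustrative)

        void dcc_like_filter(const std::uint32_t in[N], std::uint32_t out[N],
                             std::uint32_t threshold) {
        #pragma HLS INTERFACE axis port=in          // interface choice, not algorithm
        #pragma HLS INTERFACE axis port=out
        #pragma HLS INTERFACE s_axilite port=threshold
            for (int i = 0; i < N; ++i) {
        #pragma HLS PIPELINE II=1                   // one word per clock once filled
                // keep words above threshold, zero-suppress the rest
                out[i] = (in[i] >= threshold) ? in[i] : 0u;
            }
        }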

    HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase focused on verifying the functionality of Windows HPC, its performance, its support for commercial tools, and its integration with the users' work environment. We describe the constraints imposed by the way the CERN Data Centre is operated, the licensing of engineering tools, and the scalability and behaviour of the HPC engineering applications used at CERN. We present an initial set of requirements derived from these constraints and from requests by the CERN engineering user community. We explain how we configured Windows HPC clusters to provide the job scheduling functionality required to support this community: quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we present several performance tests carried out to verify Windows HPC performance and scalability.
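
    The combination of project-based priorities and fair access described above can be sketched with a generic fair-share calculation. This is an assumed, illustrative model, not the Windows HPC scheduler's implementation: a job's effective priority is its project base priority, reduced in proportion to how much of their fair share the submitting user has recently consumed.

        // Minimal fair-share priority sketch (illustrative, not Windows HPC code).
        #include <cstdio>

        struct Job {
            const char* user;
            double base_priority;  // project-level priority, e.g. 0..1000
            double recent_usage;   // user's recent core-hours
            double fair_share;     // user's entitled core-hours over the same window
        };

        double effective_priority(const Job& j, double usage_weight = 500.0) {
            double overuse = j.recent_usage / j.fair_share; // 1.0 = exactly on budget
            return j.base_priority - usage_weight * overuse;
        }

        int main() {
            Job a{"alice", 800.0, 120.0, 100.0}; // over budget
            Job b{"bob",   800.0,  40.0, 100.0}; // under budget
            std::printf("alice: %.0f\n", effective_priority(a)); // lower priority
            std::printf("bob:   %.0f\n", effective_priority(b)); // scheduled first
            return 0;
        }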

    FPGA and Embedded PC Based Module for Research and

    This paper presents a versatile experimental and educational module that can be used both for prototyping embedded-PC-based electronic devices and for teaching computer engineering. The FPGA chip may be used to implement or emulate a wide range of hardware devices, while the embedded PC, able to run the Linux OS, provides an efficient environment for controlling this hardware using different techniques.
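
    One common technique by which such an embedded Linux PC controls FPGA logic is memory-mapping the FPGA's register window. The sketch below is a generic example of that approach, not code from the paper: the base address and register offsets are invented placeholders, and a real module would take them from its bus/address map (or use a UIO driver instead of raw /dev/mem).

        // Generic sketch: driving FPGA registers from embedded Linux via /dev/mem.
        // FPGA_BASE and the register offsets are illustrative placeholders.
        #include <cstdint>
        #include <cstdio>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <unistd.h>

        constexpr off_t  FPGA_BASE   = 0x40000000; // assumed bus address of FPGA window
        constexpr size_t FPGA_SPAN   = 0x1000;     // 4 kB register window (assumed)
        constexpr size_t REG_CONTROL = 0x0;        // illustrative register offsets
        constexpr size_t REG_STATUS  = 0x4;

        int main() {
            int fd = open("/dev/mem", O_RDWR | O_SYNC);
            if (fd < 0) { perror("open /dev/mem"); return 1; }

            void* p = mmap(nullptr, FPGA_SPAN, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, FPGA_BASE);
            if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            auto regs = static_cast<volatile std::uint32_t*>(p);
            regs[REG_CONTROL / 4] = 0x1;                        // e.g. start the core
            std::printf("status: 0x%08x\n", regs[REG_STATUS / 4]);

            munmap(p, FPGA_SPAN);
            close(fd);
            return 0;
        }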

    Self-service for software development projects and HPC activities

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions from both users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support is clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with the server infrastructure running on CERN's internal cloud computing infrastructure. This contribution illustrates how we plan to optimise the management of our services by means of an end-user-facing platform acting as a portal into all the services related to software projects, inspired by popular portals for open-source development such as SourceForge, GitHub and others. Furthermore, the contribution discusses recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN based on affordable hardware.

    The CMS Barrel Calorimeter Response to Particle Beams from 2 to 350 GeV/c

    The response of the CMS barrel calorimeter (electromagnetic plus hadronic) to hadrons, electrons and muons over a wide momentum range from 2 to 350 GeV/c has been measured. To our knowledge, this is the widest range of momenta in which any calorimeter system has been studied. These tests, carried out at the H2 beam line at CERN, provide a wealth of information, especially at low energies. The analysis of the differences in calorimeter response to charged pions, kaons, protons and antiprotons and a detailed discussion of the underlying phenomena are presented. We also show techniques that apply corrections to the signals from the considerably different electromagnetic (EB) and hadronic (HB) barrel calorimeters in reconstructing the energies of hadrons. Above 5 GeV/c, these corrections improve the energy resolution of the combined system, with a stochastic term of 84.7±1.6% and a constant term of 7.4±0.8%. The corrected mean response remains constant within 1.3% rms.
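
    Assuming the conventional quadrature parametrisation of calorimeter resolution, sigma/E = S/sqrt(E) (+) C (the noise term is omitted here for simplicity; this standard form is an assumption, not stated in the abstract), the quoted stochastic and constant terms translate into an expected resolution at a given energy:

        // Energy resolution from the quoted fit terms, assuming the standard
        // quadrature form sigma/E = S/sqrt(E) (+) C, with E in GeV. The noise
        // term is omitted; S and C are the central values from the abstract.
        #include <cmath>
        #include <cstdio>

        double resolution(double energy_gev) {
            const double S = 0.847; // stochastic term: 84.7%
            const double C = 0.074; // constant term:   7.4%
            return std::sqrt(S * S / energy_gev + C * C);
        }

        int main() {
            const double energies[] = {5.0, 20.0, 100.0, 300.0};
            for (double e : energies)
                std::printf("E = %5.0f GeV  ->  sigma/E = %4.1f%%\n",
                            e, 100.0 * resolution(e));
            return 0;
        }

    For example, at 100 GeV this gives sigma/E of roughly 11%, dominated by the constant term, while at 5 GeV the stochastic term dominates.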