21 research outputs found

    FPGA Based Data Read-Out System of the Belle 2 Pixel Detector

    The upgrades of the Belle experiment and the KEKB accelerator aim to increase the data set of the experiment by a factor of 50. This will be achieved by increasing the luminosity of the accelerator, which requires a significant upgrade of the detector. A new pixel detector based on DEPFET technology will be installed to handle the increased reaction rate and provide better vertex resolution. One of the features of the DEPFET detector is a long integration time of 20 μs, which increases the detector occupancy up to 3%. The detector will generate about 2 GB/s of data. An FPGA-based two-level read-out system, the Data Handling Hybrid, was developed for the Belle 2 pixel detector. The system consists of 40 read-out and 8 controller modules. All modules are built in the μTCA form factor using Xilinx Virtex-6 FPGAs and can utilize up to 4 GB of DDR3 RAM. The system was successfully tested in the beam test at DESY in January 2014. The functionality and the architecture of the Belle 2 Data Handling Hybrid system as well as the performance of the system during the beam test are presented in the paper.
    Comment: Transactions on Nuclear Science, Proceedings of the 19th Real Time Conference, Preprint
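
    As a quick cross-check of the figures quoted above, the sketch below estimates the average load per read-out module and how long the on-board DDR3 memory could buffer it. It assumes, hypothetically, that the ~2 GB/s total is spread evenly over the 40 read-out modules and that the full 4 GB is available for buffering.

```python
# Back-of-envelope estimate from the numbers in the abstract; the even split
# across modules and the use of the full 4 GB for buffering are assumptions.
TOTAL_RATE_GBPS = 2.0          # ~2 GB/s produced by the pixel detector
N_READOUT_MODULES = 40         # read-out modules in the Data Handling Hybrid
DDR3_PER_MODULE_GB = 4.0       # up to 4 GB DDR3 RAM per module

rate_per_module = TOTAL_RATE_GBPS / N_READOUT_MODULES    # GB/s per module
buffer_time_s = DDR3_PER_MODULE_GB / rate_per_module     # seconds of buffering

print(f"average load per module : {rate_per_module * 1000:.0f} MB/s")
print(f"DDR3 buffering headroom : {buffer_time_s:.0f} s at that average rate")
```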

    Belle II Pixel Detector Commissioning and Operational Experience


    Status of the BELLE II Pixel Detector

    The Belle II experiment at the super KEK B-factory (SuperKEKB) in Tsukuba, Japan, has been collecting e+e− collision data since March 2019. Operating at a record-breaking luminosity of up to 4.7×10^34 cm^−2 s^−1, data corresponding to 424 fb^−1 has since been recorded. The Belle II VerteX Detector (VXD) is central to the Belle II detector and its physics program and plays a crucial role in reconstructing precise primary and decay vertices. It consists of the outer 4-layer Silicon Vertex Detector (SVD) using double-sided silicon strips and the inner two-layer PiXel Detector (PXD) based on the Depleted P-channel Field Effect Transistor (DePFET) technology. The PXD DePFET structure combines signal generation and amplification within pixels with a minimum pitch of (50×55) μm^2. A high gain and a high signal-to-noise ratio allow thinning the pixels to 75 μm while retaining a high pixel hit efficiency of about 99%. As a consequence, the material budget of the full detector is also kept low, at ≈0.21% X/X_0 per layer in the acceptance region. This also includes contributions from the control, Analog-to-Digital Converter (ADC), and data-processing Application Specific Integrated Circuits (ASICs) as well as from cooling and support structures. This article will present the experience gained from four years of operating the PXD, the first full-scale detector employing the DePFET technology in High Energy Physics. Overall, the PXD has met the expectations. Operating in the intense SuperKEKB environment poses many challenges that will also be discussed. The current PXD system remains incomplete, with only 20 out of 40 modules having been installed. A full replacement has been constructed and is currently in its final testing stage before it will be installed into Belle II during the ongoing long shutdown that will last throughout 2023.
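
    To put the quoted luminosity figures in perspective, the short sketch below converts the record instantaneous luminosity into an integrated-luminosity accumulation rate. It is only a unit-conversion illustration; the actual accumulation is slower, since the luminosity varies with machine conditions and the peak value is not sustained continuously.

```python
# Unit conversion only: 1 fb = 1e-39 cm^2, so 1 fb^-1 = 1e39 cm^-2.
PEAK_LUMI_CM2_S = 4.7e34       # record instantaneous luminosity quoted above
FB_INV_PER_CM2_INV = 1e-39     # one inverse cm^2 expressed in fb^-1

per_second = PEAK_LUMI_CM2_S * FB_INV_PER_CM2_INV   # fb^-1 per second
per_day = per_second * 86400                        # fb^-1 per day at peak

print(f"{per_day:.1f} fb^-1/day at constant peak luminosity")
print(f"424 fb^-1 would correspond to ~{424 / per_day:.0f} days of such running")
```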

    Intelligent FPGA Data Acquisition Framework

    In this paper, we present the field-programmable gate array (FPGA)-based framework Intelligent FPGA Data Acquisition (IFDAQ), which is used for the development of DAQ systems for detectors in high-energy physics. The framework supports Xilinx FPGAs and provides a collection of IP cores written in the very high speed integrated circuit hardware description language (VHDL) that share a common interconnect interface. The IP core library offers the functionality required for the development of the full DAQ chain. The library consists of Serializer/Deserializer (SERDES)-based time-to-digital conversion channels, an interface to a multichannel 80-MS/s 10-bit analog-to-digital converter, a data transmission and synchronization protocol between FPGAs, an event builder, and slow control. The functionality is distributed among FPGA modules built in the AMC form factor: a front-end module and a data concentrator. This modular design also helps to scale and adapt the DAQ system to the needs of a particular experiment. The first application of the IFDAQ framework is the upgrade of the read-out electronics for the drift chambers and the electromagnetic calorimeters (ECALs) of the COMPASS experiment at CERN. The framework is presented and discussed in the context of this paper.
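
    For a rough sense of scale, the sketch below computes the raw data rate that one such ADC channel produces before any zero suppression or feature extraction; the aggregate channel count used in the example is hypothetical, not a framework figure.

```python
# Raw rate implied by an 80-MS/s, 10-bit ADC interface; the 64-channel
# aggregate is an illustrative assumption only.
SAMPLE_RATE_MSPS = 80          # mega-samples per second per channel
BITS_PER_SAMPLE = 10
N_CHANNELS = 64                # hypothetical channel count for illustration

per_channel_mbit_s = SAMPLE_RATE_MSPS * BITS_PER_SAMPLE       # Mbit/s
per_channel_mbyte_s = per_channel_mbit_s / 8                  # MB/s
aggregate_gbyte_s = per_channel_mbyte_s * N_CHANNELS / 1000   # GB/s

print(f"raw rate per channel      : {per_channel_mbyte_s:.0f} MB/s")
print(f"raw rate, {N_CHANNELS} channels     : {aggregate_gbyte_s:.1f} GB/s")
```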

    Overview and Future Developments of the intelligent, FPGA-based DAQ (iFDAQ) of COMPASS

    Modern experiments in high energy physics impose great demands on the reliability, efficiency, and data rate of Data Acquisition Systems (DAQ). In order to address these needs, we present a versatile and scalable DAQ which executes the event building task entirely in FPGA modules. In 2014, the intelligent FPGA-based DAQ (iFDAQ) was deployed at the COMPASS experiment located at the Super Proton Synchrotron (SPS) at CERN. The core of the iFDAQ is its hardware Event Builder (EB), which consists of up to nine custom-designed FPGA modules complying with the μTCA/AMC standard. The EB replaced 30 distributed online computers and around 100 PCI cards, increasing compactness, scalability, reliability, and bandwidth compared to the previous system. The iFDAQ in the COMPASS configuration provides a bandwidth of up to 500 MB/s of sustained rate. By buffering data on different levels, the system exploits the spill structure of the SPS beam and averages the maximum on-spill data rate of 1.5 GB/s over the whole SPS duty cycle. It can even handle peak data rates of 8 GB/s. Its Run Control Configuration and Readout (RCCAR) software offers native user-friendly control and monitoring tools and, together with the firmware of the modules, provides built-in intelligence such as self-diagnostics, data consistency checks, and front-end error handling. From 2017, all involved point-to-point high-speed links between the front-end electronics, the hardware EB, and the readout computers will be wired via a passive programmable crosspoint switch. Thus, multiple event building topologies can be configured to adapt to different system sizes and communication patterns.
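
    The relation between the on-spill and sustained rates fixes how much buffering the spill structure requires. The sketch below works this out for a hypothetical spill length, which is an assumption chosen for illustration and not a figure from the abstract.

```python
# Buffering estimate from the rates quoted above; the 5 s spill length is a
# hypothetical value chosen only to make the arithmetic concrete.
ON_SPILL_RATE_GBPS = 1.5       # maximum on-spill data rate
SUSTAINED_RATE_GBPS = 0.5      # sustained bandwidth over the whole SPS duty cycle
SPILL_LENGTH_S = 5.0           # assumed spill length (illustrative)

# Fraction of the duty cycle that may be on-spill if the average is to stay
# within the sustained bandwidth.
max_on_spill_fraction = SUSTAINED_RATE_GBPS / ON_SPILL_RATE_GBPS

# Data that must sit in buffers at the end of a spill if it drains at the
# sustained rate while arriving at the on-spill rate.
buffer_per_spill_gb = (ON_SPILL_RATE_GBPS - SUSTAINED_RATE_GBPS) * SPILL_LENGTH_S

print(f"on-spill fraction of cycle <= {max_on_spill_fraction:.2f}")
print(f"buffer needed per {SPILL_LENGTH_S:.0f} s spill ~ {buffer_per_spill_gb:.1f} GB")
```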

    Free-running data acquisition system for the AMBER experiment

    Triggered data acquisition systems offer only a limited range of triggering methods. In this paper, we propose a novel approach that completely removes the hardware trigger and its logic. It introduces an innovative free-running mode instead, which provides unprecedented possibilities to physics experiments. We present such a system, which is being developed for the AMBER experiment at CERN. It is based on an intelligent data acquisition framework including FPGA modules and advanced software processing. The system provides a triggerless mode that allows more time for data filtering and the implementation of more complex algorithms. Moreover, it utilises a custom data protocol optimized for the needs of the free-running system. The filtering procedure takes place in a server farm playing the role of the high-level trigger. For this purpose, we introduce a high-performance filtering framework providing optimized algorithms and load balancing to cope with excessive data rates. Furthermore, this paper also describes the filter pipeline as well as the simulation chain that is used for the production of artificial data, testing, and validation.
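
    A minimal sketch of the idea behind such a triggerless chain is given below: the continuous data stream is cut into time slices and every slice is handed to a software filter running on a pool of worker processes, a stand-in for the server farm. All names, slice contents, and the selection criterion are hypothetical and only illustrate the concept, not the AMBER implementation.

```python
# Toy illustration (not the AMBER framework): with no hardware trigger, every
# time slice of the stream reaches the software filter, which decides whether
# to keep it.  Thresholds and data layout below are invented for the sketch.
from multiprocessing import Pool
from dataclasses import dataclass
import random

@dataclass
class TimeSlice:
    slice_id: int
    hits: list                 # per-detector hit flags, stand-in for real hit data

def filter_slice(ts: TimeSlice) -> tuple[int, bool]:
    """Hypothetical physics filter: keep the slice if enough detectors fired."""
    fired = sum(ts.hits)
    return ts.slice_id, fired >= 3     # threshold chosen arbitrarily for the sketch

def main():
    random.seed(0)
    # Fake a continuous stream of time slices with sparse random detector activity.
    stream = [
        TimeSlice(i, [1 if random.random() < 0.2 else 0 for _ in range(8)])
        for i in range(1000)
    ]
    # The Pool spreads slices over worker processes, mimicking the load-balanced
    # filtering farm that plays the role of the high-level trigger.
    with Pool(processes=4) as pool:
        decisions = pool.map(filter_slice, stream)
    kept = [sid for sid, keep in decisions if keep]
    print(f"kept {len(kept)} of {len(stream)} slices "
          f"({100 * len(kept) / len(stream):.1f}% accepted)")

if __name__ == "__main__":
    main()
```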

    The online monitoring API for the DIALOG library of the COMPASS experiment

    Modern experiments demand a powerful and efficient Data Acquisition System (DAQ). The intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN is composed of many processes communicating with each other. The DIALOG library provides a communication mechanism between processes and establishes a communication layer for each of them. It was introduced to the iFDAQ in the 2016 run and significantly improved the stability of the system. The paper presents the online monitoring API for the DIALOG library. Communication between processes is challenging from a synchronization, reliability, and robustness point of view. Online monitoring tools for the communication between processes are capable of revealing communication problems to be fixed in the future. This debugging capability was crucial during the library's introduction to the iFDAQ. Moreover, based on measurements of the communication between processes, proper load balancing of processes among machines can improve the stability of the system. The online monitoring API offers a general approach for the implementation of many monitoring tools with different purposes. The paper discusses its fundamental concept and its integration into a new monitoring tool, and gives a few examples of monitoring tools.
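
    The sketch below illustrates the kind of statistics such an API can expose: each process reports its message counts to a monitor, which aggregates them so that overloaded processes stand out and load balancing across machines can be informed. The class and method names are hypothetical and do not reproduce the actual DIALOG interface.

```python
# Minimal sketch of the monitoring idea (hypothetical names, not the DIALOG API):
# processes report how many messages they exchanged with which peer, and a
# monitoring tool aggregates the counters to spot overloaded processes.
from collections import Counter, defaultdict

class CommMonitor:
    """Collects per-process message counters reported over the communication layer."""
    def __init__(self):
        self.sent = defaultdict(Counter)   # sender -> Counter(receiver -> messages)

    def report(self, sender: str, receiver: str, n_messages: int) -> None:
        self.sent[sender][receiver] += n_messages

    def busiest_processes(self, top: int = 3):
        """Return the processes with the highest outgoing message counts."""
        totals = Counter({p: sum(c.values()) for p, c in self.sent.items()})
        return totals.most_common(top)

# Example: a few iFDAQ-like processes reporting their traffic to the monitor.
monitor = CommMonitor()
monitor.report("master", "slave-readout-01", 1200)
monitor.report("master", "slave-readout-02", 1150)
monitor.report("slave-readout-01", "master", 40)
print(monitor.busiest_processes())
```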
