
    FPGA based Novel High Speed DAQ System Design with Error Correction

    Present state-of-the-art applications in high energy physics (HEP) experiments, radar communication, satellite communication and biomedical instrumentation require fault-resilient data acquisition (DAQ) systems with data rates on the order of Gbps. To keep a high-speed DAQ system functional in a radiation environment where direct human intervention is not possible, a robust and error-free communication system is necessary. In this work we present an efficient DAQ design and its implementation on a field programmable gate array (FPGA). The proposed DAQ system supports high-speed data communication (~4.8 Gbps) and achieves multi-bit error correction. A BCH (Bose-Chaudhuri-Hocquenghem) code is used for the multi-bit error correction. The design has been implemented on a Xilinx Kintex-7 board and tested both for board-to-board communication and for board-to-PC communication over a PCIe (Peripheral Component Interconnect Express) interface. To the best of our knowledge, the proposed FPGA-based high-speed DAQ system combining an optical link with multi-bit error resiliency is the first of its kind. Performance of the implemented DAQ system is estimated in terms of resource utilization, critical path delay, efficiency and bit error rate (BER).
    Comment: ISVLSI 2015.
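
    As a rough illustration of the multi-bit error correction scheme the abstract names, the Python sketch below builds the classic two-error-correcting BCH(15,7) code from its generator polynomial and decodes by nearest-codeword search. The (15,7) parameters and the brute-force decoder are illustrative stand-ins, not the paper's FPGA implementation, which would use syndrome decoding in hardware.

        # Toy multi-bit error correction with a BCH code: the two-error-
        # correcting BCH(15,7) code with generator polynomial
        # g(x) = x^8 + x^7 + x^6 + x^4 + 1. Decoding is brute-force
        # nearest-codeword search (fine for 2^7 codewords).

        G = 0b111010001  # g(x) = x^8 + x^7 + x^6 + x^4 + 1

        def poly_mul(a, b):
            """Carry-less multiplication of GF(2) polynomials (bit masks)."""
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                b >>= 1
            return r

        # All 2^7 codewords of the (15, 7) code: message(x) * g(x).
        codebook = [poly_mul(m, G) for m in range(1 << 7)]

        def decode(received):
            """Return the codeword closest to `received` in Hamming distance."""
            return min(codebook, key=lambda c: bin(c ^ received).count("1"))

        msg = 0b1011001
        tx = poly_mul(msg, G)           # encode
        rx = tx ^ (1 << 3) ^ (1 << 11)  # inject a 2-bit error
        assert decode(rx) == tx         # both errors corrected
        print(f"sent {tx:015b}, received {rx:015b}, corrected OK")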

    NaNet: a Low-Latency, Real-Time, Multi-Standard Network Interface Card with GPUDirect Features

    While the GPGPU paradigm is widely recognized as an effective approach to high performance computing, its adoption in low-latency, real-time systems is still in its early stages. Although GPUs typically show deterministic latency in executing computational kernels once data is available in their internal memories, assessing the real-time behaviour of a standard GPGPU system requires careful characterization of every subsystem along the data stream path. The networking subsystem proves to be the most critical one in terms of both the absolute value and the fluctuations of its response latency. Our solution to this issue is NaNet, an FPGA-based PCIe Network Interface Card (NIC) design featuring a configurable and extensible set of network channels with direct access, through GPUDirect, to NVIDIA Fermi/Kepler GPU memories. The NaNet design currently supports standard channels - GbE (1000BASE-T) and 10GbE (10GBASE-R) - as well as custom ones - 34 Gbps APElink and 2.5 Gbps deterministic-latency KM3link - and its modularity allows for straightforward inclusion of other link technologies. To avoid host OS intervention on the data stream and remove a possible source of jitter, the design includes a network/transport layer offload module with a cycle-accurate upper bound on latency, supporting the UDP, KM3link Time Division Multiplexing and APElink protocols. After describing the NaNet architecture and characterizing its latency and bandwidth for all supported links, we present two real-world use cases: the GPU-based low-level trigger for the RICH detector in the NA62 experiment at CERN, and the on-/off-shore data link for the KM3 underwater neutrino telescope.
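
    The premise that the networking subsystem dominates response-latency fluctuations is easy to probe on a commodity host. The sketch below is a minimal UDP round-trip jitter measurement over loopback in Python; the address, payload size and sample count are arbitrary choices, and NaNet's GPUDirect path is of course not involved.

        # Minimal UDP round-trip latency/jitter probe, in the spirit of the
        # subsystem characterization the abstract motivates. Spawns a local
        # echo server, then reports RTT percentiles in microseconds.

        import socket
        import statistics
        import threading
        import time

        ADDR = ("127.0.0.1", 9000)  # loopback; arbitrary demo choice
        N = 1000

        def echo_server():
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(ADDR)
            while True:
                data, peer = s.recvfrom(2048)
                s.sendto(data, peer)

        threading.Thread(target=echo_server, daemon=True).start()
        time.sleep(0.1)  # give the server time to bind

        c = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = b"x" * 64
        rtts = []
        for _ in range(N):
            t0 = time.perf_counter()
            c.sendto(payload, ADDR)
            c.recvfrom(2048)
            rtts.append((time.perf_counter() - t0) * 1e6)

        rtts.sort()
        print(f"min {rtts[0]:.1f} us  median {statistics.median(rtts):.1f} us  "
              f"p99 {rtts[int(0.99 * N)]:.1f} us  max {rtts[-1]:.1f} us")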

    Optimization of multi-gigabit transceivers for high speed data communication links in HEP Experiments

    In the data acquisition (DAQ) architecture of High Energy Physics (HEP) experiments, data are transported from the front-end electronics (FEE) of the online detectors to the readout units (RUs), which perform online processing, and then to data storage for offline analysis. With major upgrades of the Large Hadron Collider (LHC) experiments at CERN, data transmission rates in the DAQ systems are expected to reach a few TB/s within the next few years. Such high rates are normally associated with increased high-frequency losses, which distort the detected signal and degrade signal integrity. To address this, we have developed an optimization technique for the multi-gigabit transceivers (MGTs) and implemented it on a state-of-the-art 20 nm Arria 10 FPGA manufactured by Intel. The setup has been validated for three high-speed data transmission protocols, namely GBT, TTC-PON and 10 Gbps Ethernet. The improvement in signal integrity is gauged by two metrics, the bit error rate (BER) and the eye diagram. We observe that the technique improves signal integrity and reduces the BER. The test results and the improvements in the signal-integrity metrics for different link speeds are presented and discussed.
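
    For context on the BER metric: a standard zero-error test bound states that observing no errors in N transmitted bits supports a claim of BER < -ln(1 - CL)/N at confidence level CL. The sketch below applies this to line rates in the range discussed; the 1e-12 target and 95% confidence are illustrative values, not taken from the paper.

        # How long a zero-error BER test must run: no errors in N bits
        # supports BER < -ln(1 - CL) / N at confidence level CL.

        import math

        def bits_needed(target_ber, confidence=0.95):
            return -math.log(1.0 - confidence) / target_ber

        for rate_gbps in (4.8, 9.6, 10.0):  # GBT, TTC-PON-class, 10GbE rates
            n = bits_needed(1e-12)          # illustrative BER target
            minutes = n / (rate_gbps * 1e9) / 60
            print(f"{rate_gbps:5.1f} Gbps: {n:.2e} bits, "
                  f"about {minutes:.1f} min at 95% confidence")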

    Optimization on fixed low latency implementation of GBT protocol in FPGA

    In the ATLAS experiment upgrade, the front-end electronics are subjected to a large radiation background, and high-speed optical links are required for data transmission between the on-detector and off-detector electronics. The GBT architecture and the Versatile Link (VL) project were developed at CERN to support bidirectional high-speed data transmission at a 4.8 Gbps line rate, the so-called GBT link. In the ATLAS upgrade, besides links to the on-detector electronics, the GBT link is also used between different off-detector systems. The GBTX ASIC is designed for the on-detector front-end; correspondingly, for the off-detector electronics the GBT architecture is implemented in Field Programmable Gate Arrays (FPGAs), and CERN launched the GBT-FPGA project to provide reference implementations for different FPGA families. Within the ATLAS upgrade framework, the Front-End LInk eXchange (FELIX) system interfaces the front-end electronics of several ATLAS subsystems; the GBT link is used between them to transfer the detector data along with timing, trigger, control and monitoring information. The trigger signal distributed on the down-link from FELIX to the front-end requires a fixed and low latency. In this paper, several optimizations of the GBT-FPGA IP core are introduced to achieve a lower fixed latency. For FELIX, a common firmware will be used to interface different front-ends with support for both GBT modes: the forward error correction (FEC) mode and the wide mode. The modified GBT-FPGA core can switch between the GBT modes without FPGA reprogramming. The system clock distribution of the multi-channel FELIX firmware is also discussed.
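
    For reference, the bandwidth cost of the two GBT modes follows directly from the published GBT framing: a 120-bit frame sent at the 40 MHz LHC clock gives the 4.8 Gbps line rate, with 80 user bits per frame in forward-error-correction mode and 112 in wide mode. A minimal sketch of the arithmetic:

        # User bandwidth of the two GBT modes. A GBT frame is 120 bits at
        # the 40 MHz LHC bunch-crossing clock; the FEC mode spends 32 bits
        # per frame on forward error correction, the wide mode reclaims
        # them as payload.

        FRAME_BITS = 120
        FRAME_RATE_HZ = 40e6  # LHC bunch-crossing clock

        modes = {
            "FEC mode": 80,    # user-data bits per frame
            "wide mode": 112,
        }

        print(f"line rate: {FRAME_BITS * FRAME_RATE_HZ / 1e9:.1f} Gbps")
        for name, user_bits in modes.items():
            gbps = user_bits * FRAME_RATE_HZ / 1e9
            print(f"{name}: {gbps:.2f} Gbps user bandwidth")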

    The PCIe-based readout system for the LHCb experiment

    The LHCb experiment is designed to study differences between particles and anti-particles as well as very rare decays in the beauty and charm sector at the LHC. The detector will be upgraded in 2019 in order to significantly increase its efficiency by removing the first-level hardware trigger. The upgraded experiment will implement a trigger-less readout system in which all the data from every LHC bunch crossing are transported to the computing farm over 12,000 optical links without hardware filtering; event building and event selection are carried out entirely in the farm. Another original feature of the system is that the data transmitted through these fibres arrive directly in computers through a specially designed PCIe card called PCIe40. The same board handles the data acquisition flow and the distribution of fast and slow controls to the detector front-end electronics. It embeds one of the most powerful FPGAs currently available on the market, with 1.2 million logic cells. The board has a bandwidth of 480 Gbit/s in both input and output over the optical links and 100 Gbit/s over the PCI Express bus to the CPU. We present how data circulate through the board and in the PC server to achieve event building, and we focus on specific issues in designing such a board with a very large FPGA, in particular power supply dimensioning and thermal simulation. The features of the board are detailed, and we finally present the first performance measurements.
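
    The quoted figures can be sanity-checked with simple arithmetic: 480 Gbit/s matches 48 optical transceivers at 10 Gbit/s each (the link count comes from PCIe40 documentation, not from this abstract), and roughly 100 Gbit/s is what a PCIe Gen3 x16 interface delivers once line coding and protocol overhead are accounted for:

        # Sanity-checking the PCIe40 bandwidth figures.

        links, per_link_gbps = 48, 10.0  # documented PCIe40 link count
        print(f"optical: {links * per_link_gbps:.0f} Gbit/s per direction")

        lanes, gt_per_lane = 16, 8.0     # PCIe Gen3: 8 GT/s per lane
        raw = lanes * gt_per_lane * 128 / 130  # 128b/130b line code
        print(f"PCIe Gen3 x16 raw: {raw:.1f} Gbit/s "
              f"(~100 Gbit/s after TLP/DLLP protocol overhead)")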

    All-Optical Programmable Disaggregated Data Centre Network realized by FPGA-based Switch and Interface Card

    This paper reports an FPGA-based switch and interface card (SIC) and its application in an all-optical, programmable disaggregated data center network (DCN). Our novel SIC is designed and implemented to replace traditional optical network interface cards; plugged directly into the server, it supports optical packet switching (OPS)/optical circuit switching (OCS) or time division multiplexing (TDM)/wavelength division multiplexing (WDM) traffic on demand. By placing a SIC in each server/blade, we eliminate electronics from the top-of-rack (ToR) switch, pushing all the functionality onto each blade while enabling direct intra-rack blade-to-blade communication with ultra-low chip-to-chip latency. We demonstrate the disaggregated DCN architecture along with all-optical, dimension-programmable N × M spectrum-selective switches (SSS) and an architecture-on-demand (AoD) optical backplane. OPS and OCS complement each other, as do TDM and WDM, which together can support variable traffic flows. A flat disaggregated DCN architecture is realized by connecting the optical ToR switches directly either to an optical top-of-cluster switch or to the intra-cluster AoD optical backplane, while clusters are further interconnected through an inter-cluster AoD for scaling out.
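
    As a toy model of how TDM and WDM complement each other for variable traffic flows, the sketch below first-fit-allocates flow demands onto per-wavelength time slots: small flows share a wavelength while a large flow spills across several. All capacities and demands are invented for illustration; the abstract does not specify the SIC's scheduling at this level.

        # Toy first-fit TDM/WDM slot allocator: each WDM wavelength carries
        # a fixed number of TDM slots; flows take slots wavelength by
        # wavelength until their demand is met.

        WAVELENGTHS = 4
        SLOTS_PER_WL = 10  # e.g. ten 10 Gbps slots per 100 Gbps wavelength
        free = [SLOTS_PER_WL] * WAVELENGTHS

        def allocate(flow, slots_needed):
            """First-fit placement; returns [(wavelength, slots)] grants."""
            grants, need = [], slots_needed
            for wl in range(WAVELENGTHS):
                take = min(free[wl], need)
                if take:
                    free[wl] -= take
                    grants.append((wl, take))
                    need -= take
                if need == 0:
                    return grants
            raise RuntimeError(f"{flow}: network saturated")

        for flow, demand in [("mouse-1", 1), ("mouse-2", 2),
                             ("elephant", 10), ("mouse-3", 3)]:
            print(flow, "->", allocate(flow, demand))
        print("free slots per wavelength:", free)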

    Implications and Limitations of Securing an InfiniBand Network

    The InfiniBand Architecture is one of the leading network interconnects used in high performance computing, delivering very high bandwidth and low latency. As the popularity of InfiniBand increases, new InfiniBand applications arise outside the domain of high performance computing, creating the opportunity for new security risks. In this work, new security questions are considered and addressed. The study demonstrates that many common traffic-analysis tools cannot monitor or capture InfiniBand traffic transmitted between two hosts. Because InfiniBand bypasses the kernel, many host-based network security systems cannot be applied to InfiniBand applications, and those that can impose a significant performance loss on the network. The research concludes that, contrary to previous suggestions, not all network security practices used for Ethernet translate to InfiniBand, and that meeting specific security requirements for an InfiniBand network might rely on hardware offload.