
    FLEXIBLE PCI EXPRESS BANDWIDTH EXTENSION ON EMBEDDED DISCRETE GPU

    On Intel platforms that support the Switchable/Hybrid Graphics feature, a discrete GPU can be connected either to the PCIe Graphics interface (PEG) inside the CPU or to a PCIe controller inside the PCH. The maximum link width is x16 on PEG, but only x4 per PCIe controller on the PCH. Both AMD and Nvidia graphics cards support x8 or x16 PCIe link widths; however, if a discrete graphics card is designed behind a PCH PCIe controller, its link width is limited to x4 on the system. SBIOS provides setup options to enable/disable certain PCIe devices, such as Intel Thunderbolt devices, WLAN, WWAN, and USB3 ports. Once these PCIe devices are disabled, their freed PCIe lanes can ideally be used by the discrete GPU. With a de-mux PCIe switching solution, SBIOS can use a GPIO to route the output PCIe lanes to either the discrete GPU or the other PCIe devices, so the PCIe lanes are fully utilized by the discrete GPU when those devices are disabled.

    PCIe GEN4 AND PCIe GEN3/SATA DEVICES SWITCHING SCHEME

    Disclosed is a way to support PCIe NVMe Gen4 SSD devices while maintaining support for PCIe NVMe Gen3 SSDs and for SATA and hybrid drives such as the H10. Chipsets are being architected to support a PCIe Gen4 interface in the CPU for higher-bandwidth devices such as graphics cards and high-speed SSDs. However, the I/O controller in the chipset maintains support for PCIe Gen3/SATA and stops short of supporting PCIe Gen4 devices.

    Intra-datacenter links exploiting PCI express generation 4 interconnections

    We demonstrate few-km reaches for PCIe-based optical fiber interconnections within the applicable latency limitations, characterizing 16-Gb/s-per-lane Generation 4 links up to 10 km and confirming the Generation 3 compliance of 2-km links employing suitable PCIe cards.

    General purpose readout board πLUP: overview and results

    This work gives an overview of the PCI-Express board πLUP, focusing on the motivation that led to its development, the technological choices adopted, and its performance. The πLUP card was designed by INFN and the University of Bologna as a candidate readout interface to be used after the Phase-II upgrade of the Pixel Detector of the ATLAS and CMS experiments at the LHC. The same team in Bologna is also responsible for the design and commissioning of the ReadOut Driver (ROD) board - currently implemented in all four layers of the ATLAS Pixel Detector (Insertable B-Layer, B-Layer, Layer-1 and Layer-2) - and has acquired over the past years expertise on the ATLAS readout chain and the problems arising in such experiments. Although the πLUP was designed to fulfill a specific task, it is highly versatile and might fit a wide variety of applications, some of which will be discussed in this work. Two 7th-generation Xilinx FPGAs are mounted on the board: a Zynq-7 with an embedded dual-core ARM processor and a Kintex-7. The latter features sixteen 12.5 Gbps transceivers, allowing the board to interface easily with any other electronic board, either electrically or optically, at the current bandwidth of the LHC experiments. Many data-transmission protocols have been tested at different speeds; the results will be discussed later in this work. Two batches of πLUP boards have been fabricated and tested: two boards in the first batch (version 1.0) and four boards in the second batch (version 1.1), the latter incorporating all the patches and improvements required by the first version.
    Comment: 6 pages, 10 figures, 21st Real Time Conference, winner of the "2018 NPSS Student Paper Award" Second Prize.