
    FPGA implementation of a 10 GS/s variable-length FFT for OFDM-based optical communication systems

    The transmission rate in current passive optical networks can be increased by employing Orthogonal Frequency Division Multiplexing (OFDM) modulation. The computational kernel of this modulation is the fast Fourier transform (FFT) operator, which has to achieve a very high throughput in order to be used in optical networks. This paper presents the implementation in an FPGA device of a variable-length FFT that can be configured at run time to compute different FFT lengths between 16 and 1024 points. The FFT reaches a throughput of 10 GS/s in a Virtex-7 485T-3 FPGA device and was used to implement a 20 Gb/s optical OFDM receiver. (C) 2018 Elsevier B.V. All rights reserved. This work was supported by the Spanish Ministerio de Economia y Competitividad under project TEC2015-70858-C2-2-R with FEDER funds. Bruno, JS.; Almenar Terre, V.; Valls Coquillat, J. (2019). FPGA implementation of a 10 GS/s variable-length FFT for OFDM-based optical communication systems. Microprocessors and Microsystems 64:195-204. https://doi.org/10.1016/j.micpro.2018.12.002
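    The FFT kernel such a receiver accelerates can be illustrated with a minimal software model: a recursive radix-2 decimation-in-time FFT whose length is chosen at run time, as in the paper's 16-to-1024-point range. This is a numerical sketch only, not the pipelined hardware architecture the paper contributes.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT.

    The hardware design supports run-time selectable lengths from
    16 to 1024 points; here the length is simply the (power-of-two)
    size of the input vector.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # twiddle factors
    return np.concatenate([even + tw * odd, even - tw * odd])

# Any supported power-of-two length can be selected at run time:
for n in (16, 64, 1024):
    x = np.random.randn(n) + 1j * np.random.randn(n)
    assert np.allclose(fft_radix2(x), np.fft.fft(x))
```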

    Development of FPGA controlled diagnostics on the MAST fusion reactor

    Field Programmable Gate Array (FPGA) technology is very useful for implementing high-performance digital signal processing algorithms, data acquisition and real-time control on nuclear fusion devices. This thesis presents the work done using FPGAs to develop powerful diagnostics. This has been achieved by developing embedded Linux and running it on the FPGA to enhance diagnostic capabilities such as remote management, PLC communications over the Modbus protocol and UDP-based Ethernet streaming. A closed-loop real-time feedback prototype has been developed for combining laser beams onto a single beam path, improving the overall repetition rates of Thomson scattering systems used for plasma electron temperature and density radial profile measurements. A controllable frequency sweep generator is used to drive the Toroidal Alfven Eigenmode (TAE) antenna system, and results are presented indicating successful TAE resonance detection. A fast data acquisition system has been developed for the Electron Bernstein Wave (EBW) Synthetic Aperture Microwave Imaging system and an active probing microwave source, where the FPGA clock rate has been pushed to the maximum. Propagation delays on the order of 2 nanoseconds in the FPGA have been finely tuned with careful placement of FPGA logic using a custom logic placement tool. Intensity interferometry results are presented on the EBW system, with a suggestion for phase-insensitive pitch-angle measurement.
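    The UDP-based Ethernet streaming mentioned above can be sketched in a few lines of host-side Python. The packet size, sample format and port below are illustrative assumptions, not the actual MAST firmware's wire format.

```python
import socket
import struct

def pack_datagrams(samples, chunk=256):
    """Split a stream of signed 16-bit digitiser samples into
    fixed-size payloads, one per UDP datagram.  Big-endian packing
    (network byte order) and 256 samples per packet are assumptions
    made for this sketch."""
    payloads = []
    for i in range(0, len(samples), chunk):
        block = samples[i:i + chunk]
        payloads.append(struct.pack(f">{len(block)}h", *block))
    return payloads

def stream(payloads, host="127.0.0.1", port=5005):
    """Send the packed payloads over UDP (fire-and-forget)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for p in payloads:
            sock.sendto(p, (host, port))
```

    A receiver would unpack each datagram with the matching `struct.unpack` format; UDP keeps the per-packet overhead low at the cost of delivery guarantees, which is a common trade-off for high-rate diagnostic streaming.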

    Data Visualization for Benchmarking Neural Networks in Different Hardware Platforms

    The computational complexity of Convolutional Neural Networks has increased enormously; hence numerous algorithmic optimization techniques have been widely proposed. However, in such a complex design space, it is challenging to choose which optimization will benefit which type of hardware platform. This is why QuTiBench, a benchmarking methodology, was recently proposed: it provides clarity into the design space. With measurements resulting in more than nine thousand data points, it became difficult to extract useful and rich information quickly and intuitively from the data collected. This effort therefore describes the creation of a web portal where all the data is exposed and can be adequately visualized. All the code developed in this project resides in a public GitHub repository, allowing contributions. Visualizations that grab our interest and keep our eyes on the message are the ideal way to understand the data and spot trends. Thus, several types of plots were used: rooflines, heatmaps, line plots, bar plots and box-and-whisker plots. Furthermore, as level-0 of QuTiBench performs a theoretical analysis of the data, with no measurements required, performance predictions were evaluated. We concluded that the predictions successfully captured performance trends, although they are somewhat optimistic and become inaccurate with increased pruning and quantization. The theoretical analysis could be improved by increased awareness of which data is stored in on- and off-chip memory. Moreover, for FPGAs, performance predictions can be further enhanced by taking the actual resource utilization and the achieved clock frequency of the FPGA circuit into account. With these improvements to level-0 of QuTiBench, this benchmarking methodology can become more accurate, more reliable and more useful to designers.
    Moreover, additional measurements were taken; in particular, power, performance and accuracy measurements for Google's USB Accelerator, benchmarking EfficientNet S, EfficientNet M and EfficientNet L. In general, performance measurements were reproduced; however, it was not possible to reproduce the accuracy measurements.
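    The roofline plots mentioned above follow a simple model: attainable performance is capped either by the device's compute peak or by its memory bandwidth multiplied by the workload's arithmetic intensity. A minimal sketch (the device numbers below are hypothetical, not QuTiBench measurements):

```python
def attainable_gflops(arith_intensity, peak_gflops, mem_bw_gb_s):
    """Roofline model: attainable performance is the minimum of the
    compute peak and the memory bandwidth times the arithmetic
    intensity (FLOPs per byte of memory traffic)."""
    return min(peak_gflops, mem_bw_gb_s * arith_intensity)

# Hypothetical device: 100 GFLOP/s peak, 25 GB/s DRAM bandwidth.
# The ridge point sits at 100 / 25 = 4 FLOPs/byte.
assert attainable_gflops(1, 100, 25) == 25    # memory bound
assert attainable_gflops(8, 100, 25) == 100   # compute bound
```

    This also hints at why level-0 predictions drift under heavy pruning and quantization: both change the effective arithmetic intensity and the on-chip/off-chip data placement, moving a workload along the roofline.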

    Gigahertz Bandwidth and Nanosecond Timescales: New Frontiers in Radio Astronomy Through Peak Performance Signal Processing

    In the past decade, there has been a revolution in radio-astronomy signal processing. High-bandwidth receivers coupled with fast ADCs have enabled the collection of tremendous instantaneous bandwidth, but streaming computational resources are struggling to catch up and serve these new capabilities. As a consequence, there is a need for novel signal processing algorithms capable of maximizing these resources. This thesis responds to the demand by presenting FPGA implementations of a polyphase filter bank which are an order of magnitude more efficient than previous algorithms while exhibiting similar noise performance. These algorithms are showcased alongside a broadband RF front-end in Starburst: a 5 GHz instantaneous bandwidth two-element interferometer, the first broadband digital sideband-separating astronomical interferometer. Starburst technology has been applied to three instruments to date.
    Wielding tremendous computational power and precisely calibrated hardware, low-frequency radio telescope arrays have potential greatly exceeding their current applications. This thesis presents new observing modes for low-frequency radio telescopes, dramatically extending their original capabilities. A microsecond-scale time/frequency mode empowered the Owens Valley Long Wavelength Array to inspect not just the radio sky, by enabling the testing of novel imaging techniques and the detection of overhead beacon satellites, but also the terrestrial neighborhood, allowing for the characterization and mitigation of nearby sources of radio frequency interference (RFI). This characterization led to insights prompting a nanosecond-scale observing mode to be developed, opening new avenues in high-energy astrophysics, specifically the radio detection of ultra-high-energy cosmic rays and neutrinos.
    Measurement of the flux spectrum, composition, and origin of the highest-energy cosmic-ray events is a lofty goal in high-energy astrophysics. One of the most powerful new windows has been the detection of associated extensive air showers at radio frequencies. However, all current ground-based systems must trigger off an expensive and insensitive external source such as particle detectors, making detection of the rare, high-energy events uneconomical. Attempts to make a direct detection in radio-only data have been unsuccessful despite numerous efforts. The problem is even more severe in the case of radio detection of ultra-high-energy neutrino events, which cannot rely on in-situ particle detectors as a triggering mechanism. This thesis combines the aforementioned nanosecond-scale observing mode with real-time, on-FPGA RFI mitigation and sophisticated offline post-processing. The resulting system has produced the first successful ground-based detection of cosmic rays using only radio instruments. Design and measurements of cosmic-ray detections are discussed, as well as recommendations for future cosmic-ray experiments. The presented future designs allow for another order of magnitude improvement in both sensitivity and output data rate, paving the way for the economical ground-based detection of the highest-energy neutrinos.
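    A polyphase filter bank channelizer of the kind this thesis accelerates can be modeled in a few lines of Python: a windowed-sinc prototype filter is split into polyphase branches, the branches are summed, and an FFT separates the channels. This is a critically sampled software model under illustrative channel/tap counts, not the FPGA-efficient architecture the thesis contributes.

```python
import numpy as np

def pfb_channelize(x, n_chan, n_taps):
    """Critically sampled polyphase filter bank channelizer.

    A sinc prototype low-pass filter (Hamming-windowed) is applied
    across n_taps blocks of n_chan samples; the blocks are summed
    and an n_chan-point FFT splits the band into channels.
    """
    m = n_chan * n_taps
    win = np.sinc(np.arange(m) / n_chan - n_taps / 2) * np.hamming(m)
    n_frames = len(x) // n_chan - (n_taps - 1)
    out = np.empty((n_frames, n_chan), dtype=complex)
    for f in range(n_frames):
        seg = x[f * n_chan:(f + n_taps) * n_chan]          # m samples
        out[f] = np.fft.fft((seg * win).reshape(n_taps, n_chan).sum(axis=0))
    return out
```

    Compared with a plain FFT filter bank, the polyphase front end sharpens the channel response for the same FFT size, which is why PFBs dominate radio-astronomy spectrometer designs.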

    Single-Laser Multi-Terabit/s Systems

    Optical communication systems carry the bulk of all data traffic worldwide. This book introduces multi-Terabit/s transmission systems and three key technologies for next-generation networks: a software-defined multi-format transmitter, an optical comb source, and an optical processing scheme implementing the fast Fourier transform for Tbit/s signals. Three world records demonstrate the potential: the first single-laser 10 Tbit/s and 26 Tbit/s OFDM experiments and the first 32.5 Tbit/s Nyquist WDM experiment.

    Analog to digital conversion in beam instrumentation systems

    Analog-to-digital conversion is a very important part of almost all beam instrumentation systems. Ideally, in a properly designed system, the analog-to-digital converter (ADC) used should not limit the system performance. However, despite recent improvements in ADC technology, quite often this is not possible and the choice of the ADC significantly influences or even restricts the system performance. It is therefore very important to estimate the requirements for the analog-to-digital conversion at an early stage of the system design and to evaluate whether one can find an adequate ADC fulfilling the system specification. In the case of beam instrumentation systems requiring both high time and amplitude resolution, it often happens that the system specification cannot be met with the available ADCs without applying special processing to the analog signals prior to their digitisation. In such cases the requirements for the ADC even influence the system architecture. This paper aims at helping the designer of a beam instrumentation system in the process of selecting an ADC, which in many cases is iterative, requiring a trade-off between system performance, complexity and cost. Analog-to-digital conversion is widely and well described in the literature; therefore this paper focuses mostly on aspects related to beam instrumentation. The ADC fundamentals are limited to the content presented as an introduction during the CAS one-hour lecture corresponding to this paper. Comment: 36 pages, contribution to the CAS - CERN Accelerator School: Beam Instrumentation, 2-15 June 2018, Tuusula, Finland.
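    The trade-off between time and amplitude resolution mentioned above is usually quantified with the textbook relations for an ideal N-bit ADC: the quantization-limited SNR for a full-scale sine input is 6.02N + 1.76 dB, and a measured SINAD is converted back into an effective number of bits (ENOB) by inverting that formula. A minimal sketch:

```python
def ideal_snr_db(bits):
    """Quantization-limited SNR of an ideal N-bit ADC for a
    full-scale sinusoidal input: SNR = 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def enob(sinad_db):
    """Effective number of bits recovered from a measured SINAD,
    i.e. the bit count of the ideal ADC with the same SNR."""
    return (sinad_db - 1.76) / 6.02

# A nominally 12-bit ADC is quantization-limited at 74 dB;
# a real part measuring, say, 68 dB SINAD delivers about 11 ENOB.
assert round(ideal_snr_db(12), 2) == 74.0
assert 10.9 < enob(68.0) < 11.1
```

    In practice aperture jitter and thermal noise push the SINAD, and hence the ENOB, well below the ideal value at high input frequencies, which is exactly where the iteration between system architecture and ADC choice described above begins.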


    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on the use of an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a great capacity increase, the support of many simultaneous users, and improvement in energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains and computational operations on large matrices. In the past, the complexity of the digital processing was viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in the interconnection of many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at a power consumption of 55 mW (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction of consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating on low-complexity RF and analog hardware chains are clarified. It illustrates how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes is discussed. Open challenges and directions for future research are suggested. Comment: submitted to IEEE Transactions on Signal Processing.
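    The zero-forcing precoding cited above is the pseudo-inverse of the downlink channel: with K terminals and M >> K base-station antennas, W = H^H (H H^H)^(-1) makes the effective channel H W the identity, so inter-user interference vanishes. A numerical sketch with the 128-antenna, 8-terminal dimensions quoted for the ASIC prototype (the Rayleigh channel model below is an illustrative assumption):

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing precoding matrix for the downlink.

    H is the K x M channel matrix (K terminals, M antennas, M >= K).
    W = H^H (H H^H)^{-1}, the right pseudo-inverse of H, so that
    the effective channel H @ W equals the K x K identity."""
    return H.conj().T @ np.linalg.inv(H @ H.conj().T)

# 128 antennas multiplexing 8 terminals, as in the cited prototype;
# i.i.d. complex Gaussian entries stand in for the real channel.
M, K = 128, 8
H = (np.random.randn(K, M) + 1j * np.random.randn(K, M)) / np.sqrt(2)
W = zf_precoder(H)
assert np.allclose(H @ W, np.eye(K))
```

    The K x K inversion is cheap; the dominant hardware cost is the K x M matrix products running at the sample rate, which is what motivates the distributed, per-antenna processing described in the article.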