
    Criticality Aware Soft Error Mitigation in the Configuration Memory of SRAM based FPGA

    Efficient low-complexity error-correcting codes (ECC) are considered an effective technique for mitigating multi-bit upsets (MBU) in the configuration memory (CM) of static random access memory (SRAM) based Field Programmable Gate Array (FPGA) devices. Traditional multi-bit ECCs have large overhead and require complex decoding circuits to correct adjacent multi-bit errors. In this work, we propose a simple multi-bit ECC which uses the Secure Hash Algorithm for error detection and a parity-based two-dimensional erasure product code for error correction. Existing error mitigation techniques perform error correction in the CM without considering the criticality or the execution period of the tasks allocated to different portions of the CM. In most cases, error correction is not performed at the right instant, which sometimes either suspends normal system operation or wastes hardware resources on less critical tasks. In this paper, we advocate a dynamic priority-based hardware scheduling algorithm which chooses the tasks for error correction based on their area, execution period and criticality. The proposed method has been validated in terms of redundant-bit overhead, error correction time and system reliability.
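
    A rough illustrative sketch of the detection/correction split described above, assuming an 8x8 frame geometry: SHA-256 stands in for the Secure Hash Algorithm check, and plain two-dimensional row/column parity locates and flips a single upset bit. This is not the paper's actual code construction, which targets adjacent multi-bit upsets.

```python
# A minimal sketch (not the paper's actual construction): SHA-256 stands in
# for the Secure Hash Algorithm detection step, and plain 2-D row/column
# parity locates and flips a single upset bit. Frame geometry is assumed.
import hashlib
import numpy as np

ROWS, COLS = 8, 8  # assumed configuration-frame geometry

def frame_digest(bits):
    return hashlib.sha256(np.packbits(bits).tobytes()).digest()

def encode(bits):
    """Return (row parities, column parities, digest) for a 0/1 bit matrix."""
    return bits.sum(axis=1) % 2, bits.sum(axis=0) % 2, frame_digest(bits)

def scrub(bits, row_par, col_par, digest):
    """Detect corruption via the digest; fix one upset via parity intersection."""
    if frame_digest(bits) == digest:
        return False                         # frame is clean
    bad_rows = np.flatnonzero(bits.sum(axis=1) % 2 != row_par)
    bad_cols = np.flatnonzero(bits.sum(axis=0) % 2 != col_par)
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        bits[bad_rows[0], bad_cols[0]] ^= 1  # flip the intersecting bit
    return True

# demo: inject a single upset, then scrub it
frame = np.random.randint(0, 2, (ROWS, COLS), dtype=np.uint8)
rp, cp, dg = encode(frame)
frame[3, 5] ^= 1
assert scrub(frame, rp, cp, dg) and frame_digest(frame) == dg
```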

    Evaluating Built-in ECC of FPGA on-chip Memories for the Mitigation of Undervolting Faults

    Voltage underscaling below the nominal level is an effective solution for improving energy efficiency in digital circuits, e.g., Field Programmable Gate Arrays (FPGAs). However, further undervolting below a safe voltage level, without accompanying frequency scaling, leads to timing-related faults, potentially undermining the energy savings. Through experimental voltage underscaling studies on commercial FPGAs, we observed that the rate of these faults increases exponentially for on-chip memories, or Block RAMs (BRAMs). To mitigate these faults, we evaluated the efficiency of the built-in Error-Correction Code (ECC) and observed that more than 90% of the faults are correctable and a further 7% are detectable (but not correctable). This efficiency is the result of the single-bit nature of these faults, which are effectively covered by the Single-Error Correction, Double-Error Detection (SECDED) design of the built-in ECC. Finally, motivated by these experimental observations, we evaluated an FPGA-based Neural Network (NN) accelerator under low-voltage operation, leveraging the built-in ECC to mitigate undervolting faults and thus prevent significant NN accuracy loss. In consequence, we achieve 40% BRAM power savings through undervolting below the minimum safe voltage level with negligible NN accuracy loss, thanks to the substantial fault coverage of the built-in ECC.
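
    For context on why the single-bit faults reported above are correctable while double-bit faults are only detectable, the sketch below implements a generic extended-Hamming SECDED code over one byte. It is a textbook construction, not the layout of the FPGA's actual built-in BRAM ECC.

```python
# Generic extended-Hamming SECDED sketch over one byte (13 bits total):
# word[0] is an overall parity bit, word[1..12] form a Hamming(12,8) code.
# This is a textbook construction, not the vendor's actual BRAM ECC layout.

DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]  # non-power-of-two positions carry data
CHECK_POS = [1, 2, 4, 8]                # power-of-two positions carry checks

def encode(byte):
    """Encode 8 data bits into a 13-bit SECDED codeword."""
    word = [0] * 13
    for i, pos in enumerate(DATA_POS):
        word[pos] = (byte >> i) & 1
    for c in CHECK_POS:
        # even parity over every position whose 1-based index has bit c set
        word[c] = sum(word[p] for p in range(1, 13) if p & c) % 2
    word[0] = sum(word[1:]) % 2         # overall parity for double-error detection
    return word

def decode(word):
    """Return (byte, status); status is 'ok', 'corrected', or 'detected'."""
    word = word[:]
    syndrome = 0
    for p in range(1, 13):
        if word[p]:
            syndrome ^= p               # XOR of 1-based positions of set bits
    overall = sum(word) % 2
    status = "ok"
    if syndrome and overall:            # one flip: syndrome points at it
        word[syndrome] ^= 1
        status = "corrected"
    elif syndrome and not overall:      # two flips: detectable, not correctable
        status = "detected"
    elif not syndrome and overall:      # the overall parity bit itself flipped
        status = "corrected"
    byte = sum(word[pos] << i for i, pos in enumerate(DATA_POS))
    return byte, status

# demo: one flip is corrected, a second flip in the same word is only detected
w = encode(0xA5)
w[6] ^= 1
assert decode(w) == (0xA5, "corrected")
w[9] ^= 1
assert decode(w)[1] == "detected"
```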

    Efficient Decoder for Optical Transport Networks Achieving Near Capacity Performance

    Today’s optical transport networks (OTNs) support a plethora of services such as video streaming, cloud computing, social networking and many more. To make such a wide assortment of services possible, a tremendous amount of data needs to be carried over the internet backbone supported by these optical transport networks. In order to cope with this increase in traffic, the data rate on OTNs has increased significantly. Product codes (PC) are a class of codes that provide good coding gain at reasonable decoding complexity and, hence, have been a popular choice for OTNs in recent times. The key goal of this thesis is to implement a decoder for a product code on a Virtex-7 Field Programmable Gate Array (FPGA). The product code of choice for this project is based on a (1023,993) BCH code as the component code. The conventional decoder for BCH codes has a computationally expensive step for finding the roots of the error locator polynomial. The BCH decoder implemented as part of this project is optimized to speed up the decoding process while also simplifying the hardware complexity of the design. The implementation is parallelized and pipelined to achieve high throughput. This provides a hardware platform to evaluate the performance of product codes at low bit error rates, which is infeasible using software simulations.
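
    The row/column iteration that makes product-code decoders amenable to parallel, pipelined hardware can be sketched compactly. In the sketch below, a toy Hamming(7,4) component code stands in for the (1023,993) BCH code purely to keep the example short and runnable; the BCH syndrome computation and the optimized error-locator root finding mentioned above are not modeled.

```python
# Toy product-code decoder illustrating the iterative row/column structure.
# Hamming(7,4) stands in for the (1023,993) BCH component code purely for
# brevity; real decoders pipeline BCH syndrome + error-locator stages instead.
import numpy as np

DATA_IDX = [2, 4, 5, 6]                 # data positions inside a Hamming(7,4) word

def ham_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    w = np.zeros(7, dtype=np.uint8)
    w[DATA_IDX] = d
    w[0] = (w[2] + w[4] + w[6]) % 2     # parity over 1-based positions 1,3,5,7
    w[1] = (w[2] + w[5] + w[6]) % 2     # parity over positions 2,3,6,7
    w[3] = (w[4] + w[5] + w[6]) % 2     # parity over positions 4,5,6,7
    return w

def ham_correct(w):
    """Correct at most one bit error in place via the 3-bit syndrome."""
    s = 0
    for pos in range(7):
        if w[pos]:
            s ^= pos + 1                # XOR of 1-based positions of set bits
    if s:
        w[s - 1] ^= 1

def pc_encode(data):
    """Product-code encode a 4x4 data block: rows first, then columns -> 7x7."""
    rows = np.array([ham_encode(r) for r in data])       # 4x7 row codewords
    return np.array([ham_encode(c) for c in rows.T]).T   # 7x7 full block

def pc_decode(block, iters=2):
    """Alternate row and column component decoding, as the hardware loop does."""
    for _ in range(iters):
        for r in block:
            ham_correct(r)
        for c in block.T:
            ham_correct(c)
    return block[np.ix_(DATA_IDX, DATA_IDX)]             # extract the 4x4 data

# demo: a 3-bit error pattern including a row with two errors, which the row
# code alone cannot fix; the column pass cleans up after the row pass
D = np.random.randint(0, 2, (4, 4), dtype=np.uint8)
B = pc_encode(D)
for r, c in [(1, 1), (1, 4), (5, 4)]:
    B[r, c] ^= 1
assert np.array_equal(pc_decode(B), D)
```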

    Lessons learned from the design of a mobile multimedia system in the Moby Dick project

    Recent advances in wireless networking technology and the exponential development of semiconductor technology have engendered a new paradigm of computing, called personal mobile computing or ubiquitous computing. This offers a vision of the future with a much richer and more exciting set of architecture research challenges than extrapolations of current desktop architectures. In particular, these devices will have limited battery resources, will handle diverse data types, and will operate in environments that are insecure, dynamic, and which vary significantly in time and location. The research performed in the MOBY DICK project is about designing such a mobile multimedia system. This paper discusses the approach taken in the MOBY DICK project to solve some of these problems, discusses its contributions, and assesses what was learned from the project.

    The S2 VLBI Correlator: A Correlator for Space VLBI and Geodetic Signal Processing

    We describe the design of a correlator system for ground- and space-based VLBI. The correlator contains unique signal-processing functions: flexible LO frequency switching for bandwidth synthesis; 1 ms dump intervals; multi-rate digital signal-processing techniques to allow correlation of signals at different sample rates; and a digital filter for very high resolution cross-power spectra. It also includes autocorrelation, tone extraction, pulsar gating, and signal-statistics accumulation.
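
    As a rough software analogue of the correlator's core operation, the FX-style sketch below channelizes two station streams with an FFT, forms the per-segment cross-power spectrum conj(X)*Y, and accumulates over a short dump interval. The channel count, segments per dump, and white-noise test signals are assumptions for the demo; the S2-specific LO switching, multi-rate resampling, and pulsar gating are not shown.

```python
# FX-style cross-power sketch: FFT channelization, conj(X)*Y per segment,
# accumulation over one dump interval. Sizes and signals are assumed for the
# demo; LO switching, multi-rate resampling, and gating are not modeled.
import numpy as np

FFT_LEN = 256            # spectral channels (assumed)
SEGMENTS_PER_DUMP = 40   # FFT segments accumulated per dump (assumed)

def cross_power_dump(x, y):
    """Average the cross-power spectrum conj(X)*Y over one dump interval."""
    acc = np.zeros(FFT_LEN, dtype=np.complex128)
    for s in range(SEGMENTS_PER_DUMP):
        seg = slice(s * FFT_LEN, (s + 1) * FFT_LEN)
        acc += np.conj(np.fft.fft(x[seg])) * np.fft.fft(y[seg])
    return acc / SEGMENTS_PER_DUMP

# demo: a 3-sample relative delay appears as a linear phase slope across channels
rng = np.random.default_rng(0)
common = rng.standard_normal(FFT_LEN * SEGMENTS_PER_DUMP + 3)
x, y = common[3:], common[:-3]          # y lags x by 3 samples
spec = cross_power_dump(x, y)
delay = np.angle(spec[1]) / (2 * np.pi / FFT_LEN)
print(round(delay))                     # -3: y lags x by three samples
```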

    Spectrometer Scan Mechanism for Encountering Jovian Orbit Trojan Asteroids

    This paper describes the design, testing, and lessons learned during the development of the Lucy Ralph (L'Ralph) Scan Mirror System (SMS), composed of the Scan Mirror Mechanism (SMM), Differential Position Sensor System (DPSS), and Mechanism Control Electronics (MCE). The L'Ralph SMS evolved from the Advanced Topographic Laser Altimeter System (ATLAS) Beam Steering Mechanism (BSM), so design comparisons will be made. Lucy is scheduled to launch in October 2021, embarking upon a 12-year mission to make close-range encounters in 2025 and 2033 with seven Trojan asteroids and one main-belt asteroid within the Jovian orbit. The L'Ralph instrument is based upon the New Horizons Ralph instrument, a panchromatic and color visible imager and infrared spectroscopic mapper that slewed the spacecraft for imaging. The L'Ralph SMM provides scanning for imaging, eliminating the need to slew the spacecraft. One purpose of this paper is to explain the reasoning behind some of the design features as compared with the ATLAS BSM; we identify similarities and differences between the ATLAS BSM and the L'Ralph SMM that resulted from the latter's unique requirements. Another purpose is to focus upon lessons learned during the development of the L'Ralph SMM and its MCE, covering both mechanism engineering issues and solutions and Ground Support Equipment (GSE) issues and solutions encountered during the requirements validation process. At the time of this writing, the L'Ralph SMM has been flight-qualified and delivered to the project.