
    FPGA ARCHITECTURE AND VERIFICATION OF BUILT IN SELF-TEST (BIST) FOR 32-BIT ADDER/SUBTRACTER USING DE0-NANO FPGA AND ANALOG DISCOVERY 2 HARDWARE

    The integrated circuit (IC) is an integral part of everyday modern technology, and its application is very attractive to hardware and software design engineers because of its versatility, integration, power consumption, cost, and board-area reduction. ICs are available in various types such as Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), System on Chip (SoC) architectures, Digital Signal Processors (DSP), microcontrollers (μC), and many more. With technology demand focused on faster, lower-power, more efficient IC applications, design engineers face tremendous challenges in developing and testing integrated circuits that guarantee functionality, high fault coverage, and reliability, as transistor technology shrinks to the point where manufacturing defects affect yield, which in turn increases the cost of the part. The competitive IC market is pressuring manufacturers to develop and market ICs with a relatively quick turnaround, which in turn requires design and verification engineers to develop integrated self-test structures that ensure a fault-free, high-quality product is delivered to the market. 70-80% of IC design effort is spent on verification and testing to ensure high quality and reliability for the end user. To test complex and sophisticated IC designs, verification engineers must produce laborious and costly test fixtures, which affects the cost of the part on the competitive market. To avoid passing increased part cost due to yield and test time on to the end user, and to keep up with the competitive market, many IC design engineers are moving away from complex external test fixtures and are focusing on integrating Built-In Self-Test (BIST) or Design for Test (DFT) techniques into ICs, which reduces time to market while still guaranteeing high coverage for the product. The BIST architecture, as well as the application of the IC, must be understood before developing the IC. The architecture of the FPGA is elaborated in this paper, followed by several BIST techniques and applications of those techniques to FPGAs, SoCs, and analog-to-digital (ADC) or digital-to-analog (DAC) converters integrated on an IC. The paper concludes with verification of a BIST for a 32-bit adder/subtracter designed in Quartus II software, using the Analog Discovery 2 module as stimulus and the DE0-NANO FPGA board for verification.
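    As a hedged illustration of the BIST concept described above (a sketch only, not the paper's actual Quartus II design; the module names, LFSR taps, and golden signature are assumptions), the following Verilog wraps a 32-bit adder/subtracter with an LFSR-based test pattern generator and a MISR-style signature register, and compares the final signature against a value obtained beforehand from a fault-free simulation:

```verilog
// Minimal BIST sketch for a 32-bit adder/subtracter (illustrative only).
// An LFSR generates pseudo-random operands, a MISR compresses the results,
// and the final signature is compared with a golden value taken from a
// fault-free simulation run.
module adder_subtracter_bist #(
    parameter [31:0] GOLDEN_SIG  = 32'hDEAD_BEEF, // placeholder golden signature
    parameter        NUM_VECTORS = 1024
)(
    input  wire clk,
    input  wire rst,
    input  wire start,
    output reg  done,
    output reg  pass
);
    reg  [63:0] lfsr;                              // pseudo-random operand source
    reg  [31:0] misr;                              // signature register
    reg  [31:0] count;
    wire [31:0] a      = lfsr[63:32];
    wire [31:0] b      = lfsr[31:0];
    wire        sub    = lfsr[0];                  // alternate add/subtract
    wire [31:0] result = sub ? (a - b) : (a + b);  // circuit under test

    // 64-bit maximal-length LFSR feedback (taps 64, 63, 61, 60)
    wire fb = lfsr[63] ^ lfsr[62] ^ lfsr[60] ^ lfsr[59];

    always @(posedge clk) begin
        if (rst) begin
            lfsr  <= 64'h1;                        // non-zero seed
            misr  <= 32'h0;
            count <= 32'd0;
            done  <= 1'b0;
            pass  <= 1'b0;
        end else if (start && !done) begin
            lfsr  <= {lfsr[62:0], fb};
            // simple MISR: rotate-and-xor compression of each result
            misr  <= {misr[30:0], misr[31]} ^ result;
            count <= count + 1;
            if (count == NUM_VECTORS - 1) begin
                done <= 1'b1;
                pass <= (({misr[30:0], misr[31]} ^ result) == GOLDEN_SIG);
            end
        end
    end
endmodule
```

    In a flow like the one described, the pass and done flags would be brought out to pins or registers so that the DE0-NANO board can report the result after the Analog Discovery 2 supplies the clock and start stimuli.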

    Improving reconfigurable systems reliability by combining periodical test and redundancy techniques: a case study

    This paper revisits some traditional techniques from the fields of fault tolerance and digital circuit testing and introduces them to the field of reconfigurable computer systems. The target area is that of on-board spacecraft electronics, as this class of application is a good candidate for the use of reconfigurable computing technology. Fault-tolerant strategies are used so that the system can adapt itself to the severe conditions found in space. In addition, the paper describes some problems and possible solutions for the use of reconfigurable components, based on programmable logic, in space applications.
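    The redundancy half of such a combined strategy is commonly realized as triple modular redundancy (TMR). A minimal sketch, assuming a bitwise 2-of-3 voter (illustrative Verilog, not taken from the paper):

```verilog
// Illustrative bitwise majority voter for triple modular redundancy (TMR).
// Three replicas of a module drive in0..in2; the voter masks a fault in any
// single replica, and the disagree flag can trigger a periodic test or a
// reconfiguration of the faulty copy.
module tmr_voter #(
    parameter WIDTH = 32
)(
    input  wire [WIDTH-1:0] in0,
    input  wire [WIDTH-1:0] in1,
    input  wire [WIDTH-1:0] in2,
    output wire [WIDTH-1:0] voted,
    output wire             disagree
);
    assign voted    = (in0 & in1) | (in1 & in2) | (in0 & in2); // 2-of-3 majority per bit
    assign disagree = (in0 != in1) || (in1 != in2);            // any mismatch among replicas
endmodule
```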

    Dynamic Partial Reconfiguration for Dependable Systems

    Moore’s law has served as a goal and motivation for consumer electronics manufacturers over the last decades. The resulting increase in the processing power of consumer electronics devices has been achieved mainly through cost reduction and technology shrinking. However, shrinking physical geometries mainly affects the devices’ dependability, making them more sensitive to soft errors such as Single Event Transients (SET) or Single Event Upsets (SEU) and to hard (permanent) faults, e.g. due to aging effects. Accordingly, safety-critical systems often rely on the adoption of old technology nodes, even though they introduce longer design times w.r.t. consumer electronics. In fact, functional safety requirements are increasingly pushing industry to develop innovative methodologies for designing highly dependable systems with the required diagnostic coverage. On the other hand, the adoption of commercial off-the-shelf (COTS) devices began to be considered for safety-related systems due to real-time requirements, the need to implement computationally hungry algorithms, and lower design costs. In this field the FPGA market share has constantly increased, thanks to FPGAs’ flexibility and low non-recurring engineering costs, which make them suitable for a set of safety-critical applications with low production volumes. The work presented in this thesis tries to face new dependability issues in modern reconfigurable systems, exploiting one of their special features, namely Dynamic Partial Reconfiguration, to take proper counteractions with low impact on performance.

    Hardware, Software and Data Analysis Techniques for SRAM-Based Field Programmable Gate Array Circuits

    This work presents a built, tested, and demonstrated test structure that is low-cost, flexible, and re-usable for robust radiation experimentation, primarily to investigate memory, in this case SRAMs and SRAM-based FPGAs. The space environment can induce many kinds of failures due to radiation effects. These failures result in a loss of money, time, intelligence, and information. In order to evaluate technologies for potential failures, a detailed test methodology and an associated structure are required. In this solution, an FPGA board was used as the controller platform, with multiple VHDL circuit controllers and data collection and reporting modules. The structure was demonstrated by programming an SRAM-based FPGA board as the device under test (DUT) with various types of adders, counters, and RAM modules. The controllers, hardware, and data collection operations were tested and validated using gamma radiation from a Co-60 source at the Ohio State University Nuclear Reactor to irradiate the DUT. The test structure is easily modified to allow for a broad range of experiments on the same DUT. In addition, this structure is easily adaptable to other memory types, such as DRAM, Flash RAM, and MRAM. These additions are discussed further in this document. The system fits in a backpack and costs less than $1000.
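    As a sketch of the kind of data-collection logic such a controller platform might contain (hypothetical Verilog rather than the thesis's actual VHDL modules; all names are illustrative), the controller can compare the DUT response against a locally computed golden reference and count mismatches during irradiation:

```verilog
// Hypothetical upset counter for a radiation test controller. The controller
// drives the same stimulus to the device under test (DUT) and to a local
// golden reference; every cycle in which they differ is logged as an upset.
module upset_counter #(
    parameter WIDTH = 16
)(
    input  wire             clk,
    input  wire             rst,
    input  wire [WIDTH-1:0] dut_out,     // response sampled from the DUT board
    input  wire [WIDTH-1:0] golden_out,  // reference computed in the controller FPGA
    output reg  [31:0]      upsets,
    output reg              mismatch     // raised for one cycle per disagreement
);
    always @(posedge clk) begin
        if (rst) begin
            upsets   <= 32'd0;
            mismatch <= 1'b0;
        end else begin
            mismatch <= (dut_out != golden_out);
            if (dut_out != golden_out)
                upsets <= upsets + 1;
        end
    end
endmodule
```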

    Degradation in FPGAs: Monitoring, Modeling and Mitigation

    This dissertation targets transistor aging degradation as well as the associated thermal challenges in FPGAs (since there is an exponential relation between aging and chip temperature). The main objectives are to perform experimentation, analysis, and device-level model abstraction for modeling the degradation in FPGAs, then to monitor the FPGA to keep track of aging rates, and ultimately to propose an aging-aware FPGA design flow to mitigate the aging.
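    The exponential relation mentioned here is commonly captured by an Arrhenius-type acceleration factor; the form below is the standard reliability model, given as a hedged illustration rather than the dissertation's exact formulation:

```latex
% Arrhenius-type acceleration factor: relative aging rate at an elevated
% junction temperature T_2 versus a reference temperature T_1 (in kelvin),
% with E_a the activation energy and k Boltzmann's constant.
\[
  AF \;=\; \exp\!\left( \frac{E_a}{k}\left( \frac{1}{T_1} - \frac{1}{T_2} \right) \right)
\]
```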

    Method for Testing Field Programmable Gate Arrays

    A method of testing field programmable gate arrays (FPGAs) includes the step of configuring the programmable logic blocks of the FPGA to complete a built-in self-test. This is followed by the steps of initiating the built-in self-test, generating test patterns with the programmable logic blocks, and analyzing the resulting response to produce a pass/fail indication with the programmable logic blocks. More specifically, the configuring step includes establishing a first group of programmable logic blocks as test pattern generators and output response analyzers and a second group of programmable logic blocks as blocks under test. The blocks under test are then repeatedly reconfigured in order to completely test each block under test in all possible modes of operation. The programming of the first and second groups of programmable logic blocks is then reversed, and the testing of each new block under test is then completed.
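    As a hedged sketch of this comparison-based scheme (illustrative Verilog, not the patented implementation; widths, taps, and module names are assumptions), a shared LFSR test pattern generator can drive two identically configured blocks under test while an output response analyzer latches any disagreement as the fail indication:

```verilog
// Illustrative comparison-based ORA for FPGA logic-block BIST. Two blocks
// under test (BUTs) receive identical patterns from a shared TPG; because
// they are configured identically, any output mismatch indicates a fault,
// and the sticky fail bit forms the pass/fail indication.
module compare_ora #(
    parameter WIDTH = 4
)(
    input  wire             clk,
    input  wire             rst,
    input  wire [WIDTH-1:0] but0_out,  // outputs of the two blocks under test
    input  wire [WIDTH-1:0] but1_out,
    output reg              fail       // sticky pass/fail indication (1 = fail)
);
    always @(posedge clk) begin
        if (rst)
            fail <= 1'b0;
        else if (but0_out != but1_out)
            fail <= 1'b1;              // remember any mismatch until reset
    end
endmodule

// Shared test pattern generator: a small maximal-length LFSR
// (x^8 + x^6 + x^5 + x^4 + 1) exercising the BUT inputs.
module lfsr_tpg (
    input  wire       clk,
    input  wire       rst,
    output reg  [7:0] pattern
);
    always @(posedge clk) begin
        if (rst)
            pattern <= 8'h1;           // non-zero seed
        else
            pattern <= {pattern[6:0],
                        pattern[7] ^ pattern[5] ^ pattern[4] ^ pattern[3]};
    end
endmodule
```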

    A Scalable Correlator Architecture Based on Modular FPGA Hardware, Reuseable Gateware, and Data Packetization

    A new generation of radio telescopes is achieving unprecedented levels of sensitivity and resolution, as well as increased agility and field of view, by employing high-performance digital signal processing hardware to phase and correlate large numbers of antennas. The computational demands of these imaging systems scale in proportion to BMN^2, where B is the signal bandwidth, M is the number of independent beams, and N is the number of antennas. The specifications of many new arrays lead to demands in excess of tens of PetaOps per second. To meet this challenge, we have developed a general purpose correlator architecture using standard 10-Gbit Ethernet switches to pass data between flexible hardware modules containing Field Programmable Gate Array (FPGA) chips. These chips are programmed using open-source signal processing libraries we have developed to be flexible, scalable, and chip-independent. This work reduces the time and cost of implementing a wide range of signal processing systems, with correlators foremost among them, and facilitates upgrading to new generations of processing technology. We present several correlator deployments, including a 16-antenna, 200-MHz bandwidth, 4-bit, full Stokes parameter application deployed on the Precision Array for Probing the Epoch of Reionization. Accepted to Publications of the Astronomical Society of the Pacific.
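    To make the BMN^2 scaling concrete, a rough back-of-the-envelope example with hypothetical numbers (not the specification of any array mentioned above):

```latex
% Correlator cost scales as $B M N^2$. For a hypothetical single-beam array
% (M = 1) with N = 512 antennas and B = 1 GHz of processed bandwidth:
\[
  B M N^{2} \;=\; (10^{9}\,\mathrm{s^{-1}}) \times 1 \times 512^{2}
            \;\approx\; 2.6 \times 10^{14}\ \text{correlations per second}.
\]
% Each correlation takes several real operations, so the total quickly reaches
% hundreds of tera- to peta-operations per second, and doubling N quadruples it.
```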

    A new architecture for single-event upset detection & reconfiguration of SRAM-based FPGAs

    Field Programmable Gate Arrays (FPGAs) are used in a variety of applications, ranging from consumer electronics to devices in spacecraft, because of their flexibility in achieving requirements such as low cost, high performance, and fast turnaround. SRAM-based FPGAs can experience single bit flips in the configuration memory due to high-energy neutrons or alpha particles hitting critical nodes in the SRAM cells and transferring enough energy to effect the change. High-energy particles can be emitted by cosmic radiation or by traces of radioactive elements in device packaging. The result can range from unwanted functional or data modification and data loss in the system to damage of the cell where the charged particle makes impact. This phenomenon is known as a Single Event Upset (SEU) and makes fault tolerance a critical requirement in FPGA design. This research proposes a shift in architecture from current SRAM-based FPGAs such as the Xilinx Virtex. The proposed architecture includes inherent SEU detection through parity checking of the configuration memory. The inherent SEU detection sets a syndrome flag when an odd number of bit flips occurs within a data frame of the configuration memory. To correct a fault, the affected data frame of the FPGA is partially reconfigured. Existing and proposed solutions include Triple Modular Redundancy (TMR) systems; readback and compare of the configuration memory; and periodically reprogramming the entire configuration memory, also known as scrubbing. The advantages afforded by the proposed architecture over existing solutions include lower error detection and correction latency than the readback method and lower area and power overhead than TMR.
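    As an illustration of the parity-based detection idea (a minimal sketch, not the proposed architecture's actual circuitry; the frame width and signal names are assumptions), a per-frame check recomputes the parity of the frame and raises a syndrome flag whenever it disagrees with the parity bit stored at configuration time, i.e. whenever an odd number of bits in the frame have flipped:

```verilog
// Illustrative per-frame parity check for SEU detection in configuration
// memory. stored_parity holds the XOR of all frame bits as written at
// configuration time; recomputing the same XOR over the (possibly corrupted)
// frame and comparing detects any odd number of bit flips and raises a
// syndrome flag, which can then trigger partial reconfiguration of that frame.
module frame_parity_check #(
    parameter FRAME_BITS = 1312                 // assumed frame width, device dependent
)(
    input  wire                  clk,
    input  wire                  check_en,      // pulse while a frame is being scanned
    input  wire [FRAME_BITS-1:0] frame_data,    // current contents of the frame
    input  wire                  stored_parity, // parity bit written at configuration time
    output reg                   syndrome       // 1 = odd number of bit flips detected
);
    wire current_parity = ^frame_data;          // XOR-reduce over the whole frame

    always @(posedge clk) begin
        if (check_en)
            syndrome <= (current_parity != stored_parity);
    end
endmodule
```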

    Embedded processors on FPGA: Hard-core vs Soft-core

    Field Programmable Gate Arrays (FPGAs) are integrated circuits (ICs) that can be reprogrammed by the consumer after manufacturing. They are based on a matrix of configurable logic blocks connected via programmable interconnects, which enables the designer to quickly recreate hardware circuits. In the past, FPGAs were primarily used for prototyping and debugging purposes. However, with their increased popularity, many commercial products now incorporate FPGAs. In the late 1990s, FPGA vendors introduced System-on-Chip (SoC) devices that housed one or more hard-core processors and an FPGA fabric on a single IC to allow for more complex designs involving hardware and software co-integration. While this approach provides the advantage of running a design at much higher speeds, it does not provide the flexibility to modify the processor to suit the application. Because of this, many FPGA vendors provide the alternative of soft-core processors that are configured from logic resources inside the FPGA. While this approach provides the advantage of flexibility, soft cores run at about 30% to 50% of the speed of hard-core processors. Thus each approach has its own advantages and disadvantages. In this thesis, an application was developed to run on two different FPGA platforms. The first platform, the Digilent Zybo FPGA board, houses an ARM Cortex hard core, while the other, the Digilent Nexys-4 board, implemented an ARM Cortex soft core using FPGA resources. IP blocks were designed in the hardware description languages Verilog and VHDL to interface with the processor and its supported bus architecture (AXI/AHB). The application was written in C and assembly language and implemented the function of a digital oscilloscope: it used the ADC ports on the FPGA board to continuously read analog signals and plotted them as a dynamic waveform on a VGA monitor. Xilinx Vivado was the primary IDE used for HDL design, synthesis, simulation, and implementation on both platforms. Reports generated from Vivado as well as the run-time results were used to compare the two platforms and identify their strengths and weaknesses. The methodology for choosing one board over the other is also discussed.
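    As a hedged sketch of the kind of processor-facing IP block mentioned above (illustrative Verilog showing only the AXI4-Lite read path; the thesis's actual AXI/AHB blocks are not reproduced, and all register names are hypothetical), a peripheral can expose an ADC sample and a status word as memory-mapped read-only registers:

```verilog
// Hypothetical minimal AXI4-Lite read path exposing two read-only registers
// (an ADC sample and a status word) to the processor. Write channels and
// error responses are omitted for brevity.
module axi_lite_adc_regs (
    input  wire        s_axi_aclk,
    input  wire        s_axi_aresetn,
    // read address channel
    input  wire [3:0]  s_axi_araddr,
    input  wire        s_axi_arvalid,
    output reg         s_axi_arready,
    // read data channel
    output reg  [31:0] s_axi_rdata,
    output wire [1:0]  s_axi_rresp,
    output reg         s_axi_rvalid,
    input  wire        s_axi_rready,
    // values supplied by the rest of the design
    input  wire [11:0] adc_sample,   // latest sample from the ADC front end
    input  wire [31:0] status        // e.g. sample-ready and error flags
);
    assign s_axi_rresp = 2'b00;      // always respond OKAY in this sketch

    always @(posedge s_axi_aclk) begin
        if (!s_axi_aresetn) begin
            s_axi_arready <= 1'b0;
            s_axi_rvalid  <= 1'b0;
            s_axi_rdata   <= 32'd0;
        end else begin
            // accept a new read address only while no read data is pending
            s_axi_arready <= !s_axi_rvalid && !(s_axi_arvalid && s_axi_arready);
            if (s_axi_arvalid && s_axi_arready) begin
                // address handshake completed: capture the selected register
                case (s_axi_araddr[3:2])         // word-aligned register select
                    2'd0:    s_axi_rdata <= {20'd0, adc_sample};
                    default: s_axi_rdata <= status;
                endcase
                s_axi_rvalid <= 1'b1;            // present data the next cycle
            end else if (s_axi_rvalid && s_axi_rready) begin
                s_axi_rvalid <= 1'b0;            // master has taken the data
            end
        end
    end
endmodule
```

    A complete peripheral would also implement the AW, W, and B write channels; the C application would then simply read the mapped addresses to fetch samples for the VGA plot.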