
    An On-line BIST RAM Architecture with Self Repair Capabilities

    The emerging field of self-repair computing is expected to have a major impact on deployable systems for space missions and defense applications, where high reliability, availability, and serviceability are needed. In this context, RAMs (random access memories) are among the most critical components. This paper proposes a built-in self-repair (BISR) approach for RAM cores. The proposed design, introducing minimal and technology-dependent overheads, can detect and repair a wide range of memory faults, including stuck-at, coupling, and address faults. The test and repair capabilities are used on-line and are completely transparent to the external user, who can use the memory without any change in the memory-access protocol. Using a fault-injection environment that can emulate the occurrence of faults inside the module, the effectiveness of the proposed architecture was verified in terms of both fault-detection and repair capability. Memories of various sizes were considered to evaluate the area overhead introduced by the proposed architecture.
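    The abstract does not give the test algorithm or repair granularity, so the following is only a minimal behavioral sketch of the idea: a transparent, March-like scrub of one word that preserves the user's data, detects a stuck-at cell (emulated here with an injected fault map, in the spirit of the fault-injection environment mentioned above), and remaps the faulty address to a spare row. All class and method names are illustrative, not the authors' design.

    ```python
    # Behavioral sketch of transparent built-in self-repair for a word-addressed RAM.
    class SelfRepairingRAM:
        def __init__(self, size, spare_rows=4, stuck_at=None):
            self.mem = [0] * size
            self.spares = [0] * spare_rows
            self.remap = {}                     # faulty address -> spare index
            self.free = list(range(spare_rows))
            self.stuck_at = stuck_at or {}      # injected faults: addr -> forced value

        def _cell_write(self, addr, value):
            self.mem[addr] = self.stuck_at.get(addr, value)   # emulate a stuck-at cell

        def _cell_read(self, addr):
            return self.stuck_at.get(addr, self.mem[addr])

        def write(self, addr, value):
            if addr in self.remap:
                self.spares[self.remap[addr]] = value         # repaired word lives in a spare
            else:
                self._cell_write(addr, value)

        def read(self, addr):
            if addr in self.remap:
                return self.spares[self.remap[addr]]
            return self._cell_read(addr)

        def scrub(self, addr):
            """Transparent March-like element: save, test with 0/1 patterns, restore."""
            saved = self.read(addr)
            faulty = False
            for pattern in (0x00, 0xFF):
                self._cell_write(addr, pattern)
                if self._cell_read(addr) != pattern:
                    faulty = True
            if faulty and addr not in self.remap and self.free:
                self.remap[addr] = self.free.pop()            # repair: remap word to a spare
            self.write(addr, saved)                           # restore the user's data
            return faulty

    ram = SelfRepairingRAM(1024, stuck_at={42: 0x00})   # inject a stuck-at-0 word
    ram.scrub(42)                                       # detects the fault, remaps address 42
    ram.write(42, 0x5A)
    assert ram.read(42) == 0x5A                         # user sees a working memory
    ```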

    NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation, for example in the degree of parallelism employed, to deliver optimized performance. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models, such as the integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and achieves real-time performance for 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times over an 8-core processor, or 2.83 times over GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
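    As a concrete illustration of the kind of PyNN network description the abstract says the platform compiles, here is a generic PyNN script with Izhikevich neurons driven by a Poisson source and an STDP projection. It is written against a standard PyNN software backend; NeuroFlow's own backend module is not named in the abstract, so none is assumed here, and the parameter values are illustrative only.

    ```python
    # Generic PyNN network description: Izhikevich neurons + STDP learning.
    import pyNN.nest as sim   # any standard PyNN backend; the platform's backend would go here

    sim.setup(timestep=1.0)

    stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
    neurons = sim.Population(1000, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0))

    stdp = sim.STDPMechanism(
        timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                            A_plus=0.01, A_minus=0.012),
        weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.1),
        weight=0.02, delay=1.0)

    sim.Projection(stimulus, neurons, sim.FixedProbabilityConnector(0.1),
                   synapse_type=stdp)

    neurons.record('spikes')
    sim.run(1000.0)                        # 1 second of biological time
    spikes = neurons.get_data('spikes')    # retrieve recorded spike trains
    sim.end()
    ```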

    Trojans in Early Design Steps—An Emerging Threat

    Hardware Trojans inserted by malicious foundries during integrated circuit manufacturing have received substantial attention in recent years. In this paper, we focus on a different type of hardware Trojan threat: attacks in the early steps of the design process. We show that third-party intellectual property cores and CAD tools constitute realistic attack surfaces and that even the system specification can be targeted by adversaries. We discuss the devastating damage potential of such attacks, the applicable countermeasures against them, and their deficiencies.

    Avoiding core's DUE & SDC via acoustic wave detectors and tailored error containment and recovery

    The trend of downsizing transistors and scaling the operating voltage has made the processor chip more sensitive to radiation phenomena, making soft errors an important challenge. New reliability techniques for handling soft errors in the logic and memories that allow meeting the desired failures-in-time (FIT) target are key to keep harnessing the benefits of Moore's law. The failure to scale the soft error rate caused by particle strikes may soon limit the total number of cores that can run at the same time. This paper proposes a lightweight and scalable architecture to eliminate silent data corruption (SDC) errors and detected unrecoverable errors (DUE) of a core. The architecture uses acoustic wave detectors for error detection. We propose to recover by confining the errors in the cache hierarchy, allowing us to deal with the relatively long detection latencies. Our results show that the proposed mechanism protects the whole core (logic, latches, and memory arrays) while incurring a performance overhead as low as 0.60%.
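    The containment idea described above can be illustrated with a toy model: dirty state produced within the detectors' detection-latency window is kept unverified inside the cache hierarchy; if a detector fires before the window expires, that state is squashed and execution rolls back, otherwise it is committed. The window length, class, and field names below are hypothetical; the paper's actual checkpointing and cache-level details are not given in the abstract.

    ```python
    # Toy model of error containment behind a detection-latency window.
    DETECTION_LATENCY = 1000   # cycles an acoustic detector may take to report a strike (illustrative)

    class ContainedCache:
        def __init__(self):
            self.committed = {}      # lines already known to be error-free
            self.pending = []        # (cycle, addr, value): dirty lines still inside the window

        def write(self, cycle, addr, value):
            self.pending.append((cycle, addr, value))    # keep unverified state contained

        def read(self, addr):
            for _, a, v in reversed(self.pending):       # newest pending copy wins
                if a == addr:
                    return v
            return self.committed.get(addr)

        def tick(self, cycle, error_detected):
            if error_detected:
                self.pending.clear()                     # squash unverified state, trigger rollback
                return "recover"
            # lines older than the detection window are now known-good: commit them
            while self.pending and cycle - self.pending[0][0] >= DETECTION_LATENCY:
                _, a, v = self.pending.pop(0)
                self.committed[a] = v
            return "ok"
    ```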

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
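    Since the survey singles out importance sampling as the technique for accelerating Monte Carlo dependability evaluation, a minimal numeric sketch may help: estimating the probability that a component with a very small exponential failure rate fails before a mission time. Direct sampling almost never observes the event; sampling from a biased (much higher) rate and reweighting each hit by the likelihood ratio recovers an unbiased estimate with far fewer samples. The rates and mission time below are illustrative, not taken from the paper.

    ```python
    # Importance sampling for a rare-event estimate: P(exponential lifetime < t).
    import math
    import random

    rate = 1e-6          # true failure rate (per hour), far too small for direct sampling
    t = 100.0            # mission time of interest
    biased_rate = 0.05   # sampling distribution: failures are made common

    def estimate(n=100_000):
        total = 0.0
        for _ in range(n):
            x = random.expovariate(biased_rate)          # draw from the biased law g(x)
            if x < t:
                # likelihood ratio f(x)/g(x) between the true and biased densities
                w = (rate * math.exp(-rate * x)) / (biased_rate * math.exp(-biased_rate * x))
                total += w
        return total / n

    print(estimate())                   # unbiased estimate of the failure probability
    print(1 - math.exp(-rate * t))      # exact value for comparison (~1e-4)
    ```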

    A 96-Channel FPGA-based Time-to-Digital Converter

    We describe an FPGA-based, 96-channel, time-to-digital converter (TDC) intended for use with the Central Outer Tracker (COT) in the CDF Experiment at the Fermilab Tevatron. The COT system is digitized and read out by 315 TDC cards, each serving 96 wires of the chamber. The TDC is physically configured as a 9U VME card. The functionality is almost entirely programmed in firmware in two Altera Stratix FPGAs. The special capabilities of this device are the availability of 840 MHz LVDS inputs, multiple phase-locked clock modules, and abundant memory. The TDC system operates with an input resolution of 1.2 ns. Each input can accept up to 7 hits per collision. The time-to-digital conversion is done by first sampling each of the 96 inputs in 1.2-ns bins and filling a circular memory; the memory addresses of logical transitions (edges) in the input data are then translated into the time of arrival and width of the COT pulses. Memory pipelines with a depth of 5.5 μs allow deadtime-less operation in the first-level trigger. The TDC VME interface allows a 64-bit Chain Block Transfer of multiple boards in a crate with transfer rates of up to 47 Mbytes/sec. The TDC also contains a separately programmed data path that produces prompt trigger data every Tevatron crossing. The full TDC design and multi-card test results are described. The physical simplicity ensures low maintenance; because the functionality resides in firmware, the boards can be reprogrammed for other applications.
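    The conversion step described above (sample into 1.2-ns bins, locate edges, translate edge positions into arrival time and width, at most 7 hits per input) can be sketched in software as follows. Only the bin size and the 7-hit cap come from the abstract; the function and its input format are illustrative.

    ```python
    # Sketch of the edge-to-time translation performed by the TDC firmware.
    BIN_NS = 1.2      # input sampling resolution
    MAX_HITS = 7      # each input accepts at most 7 hits per collision

    def decode_hits(samples):
        """samples: list of 0/1 bin values for one channel, oldest first.
        Returns a list of (arrival_time_ns, width_ns) pairs."""
        hits = []
        rising = None
        prev = 0
        for i, s in enumerate(samples):
            if s and not prev:                               # rising edge: pulse starts
                rising = i
            elif prev and not s and rising is not None:      # falling edge: pulse ends
                hits.append((round(rising * BIN_NS, 1),
                             round((i - rising) * BIN_NS, 1)))
                rising = None
                if len(hits) == MAX_HITS:
                    break
            prev = s
        return hits

    print(decode_hits([0, 0, 1, 1, 1, 0, 0, 1, 0]))   # [(2.4, 3.6), (8.4, 1.2)]
    ```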

    The Secrets of Massachusetts' Success: Why 97 Percent of State Residents Have Health Coverage

    Analyzes the policies that helped expand coverage, including a data-driven eligibility system serving multiple programs, online applications via providers and community groups, and intensive public education. Considers national and state implications.

    Oscillation-based DFT for Second-order Bandpass OTA-C Filters

    This paper describes a design-for-testability technique for second-order bandpass operational transconductance amplifier and capacitor (OTA-C) filters using an oscillation-based test topology. The oscillation-based test structure is a vectorless output test strategy easily extendable to built-in self-test. The proposed methodology converts the filter under test into a quadrature oscillator using very simple techniques and measures the output frequency. Using feedback loops with a nonlinear block, the filter-to-oscillator conversion techniques easily convert the bandpass OTA-C filter into an oscillator. With a minimum number of extra components, the proposed scheme requires a negligible area overhead. The validity of the proposed method has been verified by comparing faulty and fault-free simulation results of Tow-Thomas and KHN OTA-C filters. Simulation results in 0.25 μm CMOS technology show that the proposed oscillation-based test strategy for OTA-C filters is suitable for catastrophic and parametric fault testing and is also effective in detecting single and multiple faults with high fault coverage.
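    The pass/fail decision in an oscillation-based test reduces to a frequency comparison: the filter under test is reconfigured as an oscillator, and a fault either kills the oscillation (catastrophic) or shifts its frequency out of the fault-free band (parametric). A minimal sketch of that decision follows; the nominal frequency and tolerance are assumed values, not figures from the paper.

    ```python
    # Go/no-go classification for an oscillation-based test.
    NOMINAL_HZ = 1.0e6       # fault-free oscillation frequency of the converted filter (assumed)
    TOLERANCE = 0.02         # ±2% band accepted as fault-free (assumed)

    def classify(measured_hz):
        if not measured_hz:                                   # no oscillation at all
            return "faulty (catastrophic: no oscillation)"
        deviation = abs(measured_hz - NOMINAL_HZ) / NOMINAL_HZ
        if deviation <= TOLERANCE:
            return "fault-free"
        return "faulty (parametric: frequency shift)"

    print(classify(1.004e6))   # fault-free
    print(classify(1.35e6))    # faulty (parametric: frequency shift)
    print(classify(None))      # faulty (catastrophic: no oscillation)
    ```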

    Efficient hardware debugging using parameterized FPGA reconfiguration

    Functional errors and bugs inadvertently introduced at the RTL stage of the design process are responsible for the largest fraction of silicon IC re-spins. Thus, comprehensive functional verification is the key to reducing development costs and delivering a product in time. The increasing demands for verification have led to an increase in FPGA-based tools that perform emulation. These tools can run at much higher operating frequencies and achieve higher coverage than simulation. However, an important pitfall of FPGA tools is that they suffer from limited internal signal observability, as only a small and preselected set of signals is guided towards (embedded) trace buffers and observed. This paper proposes a dynamically reconfigurable network of multiplexers that significantly enhances the visibility of internal signals. It allows the designer to dynamically change the small set of internal signals to be observed, virtually enlarging the set of observable signals. These multiplexers occupy minimal space, as they are implemented in the FPGA’s routing infrastructure.
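    A behavioral sketch of the scheme: a small trace buffer sees only the outputs of a multiplexer network, and updating the mux selects (in the real design, by parameterized run-time reconfiguration of the routing) swaps which internal signals are observed, without resynthesizing the design. All class and signal names below are hypothetical.

    ```python
    # Behavioral model of a reconfigurable debug multiplexer network feeding a trace buffer.
    class DebugMuxNetwork:
        def __init__(self, internal_signals, trace_width=4):
            self.signals = internal_signals          # name -> callable returning the current value
            self.trace_width = trace_width           # how many signals the trace buffer can record
            self.selected = list(internal_signals)[:trace_width]

        def reconfigure(self, signal_names):
            """Emulates reprogramming the mux selects; no resynthesis needed."""
            assert len(signal_names) <= self.trace_width
            self.selected = list(signal_names)

        def sample(self):
            """One trace-buffer entry: only the currently selected signals."""
            return {name: self.signals[name]() for name in self.selected}

    # Usage: observe one set of signals, then switch to another set at run time.
    dbg = DebugMuxNetwork({"fsm_state": lambda: 3, "fifo_full": lambda: 0,
                           "pc": lambda: 0x1A4, "stall": lambda: 1,
                           "irq": lambda: 0, "wr_en": lambda: 1}, trace_width=2)
    dbg.reconfigure(["fsm_state", "stall"])
    print(dbg.sample())            # {'fsm_state': 3, 'stall': 1}
    dbg.reconfigure(["pc", "irq"])
    print(dbg.sample())            # {'pc': 420, 'irq': 0}
    ```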