
    E-QED: Electrical Bug Localization During Post-Silicon Validation Enabled by Quick Error Detection and Formal Methods

    During post-silicon validation, manufactured integrated circuits are extensively tested in actual system environments to detect design bugs. Bug localization involves identification of a bug trace (a sequence of inputs that activates and detects the bug) and the hardware design block where the bug is located. Existing bug localization practices during post-silicon validation are mostly manual and ad hoc, and hence extremely expensive and time-consuming. This is particularly true for subtle electrical bugs caused by unexpected interactions between a design and its electrical state. We present E-QED, a new approach that automatically localizes electrical bugs during post-silicon validation. Our results on the OpenSPARC T2, an open-source 500-million-transistor multicore chip design, demonstrate the effectiveness and practicality of E-QED: starting with a failed post-silicon test, in a few hours (9 hours on average) we can automatically narrow the location of the bug to (the fan-in logic cone of) a handful of candidate flip-flops (18 flip-flops on average for a design with ~1 million flip-flops) and also obtain the corresponding bug trace. The area impact of E-QED is ~2.5%. In contrast, determining this same information might take weeks (or even months) of mostly manual work using traditional approaches.
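    E-QED builds on Quick Error Detection (QED), which shortens error-detection latency by duplicating a test's computation and cross-checking the two result streams at frequent intervals. The abstract gives no implementation detail, so the following Python sketch illustrates only that underlying idea; the operation list, fault model, and check interval are illustrative assumptions rather than E-QED's actual mechanics.

```python
import random

def run_with_qed_checks(ops, check_interval=4):
    """Execute each operation twice and compare the streams every few steps.

    ops: list of zero-argument callables modeling test instructions.
    Returns the index of the first detected mismatch, or None.
    """
    original, duplicate = [], []
    for i, op in enumerate(ops):
        original.append(op())
        duplicate.append(op())  # duplicated computation, QED-style
        if (i + 1) % check_interval == 0:
            window = slice(-check_interval, None)
            if original[window] != duplicate[window]:
                return i  # detection latency bounded by check_interval ops
    return None

# Model an intermittent electrical bug: an add that occasionally drops its LSB.
random.seed(0)
flaky_add = lambda: (2 + 3) if random.random() > 0.1 else (2 + 3) & ~1
print("first mismatch near op:", run_with_qed_checks([flaky_add] * 100))
```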

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobiles, laptops, and computers to home automation, to name a few. Modern designs comprise billions of transistors. With this evolution, however, ensuring that devices fulfill the designer's expectations under variable conditions has become a great challenge, demanding considerable design time and effort. Whenever an error is encountered, the process is restarted. Hence, it is desirable to minimize the number of spins required to achieve an error-free product, as each spin results in a loss of time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process; such bugs subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Instead of software simulation of the design in the pre-silicon phase, post-silicon techniques permit designers to verify functionality through physical implementations of the design. The main benefit of this methodology is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, allowing designers to validate their design more exhaustively. This thesis presents five main contributions toward a fast and automated debugging solution for reconfigurable hardware. Throughout the research work, we used an obstacle-avoidance system for robotic vehicles as a use case to illustrate how to apply the proposed debugging solution in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, which permits cycle-accurate replay. This methodology ensures capturing permanent as well as intermittent errors in the implemented design. The contribution also describes a solution to enhance hardware observability: it is proposed to utilize processor-configurable concentration networks, employ debug-data compression to transmit the data more efficiently, and partially reconfigure the debugging system at run time to save the time required for design re-compilation as well as to preserve timing closure. The second contribution presents a solution for communication-centric designs; solutions for designs with multiple clock domains are also discussed. The third contribution presents a priority-based signal-selection methodology to identify the signals that can be most helpful during the debugging process, together with a connectivity-generation tool that maps the identified signals to the debugging system. The fourth contribution presents an automated error-detection solution that can capture permanent as well as intermittent errors without continuous monitoring of debugging data; the proposed solution works even for designs without a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging: we present the novel idea of using a recurrent neural network for debugging when a golden reference is available for training the network, and extend the idea to designs where no golden reference is present.
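    As a concrete reading of the first and fourth contributions: replays of a deterministic test should match cycle for cycle, so a trace word that differs across replays points to an intermittent error, while a consistent deviation from a golden trace points to a permanent one. The sketch below is a minimal rendering of that reasoning, not the thesis's implementation, and the trace encoding is an assumption.

```python
def classify_trace_errors(replays, golden=None):
    """Classify per-cycle mismatches in lossless debug traces.

    replays: list of traces from repeated cycle-accurate replays of one test,
             each trace a list of captured words (one per cycle).
    golden:  optional known-good trace; None models the no-golden-reference case.
    """
    cycles = min(len(r) for r in replays)
    report = {}
    for cyc in range(cycles):
        values = {r[cyc] for r in replays}
        if len(values) > 1:
            report[cyc] = "intermittent"  # identical replays disagree
        elif golden is not None and replays[0][cyc] != golden[cyc]:
            report[cyc] = "permanent"     # replays agree, but on a wrong value
    return report

# Three replays disagree at cycle 2 (intermittent); all agree at cycle 4 but
# differ from the golden trace there (permanent).
replays = [[7, 3, 5, 1, 9], [7, 3, 6, 1, 9], [7, 3, 5, 1, 9]]
print(classify_trace_errors(replays, golden=[7, 3, 5, 1, 8]))
```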

    Automated silicon debug data analysis techniques for a hardware data acquisition environment

    Abstract—Silicon debug poses a unique challenge to the engineer because of the limited access to internal signals of the chip. Embedded hardware such as trace buffers helps overcome this challenge by acquiring data in real time. However, trace buffers only provide access to a limited subset of pre-selected signals. To debug effectively, it is essential to configure the trace buffer to trace the relevant signals selected from the pre-defined set, which can be a labor-intensive and time-consuming process. This paper introduces a set of techniques to automate the configuration process for trace-buffer-based hardware. First, the proposed approach utilizes UNSAT cores to identify signals that can provide valuable information for localizing the error. Next, it finds alternatives for signals that are not part of the traceable set so that their values can be implied. Integrating the proposed techniques with a debugging methodology, experiments show that the methodology can reduce the number of potential suspects by 30% with as few as 8% of registers traced, demonstrating the effectiveness of the proposed procedures. Index Terms—Silicon debug, post-silicon diagnosis, data acquisition setup.
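    The paper's second step, implying the values of untraceable signals from traced alternatives, can be pictured as a forward-implication pass over a gate-level netlist. The toy netlist encoding and signal names below are assumptions for illustration; the paper's techniques operate on a formal model (UNSAT cores for suspect identification), so this simple propagation is only a loose analogy.

```python
# Hypothetical two-gate netlist: output name -> (operator, input names).
GATES = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["n1", "c"]),
}
OPS = {"AND": all, "OR": any, "NOT": lambda vals: not vals[0]}

def imply_values(traced):
    """Propagate traced values through the netlist until a fixed point."""
    known = dict(traced)
    changed = True
    while changed:
        changed = False
        for out, (op, ins) in GATES.items():
            if out not in known and all(i in known for i in ins):
                known[out] = int(OPS[op]([known[i] for i in ins]))
                changed = True
    return known

# Tracing only a, b, and c also yields n1 and n2 "for free":
print(imply_values({"a": 1, "b": 1, "c": 0}))  # n1 = 1 and n2 = 1 are implied
```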

    Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge.

    Today, design verification is by far the most resource- and time-consuming activity of any new digital integrated circuit development. Within this area, the vast majority of the verification effort in industry relies on simulation platforms, which are implemented either in hardware or software. A "simulator" includes a model of each component of a design and has the capability of simulating its behavior under any input scenario provided by an engineer. Thus, simulators are deployed to evaluate the behavior of a design under as many input scenarios as possible and to identify and debug all incorrect functionality. Two features are critical for simulators to make the validation effort effective: performance and checking/debugging capabilities. A wide range of simulator platforms is available today. At one end of the spectrum are software-based simulators, which provide a very rich software infrastructure for checking and debugging the design's functionality but execute at only 1-10 simulation cycles per second (while actual chips operate at GHz speeds). At the other end of the spectrum are hardware-based platforms, such as accelerators, emulators, and even prototype silicon chips, which provide performance higher by 4 to 9 orders of magnitude at the cost of very limited or non-existent checking/debugging capabilities. As a result, simulation-based validation today is crippled: one can have either satisfactory performance on hardware-accelerated platforms or critical checking/debugging infrastructure on software simulators, but not both. This dissertation brings together these two ends of the spectrum by presenting solutions that offer high-performance simulation with effective checking and debugging capabilities. Specifically, it addresses the performance challenge of software simulators by leveraging inexpensive off-the-shelf graphics processors as massively parallel execution substrates and exposing the parallelism inherent in the design model to that architecture. For hardware-based platforms, the dissertation provides solutions that offer enhanced checking and debugging capabilities by abstracting the relevant data to be logged during simulation so as to minimize the cost of collection, transfer, and processing. Altogether, the contributions of this dissertation have the potential to solve the challenge of digital design verification by enabling effective high-performance simulation-based validation.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/99781/1/dchatt_1.pd
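    The software-simulator side of the dissertation exposes a design's parallelism to a data-parallel substrate; NumPy array operations can stand in for that substrate in a sketch. The two-gate circuit and batch size are illustrative assumptions; the point is that each gate is evaluated once per step for an entire batch of input scenarios, the dimension a GPU would distribute across threads.

```python
import numpy as np

def simulate_batch(a, b, c):
    """Evaluate n1 = a AND b, n2 = n1 OR c for all stimuli simultaneously.

    Each argument is a vector holding one bit per input scenario; a GPU
    implementation would distribute this batch dimension across threads.
    """
    n1 = a & b          # one vectorized operation covers every scenario
    return n1 | c

rng = np.random.default_rng(0)
a, b, c = (rng.integers(0, 2, size=100_000, dtype=np.uint8) for _ in range(3))
outputs = simulate_batch(a, b, c)
print("scenarios driving the output low:", int((outputs == 0).sum()))
```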

    Trace signal selection to enhance timing and logic visibility in post-silicon validation

    Abstract—Trace-buffer technology allows tracking the values of a small number of state elements inside a chip within a desired time window, which is used to analyze logic errors during post-silicon validation. Due to the limited bandwidth of trace buffers, only a few state elements can be selected for tracing. In this work we first propose two improvements to existing "signal selection" algorithms to further increase logic restorability inside the chip. In addition, we observe that different selections of trace signals can result in the same quality, measured as a logic-visibility metric. Based on this observation, we propose a procedure that biases the selection to increase the restorability of a desired set of critical state elements without sacrificing the overall logic visibility. We propose to select the critical state elements so as to increase the "timing visibility" inside the chip and thereby facilitate the debugging of timing errors, which are perhaps the most challenging type of error to debug at the post-silicon stage. Specifically, we introduce a case in which the critical state elements are selected to track transient fluctuations in the power-delivery network, which can cause significant variations in the delays of the speedpaths of a circuit in nanometer technologies. This paper thus proposes to use trace-buffer technology to increase the timing visibility inside the chip without sacrificing logic visibility.
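    A common formulation of signal selection greedily picks the signals whose traces restore the most flip-flops, and the paper's bias toward critical state elements fits naturally as an over-weight in the gain function. The sketch below assumes the per-signal restorability sets have been precomputed by some structural analysis; it reflects this general idea, not the paper's specific algorithms.

```python
def biased_greedy_selection(restores, budget, critical=frozenset(), bias=2.0):
    """Greedy trace-signal selection with a bias toward critical elements.

    restores: dict mapping each candidate signal to the set of flip-flops
              its trace would restore (assumed precomputed).
    bias:     weight > 1 favoring restoration of critical state elements,
              e.g. monitors of the power-delivery network, without
              discarding overall logic visibility.
    """
    selected, covered = [], set()
    for _ in range(budget):
        def gain(sig):
            new = restores[sig] - covered
            return len(new) + (bias - 1.0) * len(new & critical)
        sig = max((s for s in restores if s not in selected), key=gain)
        selected.append(sig)
        covered |= restores[sig]
    return selected, covered

# With f5 marked critical, s3 beats s2 in the second round despite equal size.
restores = {"s1": {"f1", "f2", "f3"}, "s2": {"f3", "f4"}, "s3": {"f5"}}
print(biased_greedy_selection(restores, budget=2, critical={"f5"}))
```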

    Protocol-directed trace signal selection for post-silicon validation

    Due to the increasing complexity of modern digital designs using NoC (network-on-chip) communication, post-silicon validation has become an arduous task that consumes much of the development time of the product. Finding the root cause of bugs during post-silicon validation is very difficult because of the lack of observability of all signals on the chip. To increase observability for post-silicon validation, an effective silicon-debug technique is to use an on-chip trace buffer to monitor and capture the circuit response of certain selected signals during post-silicon operation. However, because of area limitations for on-chip debug structures and routing concerns, the signals selected to be traced are a very small subset of all available signals. Traditionally, these trace signals were chosen manually by system designers, who determined which signals might be needed for debug once the design reached post-silicon. Because modern digital designs have become very complex, with many concurrent processes, this method is no longer reliable. Recent work has concentrated on automating the selection of low-level signals through gate-level analysis, but none of these approaches can interpret the traced signals as high-level, meaningful debugging information. In this work, we present an automated, protocol-directed trace selection in which the guiding force is the set of system-level protocols. We use a probabilistic formulation to select messages for tracing and then further analyze these solutions. This method produces traces that allow a debugger to observe when behavior has deviated from the correct path of execution and to localize the incorrect behavior for further analysis. Most importantly, unlike previous methods based on gate-level analysis, this method can be applied during the chip-design phase, when most of the debug features are also designed. In addition, this method drastically reduces the time needed to select signals, as it automates a currently manual process.
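    One way to read the probabilistic formulation: if each protocol message carries an estimated probability that a bug manifests as an observable deviation on it, then choosing which messages to trace amounts to maximizing the probability that at least one traced message exposes the deviation. The message names and probabilities below are invented for illustration; the paper derives its weights from the system-level protocols themselves.

```python
def select_trace_messages(deviation_prob, budget):
    """Pick `budget` messages maximizing P(at least one traced deviation).

    deviation_prob: message -> estimated probability that a bug shows up as
    a deviation on that message (independence assumed for this sketch).
    """
    chosen = sorted(deviation_prob, key=deviation_prob.get, reverse=True)[:budget]
    p_miss = 1.0
    for msg in chosen:
        p_miss *= 1.0 - deviation_prob[msg]
    return chosen, 1.0 - p_miss

noc_messages = {"req": 0.30, "ack": 0.25, "data": 0.20, "credit": 0.05}
chosen, p_detect = select_trace_messages(noc_messages, budget=2)
print(chosen, round(p_detect, 3))  # ['req', 'ack'] 0.475
```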

    Spectroscopy of ionizing radiation using methods of digital signal processing

    Nuclear spectroscopy is an interdisciplinary subject spanning physics and electronics that adopts state-of-the-art digital electronics and computer technology to analyze the information carried by ionizing radiation. The use of FPGAs shortens the development cycle of digital circuit design and reduces system noise while keeping the electronics compact. As a result, digital spectrometers built on FPGAs are gaining popularity in research and industrial markets. The motivation behind this work was to replace conventional analog electronics with modern digital technology to provide excellent energy resolution for different kinds of nuclear detectors and experiments. In this thesis, a SiPM-based scintillation detector is first designed based on the basic principles of ionizing radiation, and the readout circuit of the detector is described in detail. Subsequently, a real-time DPP (digital pulse processing) module is designed using a Lattice FPGA. The system noise of the DPP module is measured, compared, and analyzed after hardware verification and implementation of the digital algorithms, to assess the module's capability. Afterward, digital pulse-processing algorithms are investigated in detail to improve the performance of the designed digital module. The design and implementation of a multipass moving average and a trapezoidal filter are presented. PZC (pole-zero cancellation) and BLR (baseline restoration) are designed and implemented, based on an analysis of the trapezoidal filter's weaknesses, to give the digital system a better energy resolution. The algorithms are designed and implemented on a Simulink platform. Experimental results and analyses are provided at the end of this thesis. The acquired data are analyzed in real time or by offline software. Spectra and resolutions of different detectors are presented to evaluate the performance of the digital module and the algorithm implementations. The resolution of the scintillation detector reaches 4.2%, which is almost the optimal value given in its datasheet. The implementations of the digital algorithms are verified, and other applications, such as coincidence and cosmic-muon measurements, are provided.
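    The trapezoidal shaper at the heart of such DPP modules is commonly implemented with the recursive algorithm of Jordanov and Knoll, in which a correction factor M ≈ τ/Ts performs the pole-zero cancellation for exponential pulses with decay constant τ sampled at period Ts. The NumPy sketch below follows that published recursion rather than this thesis's Simulink/FPGA implementation, and the pulse parameters are illustrative.

```python
import numpy as np

def trapezoidal_filter(v, k, m, M):
    """Recursive trapezoidal shaper (Jordanov-Knoll style).

    v: sampled exponential pulses; k: rise length in samples;
    m: flat-top length; M ~ tau/Ts implements the pole-zero cancellation.
    """
    l = k + m
    vp = np.pad(v, (k + l, 0))  # zero history before the record starts
    d = vp[k + l:] - vp[l:-k] - vp[k:-l] + vp[:-(k + l)]
    p = np.cumsum(d)            # first accumulator
    s = np.cumsum(p + M * d)    # second accumulator (adds the PZC term)
    return s / (M * k)          # flat top then sits near the pulse amplitude

tau, Ts = 50e-6, 1e-7           # assumed decay constant and sampling period
t = np.arange(4000) * Ts
pulse = np.where(t >= 100e-6, np.exp(-(t - 100e-6) / tau), 0.0)
shaped = trapezoidal_filter(pulse, k=100, m=40, M=tau / Ts)
print("recovered amplitude:", round(float(shaped.max()), 3))  # ~1.0
```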

    NASA JSC neural network survey results

    A survey of artificial neural systems in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control research program was conducted. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    CMOS design of an on-chip vision system for very-high-speed applications

    This thesis presents architectures, circuits, and chips for the design of CMOS vision sensors with embedded parallel processing. The thesis reports two chips, namely the Q-Eye chip and the Eye-RIS_VSoC chip, and two vision systems realized using these chips together with additional off-chip circuitry, such as FPGAs: the Eye-RIS_v1 system and the Eye-RIS_v2 system. The chips and systems reported in the thesis are conceived to perform vision tasks at very high speed and with moderate power consumption.
    The proposed vision systems are also compact and therefore advantageous in terms of SWaP factors when compared with conventional architectures consisting of a standard image sensor followed by digital processors. The key to these advantages in terms of SWaP and speed lies in the use of sensor-processors, rather than mere sensors, at the front-end interface of the vision systems. These sensor-processors embed mixed-signal programmable processors inside the pixel and are therefore able both to acquire images and to pre-process them to extract features, remove redundant information, and reduce the data throughput for later processing. The core of the thesis is the Q-Eye sensor-processor, which is used as the front-end in the Eye-RIS systems. This sensor-processor embeds a processing architecture composed of mixed-signal processors distributed per pixel. Its pixels are therefore complex multi-functional structures: they are programmable, incorporate memories, and interact with their neighbors to carry out a variety of operations, including linear convolutions with programmable masks; time- and signal-controlled diffusions (by means of a resistive grid embedded in the focal plane); image arithmetic; signal-dependent program flow; conversion between gray-scale and binary image domains; logic operations on binary images; and morphological operations on binary images. Compared with previous multi-function pixels and sensor-processors, the Q-Eye offers, among others, the following advantages: higher image quality and better performance of the functionalities embedded on chip; higher operating speed and better management of the energy budget; and more versatility for integration in industrial vision systems. In fact, the Eye-RIS systems are the first industrial vision systems equipped with the following characteristics: parallel, distributed, and progressive processing; reliable, robust mixed-signal processors with controlled errors; and distributed programmability. The thesis includes detailed descriptions of the architecture and circuits used in the Q-Eye pixel, in the Q-Eye chip itself, and in the vision systems built from this chip. Several examples of the chips and systems in operation are also included.
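    The per-pixel operations listed above have simple functional equivalents; for instance, a programmable-mask linear convolution followed by a gray-scale-to-binary conversion can be emulated on a full frame as below. SciPy merely stands in for the Q-Eye's analog in-pixel arithmetic, and the mask, threshold, and frame size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
frame = rng.random((144, 176))                    # QCIF-sized gray-scale frame

mask = np.array([[0.0,  1.0, 0.0],                # a "programmable" 3x3 mask
                 [1.0, -4.0, 1.0],                # (Laplacian, for edge response)
                 [0.0,  1.0, 0.0]])

response = convolve(frame, mask, mode="nearest")  # every pixel "in parallel"
binary = response > 0.5                           # gray-scale -> binary domain
print("active pixels:", int(binary.sum()))
```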