    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices are a vital part of our lives, from mobile phones and laptops to home automation systems. Modern designs comprise billions of transistors. With this evolution, however, ensuring that a device meets the designer’s expectations under variable conditions has become a great challenge, demanding considerable design time and effort. Whenever an error is encountered, the process is restarted; it is therefore desirable to minimize the number of spins required to reach an error-free product, since each spin costs time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Instead of simulating the design in software during the pre-silicon phase, post-silicon techniques let designers verify functionality on physical implementations of the design. The main benefit of this methodology is that the implemented design runs many orders of magnitude faster in the post-silicon phase than its pre-silicon counterpart, allowing designers to validate the design more exhaustively. This thesis presents five main contributions toward a fast and automated debugging solution for reconfigurable hardware. Throughout the work, an obstacle avoidance system for robotic vehicles is used as a case study to illustrate how the proposed debugging solution applies in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, permitting cycle-accurate replay and capturing both permanent and intermittent errors in the implemented design. This contribution also describes a solution to enhance hardware observability: it proposes processor-configurable concentration networks, debug data compression to transmit data more efficiently, and partial reconfiguration of the debugging system at run time, which avoids design re-compilation and preserves timing closure. The second contribution presents a solution for communication-centric designs and also discusses solutions for designs with multiple clock domains. The third contribution presents a priority-based signal selection methodology to identify the signals that are most helpful during debugging, together with a connectivity generation tool that maps the identified signals to the debugging system. The fourth contribution presents an automated error detection solution that captures permanent as well as intermittent errors without continuous monitoring of debugging data; the proposed solution works even for designs without a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging: a recurrent neural network is used for debugging when a golden reference is available to train the network, and the idea is extended to designs where no golden reference is present.
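
    The fifth contribution is only sketched in the abstract. As a rough, hedged illustration of how a recurrent network could flag suspect cycles in trace data when a golden reference is available, the PyTorch sketch below (not the thesis's implementation; all names, sizes, and thresholds are assumptions) trains an LSTM on golden-reference traces to predict the next cycle's signal values and flags cycles whose prediction error is unusually large.

```python
# Minimal sketch: LSTM-based anomaly flagging on post-silicon trace data.
# Assumes golden-reference traces are available as (cycles x signals) tensors.
import torch
import torch.nn as nn

N_SIGNALS = 16          # hypothetical number of traced signals
torch.manual_seed(0)

class TracePredictor(nn.Module):
    """Predict the next cycle's signal values from the trace seen so far."""
    def __init__(self, n_signals, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_signals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_signals)

    def forward(self, x):                 # x: (batch, cycles, n_signals)
        out, _ = self.lstm(x)
        return self.head(out)             # prediction for the next cycle

model = TracePredictor(N_SIGNALS)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Golden-reference trace (random placeholder data stands in for real traces).
golden = torch.rand(1, 1000, N_SIGNALS)
x, y = golden[:, :-1, :], golden[:, 1:, :]

for _ in range(50):                       # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Debug phase: flag cycles where the captured trace deviates from prediction.
captured = golden.clone()
captured[0, 500, :] += 0.8                # inject a hypothetical intermittent error
with torch.no_grad():
    err = ((model(captured[:, :-1, :]) - captured[:, 1:, :]) ** 2).mean(dim=2)
# err[i] is the prediction error at cycle i+1 of the captured trace.
suspects = torch.nonzero(err[0] > err[0].mean() + 4 * err[0].std()).flatten() + 1
print("suspect cycles:", suspects.tolist())
```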

    Trace signal selection to enhance timing and logic visibility in post-silicon validation

    Abstract—Trace buffer technology allows tracking the values of a small number of state elements inside a chip within a desired time window, which is used to analyze logic errors during post-silicon validation. Because trace buffer bandwidth is limited, only a few state elements can be selected for tracing. In this work we first propose two improvements to existing “signal selection” algorithms to further increase the logic restorability inside the chip. In addition, we observe that different selections of trace signals can result in the same quality, measured as a logic visibility metric. Based on this observation, we propose a procedure that biases the selection to increase the restorability of a desired set of critical state elements without sacrificing the overall logic visibility. We propose to select the critical state elements so as to increase the “timing visibility” inside the chip and thereby facilitate the debugging of timing errors, which are perhaps the most challenging type of error to debug at the post-silicon stage. Specifically, we introduce a case in which the critical state elements are selected to track transient fluctuations in the power delivery network, which can cause significant variations in the delays of the speedpaths of the circuit in nanometer technologies. This paper proposes using trace buffer technology to increase the timing visibility inside the chip without sacrificing the logic visibility.
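
    The selection procedure itself is not spelled out in the abstract. As a hedged illustration of the general idea, the sketch below greedily picks trace signals by how many other state elements they help restore and, mirroring the observation that many selections tie on overall visibility, breaks ties in favor of signals that cover critical state elements. The restorability sets and signal names are hypothetical inputs, not the paper's actual metric.

```python
# Sketch: greedy trace-signal selection with a bias toward critical state elements.
# `restores[s]` is a hypothetical set of flip-flops whose values can be
# reconstructed if signal `s` is traced; a real flow would derive this from
# the gate-level netlist.
def select_trace_signals(restores, critical, budget):
    selected, covered = [], set()
    while len(selected) < budget:
        best, best_key = None, None
        for sig, ffs in restores.items():
            if sig in selected:
                continue
            gain = len(ffs - covered)                      # overall visibility gain
            crit_gain = len((ffs - covered) & critical)    # tie-breaking bias
            key = (gain, crit_gain)
            if best is None or key > best_key:
                best, best_key = sig, key
        if best is None or best_key[0] == 0:
            break                                          # nothing left to restore
        selected.append(best)
        covered |= restores[best]
    return selected

# Toy usage with made-up restorability sets.
restores = {
    "s0": {"f1", "f2", "f3"},
    "s1": {"f4", "f5"},
    "s2": {"f4", "f6"},
}
critical = {"f6"}
# s1 and s2 tie on gain after s0 is picked; the critical-element bias picks s2.
print(select_trace_signals(restores, critical, budget=2))
```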

    Protocol-directed trace signal selection for post-silicon validation

    Due to the increasing complexity of modern digital designs using network-on-chip (NoC) communication, post-silicon validation has become an arduous task that consumes much of a product's development time. Finding the root cause of bugs during post-silicon validation is very difficult because of the lack of observability of all signals on the chip. To increase observability for post-silicon validation, an effective silicon debug technique is to use an on-chip trace buffer to monitor and capture the circuit response of certain selected signals during post-silicon operation. However, because of area limits on debug structures and routing concerns, the signals selected for tracing are a very small subset of all available signals. Traditionally, these trace signals were chosen manually by system designers, who determined which signals might be needed for debug once the design reached post-silicon. Because modern digital designs have become very complex, with many concurrent processes, this method is no longer reliable. Recent work has concentrated on automating the selection of low-level signals through gate-level analysis, but none of these methods interprets the trace signals as high-level, meaningful debugging information. In this work, we present an automated protocol-directed trace selection in which the guiding force is the set of system-level protocols. We use a probabilistic formulation to select messages for tracing and then further analyze these solutions. This method produces traces that allow a debugger to observe when behavior has deviated from the correct path of execution and to localize this incorrect behavior for further analysis. Most importantly, unlike the previous methods based on gate-level analysis, this method can be applied during the chip design phase, when most of the debug features are also designed. In addition, it drastically reduces the time needed to select signals, as it automates a currently manual process.
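
    The abstract only names the probabilistic formulation. One rough, hedged reading of it is a budgeted selection problem in which each protocol message carries an estimated probability of exposing a deviation and a trace-bandwidth cost; the sketch below, with made-up message names, probabilities, and costs, illustrates that framing rather than the paper's actual formulation.

```python
# Sketch: pick protocol messages to trace under a bandwidth budget, favoring
# messages with a high estimated probability of exposing a protocol deviation
# per bit of trace bandwidth. All numbers below are illustrative.
def select_messages(messages, budget_bits):
    # messages: list of (name, p_expose_bug, cost_bits)
    ranked = sorted(messages, key=lambda m: m[1] / m[2], reverse=True)
    chosen, used = [], 0
    for name, p, cost in ranked:
        if used + cost <= budget_bits:
            chosen.append(name)
            used += cost
    return chosen

noc_messages = [
    ("req_read",    0.30, 32),
    ("resp_data",   0.25, 64),
    ("credit_ret",  0.10,  8),
    ("barrier_ack", 0.40, 16),
]
print(select_messages(noc_messages, budget_bits=64))
```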

    Automated silicon debug data analysis techniques for a hardware data acquisition environment

    Abstract—Silicon debug poses a unique challenge to the engineer because of the limited access to internal signals of the chip. Embedded hardware such as trace buffers helps overcome this challenge by acquiring data in real time. However, trace buffers only provide access to a limited subset of pre-selected signals. To debug effectively, it is essential to configure the trace buffer to trace the relevant signals from the pre-defined set, which can be a labor-intensive and time-consuming process. This paper introduces a set of techniques to automate the configuration process for trace buffer-based hardware. First, the proposed approach utilizes UNSAT cores to identify signals that can provide valuable information for localizing the error. Next, it finds alternatives for signals that are not part of the traceable set so that their corresponding values can be implied. Integrating the proposed techniques with a debugging methodology, experiments show that the methodology can eliminate 30% of potential suspects with as few as 8% of registers traced, demonstrating the effectiveness of the proposed procedures. Index Terms—Silicon debug, post-silicon diagnosis, data acquisition setup
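
    As a hedged illustration of the UNSAT-core idea (not this paper's exact encoding), the toy Z3 example below encodes a tiny set of design constraints together with an observed, erroneous response as tracked assertions. The resulting unsatisfiable core names the assertions, and hence the signals, that conflict with the observation and are therefore good candidates for tracing. The signal names and constraints are invented for the example.

```python
# Sketch: using an UNSAT core to point at signals worth tracing (toy example).
from z3 import Bool, Implies, Not, Solver, unsat

a, b, c = Bool("a"), Bool("b"), Bool("c")

s = Solver()
# Intended behaviour of a tiny combinational path (hypothetical design).
s.assert_and_track(Implies(a, b), "gate_a_to_b")
s.assert_and_track(Implies(b, c), "gate_b_to_c")
# Observed response captured on the tester: a was driven high, but c came out low.
s.assert_and_track(a, "observed_a_high")
s.assert_and_track(Not(c), "observed_c_low")

if s.check() == unsat:
    # The core lists the conflicting assertions; the signals they mention
    # (a, b, c here) are candidates for the next trace-buffer configuration.
    print("unsat core:", s.unsat_core())
```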

    Modeling and model-aware signal processing methods for enhancement of optical systems

    Theoretical and numerical modeling of optical systems is increasingly being utilized in a wide range of areas in physics and engineering for characterizing and improving existing systems or developing new methods. This dissertation focuses on determining and improving the performance of imaging and non-imaging optical systems through modeling and through the development of model-aware enhancement methods. We evaluate the performance, demonstrate enhancements in terms of resolution and light collection efficiency, and improve the capabilities of the systems through changes to the system design and through post-processing techniques. We consider application areas in integrated circuit (IC) imaging for fault analysis and malicious circuitry detection, and in free-form lens design for creating prescribed illumination patterns. The first part of this dissertation focuses on sub-surface imaging of ICs for fault analysis using a solid immersion lens (SIL) microscope. We first derive the Green's function of the microscope and use it to determine its resolution limits for bulk silicon and silicon-on-insulator (SOI) chips. We then propose an optimization framework for designing super-resolving apodization masks that utilizes the developed model, and we demonstrate the trade-offs in designing such masks. Finally, we derive the full electromagnetic model of the SIL microscope, which models the image of an arbitrary sub-surface structure. With the rapidly shrinking dimensions of ICs, we are increasingly limited in resolving the features and identifying potential modifications despite the resolution improvements provided by the state-of-the-art microscopy techniques and enhancement methods described here. In the second part of this dissertation, we shift our focus away from improving the resolution and consider an optical framework that does not require high-resolution imaging for detecting malicious circuitry. We develop a classification-based high-throughput gate identification method that utilizes the physical model of the optical system. We then propose a lower-throughput system, based on higher-resolution imaging, to increase the detection accuracy and supplement the former method. Finally, we consider the problem of free-form lens design for forming prescribed illumination patterns as a non-imaging application. Common methods for designing free-form lenses that form such patterns treat the input light source as a point source; however, using extended light sources with such lenses leads to significant blurring in the resulting pattern. We propose a deconvolution-based framework that utilizes the lens geometry to model the blurring effects and eliminate this degradation, resulting in sharper patterns.
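
    The deconvolution-based framework is only summarized above. As a generic illustration of the kind of post-processing it implies (not the dissertation's actual algorithm), the sketch below applies a standard Wiener deconvolution in the frequency domain to sharpen an illumination pattern; in practice the blur kernel would be derived from the lens geometry and the extended source, whereas the kernel, pattern, and regularization constant used here are placeholders.

```python
# Sketch: Wiener deconvolution of a blurred illumination pattern (NumPy only).
import numpy as np

def wiener_deconvolve(blurred, kernel, k=1e-2):
    """Recover a sharper pattern from `blurred`, given the blur `kernel`.

    `kernel` must be centered and have the same shape as `blurred`;
    `k` is a regularization constant standing in for the noise-to-signal ratio.
    """
    H = np.fft.fft2(np.fft.ifftshift(kernel))
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)      # Wiener filter
    return np.real(np.fft.ifft2(W * B))

# Toy usage: a square target blurred by a Gaussian that stands in for the
# extended-source blur predicted from the lens geometry.
n = 128
target = np.zeros((n, n)); target[48:80, 48:80] = 1.0
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
kernel /= kernel.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(target) * np.fft.fft2(np.fft.ifftshift(kernel))))
restored = wiener_deconvolve(blurred, kernel)
print("mean |error| before/after:",
      np.abs(blurred - target).mean(), np.abs(restored - target).mean())
```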

    Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge.

    Today, design verification is by far the most resource- and time-consuming activity in the development of any new digital integrated circuit. Within this area, the vast majority of the verification effort in industry relies on simulation platforms, which are implemented either in hardware or in software. A "simulator" includes a model of each component of a design and can simulate its behavior under any input scenario provided by an engineer. Simulators are thus deployed to evaluate the behavior of a design under as many input scenarios as possible and to identify and debug all incorrect functionality. Two features are critical for simulators to make the validation effort effective: performance and checking/debugging capabilities. A wide range of simulator platforms is available today: at one end of the spectrum are software-based simulators, which provide a very rich software infrastructure for checking and debugging the design's functionality but execute at only 1-10 simulation cycles per second (while actual chips operate at GHz speeds). At the other end of the spectrum are hardware-based platforms, such as accelerators, emulators and even prototype silicon chips, which provide performance 4 to 9 orders of magnitude higher, at the cost of very limited or non-existent checking/debugging capabilities. As a result, simulation-based validation today is crippled: one can have either satisfactory performance on hardware-accelerated platforms or critical checking/debugging infrastructure on software simulators, but not both. This dissertation brings together these two ends of the spectrum by presenting solutions that offer high-performance simulation with effective checking and debugging capabilities. Specifically, it addresses the performance challenge of software simulators by leveraging inexpensive off-the-shelf graphics processors as massively parallel execution substrates and then exposing the parallelism inherent in the design model to that architecture. For hardware-based platforms, the dissertation provides solutions that offer enhanced checking and debugging capabilities by abstracting the relevant data to be logged during simulation so as to minimize the cost of collection, transfer and processing. Altogether, the contributions of this dissertation have the potential to solve the challenge of digital design verification by enabling effective high-performance simulation-based validation. PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99781/1/dchatt_1.pd
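
    The abstract describes exposing the design model's inherent parallelism to a massively parallel substrate. As a loose, CPU-side illustration of that idea (not the dissertation's GPU implementation), the NumPy sketch below evaluates a levelized gate-level netlist one level at a time, with every gate in a level applied to a whole batch of test vectors as a single vectorized boolean operation. The netlist and stimulus are invented.

```python
# Sketch: data-parallel gate-level simulation with NumPy boolean arrays.
# Signals are named entries; each holds one boolean value per test vector,
# so a batch of vectors is simulated in parallel.
import numpy as np

# Hypothetical levelized netlist: (gate_type, output, input_a, input_b).
# Levelization guarantees that a gate's inputs were computed in an earlier level.
netlist = [
    [("AND", "n1", "a", "b"), ("XOR", "n2", "b", "c")],   # level 1
    [("OR",  "out", "n1", "n2")],                          # level 2
]
OPS = {"AND": np.logical_and, "OR": np.logical_or, "XOR": np.logical_xor}

def simulate(netlist, stimulus):
    """stimulus: dict mapping primary-input names to 1-D boolean arrays."""
    values = dict(stimulus)
    for level in netlist:                 # levels are sequential...
        for op, out, a, b in level:       # ...but each gate works on whole batches
            values[out] = OPS[op](values[a], values[b])
    return values

rng = np.random.default_rng(0)
batch = {name: rng.integers(0, 2, size=100_000).astype(bool) for name in "abc"}
result = simulate(netlist, batch)
print("fraction of vectors with out=1:", result["out"].mean())
```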

    Research and Technology

    Langley Research Center is engaged in the basic and applied research necessary for the advancement of aeronautics and space flight, generating advanced concepts for the accomplishment of related national goals, and providing research advice, technological support, and assistance to other NASA installations, other government agencies, and industry. Highlights of major accomplishments and applications are presented.

    Contributions to nanophotonics: linear, nonlinear and quantum phenomena

    Nanophotonics can be defined as the science and technology of controlling optical fields at the nanoscale and of their interaction with matter. To spatially control such fields we need structures with characteristic dimensions of the order of the wavelength, which brings us to the nanoscale. One way to control optical fields at this scale is to use nanoantennas, the optical equivalent of radio antennas; they provide efficient interfaces between the near fields generated by light sources and radiative channels. After a brief introduction, Chapter 2 describes the interaction between single-photon emitters and nanoantennas. The chapter starts by introducing a method to numerically simulate this interaction. A key concept in solving Maxwell's equations is the Green function, and I show how this function relates to the emission rate of optical emitters in a nanophotonic environment. I then describe our efforts to build a lifetime-imaging near-field scanning optical microscope; using this rig we are able to measure changes in the emission rate of single emitters that interact with resonant optical antennas. A complementary way to control optical fields at the nanoscale is dielectric confinement. Chapter 3 introduces hybrid structures combining nanoantennas and dielectric waveguides. I generalize the Green function formalism introduced in Chapter 2 and show how it is related to the energy transfer rate between a donor and an acceptor. I use this numerical method to calculate the energy transfer rate in a hybrid structure; an increase of orders of magnitude is found at distances of the order of the wavelengths of the transferred photons. The chapter finishes by discussing the role that the local density of optical states plays in the energy transfer efficiency. Nanoantennas increase near fields by orders of magnitude, and under these conditions nonlinear optical effects start to play a role. Chapter 4 is devoted to these nonlinear interactions mediated by nanoantennas, in particular second-harmonic generation (SHG) in resonant nanoantennas. First I introduce a method to numerically compute the contributions to SHG generated by the metal in nanoantennas, considering both surface and bulk contributions. I use the numerical method to show that narrowings within the antenna shape are sources of increased SHG; the increase is attributed to larger local field gradients, which enhance the bulk contribution to SHG. We validate our numerical results experimentally by performing SHG measurements at the single resonant antenna level. Optical fields are functions of space, but also of time. The development of broadband femtosecond lasers and pulse shaping techniques allows control of optical fields down to the femtosecond timescale. Chapter 5 explores the control of optical fields in time: using phase shaping methods we optimize the two-photon absorption process in single quantum dots (QDs). I introduce a new optimization algorithm that allows us to perform the optimization using the luminescence from single QDs as the feedback signal, and we compare our results with standard phase shaping techniques. Based on their success in effectively controlling all kinds of optical fields, plasmon-supporting nanoantennas are being actively researched in the field of quantum optics. In Chapter 6 I describe a quantum eraser experiment mediated by structures supporting surface plasmon resonances. I first explain the details and subtleties of a quantum eraser experiment. I then detail our efforts to reproduce previously reported results on fabricating elliptical bullseye antennas that behave as quarter-wave plates, a required component for the quantum eraser effect to take place. An additional key component of our experiment is a bright, state-of-the-art polarization-entangled photon source, which is described at length. We then perform a quantum eraser experiment mediated by plasmons.
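
    The abstract invokes the Green function's relation to the emission rate and to donor-acceptor energy transfer without writing it out. The standard textbook relations (in the convention of Novotny and Hecht, which may differ from the thesis's normalization) read as follows.

```latex
% Spontaneous emission rate of a dipole p at position r_0, through the
% imaginary part of the dyadic Green function G of the environment:
\Gamma = \frac{2\omega_0^2}{\hbar \varepsilon_0 c^2}\,|\mathbf{p}|^2\,
         \Big[\hat{\mathbf{n}}_p \cdot \operatorname{Im}\{\mathbf{G}(\mathbf{r}_0,\mathbf{r}_0;\omega_0)\} \cdot \hat{\mathbf{n}}_p\Big]
% Energy transfer rate from a donor at r_D to an acceptor at r_A scales with
% the full Green function connecting the two positions:
\Gamma_{D \rightarrow A} \propto
         \big|\hat{\mathbf{n}}_A \cdot \mathbf{G}(\mathbf{r}_A,\mathbf{r}_D;\omega) \cdot \hat{\mathbf{n}}_D\big|^2
```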

    Frontal cortex selects representations of the talker’s mouth to aid in speech perception
