
    SoC Test: Trends and Recent Standards

    The well-known approaching test cost crisis, in which semiconductor test costs begin to approach or exceed manufacturing costs, has led test engineers to apply new solutions to the problem of testing System-on-Chip (SoC) designs containing multiple IP (Intellectual Property) cores. While it is not yet possible to apply generic test architectures to an IP core within an SoC, the emergence of a number of similar approaches, and the release of new industry standards such as IEEE 1500 and IEEE 1450.6, may begin to change this situation. This paper looks at these standards and at some techniques currently used by SoC test engineers. An extensive reference list is included, reflecting the purpose of this publication as a review paper.

    FCSCAN: An Efficient Multiscan-based Test Compression Technique for Test Cost Reduction


    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobile phones and laptops to computers and home automation systems, to name a few. Modern designs comprise billions of transistors. With this evolution, however, ensuring that devices fulfil the designer's expectations under variable conditions has also become a great challenge, requiring considerable design time and effort. Whenever an error is encountered, the process is restarted; hence, it is desirable to minimize the number of spins required to achieve an error-free product, as each spin results in a loss of time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming because of the limited internal visibility of the hardware. Instead of software simulation in the pre-silicon phase, post-silicon techniques permit designers to verify functionality on physical implementations of the design. The main benefit is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, allowing designers to validate the design more exhaustively. This thesis presents five main contributions that enable a fast and automated debugging solution for reconfigurable hardware. Throughout the work, an obstacle-avoidance system for robotic vehicles is used as a case study to illustrate how to apply the proposed debugging solution in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, permitting a cycle-accurate replay; this methodology captures permanent as well as intermittent errors in the implemented design. The contribution also describes a solution to enhance hardware observability: it is proposed to utilize processor-configurable concentration networks, to employ debug data compression so the data are transmitted more efficiently, and to partially reconfigure the debugging system at run-time, saving the time required for design re-compilation and preserving timing closure. The second contribution presents a solution for communication-centric designs, along with solutions for designs with multiple clock domains. The third contribution presents a priority-based signal-selection methodology to identify the signals that are most helpful during debugging, together with a connectivity-generation tool that maps the identified signals to the debugging system. The fourth contribution presents an automated error-detection solution that captures permanent as well as intermittent errors without continuous monitoring of debugging data; the proposed solution works even for designs without a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging: a recurrent neural network is used for debugging when a golden reference is available for training the network, and the idea is extended to designs where no golden reference is present.
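
    As a rough illustration of the golden-reference comparison underlying the first and fourth contributions, the sketch below replays several captured traces against a golden trace and separates permanent from intermittent mismatches. It is a minimal, hypothetical example: the function and variable names are not from the thesis, and a real system would operate on compressed on-chip trace data rather than Python lists.

```python
# Illustrative sketch only: compare captured post-silicon traces against a
# golden reference, cycle by cycle, and aggregate mismatches across replayed
# runs so that intermittent errors surface alongside permanent ones.
# Names (golden_trace, captured_runs) are hypothetical, not from the thesis.

def find_mismatches(golden_trace, captured_trace):
    """Return the cycle indices where the captured trace diverges."""
    return [cycle for cycle, (ref, obs) in enumerate(zip(golden_trace, captured_trace))
            if ref != obs]

def classify_errors(golden_trace, captured_runs):
    """Classify each mismatching cycle as permanent (seen in every run)
    or intermittent (seen in only some runs)."""
    per_run = [set(find_mismatches(golden_trace, run)) for run in captured_runs]
    if not per_run:
        return {"permanent": [], "intermittent": []}
    always = set.intersection(*per_run)
    ever = set().union(*per_run)
    return {"permanent": sorted(always), "intermittent": sorted(ever - always)}

if __name__ == "__main__":
    golden = [0, 1, 1, 0, 1, 0]
    runs = [
        [0, 1, 1, 0, 0, 0],   # mismatch at cycle 4
        [0, 1, 0, 0, 0, 0],   # mismatches at cycles 2 and 4
    ]
    print(classify_errors(golden, runs))
    # {'permanent': [4], 'intermittent': [2]}
```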

    Analysis and optimization of a debug post-silicon hardware architecture

    The goal of this thesis is to analyze the post-silicon validation hardware infrastructure implemented on multicore systems, taking as an example the Esperanto Technologies SoC, which contains thousands of RISC-V processors and targets specific software applications. Based on the conclusions of this analysis, the project then proposes a new post-silicon debug architecture that can fit on any System-on-Chip, regardless of its target application or complexity, and that improves on the options currently available on the market for multicore systems.

    Reducing Power During Manufacturing Test Using Different Architectures

    Power consumption during manufacturing test can be several times higher than in functional mode. Excessive power during test can cause IR drop, over-heating, and early aging of the chips. In this dissertation, three different architectures are introduced to reduce test power in general cases as well as in certain scenarios, including field test. In the first architecture, scan chains are divided into several segments. Each segment needs a control bit that enables capture in that segment only when new faults are detectable on it for the current pattern; otherwise, the segment is disabled to reduce capture power. We group the control bits together into one or more control chains. To address the extra pin(s) required to shift data into the control chain(s) and the significant post-processing needed by the first architecture, we explored a second architecture, which stitches the control bits into the chains they control, as EECBs (embedded enable capture bits) between the segments. This allows an ATPG tool to automatically generate the appropriate EECB values for each pattern while maintaining fault coverage, and it also works in the presence of an on-chip decompressor. The last architecture focuses primarily on the self-test of a device in a 3D stacked IC when an existing FPGA in the stack can be programmed as a tester. We show that the energy expended during test is significantly less than would be required using low-power patterns fed by an on-chip decompressor for the same very short scan chains.
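
    As a loose illustration of the segment-enable idea above, the sketch below builds one capture-enable vector per pattern (1 = capture, 0 = hold) from a hypothetical fault-detectability report and estimates how much capture activity is suppressed. The names and numbers are illustrative assumptions, not the dissertation's ATPG flow.

```python
# Illustrative sketch only (not the dissertation's ATPG flow): enable capture
# in a scan segment only when the current pattern can still detect new faults
# there, mirroring the role of the control/EECB bits described above.

def capture_enable_bits(detectable_segments_per_pattern, num_segments):
    """Return one enable-bit vector per pattern: 1 = capture, 0 = hold."""
    return [[1 if seg in detectable else 0 for seg in range(num_segments)]
            for detectable in detectable_segments_per_pattern]

def capture_activity_saving(enable_vectors):
    """Fraction of segment-captures suppressed relative to capturing everywhere."""
    total = sum(len(v) for v in enable_vectors)
    enabled = sum(sum(v) for v in enable_vectors)
    return 1.0 - enabled / total if total else 0.0

if __name__ == "__main__":
    # Pattern 0 detects new faults only in segments 0 and 2; pattern 1 only in segment 3.
    per_pattern = [{0, 2}, {3}]
    vectors = capture_enable_bits(per_pattern, num_segments=4)
    print(vectors)                            # [[1, 0, 1, 0], [0, 0, 0, 1]]
    print(capture_activity_saving(vectors))   # 0.625 -> 62.5% of captures suppressed
```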

    Integrating simultaneous bi-directional signalling in the test fabric of 3D stacked integrated circuits.

    The world has seen significant advancements in electronic devices' capabilities, most notably the ability to embed ultra-large-scale functionality in lightweight, area- and power-efficient devices. There has been an enormous push towards quality and reliability in consumer electronics, which have become an indispensable part of human life. Consequently, the tests conducted on these devices at the final stages, before they are shipped to customers, are of great significance to the research community. However, researchers have always struggled to balance test time (and hence test cost) against test overheads; unfortunately, the two are inversely related. Meanwhile, the ever-increasing demand for more powerful and compact devices is facing a new challenge. Historically, with advancements in manufacturing technology, electronic devices were miniaturized at an exponential pace, as predicted by Moore's law, but further geometric or effective 2D scaling appears difficult due to performance and power concerns at smaller technology nodes. One promising way forward is 3D Stacked Integrated Circuits (SICs), in which individual dies are stacked vertically and interconnected using Through Silicon Vias (TSVs) before being packaged as a single chip. This allows more functionality to be embedded within a reduced footprint and addresses another critical problem observed in 2D designs: increasingly long interconnects and latency issues. However, as more functionality is embedded into a small area, it becomes increasingly challenging to access the internal states (to observe or control them) after the device is fabricated, which is essential for testing. This access is restricted by the limited number of Chip Terminals (IC pins and vertical Through Silicon Vias) that a chip can be fitted with, by power consumption concerns, and by the chip area overhead that can be allocated for testing. This research investigates Simultaneous Bi-Directional Signaling (SBS) for use in Test Access Mechanism (TAM) designs in 3D SICs. SBS enables a single Chip Terminal (CT) to send and receive test vectors simultaneously, effectively doubling per-pin efficiency, which can be translated into additional test channels for test-time reduction or into fewer Chip Terminals for resource efficiency. The research shows that SBS-based test access methods have significant potential to reduce test times and/or test resources compared with traditional approaches, opening up new avenues towards cost-effectiveness and reliability in future electronics.
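
    A back-of-the-envelope sketch of why SBS helps: if each Chip Terminal can carry test data in both directions at once, per-terminal test bandwidth roughly doubles, so scan-dominated test time roughly halves for the same terminal budget (or the same test time needs about half the terminals). The figures below are hypothetical and only illustrate the arithmetic, not results from the thesis.

```python
# Hypothetical arithmetic only: compare scan-dominated test time with
# unidirectional terminals versus SBS terminals that move data both ways
# per cycle. All numbers are made up for illustration.

def scan_test_time(total_test_bits, num_terminals, bits_per_terminal_per_cycle=1):
    """Cycles needed to move total_test_bits through the available terminals."""
    bandwidth = num_terminals * bits_per_terminal_per_cycle
    return total_test_bits / bandwidth

if __name__ == "__main__":
    total_bits = 10_000_000            # stimulus + response volume (hypothetical)
    terminals = 16
    unidirectional = scan_test_time(total_bits, terminals, bits_per_terminal_per_cycle=1)
    sbs = scan_test_time(total_bits, terminals, bits_per_terminal_per_cycle=2)
    print(unidirectional / sbs)        # 2.0 -> roughly half the test time with SBS
```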

    A Multi-layer FPGA Framework Supporting Autonomous Runtime Partial Reconfiguration

    Partial reconfiguration is a capability recently provided by several Field Programmable Gate Array (FPGA) vendors; it involves altering part of the programmed design within an SRAM-based FPGA at run-time. In this dissertation, a Multilayer Runtime Reconfiguration Architecture (MRRA) is developed, evaluated, and refined for autonomous runtime partial reconfiguration of FPGA devices. Under the proposed MRRA paradigm, FPGA configurations can be manipulated at runtime using on-chip resources. Operations are partitioned into Logic, Translation, and Reconfiguration layers, along with a standardized set of Application Programming Interfaces (APIs). At each level, resource details are encapsulated and managed for efficiency and portability during operation. An MRRA mapping theory is developed to link general logic-function and area-allocation information to device-specific physical configuration data using mathematical data structures and physical constraints. In certain scenarios, configuration bitstream data can be read and modified directly for fast operations, relying on the use of similar logic functions and common interconnection resources for communication. A corresponding logic control flow is also developed to make the entire process autonomous. Several prototype MRRA systems are developed on a Xilinx Virtex II Pro platform. The on-chip PowerPC core and block RAM are employed to manage control operations, while multiple physical interfaces establish and supplement autonomous reconfiguration capabilities. Area, speed, and power optimization techniques are developed based on the Xilinx prototype. Evaluations and analyses of these prototypes and techniques are performed on a number of benchmark and hashing-algorithm case studies. The results indicate that, across a variety of test benches, up to a 70% reduction in resource utilization, up to a 50% improvement in power consumption, and up to a 10-fold increase in run-time performance are achieved using the developed architecture and approaches compared with the Xilinx baseline reconfiguration flow. Finally, a Genetic Algorithm (GA) for an FPGA fault-tolerance case study is evaluated as a culminating high-level application running on this architecture. It demonstrates a hardware and software infrastructure that enables an FPGA to reconfigure itself dynamically and efficiently under the control of a soft microprocessor core instantiated within the FPGA fabric. Such a system provides the observed benefits of intelligent control, fast reconfiguration, and low overhead.
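
    As a conceptual sketch of the layering described above, the code below models Logic, Translation, and Reconfiguration layers as three cooperating objects with a narrow interface between them. The class and method names are hypothetical illustrations of the idea, not the MRRA implementation or the Xilinx configuration API.

```python
# Conceptual sketch only: a three-layer split in which the top layer knows
# about logic functions, the middle layer maps logical sites to configuration
# addresses, and the bottom layer touches raw configuration data.
# All names are hypothetical illustrations, not MRRA code.

class ReconfigurationLayer:
    """Lowest layer: reads/writes raw configuration frames."""
    def __init__(self):
        self.frames = {}                        # frame address -> configuration data
    def read_frame(self, addr):
        return self.frames.get(addr, b"\x00")
    def write_frame(self, addr, data):
        self.frames[addr] = data

class TranslationLayer:
    """Middle layer: maps logical resources (e.g., a LUT site) to frame addresses."""
    def __init__(self, reconfig):
        self.reconfig = reconfig
    def site_to_frame(self, site):
        return hash(site) % 1024                # placeholder mapping, purely illustrative
    def set_lut(self, site, lut_contents):
        self.reconfig.write_frame(self.site_to_frame(site), lut_contents)

class LogicLayer:
    """Top layer: deals only with logic functions and area allocation."""
    def __init__(self, translation):
        self.translation = translation
    def place_function(self, site, truth_table_bytes):
        self.translation.set_lut(site, truth_table_bytes)

if __name__ == "__main__":
    logic = LogicLayer(TranslationLayer(ReconfigurationLayer()))
    logic.place_function("SLICE_X10Y20", bytes([0b01101001]))  # place a small LUT truth table
```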