
    A generic debug interface for IP-integrated assertions

    Nowadays, electronic systems design requires fast time-to-market and solid verification throughout the entire design flow. Many concepts have been researched to raise the level of abstraction during the design entry phase, of which model-based design is the most promising. Assertion-based verification enables the developer to specify properties of the design and to be notified when these are violated. Assertions are common during the development and simulation of electronic products but are often not included in the final silicon. In this thesis, a UML-based model defined at Infineon Technologies for capturing design specification information and generating code automatically from templates is extended to allow the description of an abstract, debuggable assertion interface for silicon assertions. With the help of the assertion interface, it is possible to verify correct module integration and to monitor IP-internal assertion checker results at runtime. In addition, the code-generation templates for the assertion interface model are described, and an example system is introduced to validate the developed concepts.
    Ilmenau, Techn. Univ., Diplomarbeit, 200
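    The thesis abstract gives no code, but its core idea, exposing IC-internal assertion results to the surrounding system through a generic interface, can be sketched in software. The Python sketch below is purely illustrative and uses invented names (AssertionChecker, AssertionInterface); it is not the thesis's UML model or its generated code. Each checker latches a sticky failure bit, and the interface packs all results into one status word readable at runtime.

```python
class AssertionChecker:
    """Latches a sticky failure flag, like a silicon assertion checker."""
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate  # callable(state) -> bool, True = property holds
        self.failed = False         # sticky: stays set once the property is violated

    def evaluate(self, state):
        if not self.predicate(state):
            self.failed = True


class AssertionInterface:
    """Generic debug interface: exposes checker results as one status word."""
    def __init__(self, checkers):
        self.checkers = checkers

    def clock(self, state):
        for c in self.checkers:
            c.evaluate(state)

    def read_status(self):
        # Bit i of the status word corresponds to checker i (1 = violated).
        return sum(c.failed << i for i, c in enumerate(self.checkers))


# Example property: a FIFO must never be full and empty at the same time.
checkers = [AssertionChecker("fifo_full_empty",
                             lambda s: not (s["full"] and s["empty"]))]
iface = AssertionInterface(checkers)
iface.clock({"full": True, "empty": True})       # violation occurs
print(f"status word: {iface.read_status():#x}")  # -> 0x1
```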

    Achieving Functional Correctness in Large Interconnect Systems.

    In today's semiconductor industry, large chip-multiprocessors and systems-on-chip are being developed, integrating a large number of components on a single chip. The sheer size of these designs and the intricacy of the communication patterns they exhibit have propelled the development of network-on-chip (NoC) interconnects as the basis for the communication infrastructure in these systems. Faced with the interconnect's growing size and complexity, several challenges hinder its effective validation. During the interconnect's development, the functional verification process relies heavily on the use of emulation and post-silicon validation platforms. However, detecting and debugging errors on these platforms is a difficult endeavour due to the limited observability, and in turn the low verification capabilities, they provide. Additionally, given the inherent incompleteness of design-time validation efforts, the potential for design bugs escaping into the interconnect of a released product is also a concern, as these bugs can threaten the viability of the entire system. This dissertation provides solutions to enable the development of functionally correct interconnect designs. We first address the challenges encountered during design-time verification efforts by providing two complementary mechanisms that allow emulation and post-silicon verification frameworks to capture a detailed overview of the functional behaviour of the interconnect. Our first solution re-purposes the contents of in-flight traffic to log debug data from the interconnect's execution. This approach enables the validation of the interconnect using synthetic traffic workloads, while attaining over 80% observability of the routes followed by packets and capturing valuable debugging information. We also develop an alternative mechanism that boosts observability by taking periodic snapshots of execution, thus extending the verification capabilities to run both synthetic traffic and real-application workloads. The collected snapshots enhance detection and debugging support; they provide observability of over 50% of packets and reconstruct at least half of each of their routes. Moreover, we develop error detection and recovery solutions to address the threat of design bugs escaping into the interconnect's runtime operation. Our runtime techniques can overcome communication errors without needing to store replicate copies of all in-flight packets, thereby achieving correctness at minimal area costs.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116741/1/rawanak_1.pd
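    As a rough illustration of the snapshot mechanism described above, the hedged Python sketch below (all names invented, not taken from the dissertation) gives each router a bounded log of forwarded packet IDs, drains the logs at a fixed period, and rebuilds per-packet routes offline; the bounded storage is why reconstruction is only partial.

```python
from collections import defaultdict

class RouterLog:
    """Bounded per-router log: packet IDs seen since the last snapshot."""
    def __init__(self, router_id, capacity=2):
        self.router_id, self.capacity, self.entries = router_id, capacity, []

    def record(self, packet_id):
        if len(self.entries) < self.capacity:  # bounded, as on-chip storage is
            self.entries.append(packet_id)

    def drain(self):
        snapshot, self.entries = self.entries, []
        return snapshot

def reconstruct(events, routers, snapshot_period):
    """Replay (cycle, router_id, packet_id) hop events, draining all logs
    every `snapshot_period` cycles; returns partial routes per packet.
    Hop order within one interval is approximated by router order here;
    a real scheme would timestamp the log entries."""
    routes = defaultdict(list)
    last_cycle = 0
    for cycle, rid, pid in events:
        if cycle // snapshot_period != last_cycle // snapshot_period:
            for r in routers.values():
                for p in r.drain():
                    routes[p].append(r.router_id)
        last_cycle = cycle
        routers[rid].record(pid)
    for r in routers.values():  # final drain at end of execution
        for p in r.drain():
            routes[p].append(r.router_id)
    return routes

routers = {i: RouterLog(i) for i in range(4)}
events = [(0, 0, 7), (1, 1, 7), (2, 1, 8), (5, 2, 7), (6, 3, 8)]
print(dict(reconstruct(events, routers, snapshot_period=4)))
# -> {7: [0, 1, 2], 8: [1, 3]}
```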

    Reining in the Functional Verification of Complex Processor Designs with Automation, Prioritization, and Approximation

    Our quest for faster and more efficient computing devices has led us to processor designs of enormous complexity. As a result, functional verification, which is the process of ascertaining the correctness of a processor design, takes up a lion's share of the time and cost spent on making processors. Unfortunately, functional verification is only a best-effort process that cannot completely guarantee the correctness of a design, often resulting in defective products that may have devastating consequences. Functional verification, as practiced today, is unable to cope with the complexity of current and future processor designs. In this dissertation, we identify extensive automation as the essential step towards scalable functional verification of complex processor designs. Moreover, recognizing that a complete guarantee of design correctness is impossible, we argue for systematic prioritization and prudent approximation to realize fast and far-reaching functional verification solutions. We partition the functional verification effort into three major activities: planning and test generation, test execution and bug detection, and bug diagnosis. Employing a perspective we refer to as the automation, prioritization, and approximation (APA) approach, we develop solutions that tackle challenges across these three major activities. In pursuit of efficient planning and test generation for modern systems-on-chips, we develop an automated process for identifying high-priority design aspects for verification. In addition, we enable the creation of compact test programs, which, in our experiments, were up to 11 times smaller than what would otherwise be available at the beginning of the verification effort. To tackle challenges in test execution and bug detection, we develop a group of solutions that enable the deployment of automatic and robust mechanisms for catching design flaws during high-speed functional verification. By trading accuracy for speed, these solutions allow us to unleash functional verification platforms that are over three orders of magnitude faster than traditional platforms, unearthing design flaws that are otherwise impossible to reach. Finally, we address challenges in bug diagnosis through a solution that fully automates the process of pinpointing flawed design components after detecting an error. Our solution, which identifies flawed design units with over 70% accuracy, eliminates weeks of diagnosis effort for every detected error.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/137057/1/birukw_1.pd

    Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge.

    Today, design verification is by far the most resource- and time-consuming activity of any new digital integrated circuit development. Within this area, the vast majority of the verification effort in industry relies on simulation platforms, which are implemented either in hardware or software. A "simulator" includes a model of each component of a design and has the capability of simulating its behavior under any input scenario provided by an engineer. Thus, simulators are deployed to evaluate the behavior of a design under as many input scenarios as possible and to identify and debug all incorrect functionality. Two features are critical for simulators to make the validation effort effective: performance and checking/debugging capabilities. A wide range of simulator platforms is available today: at one end of the spectrum are software-based simulators, which provide a very rich software infrastructure for checking and debugging the design's functionality but execute at only 1-10 simulation cycles per second (while actual chips operate at GHz speeds). At the other end of the spectrum are hardware-based platforms, such as accelerators, emulators, and even prototype silicon chips, which provide performance higher by 4 to 9 orders of magnitude, at the cost of very limited or non-existent checking/debugging capabilities. As a result, simulation-based validation today is crippled: one can have either satisfactory performance on hardware-accelerated platforms or critical checking/debugging infrastructure on software simulators, but not both. This dissertation brings together these two ends of the spectrum by presenting solutions that offer high-performance simulation with effective checking and debugging capabilities. Specifically, it addresses the performance challenge of software simulators by leveraging inexpensive off-the-shelf graphics processors as massively parallel execution substrates and then exposing the parallelism inherent in the design model to that architecture. For hardware-based platforms, the dissertation provides solutions that offer enhanced checking and debugging capabilities by abstracting the relevant data to be logged during simulation so as to minimize the cost of collection, transfer, and processing. Altogether, the contributions of this dissertation have the potential to solve the challenge of digital design verification by enabling effective high-performance simulation-based validation.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99781/1/dchatt_1.pd
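    The dissertation's GPU solutions are far more involved, but the underlying idea, that independent gates in one logic level of a levelized netlist can all be evaluated by a single data-parallel operation, can be hinted at with a small NumPy sketch. The netlist encoding here is invented for illustration and is not the dissertation's representation:

```python
import numpy as np

# Levelized gate-netlist simulation in a data-parallel style: all gates in
# one logic level are independent, so one vectorized operation evaluates the
# whole level at once, which is the parallelism a GPU substrate exploits at
# much larger scale. Each level is (input_a_indices, input_b_indices,
# output_indices), and every gate here is a 2-input NAND.
def simulate_cycle(wires, levels):
    for in_a, in_b, out in levels:
        wires[out] = ~(wires[in_a] & wires[in_b])  # all gates of this level
    return wires

# Tiny example: wires 0-2 are primary inputs; two levels of NAND gates.
wires = np.zeros(6, dtype=bool)
wires[:3] = [True, True, False]
levels = [
    (np.array([0]), np.array([1]), np.array([3])),           # w3 = ~(w0 & w1)
    (np.array([3, 3]), np.array([2, 1]), np.array([4, 5])),  # w4, w5 in parallel
]
print(simulate_cycle(wires, levels))  # -> [ True  True False False  True  True]
```

    Extending each wire to a vector of values, one entry per stimulus, would additionally parallelize across input scenarios in the same way.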

    A study on Machine Learning-based Hardware Bug Localization

    Simulation-based verification is an essential technique for ensuring the correct functionality of any digital integrated circuit design before it goes to silicon. One of the major challenges of running simulation-based verification on complex designs is the tradeoff between simulation time and the time taken to localize a failure to its root cause. Simulation run times can be very high when many checkers are active per cycle of execution. However, when fewer checkers are turned on, the amount of manual debug time increases: after a failure, the verification engineer has to analyze the failure by hand, turn on the more granular checkers individually, and re-simulate, or invest substantial time, memory, and resources manually inspecting the simulation cycle dumps preceding the failure, which is impractical given the complexity of current designs. Machine learning has emerged as a popular technique for constructing mathematical models that learn the expected patterns in a given dataset. To address the aforementioned trade-off, this study investigates using the failing signatures from a few active high-level checkers during simulation to train a machine learning model that predicts the location of the bug in the design. This information is in turn used to turn on the relevant checkers in the design before re-simulation. Other methods that analyze design signals after a failure to predict the bug location were also studied. The idea is implemented and tested on a MIPS processor with a total of ~700 bugs injected across 15 different units, which the model distinguishes with good accuracy.
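    A minimal sketch of the study's central idea, training a classifier on checker failure signatures to predict the buggy unit, might look as follows in Python with scikit-learn. The data below is synthetic, and the model choice (a random forest) is an assumption made here; the abstract does not specify the model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the study's data: each row is the failure signature
# of one failing simulation (1 = that high-level checker fired), and the
# label is the design unit containing the injected bug (15 units, as in the
# study's MIPS experiment).
rng = np.random.default_rng(1)
n_runs, n_checkers, n_units = 3000, 32, 15
unit_of_run = rng.integers(0, n_units, n_runs)
unit_bias = rng.random((n_units, n_checkers)) < 0.15   # per-unit checker mask
X = (rng.random((n_runs, n_checkers)) < 0.5) & unit_bias[unit_of_run]

X_tr, X_te, y_tr, y_te = train_test_split(X, unit_of_run, test_size=0.25,
                                          random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"unit-localization accuracy: {model.score(X_te, y_te):.2f}")
# The predicted unit then tells the flow which fine-grained checkers to
# enable before re-simulation.
```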

    DiAMOND: Distributed Alteration of Messages for On-Chip Network Debug

    During emulation and post-silicon validation of networks-on-chip (NoCs), lack of observability of internal operations hinders the detection and debugging of functional bugs. Verifying the correctness of the control-flow portion of the NoC requires tests that exercise its functionality while abstracting the data content of traffic. We propose a methodology where network packets are repurposed for the storage of debug information collected during execution. Debug data pertaining to each packet is collected at the routers along its path and stored by replacing the packet's original data content. Our solution is coupled with a detection scheme consisting of small checkers that monitor execution and flag bugs. Upon bug detection, we analyze the debug information to reconstruct network traffic. We also provide relevant statistics for debugging, such as packet interactions and packet latencies, per router. In our experiments, this approach allows us to reconstruct over 80% of the packets' routes. Moreover, the obtained statistics facilitate debugging erroneous network behavior and identifying performance bottlenecks.
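    A toy Python model of the payload-repurposing idea (names and the one-cycle-per-hop timing are invented for illustration, not DiAMOND's actual format) shows how overwriting a packet's data flits with per-hop records makes the route and per-hop latencies fall out at the destination:

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    pid: int
    route: list                                   # routers it will traverse
    payload: list = field(default_factory=list)   # original data content

def traverse(pkt, clock):
    """Each router overwrites payload flits with its own debug record,
    repurposing the packet's data content as in-network debug storage."""
    pkt.payload.clear()                           # original data is sacrificed
    for router_id in pkt.route:
        pkt.payload.append((router_id, clock))    # (hop, arrival-time) record
        clock += 1                                # one cycle per hop (toy model)
    return pkt

# Reconstruction at the destination: the route and per-hop timestamps are
# read straight out of the logged records.
pkt = traverse(Packet(pid=7, route=[3, 6, 10, 11], payload=["d0", "d1"]),
               clock=100)
print("reconstructed route:", [r for r, _ in pkt.payload])  # -> [3, 6, 10, 11]
```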

    FPGA-Assisted Assertion-Based Verification Platform

    In this paper, a field-programmable gate array (FPGA)-assisted verification platform is devised to enhance the assertion-based verification methodology, addressing the demand for integrated circuits with advanced features that must be delivered within a tight time-to-market. The concept of a SystemVerilog Assertion (SVA) checker generator is introduced to translate non-synthesizable verification code into hardware, the so-called assertion checker, in Verilog. A lookup table, which maps SVA operators to their corresponding synthesizable Verilog code, was developed to generate assertion checkers; a checker produces a single-bit 1 when its assertion fails. A collection module, implemented using a memory block and an arbiter, was devised to be simple and fast enough to collect assertion results from the assertion checkers. Since an assertion checker can produce a result at any time, the arbiter is required to act as an interface between the assertion checkers and the collection module. Case studies were conducted on three proof-of-concept designs, a first-in-first-out (FIFO) buffer, an up-down counter, and a Context Adaptive Variable Length Coding (CAVLC) module, to evaluate the effectiveness of the proposed FPGA-assisted verification platform. The case studies show that the proposed platform works correctly. We also evaluated the method's area utilization in adaptive logic modules (ALMs). The results indicate that simulation-based verification time can be reduced by as much as 50% for complex VLSI designs. Thus, implementing assertions in hardware such as an FPGA is a solution that alleviates the issue of long simulation times.
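    The lookup-table mechanism can be sketched in a few lines of Python. The two-entry table below is an invented, drastically simplified stand-in for the paper's SVA-operator mapping; it only demonstrates how a template lookup can emit a synthesizable checker with a single-bit fail output:

```python
# Minimal sketch of the checker-generator idea: a lookup table maps a tiny,
# invented subset of SVA constructs to synthesizable Verilog templates. The
# real generator covers far more operators; this only shows the mechanism.
SVA_LUT = {
    # "a |-> b": whenever a holds, b must hold in the same cycle.
    "overlap_implication": "assign fail = {a} & ~{b};",
    # "a |=> b": whenever a holds, b must hold one cycle later.
    "next_implication": (
        "reg a_d;\n"
        "    always @(posedge clk) a_d <= {a};\n"
        "    assign fail = a_d & ~{b};"
    ),
}

def generate_checker(name, op, a, b):
    body = SVA_LUT[op].format(a=a, b=b)
    return (f"module {name}(input clk, input {a}, input {b}, "
            f"output fail);\n    {body}\nendmodule\n")

# Emit a checker for "req |=> gnt"; fail goes high when the property breaks.
print(generate_checker("chk_req_gnt", "next_implication", "req", "gnt"))
```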

    A Test Vector Minimization Algorithm Based on Delta Debugging for Post-Silicon Validation of PCIe Rootport

    In silicon hardware design, such as the design of PCIe devices, design verification is an essential part of the design process, whereby the devices are subjected to a series of tests that verify their functionality. However, manual debugging is still widely used in post-silicon validation and is a major bottleneck in the validation process, because a large number of test vectors must be analyzed, which slows the process down. To solve this problem, a test vector minimization algorithm is proposed to eliminate redundant test vectors that do not contribute to reproducing a test failure, thereby improving debug throughput. The proposed methodology is inspired by the delta debugging algorithm, which has been used in automated software debugging but not in post-silicon hardware debugging. The minimizer operates on the principle of binary partitioning of the test vectors, iteratively running each subset (or the complement of a subset) on a post-silicon System Under Test (SUT) to identify and eliminate redundant test vectors. Test results using vector sets containing deliberately introduced erroneous test vectors show that the minimizer is able to isolate the erroneous vectors. In test cases containing up to 10,000 test vectors, the minimizer requires about 16 ns per test vector when only one erroneous test vector is present. In a test case with 1,000 vectors including erroneous vectors, the same minimizer requires about 140 μs per injected erroneous test vector. The minimizer's CPU consumption is thus significantly smaller than the typical run time of a test on the SUT. The factors that most significantly impact the performance of the algorithm are the number of erroneous test vectors and their distribution (spacing); the effects of the total number of test vectors and the positions of the erroneous vectors are relatively minor. The minimization algorithm is therefore most effective when there are only a few erroneous test vectors within a large set.
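    The minimizer's binary-partitioning loop is essentially the classic ddmin scheme from delta debugging. The following Python sketch is a generic ddmin, not the thesis's exact implementation, and fails() stands in for a run on the SUT; it shows how subsets and their complements are retested until only failure-contributing vectors remain:

```python
def minimize(vectors, fails):
    """Delta-debugging-style minimizer: returns a subset of test vectors
    that still reproduces the failure, discarding vectors that do not
    contribute. `fails(subset)` reruns the subset on the SUT and reports
    whether the failure reproduces."""
    n = 2                                            # start with a binary split
    while len(vectors) >= 2:
        chunk = len(vectors) // n
        subsets = [vectors[i:i + chunk] for i in range(0, len(vectors), chunk)]
        for s in subsets:
            if fails(s):                             # failure isolated to a subset
                vectors, n = s, 2
                break
            comp = [v for v in vectors if v not in s]
            if comp and fails(comp):                 # or to a complement
                vectors, n = comp, max(n - 1, 2)
                break
        else:
            if n >= len(vectors):                    # cannot split further: done
                break
            n = min(n * 2, len(vectors))             # refine the partition
    return vectors

# Toy SUT: the failure reproduces iff both erroneous vectors are present.
fails = lambda vs: {"bad1", "bad2"} <= set(vs)
tests = [f"v{i}" for i in range(98)] + ["bad1", "bad2"]
print(minimize(tests, fails))                        # -> ['bad1', 'bad2']
```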

    Securing IEEE P1687 On-chip Instrumentation Access Using PUF

    As the complexity of VLSI designs grows, the amount of embedded instrumentation in system-on-chip designs increases at an exponential rate. Such structures serve various purposes throughout the life-cycle of VLSI circuits, e.g. in post-silicon validation and debug, production test and diagnosis, as well as in-field test and maintenance. Reliable access mechanisms for embedded instruments are therefore key to rapid chip development and secure system maintenance. Reconfigurable scan networks defined by IEEE Std. P1687 are emerging as a scalable and cost-effective access medium for on-chip instrumentation. However, the accessibility offered by reconfigurable scan networks conflicts with the security and safety requirements of embedded instrumentation, which is an integral system component that remains functional throughout the lifetime of a chip. To prevent harmful activities, such as tampering with safety-critical systems, and to reduce the risk of intellectual property infringement, access to embedded instrumentation requires protection. This thesis provides a novel Physical Unclonable Function (PUF)-based secure access method for on-chip instruments, which enhances the security of the IJTAG network at low hardware cost and with reduced routing congestion.
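    The thesis abstract does not spell out its protocol, but a PUF-gated unlock can be sketched as a challenge-response exchange. In the hedged Python model below, all names are invented and a keyed hash stands in for the physically unclonable function; a real PUF derives its mapping from process variation rather than a stored key, and challenge storage would differ on silicon:

```python
import hashlib
import secrets

CHIP_SECRET = secrets.token_bytes(16)  # models per-chip process variation

def puf(challenge):
    # Stand-in for the silicon PUF: a keyed hash models the chip-unique,
    # unclonable challenge->response mapping.
    return hashlib.sha256(CHIP_SECRET + challenge).digest()[:8]

# Enrollment (done once, in a trusted environment): record challenge-response
# pairs in a secure off-chip database given only to authorized tools.
challenges = [secrets.token_bytes(8) for _ in range(16)]
secure_crp_db = {c: puf(c) for c in challenges}

class ScanNetworkLock:
    """Gates IJTAG/P1687 instrument access: pick an enrolled challenge, let
    the on-chip PUF recompute the response, and compare the tool's answer."""
    def issue_challenge(self):
        self.challenge = secrets.choice(challenges)
        return self.challenge

    def try_unlock(self, response):
        return response == puf(self.challenge)  # on-chip PUF recomputes

lock = ScanNetworkLock()
c = lock.issue_challenge()
print("authorized tool:", lock.try_unlock(secure_crp_db[c]))  # True
print("attacker guess: ", lock.try_unlock(b"\x00" * 8))       # False
```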