
    A Test Vector Minimization Algorithm Based On Delta Debugging For Post-Silicon Validation Of Pcie Rootport

    Get PDF
    In silicon hardware design, such as the design of PCIe devices, design verification is an essential part of the design process, whereby the devices are subjected to a series of tests that verify their functionality. However, manual debugging is still widely used in post-silicon validation and is a major bottleneck in the validation process, because a large number of test vectors has to be analyzed, which slows the process down. To address this, a test vector minimizer algorithm is proposed that eliminates redundant test vectors which do not contribute to reproducing a test failure, thereby improving debug throughput. The proposed methodology is inspired by the Delta Debugging algorithm, which has been used in automated software debugging but not in post-silicon hardware debugging. The minimizer operates on the principle of binary partitioning of the test vectors, iteratively testing each subset (or its complement) on a post-silicon System-Under-Test (SUT) to identify and eliminate redundant test vectors. Test results using vector sets containing deliberately introduced erroneous test vectors show that the minimizer is able to isolate the erroneous vectors. In test cases containing up to 10,000 test vectors, the minimizer requires about 16 ns per test vector when only one erroneous test vector is present. In a test case with 1,000 vectors including erroneous vectors, the same minimizer requires about 140 µs per injected erroneous test vector. Thus, the minimizer's CPU consumption is significantly smaller than the typical runtime of a test on the SUT. The factors that most significantly impact the performance of the algorithm are the number of erroneous test vectors and their distribution (spacing); the effects of the total number of test vectors and of the position of the erroneous vectors are comparatively minor. The minimization algorithm is therefore most effective when only a few erroneous test vectors are present in a large set of test vectors.
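    The binary-partitioning loop described above can be sketched roughly as follows. This is a minimal, illustrative ddmin-style reduction, not the thesis's exact implementation; run_on_sut is a hypothetical callback that replays a subset of test vectors on the System-Under-Test and reports whether the failure still reproduces.

```python
# Minimal sketch of a binary-partitioning (ddmin-style) test vector minimizer.
# `run_on_sut(subset)` is a hypothetical placeholder returning True when the
# failure still reproduces with only that subset of vectors applied.

def minimize(vectors, run_on_sut):
    """Return a reduced list of vectors that still reproduces the failure."""
    granularity = 2
    while len(vectors) >= 2:
        chunk = max(1, len(vectors) // granularity)
        subsets = [vectors[i:i + chunk] for i in range(0, len(vectors), chunk)]
        reduced = False

        for i, subset in enumerate(subsets):
            complement = [v for j, s in enumerate(subsets) if j != i for v in s]
            if run_on_sut(subset):                      # failure reproduces on the subset alone
                vectors, granularity, reduced = subset, 2, True
                break
            if len(subsets) > 2 and run_on_sut(complement):   # or on its complement
                vectors, granularity, reduced = complement, max(granularity - 1, 2), True
                break

        if not reduced:
            if granularity >= len(vectors):             # cannot partition any finer
                break
            granularity = min(len(vectors), granularity * 2)
    return vectors
```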

    Automated Debugging Methodology for FPGA-based Systems

    Get PDF
    Electronic devices make up a vital part of our lives, from mobile phones, laptops, and computers to home automation systems, to name a few. Modern designs comprise billions of transistors. With this evolution, however, ensuring that the devices fulfill the designer's expectations under variable conditions has also become a great challenge, requiring a lot of design time and effort. Whenever an error is encountered, the process is restarted; hence, it is desirable to minimize the number of spins required to achieve an error-free product, as each spin results in a loss of time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherently limited visibility of the hardware. Instead of software simulation of the design in the pre-silicon phase, post-silicon techniques permit designers to verify the functionality through physical implementations of the design. The main benefit of this methodology is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, allowing designers to validate their design more exhaustively. This thesis presents five main contributions to enable a fast and automated debugging solution for reconfigurable hardware. During the research work, we used an obstacle avoidance system for robotic vehicles as a use case to illustrate how to apply the proposed debugging solution in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, which permits cycle-accurate replay. This methodology ensures capturing permanent as well as intermittent errors in the implemented design. The contribution also describes a solution to enhance hardware observability: it is proposed to utilize processor-configurable concentration networks, to employ debug data compression to transmit the data more efficiently, and to partially reconfigure the debugging system at run-time to save the time required for design re-compilation and to preserve timing closure. The second contribution presents a solution for communication-centric designs, and solutions for designs with multiple clock domains are also discussed. The third contribution presents a priority-based signal selection methodology to identify the signals that are most helpful during the debugging process; a connectivity generation tool is also presented that can map the identified signals to the debugging system. The fourth contribution presents an automated error detection solution that can capture permanent as well as intermittent errors without continuous monitoring of debugging data; the proposed solution works even for designs without a golden reference. The fifth contribution proposes to use artificial intelligence for post-silicon debugging: we present the idea of using a recurrent neural network for debugging when a golden reference is available for training the network, and extend the idea to designs where a golden reference is not present.
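    The fifth contribution's golden-reference idea can be pictured with a small sketch like the one below. It is an illustrative stand-in, not the thesis's implementation: a tiny recurrent network (here a GRU in PyTorch) is trained to predict the next sample of the golden trace, and cycles where the observed trace deviates beyond a chosen threshold are flagged as suspect; the model size and threshold are assumptions.

```python
# Illustrative sketch only: recurrent prediction of a golden trace and
# flagging of cycles where the observed trace deviates from the prediction.
import torch
import torch.nn as nn

class TracePredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, 1)
        h, _ = self.rnn(x)
        return self.out(h)                   # next-sample prediction per cycle

def train_on_golden(model, golden, epochs=200, lr=1e-2):
    golden = torch.tensor(golden, dtype=torch.float32).view(1, -1, 1)
    x, y = golden[:, :-1], golden[:, 1:]
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

def flag_suspect_cycles(model, observed, threshold=0.1):
    obs = torch.tensor(observed, dtype=torch.float32).view(1, -1, 1)
    with torch.no_grad():
        pred = model(obs[:, :-1])
    err = (pred - obs[:, 1:]).abs().squeeze()
    return (err > threshold).nonzero().flatten().tolist()  # suspect cycle indices
```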

    Automating Logic Transformations With Approximate SPFDs

    Full text link

    Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge.

    Full text link
    Today, design verification is by far the most resource- and time-consuming activity of any new digital integrated circuit development. Within this area, the vast majority of the verification effort in industry relies on simulation platforms, which are implemented either in hardware or software. A "simulator" includes a model of each component of a design and has the capability of simulating its behavior under any input scenario provided by an engineer. Thus, simulators are deployed to evaluate the behavior of a design under as many input scenarios as possible and to identify and debug all incorrect functionality. Two features are critical for simulators to make the validation effort effective: performance and checking/debugging capabilities. A wide range of simulator platforms is available today: at one end of the spectrum there are software-based simulators, providing a very rich software infrastructure for checking and debugging the design's functionality, but executing only at 1-10 simulation cycles per second (while actual chips operate at GHz speeds). At the other end of the spectrum, there are hardware-based platforms, such as accelerators, emulators, and even prototype silicon chips, providing performance that is 4 to 9 orders of magnitude higher, at the cost of very limited or non-existent checking/debugging capabilities. As a result, today, simulation-based validation is crippled: one can have either satisfactory performance on hardware-accelerated platforms or critical infrastructure for checking/debugging on software simulators, but not both. This dissertation brings together these two ends of the spectrum by presenting solutions that offer high-performance simulation with effective checking and debugging capabilities. Specifically, it addresses the performance challenge of software simulators by leveraging inexpensive off-the-shelf graphics processors as massively parallel execution substrates, and then exposing the parallelism inherent in the design model to that architecture. For hardware-based platforms, the dissertation provides solutions that offer enhanced checking and debugging capabilities by abstracting the relevant data to be logged during simulation so as to minimize the cost of collection, transfer, and processing. Altogether, the contributions of this dissertation have the potential to solve the challenge of digital design verification by enabling effective high-performance simulation-based validation.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99781/1/dchatt_1.pd
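    The GPU-acceleration idea rests on the observation that all gates within one level of a levelized netlist can be evaluated independently. The sketch below shows that data-parallel step on the CPU with NumPy arrays standing in for the GPU substrate; the netlist encoding (parallel arrays of gate types and fanin indices) is an assumption made for illustration.

```python
import numpy as np

AND, OR, XOR, NOT = 0, 1, 2, 3

def eval_level(values, gate_type, fanin_a, fanin_b):
    # Evaluate every gate of one level in a single vectorized step,
    # mirroring how a massively parallel device would process the level.
    a, b = values[fanin_a], values[fanin_b]
    out = np.empty(len(gate_type), dtype=bool)
    out[gate_type == AND] = (a & b)[gate_type == AND]
    out[gate_type == OR] = (a | b)[gate_type == OR]
    out[gate_type == XOR] = (a ^ b)[gate_type == XOR]
    out[gate_type == NOT] = (~a)[gate_type == NOT]
    return out

def simulate_cycle(net_values, levels):
    # net_values: boolean value of every net (primary inputs pre-filled).
    # levels: list of (gate_type, fanin_a, fanin_b, output_net) index arrays,
    # ordered so that each level only reads nets computed earlier.
    values = net_values.copy()
    for gate_type, fanin_a, fanin_b, output_net in levels:
        values[output_net] = eval_level(values, gate_type, fanin_a, fanin_b)
    return values
```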

    Quantum Algorithm for Variant Maximum Satisfiability

    Get PDF
    In this paper, we propose a novel quantum algorithm for the maximum satisfiability problem. Satisfiability (SAT) is the problem of finding an assignment of input variables that makes a given Boolean function evaluate to TRUE, or proving that no such satisfying assignment exists. For a product-of-sums (POS) SAT problem, we propose a novel quantum algorithm for maximum satisfiability (MAX-SAT), which returns the maximum number of OR terms that can be satisfied for a SAT-unsatisfiable function, providing information on how far the given Boolean function is from being satisfiable. We use Grover's algorithm with a new block, called a quantum counter, in the oracle circuit. The proposed circuit can be adapted to various forms of satisfiability expressions and several satisfiability-like problems. Using the quantum counter and mirrors for the SAT terms reduces the need for ancilla qubits and removes the need for a large Toffoli gate. For a Boolean function with T terms, our circuit reduces the number of ancilla qubits from T to approximately ⌈log₂ T⌉ + 1. We analyze and compare the quantum cost of the traditional oracle design with that of our design, which achieves a lower quantum cost.
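    As a purely classical point of reference for what the quantum routine computes, the sketch below counts how many OR terms of a POS formula a candidate assignment satisfies (the quantity the quantum counter accumulates) and evaluates the ⌈log₂ T⌉ + 1 counter width quoted above; the clause encoding is an assumption made for illustration.

```python
import math

# POS formula: list of clauses; each clause is a list of signed literals,
# e.g. [1, -3] means (x1 OR NOT x3).
def satisfied_terms(clauses, assignment):
    # Count the OR terms satisfied by a candidate assignment.
    count = 0
    for clause in clauses:
        if any((assignment[abs(l)] if l > 0 else not assignment[abs(l)]) for l in clause):
            count += 1
    return count

def counter_width(num_terms):
    # Approximate ancilla-qubit count of the quantum counter: ceil(log2 T) + 1.
    return math.ceil(math.log2(num_terms)) + 1

clauses = [[1, 2], [-1, 2], [1, -2], [-1, -2]]     # unsatisfiable POS formula
best = max(satisfied_terms(clauses, {1: a, 2: b})
           for a in (False, True) for b in (False, True))
print(best, counter_width(len(clauses)))            # 3 satisfied terms, 3 counter qubits
```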

    Hybrid Verification for Analog and Mixed-signal Circuits

    Get PDF
    With increasing design complexity and reliability requirements, analog and mixed-signal (AMS) verification manifests itself as a key bottleneck. While formal methods and machine learning have been proposed for AMS verification, these two types of techniques suffer from their own limitations, the former being limited chiefly by scalability and the latter by the inherent errors of learning-based models. We present a new direction in AMS verification by proposing a hybrid formal/machine-learning-based verification technique (HFMV) to combine the best of the two worlds. HFMV builds formalism on top of a machine learning model to verify AMS circuits efficiently while meeting a user-specified confidence level. Guided by formal checks, HFMV intelligently explores the high-dimensional parameter space of a given design by iteratively improving the machine learning model. As a result, it leads to accurate failure prediction in the case of a failing circuit or a reliable pass decision in the case of a good circuit. Our experimental results demonstrate that the proposed HFMV approach is capable of identifying hard-to-find failures that are completely missed by a huge number of random simulation samples, while significantly cutting down training sample size and verification cycle time.
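    The iterative loop of formally guided model refinement can be pictured roughly as follows. The surrogate model (a Gaussian process from scikit-learn), the acquisition rule, and the confidence test are generic stand-ins chosen for illustration, not the HFMV formalism itself; simulate and sample_space are hypothetical user-supplied callbacks.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Rough sketch of a confidence-guided refinement loop in the spirit of HFMV:
# a surrogate of the circuit response is refined at its most suspicious point
# until a predicted failure is confirmed or the model is confident of a pass.
def hybrid_verify(simulate, sample_space, spec_limit, confidence=3.0, budget=200):
    x = sample_space(16)                         # initial random parameter points
    y = np.array([simulate(p) for p in x])       # measured circuit responses
    model = GaussianProcessRegressor()
    for _ in range(budget):
        model.fit(x, y)
        candidates = sample_space(1024)
        mean, std = model.predict(candidates, return_std=True)
        worst = np.argmax(mean + confidence * std)          # most suspicious point
        if mean[worst] + confidence * std[worst] < spec_limit:
            return "pass (within model confidence)"
        measured = simulate(candidates[worst])              # formal-check-style confirmation
        if measured >= spec_limit:
            return f"fail at parameters {candidates[worst]}"
        x = np.vstack([x, candidates[worst]])               # refine the model and repeat
        y = np.append(y, measured)
    return "inconclusive within budget"
```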

    Probabilistic Verification and Automated Debugging in Mixed-Signal Systems

    Get PDF
    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2014. Advisor: Jaeha Kim. Increasing system complexity, growing uncertainty in semiconductor technology, and demanding requirements in complex specifications pose significant challenges to both pre-silicon design verification and post-silicon chip validation. This dissertation therefore investigates efficient pre-silicon/post-silicon validation and debugging methodology, especially for analog and mixed-signal (AMS) systems. Principally, validation is formulated as a Bayesian inference problem and analyzed in a probabilistic manner. For instance, a pass/fail property can be checked by Bayesian sampling: the posterior distribution of the unknown failure probability is estimated after many sample validation trials so as to quantify the confidence of a pass at a given tolerance and model accuracy. This approach is first taken in pre-silicon verification to check a system's properties. Specifically, efficient Monte Carlo-based methods for ensuring the global convergence property are proposed using two techniques: fast sample batch verification using cluster analysis and efficient sampling using Gaussian process regression. In addition, a practical design flow for preventing global convergence failure is presented, in which the notion of the indeterminate state X is extended to AMS systems. For post-silicon validation in particular, the probabilistic graphical model is proposed as an effective abstraction of AMS systems. Using the probabilistic graphical model and statistical inference, we can compute the probability of each parameter satisfying a given specification and use it for bug localization and ranking. The proposed model and method are especially useful in the post-silicon validation phase, since they can check for and localize bugs in a system with limited observability and controllability.
    Contents: Abstract; List of Tables; List of Figures
    1 Introduction
    2 Probabilistic Validation and Computer-Aided Debugging in AMS Systems: 2.1 Validation as Inference; 2.2 Bayesian Property Checking by Sampling; 2.3 Probabilistic Graphical Models
    3 Global Convergence Property Checking with Monte Carlo Methods in Pre-Silicon Validation: 3.1 Problem Formulation; 3.2 Fast Sample Batch Verification using Cluster Analysis (3.2.1 Global convergence failures in state space models; 3.2.2 Finding global convergence failures by cluster-split detection; 3.2.3 Experimental results); 3.3 Efficient Covering and Sampling of Parameter Space (3.3.1 Attempt to cover the parameter space: finding transient regions in a circuit's state space; 3.3.2 Rare-event failure simulation using Gaussian processes); 3.4 Preventing Global Convergence Failure via Indeterminate State X Elimination (3.4.1 Preventing start-up failure by eliminating all indeterminate states; 3.4.2 Procedure of eliminating indeterminate states with the extended X for AMS systems; 3.4.3 Reducing reset circuits in the X elimination procedure; 3.4.4 Experimental results)
    4 Bug Localization using Probabilistic Graphical Models in Post-Silicon Validation: 4.1 Problem Formulation; 4.2 Modeling of AMS Circuits using Probabilistic Graphical Models (4.2.1 Probabilistic graphical models; 4.2.2 Generating probabilistic graphical models for AMS circuits); 4.3 Probabilistic Bug Localization using Probabilistic Graphical Models (4.3.1 Posterior estimation using statistical inference; 4.3.2 Probabilistic bug localization and ranking; 4.3.3 Implementation details); 4.4 Experimental Results; 4.5 Possible Extensions of Graphical Models: Equivalence Checking
    5 Conclusion
    Bibliography
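    The Bayesian pass/fail check described above can be illustrated with a Beta-Binomial posterior on the unknown failure probability; the uniform prior, tolerance, and trial counts below are illustrative assumptions rather than the dissertation's settings.

```python
from scipy.stats import beta

# After n validation trials with k observed failures, the posterior of the
# unknown failure probability p under a uniform Beta(1, 1) prior is
# Beta(1 + k, 1 + n - k). The design is declared "pass" when the posterior
# probability that p stays below the tolerance exceeds the required confidence.
def pass_with_confidence(n_trials, n_failures, tolerance=1e-3, confidence=0.95):
    posterior = beta(1 + n_failures, 1 + n_trials - n_failures)
    return posterior.cdf(tolerance) >= confidence

print(pass_with_confidence(5000, 0))   # True: 5000 clean trials clear a 0.1% tolerance
print(pass_with_confidence(500, 0))    # False: not yet enough evidence to declare pass
```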

    Design and debugging of multi-step analog to digital converters

    Get PDF
    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling (the short-channel effects) loom large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented with different requirements on each function. Practical realizations show the trend that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification. This thesis is, in a sense, unique in that it covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter, and to enhance testing and debugging potential to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency achievable at high resolution in a multi-step converter by combining parallelism and calibration and exploiting low-voltage circuit techniques is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those intended at the end of the design process. Such defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging of complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT, and BIST methodologies.
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges on test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained from monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digitally enhanced calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows for efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation function calculation nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not consume part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
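    The low-cost adaptive-filtering property highlighted above (few operations per iteration, no correlation functions or matrix inversions) is exactly what a plain least-mean-squares update provides. The sketch below is a generic LMS error estimator given for illustration; the signal names, tap count, and step size are assumptions, not the thesis's exact algorithm.

```python
import numpy as np

# Least-mean-squares (LMS) adaptation: one multiply-accumulate pass per sample,
# with no correlation-matrix estimation and no matrix inversion.
def lms_estimate(reference, observed, taps=8, step=0.01):
    w = np.zeros(taps)                        # filter coefficients being adapted
    errors = np.zeros(len(observed))
    for n in range(taps, len(observed)):
        x = reference[n - taps:n][::-1]       # most recent reference samples
        y = w @ x                             # filter output (error estimate)
        e = observed[n] - y                   # residual used to drive adaptation
        w += step * e * x                     # LMS coefficient update
        errors[n] = e
    return w, errors
```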