
    Coverage measurement for software application level verification using symbolic trajectory evaluation techniques

    Copyright © 2004 IEEE. Design verification of a system-on-a-chip is a bottleneck for hardware design projects. A new solution is a design verification methodology that applies coverage-driven verification at the embedded software application level. This methodology currently lacks an appropriate coverage measurement technique. This paper proposes a new coverage model for the software application level. Using this coverage model, a novel technique to represent and measure coverage is described. This technique uses ideas such as control graph structures and checking algorithms to estimate the completeness of software application verification.
    Adriel Cheng, Atanas Parashkevov, Cheng-Chew Lim
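    The paper itself is not reproduced here, but the idea of estimating verification completeness over a control graph can be illustrated with a small sketch. The graph construction, node names, and the measure_edge_coverage helper below are illustrative assumptions, not the authors' actual coverage model.

```python
# Hypothetical sketch: edge coverage over a control graph of an embedded
# software application, measured from execution traces.

from collections import defaultdict

class ControlGraph:
    """Directed graph of application-level control points (illustrative model)."""
    def __init__(self):
        self.edges = set()                    # (src, dst) pairs present in the application
        self.successors = defaultdict(set)

    def add_edge(self, src, dst):
        self.edges.add((src, dst))
        self.successors[src].add(dst)

def measure_edge_coverage(graph, traces):
    """Return the fraction of control-graph edges exercised by the given traces."""
    hit = set()
    for trace in traces:                      # each trace is a list of visited nodes
        for src, dst in zip(trace, trace[1:]):
            if (src, dst) in graph.edges:
                hit.add((src, dst))
    return len(hit) / len(graph.edges) if graph.edges else 1.0

# Example: a toy application-level control graph and two execution traces.
g = ControlGraph()
for edge in [("init", "config"), ("config", "run"), ("run", "error"), ("run", "done")]:
    g.add_edge(*edge)

traces = [["init", "config", "run", "done"],
          ["init", "config", "run", "done"]]
print(f"edge coverage: {measure_edge_coverage(g, traces):.0%}")   # 75%: error path never hit
```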

    Determining Cases of Scenarios to Improve Coverage in Simulation-based Verification

    Functional verification of complex designs is still dominated by simulation-based approaches. In particular, Coverage-driven Verification (CDV) is well acknowledged and applied in industry. Here, verification gaps in terms of inadequately checked scenarios are addressed and closed by generating and applying dedicated stimuli. In order to ensure good coverage and, with it, high verification quality, each scenario is supposed to be sufficiently triggered. However, the considered scenario may be triggered in several fashions, and information about that is hardly available in existing CDV approaches. In this work, we propose an approach which automatically derives this information. Examples and experimental evaluations illustrate how this improves coverage in simulation-based verification.
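    The abstract does not define the "cases" of a scenario formally. As a rough illustration of the underlying idea, the sketch below assumes a scenario is a predicate over a simulated transaction and its cases are the distinct attribute combinations that satisfy it; every name here is illustrative rather than taken from the paper.

```python
# Hypothetical sketch: record not only whether a scenario was triggered,
# but in which distinct "case" (fashion) it was triggered.

from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:          # illustrative stimulus/response record
    opcode: str
    burst_len: int
    error: bool

def long_burst_scenario(tx: Transaction) -> bool:
    """Example scenario predicate: long bursts are the behaviour we want covered."""
    return tx.burst_len >= 8

def case_of(tx: Transaction):
    """Project a triggering transaction onto the attributes that distinguish its cases."""
    return (tx.opcode, tx.error)

def scenario_case_report(transactions):
    return Counter(case_of(tx) for tx in transactions if long_burst_scenario(tx))

txs = [Transaction("WRITE", 8, False),
       Transaction("WRITE", 16, False),
       Transaction("READ", 8, True),
       Transaction("READ", 2, False)]        # does not trigger the scenario

print(scenario_case_report(txs))
# Counter({('WRITE', False): 2, ('READ', True): 1}) -> the scenario was hit,
# but only two of its possible cases were exercised.
```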

    Data Learning Methodologies for Improving the Efficiency of Constrained Random Verification

    Functional verification continues to be one of the most time-consuming steps in the chip design cycle. Simulation-based verification is well practised in industry thanks to its flexibility and scalability. The completeness of the verification is measured by coverage metrics. Generating effective tests to achieve a satisfactory coverage level is a difficult task in verification. Constrained random verification is commonly used to alleviate the manual effort of producing directed tests. However, there are still many situations where unnecessary verification effort, in terms of simulation cycles and man hours, is spent. It is also observed that much of the data generated in the existing constrained random verification process is barely analysed and then discarded after simplistic correctness checking. Based on our previous research on data mining and exposure to the industrial verification process, we identify opportunities in extracting knowledge from constrained random verification data and using it to improve verification efficiency.

    In constrained random verification, when a simulation run of tests instantiated by a test template cannot reach the coverage goal, there are two possible reasons: insufficient simulation, and improper constraints and/or biases. There are three actions a verification engineer can usually take to address the problem: simulate more tests, refine the test template, or change to a new test template. Accordingly, we propose three data learning methodologies to help engineers make more informed decisions in these three application scenarios and thus improve verification efficiency.

    The first methodology identifies important ("novel") tests before simulation, based on what has already been simulated. By simulating only those novel tests and filtering out redundant ones, substantial resources such as simulation cycles and licenses can be saved. The second methodology extracts the unique properties of the novel tests identified in simulation and uses them to refine the test template. By leveraging the extracted knowledge, more tests similar to the novel ones are generated, and the new tests are thus more likely to activate coverage events that are otherwise difficult to hit by extensive simulation. The third methodology analyses a collection of existing test items (test templates) and identifies feasible augmentations to the test plan. By automatically adding new test items based on the data analysis, it alleviates the manual effort of closing coverage holes.

    The proposed data learning methodologies were developed and applied in the setting of verifying commercial microprocessor and SoC platform designs. The experiments in this dissertation were conducted in the verification environment of a commercial microprocessor and a SoC platform at Freescale Semiconductor Inc. and ran in parallel with the ongoing verification efforts. The experimental results demonstrate the feasibility and effectiveness of building learning frameworks to improve verification efficiency.
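    The abstract does not specify the learning models behind the first methodology. As a minimal sketch of the novelty-filtering idea, the code below assumes each test is encoded as a feature vector and a candidate counts as novel when it is far from everything already simulated; the encoding, distance measure, and threshold are all illustrative assumptions.

```python
# Hypothetical sketch: filter out candidate tests that look redundant with respect
# to what has already been simulated, before spending simulation cycles on them.

import math

def distance(a, b):
    """Euclidean distance between two test feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_novel_tests(candidates, simulated, threshold=1.0):
    """Keep a candidate only if it is far enough from every test already simulated
    (and from candidates already selected in this pass)."""
    kept = []
    seen = list(simulated)
    for features in candidates:
        if all(distance(features, s) > threshold for s in seen):
            kept.append(features)
            seen.append(features)
    return kept

# Example: each test is encoded as (num_interrupts, cache_pressure, dma_channels).
already_simulated = [(2, 0.1, 1), (2, 0.2, 1)]
candidates = [(2, 0.15, 1),      # near-duplicate of what was simulated -> dropped
              (8, 0.9, 4),       # novel -> kept for simulation
              (8, 0.95, 4)]      # redundant with the one just kept -> dropped

print(select_novel_tests(candidates, already_simulated))   # [(8, 0.9, 4)]
```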

    Application of Bayesian Networks to Coverage Directed Test Generation for the Verification of Digital Hardware Designs

    Functional verification is generally regarded as the most critical phase in the successful development of digital integrated circuits. The increasing complexity and size of chip designs make it more challenging to find bugs and meet test coverage goals in time for market demands. These challenges have led to more automated methods of simulation with constrained random test generation and coverage analysis. Recent goals in industry have focused on improving the process further by applying Coverage Directed Test Generation (CDG) to automate the feedback from coverage analysis to test input generation. Previous research has presented Bayesian networks as a way to achieve CDG. Bayesian networks provide a means of capturing behaviors of a design under verification and making predictions to help guide test input generation toward coverage goals more quickly. Previous research has also shown methods for defining a Bayesian network for a design domain and generating input parameters for dynamic simulation.

    This thesis demonstrates that existing commercial verification tools can be combined with a Bayesian inference engine as a feasible solution for creating a fully automated CDG environment. This solution is demonstrated using methods from previous research for applying Bayesian networks to verification. The CDG framework was implemented by combining the Questa verification platform with the Bayesian inference engine SMILE (Structural Modeling, Inference, and Learning Engine) in a single simulation environment. SystemVerilog testbenches and custom software were created to automatically find coverage holes, predict test input parameters, and dynamically change these parameters to complete verification with fewer test cases.

    The CDG framework was demonstrated by performing verification on both a combinational logic design and a sequential logic design. The results show that Bayesian networks can be successfully used to improve the efficiency and quality of the verification process.
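    The thesis pairs Questa with the SMILE inference engine; as a rough illustration of the feedback idea only (inferring which test input parameter values make an uncovered event more likely), the sketch below hand-rolls a two-node discrete Bayesian network instead of calling SMILE, and every variable name and probability in it is an illustrative stand-in.

```python
# Hypothetical sketch of the CDG feedback loop's core inference step: given a learned
# relationship between a test input parameter and a coverage event, bias the next
# constrained random tests toward the parameter values most likely to hit the event.

# Prior over a single test input parameter (e.g., a burst-length bucket).
p_param = {"short": 0.5, "medium": 0.3, "long": 0.2}

# Conditional probability of hitting the target coverage event given the parameter,
# e.g., as estimated from previously simulated tests.
p_hit_given_param = {"short": 0.02, "medium": 0.10, "long": 0.45}

def posterior_param_given_hit(p_param, p_hit_given_param):
    """Bayes rule: P(param | event hit) over the parameter values."""
    joint = {v: p_param[v] * p_hit_given_param[v] for v in p_param}
    evidence = sum(joint.values())
    return {v: joint[v] / evidence for v in joint}

posterior = posterior_param_given_hit(p_param, p_hit_given_param)
best = max(posterior, key=posterior.get)
print(posterior)     # 'long' dominates even though its prior is the smallest
print(f"bias the next constrained random tests toward burst_len={best}")
```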