
    High-level verification flow for a high-level synthesis-based digital logic design

    Abstract. High-level synthesis (HLS) is a method for generating a register-transfer level (RTL) hardware description of digital logic designs from high-level languages such as C/C++/SystemC or MATLAB. The performance and productivity benefits of HLS stem from the untimed, high-abstraction-level input languages. Another advantage is that design and verification can focus on the features and high-level architecture instead of the low-level implementation details. The goal of this thesis was to define and implement a high-level verification (HLV) flow for an HLS design written in C++. The HLV flow takes advantage of the performance and productivity of C++ as opposed to hardware description languages (HDL) and minimises the required RTL verification work. The HLV flow was implemented in the case study of the thesis. The HLS design was verified in a C++ verification environment, and Catapult Coverage was used for pre-HLS coverage closure. Post-HLS verification and coverage closure were done in a Universal Verification Methodology (UVM) environment. The C++ tests used in pre-HLS coverage closure were reimplemented in UVM to get a high initial RTL coverage without manual RTL code analysis. The pre-HLS C++ design was integrated as a predictor into the UVM testbench to verify the equivalence of C++ versus RTL and to speed up post-HLS coverage closure. Results of the case study show that the HLV flow is feasible to implement in practice. The flow shows significant performance and productivity gains for verification in the C++ domain when compared to UVM. The UVM implementation of a somewhat incomplete set of pre-HLS tests and formal exclusions resulted in an initial post-HLS coverage of 96.90%. The C++ predictor implementation was a valuable tool in post-HLS coverage closure. A total of four weeks of coverage work in the pre- and post-HLS phases was required to reach 99% RTL coverage. The total time does not include the time required to build both the C++ and UVM verification environments.

    High-level verification flow for a high-level synthesis-based digital logic design. Abstract. High-level synthesis (HLS) is a method for generating a register-transfer level (RTL) hardware description for digital logic designs using high-level programming languages, such as C-based languages or MATLAB. The performance and productivity benefits of HLS are based on the higher abstraction level offered by programming languages. Using HLS, design and verification work can focus on features and high-level architecture instead of low-level details. The goal of this master's thesis was to define and implement a high-level verification flow (HLV flow) for an HLS design written in C++. The HLV flow exploits the performance and higher abstraction level offered by programming languages instead of hardware description languages and thereby minimises the work required for RTL verification. The HLV flow was implemented in a case study. The HLS design was verified in a C++ verification environment, and Catapult Coverage was used for coverage analysis. RTL coverage was measured in an environment built with the Universal Verification Methodology (UVM). The test vectors used in the C++ verification were reimplemented in the UVM environment so that the initial RTL coverage would be high without manual RTL analysis. The C++ design was integrated as a predictor (reference model) into the UVM testbench to improve code coverage. The results of the case study show that the defined HLV flow is feasible in practice. Using the flow yields significant performance and productivity gains in the C++ test environment compared to the UVM environment. Reimplementing in the UVM environment the C++ test vectors that reached 90.60% code coverage produced an RTL coverage of 96.90%. The C++ predictor implementation was a valuable tool in reaching the RTL coverage goal.
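
    The predictor idea above, checking the pre-HLS C++ model against the RTL transaction by transaction, can be illustrated with a small sketch. The thesis does this inside a UVM testbench with the C++ design as the reference model; the Python below is only a hypothetical, language-agnostic illustration of the same scoreboard-style comparison, and all names (ReferenceModel, check_equivalence, the toy transfer function) are invented.

```python
# Hypothetical sketch of predictor-based equivalence checking (not the thesis UVM/C++ code).
class ReferenceModel:
    """Stand-in for the pre-HLS C++ algorithm used as the predictor."""
    def process(self, txn: int) -> int:
        # Toy transfer function; the real predictor runs the design's algorithm.
        return (3 * txn + 1) & 0xFF

def check_equivalence(stimuli, rtl_responses):
    """Compare predicted outputs against captured RTL outputs, transaction by transaction."""
    model = ReferenceModel()
    mismatches = []
    for i, (txn, rtl_out) in enumerate(zip(stimuli, rtl_responses)):
        expected = model.process(txn)
        if expected != rtl_out:
            mismatches.append((i, txn, expected, rtl_out))
    return mismatches

if __name__ == "__main__":
    stimuli = list(range(16))                          # stimulus transactions driven into the DUT
    rtl_trace = [(3 * x + 1) & 0xFF for x in stimuli]  # stand-in for the captured RTL response trace
    print("mismatches:", check_equivalence(stimuli, rtl_trace))
```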

    Integration and verification of parameterized register interfaces

    Abstract. This thesis takes an in-depth look at parameterized register models, their generation, and their use. The aim is to discover improvements to the current method of generating parameterized register models. The thesis is divided into two halves: a practical section, which consists of a study on the generation of parameterized register models, and a theory section, which supports the topics covered in the practical section. The practical section studied the generation flow and tools currently used at Nordic Semiconductor. The flow was analyzed to discover changes that would enable the generation of more flexible parameterized register models. The suggested changes were then used to generate a dynamic register model for a highly configurable intellectual property (IP) core. The register model was validated using a register test sequence and functional tests. Finally, the functionality of the generated register model was compared to a manually implemented model. In the end, the test sequences and functional tests passed without errors. The generated register model could be configured directly from the testbench without editing the model manually. This also meant that the applied configurations would not be lost even if the register model were regenerated. The resulting register model was significantly more flexible than the previously generated models.

    Integration and verification of parameterized register interfaces. Abstract. This thesis examines parameterized register models, their generation, and their use. The aim is to find improvements to the current way of generating parameterized register models. The thesis is divided into two halves: a practical part, which consists of a study of parameterized register models, and a theoretical part, which supports the topics covered in the practical part. The practical part studied the processes and tools currently used for register model generation at Nordic Semiconductor. By analyzing them, the aim was to find changes that would enable the generation of more flexible parameterized register models. With these changes, a dynamic register model was then generated for an IP block that contains many configurable parameters. The generated model was verified with a register test sequence and functional tests. Finally, the functionality of the register model was compared to that of a hand-written register model. The test sequence and functional tests eventually passed in simulation without errors. The generated register model could be configured directly from the testbench and did not need to be edited manually. This also meant that configurations applied in the testbench are not lost if the register model is regenerated. The final register model was significantly more flexible than the previously generated models.
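
    A flavour of what a testbench-configurable, parameterized register model looks like can be sketched as follows. This is not Nordic Semiconductor's generator output or a UVM RAL model; it is a hypothetical Python sketch in which the register layout is derived from parameters passed in at construction time (num_channels and fifo_depth are invented names), so regenerating the model does not lose the configuration applied by the testbench.

```python
# Hypothetical sketch: a register model whose layout follows testbench parameters.
from dataclasses import dataclass, field

@dataclass
class Register:
    name: str
    offset: int
    reset: int = 0

@dataclass
class RegisterModel:
    num_channels: int                      # parameters driven from the testbench
    fifo_depth: int = 8
    registers: dict = field(default_factory=dict)

    def __post_init__(self):
        # Fixed registers present in every configuration.
        self._add(Register("CTRL", 0x000))
        self._add(Register("STATUS", 0x004))
        # Per-channel registers whose count follows the configuration.
        for ch in range(self.num_channels):
            self._add(Register(f"CH{ch}_CFG", 0x100 + 0x10 * ch))
            self._add(Register(f"CH{ch}_DATA", 0x104 + 0x10 * ch))

    def _add(self, reg: Register):
        self.registers[reg.name] = reg

# The testbench chooses the configuration; regenerating the model never loses it.
model = RegisterModel(num_channels=4)
print(sorted(hex(r.offset) for r in model.registers.values()))
```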

    SoC regression strategy development

    Abstract. The objective of the hardware verification process is to ensure that the design does not contain any functional errors. Verifying the correct functionality of a large System-on-Chip (SoC) is a co-design process that is performed by running immature software on immature hardware. Among the key objectives is to ensure the completion of the design before proceeding to fabrication. Verification is performed using a mix of software simulations that imitate the hardware functions and emulations executed on reconfigurable hardware. Both techniques are time-consuming: software simulation runs perhaps a billion times slower, and emulation thousands of times slower, than the targeted system. A good verification strategy reduces the time to market without compromising the testing coverage. This thesis compares regression verification strategies for a large SoC project. These include different techniques of test case selection and test case prioritization that have been researched in software projects. There is no single strategy that performs well for an SoC throughout the whole development cycle. In the early stages of development, time-based test case prioritization provides the fastest convergence. Later, history-based test case prioritization and risk-based test case selection gave a good balance between coverage, error detection, execution time, and a foundation for predicting the time to completion.
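
    As an illustration of one of the compared strategies, the sketch below shows a greedy, history-based prioritization: tests are scored from recent failure history and a code-churn risk proxy, normalised by runtime, and selected until a regression time budget is filled. The field names, weights, and example tests are hypothetical, not data from the studied SoC project.

```python
# Hypothetical sketch of history-based test-case prioritization with a time budget.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    runtime_h: float          # simulation/emulation time in hours
    recent_failures: int      # failures observed in the last N regressions
    churn: int                # lines changed in the covered design area (risk proxy)

def priority(tc: TestCase) -> float:
    # Higher score runs earlier: weight failure history and churn,
    # normalised by runtime so cheap, historically useful tests come first.
    return (1.0 + tc.recent_failures + 0.1 * tc.churn) / tc.runtime_h

def select_within_budget(tests, budget_h):
    """Greedy selection of the highest-priority tests that fit the regression budget."""
    chosen, used = [], 0.0
    for tc in sorted(tests, key=priority, reverse=True):
        if used + tc.runtime_h <= budget_h:
            chosen.append(tc.name)
            used += tc.runtime_h
    return chosen

tests = [
    TestCase("boot_smoke", 0.5, recent_failures=2, churn=120),
    TestCase("ddr_stress", 6.0, recent_failures=0, churn=10),
    TestCase("pcie_linkup", 2.0, recent_failures=1, churn=40),
]
print(select_within_budget(tests, budget_h=3.0))
```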

    Accelerating Coverage Closure For Hardware Verification Using Machine Learning

    Functional verification is used to confirm that the logic of a design meets its specification. The most commonly used method for verifying complex designs is simulation-based verification. The quality of simulation-based verification depends on the quality and diversity of the tests that are simulated. However, it is time-consuming and compute-intensive, because a large volume of tests must be simulated to exhaustively exercise the design functionality in order to find and fix logic bugs. A common measure of success for this exercise is a metric known as functional coverage, typically reported as the percentage of functionality covered by the test suite. This thesis proposes a novel methodology to construct a model using SVMs, gradient boosting classifiers, and neural networks, aimed at replacing random test generation to speed up coverage collection.
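
    The core idea, training a classifier on past simulations so that only stimuli predicted to hit uncovered functionality are simulated, can be sketched roughly as follows. This is a minimal illustration with synthetic data and an invented coverage rule, using scikit-learn's GradientBoostingClassifier as one of the model types mentioned above; it is not the thesis implementation.

```python
# Illustrative sketch: filter random stimuli with a classifier before simulating them.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def covers_rare_bin(stim):
    # Placeholder ground truth standing in for a real simulation run.
    return int(stim[0] > 0.8 and stim[1] < 0.2)

# Historical simulations: stimulus feature vectors and whether they hit the rare bin.
X_hist = rng.random((2000, 4))
y_hist = np.array([covers_rare_bin(s) for s in X_hist])

clf = GradientBoostingClassifier().fit(X_hist, y_hist)

# Screen a new batch of random candidates before paying for simulation.
candidates = rng.random((1000, 4))
keep = candidates[clf.predict_proba(candidates)[:, 1] > 0.5]
print(f"simulating {len(keep)} of {len(candidates)} candidates")
```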

    Stimulus Optimization in Hardware Verification Using Machine-Learning

    Simulation-based functional verification is a commonly used technique for hardware verification, with the goal of exercising critical scenarios in the design, detecting and fixing bugs, and achieving close to 100% of the coverage targets required for tape-out. As chip complexity continues to grow, functional verification is also becoming a bottleneck for the overall chip design cycle. The primary goal is to shorten the time taken for functional coverage convergence in the volume verification phase, which, in turn, accelerates bug detection in the design. In this thesis, I have investigated the application of machine learning towards this objective. I assessed machine learning-guided stimulus generation with two approaches: coarse-grained test-level optimization and fine-grained transaction-level optimization. The effectiveness of machine learning was first confirmed on test-level optimization, which rests on achieving full coverage for a certain group of functional coverage metrics in reduced time with a minimal number of simulated tests. It was observed that test-level optimization was limited to some common functional coverage metrics. This was the motivation to explore and implement transaction-level optimization in two novel ways: transaction pruning and directed sequence generation for accelerated functional coverage closure. These techniques were applied to FSM (finite state machine) and non-FSM based coverage metrics, and the gains were compared using different ML classifiers. Experimental results showed that the fine-grained implementation can potentially reduce the overall CPU time for verification coverage closure; thus, I propose that complementary application of both levels of stimulus optimization is the recommended path for efficiency improvements in functional verification coverage convergence.
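
    Transaction-level pruning of the kind described above can be sketched as follows: a classifier trained on coverage feedback from earlier simulations scores each queued transaction, and transactions unlikely to hit new coverage are dropped before simulation. The feature encoding, threshold, and use of an SVM here are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch of transaction pruning ahead of simulation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def hits_new_coverage(txn):
    # Placeholder oracle standing in for coverage feedback from past simulations.
    return int(txn[2] > 0.7)

X_past = rng.random((500, 3))          # e.g. burst length, address region, opcode mix
y_past = np.array([hits_new_coverage(t) for t in X_past])

model = SVC(probability=True).fit(X_past, y_past)

queued = rng.random((200, 3))          # transactions proposed by the random sequence
scores = model.predict_proba(queued)[:, 1]
pruned_sequence = queued[scores >= 0.4]
print(f"kept {len(pruned_sequence)} of {len(queued)} transactions")
```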

    Survey on Machine Learning Algorithms Enhancing the Functional Verification Process

    The continuing increase in functional requirements of modern hardware designs means the traditional functional verification process becomes inefficient in meeting the time-to-market goal with a sufficient level of confidence in the design. Therefore, the need for enhancing the process is evident. Machine learning (ML) models have proved valuable for automating major parts of the process that have typically occupied the bandwidth of engineers, diverting them from adding new coverage metrics to make the designs more robust. Current research on deploying different ML models proves promising in areas such as stimulus constraining, test generation, coverage collection, and bug detection and localization. An example of deploying an artificial neural network (ANN) in test generation shows a 24.5× speed-up in functionally verifying a dual-core RISC processor specification. Another study demonstrates how k-means clustering can reduce redundancy of the simulation trace dump of an AHB-to-WISHBONE bridge by 21%, thus reducing the debugging effort by not having to inspect unnecessary waveforms. The surveyed work provides a comprehensive overview of current ML models enhancing the functional verification process, from which insight into promising future research areas is inferred.
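
    The k-means result cited above can be illustrated with a small sketch: per-test trace signatures are clustered, and only one representative trace per cluster is kept for waveform inspection. The data below is synthetic and the feature extraction is invented; it is not the AHB-to-WISHBONE study itself.

```python
# Hypothetical sketch: deduplicate simulation traces by clustering their signatures.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
trace_features = rng.random((100, 12))   # e.g. event counts extracted from each trace dump

k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(trace_features)

# Pick the trace closest to each cluster centre as its representative.
representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    centre = km.cluster_centers_[c]
    best = members[np.argmin(np.linalg.norm(trace_features[members] - centre, axis=1))]
    representatives.append(int(best))

print(f"inspect {len(representatives)} of {len(trace_features)} traces:", representatives)
```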

    Case Study: First-Time Success ASIC Design Methodology Applied to a Multi-Processor System-on-Chip

    Achieving first-time success is crucial in ASIC design, considering the soaring cost, tight time-to-market window, and competitive business environment. One key factor in ensuring first-time success is a well-defined ASIC design methodology. Here we propose a novel ASIC design methodology that has been proven on the RUMPS401 (Rahman University Multi-Processor System 401) Multiprocessor System-on-Chip (MPSoC) project. The MPSoC project was initiated by the Universiti Tunku Abdul Rahman (UTAR) VLSI design center. The proposed methodology includes the use of the Universal Verification Methodology (UVM). The use of electronic design automation (EDA) software during each step of the design methodology is also presented. The first-time success of RUMPS401 demonstrates the use of the proposed ASIC design methodology and the benefit of having one, especially since the project was carried out in an educational environment that is even more limited in budget, resources, and know-how than its business and industrial counterparts. Here, a novel ASIC design methodology tailored to first-time MPSoC success is presented.

    UVM Verification of an I2C Master Core

    With the increasing complexity of IP designs, verification has become increasingly important, yet it remains a significant challenge for a verification engineer. A proper verification environment can bring out bugs that one may never expect in the design. Conversely, a poorly designed verification environment can give false information about the functioning of the design, and bugs may appear on the consumer's end. Hence, the verification industry is continually looking for more efficient verification methodologies. This paper describes one such methodology implemented on an Inter-Integrated Circuit (I2C) system. I2C packs in itself the powerful features of the Serial Peripheral Interface (SPI) and the universal asynchronous receiver-transmitter (UART), but is comparatively more efficient and uses less hardware to implement. It can also establish communication between multiple masters and multiple slaves with minimal wiring. In this project, from a design perspective, the master is a hardware block, and the slave is a verification IP. The methodology used for verification is based on the Universal Verification Methodology (UVM), a class library written in the SystemVerilog language. The paper describes how the verification of an I2C system uses the powerful tools of UVM. The master core has been successfully verified and the coverage goals are met. The effort is documented in detail in this paper.
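
    To give a feel for what the slave-side verification IP has to model, the sketch below shows a minimal byte-level I2C slave: it decodes the address phase, ACKs only on an address match, and then sinks write data or sources read data. This is a hypothetical Python model, not the paper's SystemVerilog/UVM VIP, and the class and method names are invented.

```python
# Hypothetical byte-level I2C slave model (address phase, then read/write data phase).
READ, WRITE = 1, 0

class I2CSlaveModel:
    def __init__(self, address, memory_size=256):
        self.address = address             # 7-bit slave address
        self.mem = [0] * memory_size
        self.ptr = 0

    def address_phase(self, addr_byte):
        """Return (ack, rw) for the byte following a START condition."""
        addr, rw = addr_byte >> 1, addr_byte & 1
        ack = (addr == self.address)       # ACK only if addressed
        return ack, rw

    def write_byte(self, data):
        self.mem[self.ptr] = data & 0xFF   # store byte, auto-increment pointer
        self.ptr = (self.ptr + 1) % len(self.mem)
        return True                        # ACK

    def read_byte(self):
        data = self.mem[self.ptr]          # source byte, auto-increment pointer
        self.ptr = (self.ptr + 1) % len(self.mem)
        return data

slave = I2CSlaveModel(address=0x50)
ack, rw = slave.address_phase((0x50 << 1) | WRITE)
assert ack and rw == WRITE
slave.write_byte(0xAB)
```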

    Acceleration of hardware code coverage closure using machine learning

    Abstract. With the ever-increasing system-on-chip (SoC) design complexity, the verification of such systems is becoming more and more challenging and extremely time-consuming. Hence, the human effort put into this task no longer seems sufficient or efficient enough to keep pace with escalating market demands. In this work, we present a way of utilizing machine learning (ML) to reduce the overhead of hardware design verification in terms of resource consumption. Our focus in this thesis is especially on the time spent on coverage closure, which usually occupies a great deal of the total verification time. Both deep learning (DL) and reinforcement learning (RL) are deployed for this purpose, in two different experiments, in order to find the most coherent way to accomplish the coverage closure task. On one hand, neural networks (NNs) were used to help assess whether a stimulus is worth running in simulation, by predicting the coverage number that it would generate. On the other hand, Q-learning was used to predict the minimal set of tests needed to reach a given code coverage goal, by optimizing and reducing the set of tests while still achieving the same coverage levels. The results of these experiments are compelling. First, the root mean square error (RMSE) of the neural network models was about 3 and 5 in predicting two different coverage values, respectively, which is quite good for a training run on a small dataset. Second, our Q-agent was able to do better than the coverage ranking utility of the simulation tool by almost 43%, reducing the number of tests from 63, as suggested by the simulator, to 36. This should markedly reduce the required number of simulations in weekly regressions, and hence result in a large gain in time and resources. Both of these approaches aim at reducing engineers' effort by accelerating and automating the verification process, which frees some of the engineers' time and allows them to focus on more important matters.
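
    The Q-learning formulation described above can be sketched on toy data as follows: the state is the set of coverage items reached so far, an action adds one test, and the reward favours newly covered items while charging a fixed cost per simulated test, so the learned policy prefers small test sets that still reach the coverage goal. The coverage table, reward shaping, and hyperparameters are invented for illustration and are not the thesis implementation.

```python
# Hypothetical tabular Q-learning sketch for minimal test-set selection.
import random
from collections import defaultdict

# Toy coverage database: test name -> set of code-coverage items it hits.
COVERAGE = {
    "t0": {1, 2, 3}, "t1": {3, 4}, "t2": {5, 6, 7},
    "t3": {2, 4, 6}, "t4": {8},    "t5": {1, 8, 9},
}
GOAL = set().union(*COVERAGE.values())   # full code coverage of the toy design

ALPHA, GAMMA, EPS, EPISODES = 0.5, 0.9, 0.2, 3000
Q = defaultdict(float)                   # Q[(state, action)], state = frozenset of covered items

def step(covered, test):
    """Add one test: reward = newly covered items minus a fixed simulation cost."""
    new = COVERAGE[test] - covered
    return frozenset(covered | COVERAGE[test]), len(new) - 1.0

for _ in range(EPISODES):
    covered, remaining = frozenset(), set(COVERAGE)
    while covered < GOAL and remaining:
        if random.random() < EPS:
            a = random.choice(sorted(remaining))                       # explore
        else:
            a = max(sorted(remaining), key=lambda t: Q[(covered, t)])  # exploit
        nxt, r = step(covered, a)
        future = max((Q[(nxt, t)] for t in remaining - {a}), default=0.0)
        Q[(covered, a)] += ALPHA * (r + GAMMA * future - Q[(covered, a)])
        covered, remaining = nxt, remaining - {a}

# Greedy rollout of the learned policy yields the reduced test list.
covered, remaining, selected = frozenset(), set(COVERAGE), []
while covered < GOAL and remaining:
    a = max(sorted(remaining), key=lambda t: Q[(covered, t)])
    selected.append(a)
    covered = frozenset(covered | COVERAGE[a])
    remaining.discard(a)
print("selected tests:", selected)
```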