28 research outputs found

    Pre-validation of SoC via hardware and software co-simulation

    Abstract. System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is also limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. Firstly, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Secondly, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP block level with actual hardware models, a task previously only possible with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
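    To make the mechanism concrete, the QEMU-to-RTL link can be pictured as a small bus bridge. The following is a minimal sketch, not the thesis's actual interface: it assumes hypothetical DPI-C functions qemu_mmio_poll and qemu_mmio_respond, exported by an emulator-side C library that traps the guest's MMIO accesses.

```systemverilog
// Hypothetical DPI-C entry points (not from the thesis): the C side is
// assumed to queue MMIO accesses trapped in QEMU.
import "DPI-C" function int qemu_mmio_poll(output int unsigned addr,
                                           output int unsigned wdata,
                                           output bit is_write);
import "DPI-C" function void qemu_mmio_respond(input int unsigned rdata);

module qemu_rtl_bridge (
  input  logic        clk,
  output logic        req_valid,
  output logic        req_write,
  output logic [31:0] req_addr,
  output logic [31:0] req_wdata,
  input  logic        rsp_valid,
  input  logic [31:0] rsp_rdata
);
  always @(posedge clk) begin
    int unsigned addr, wdata;
    bit is_write;
    req_valid <= 1'b0;
    // Pull the next pending MMIO access from the emulated CPU, if any,
    // and present it to the RTL device model for one cycle.
    if (qemu_mmio_poll(addr, wdata, is_write) != 0) begin
      req_valid <= 1'b1;
      req_write <= is_write;
      req_addr  <= addr;
      req_wdata <= wdata;
    end
    // Hand read data back to the emulated CPU when the DUT answers.
    if (rsp_valid)
      qemu_mmio_respond(rsp_rdata);
  end
endmodule
```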

    Hardware Accelerated Functional Verification

    Functional verification is a widespread technique to check whether a hardware system satisfies a given correctness specification. The complexity of modern computer systems is rising rapidly, and the verification process takes a significant amount of time, so finding appropriate acceleration techniques for this process is a challenging task. In this thesis, we describe the theoretical principles of different verification approaches, such as simulation and testing, functional verification, and formal analysis and verification. In particular, we focus on creating verification environments in the SystemVerilog language. The analysis part describes the requirements for a system that accelerates functional verification, the most important being the ability to easily enable acceleration and the time equivalence of accelerated and non-accelerated verification runs. The thesis further introduces the design of a verification framework that exploits field-programmable gate array (FPGA) technology while retaining the possibility to run verification in the user-friendly debugging environment of a simulator. According to experiments carried out on a prototype implementation, the achieved acceleration is proportional to the number of checked transactions and the complexity of the verified system; the maximum acceleration achieved on the set of experiments was over 130-fold.
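    One common way to meet the requirement that an accelerated run be interchangeable with a simulator run is to hide the back-end behind a single transactor API. The sketch below illustrates this idea under invented names (transport, fpga_send); it is not the framework from the thesis.

```systemverilog
// Illustrative only (names invented, not the thesis framework): one
// driver-facing API with two interchangeable back-ends, so the same
// test runs either in the simulator or against the FPGA-mapped DUT.
virtual class transport;
  pure virtual task send(input byte payload[]);
endclass

class sim_transport extends transport;   // pure-simulation back-end
  virtual task send(input byte payload[]);
    // drive the DUT pins through a virtual interface here
  endtask
endclass

// Assumed C-side function tunneling transactions to the FPGA board.
import "DPI-C" context task fpga_send(input byte payload[]);

class fpga_transport extends transport;  // hardware-accelerated back-end
  virtual task send(input byte payload[]);
    fpga_send(payload);  // identical transaction stream, FPGA executes it
  endtask
endclass
```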

    Understanding multidimensional verification: Where functional meets non-functional

    Abstract. Advancements in the design of electronic systems have a notable impact on design verification technologies. The recent paradigms of the Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) assume devices immersed in physical environments that are significantly constrained in resources yet expected to provide security, privacy, reliability, performance, and low-power operation. In recent years, numerous extra-functional aspects of electronic systems have come to the fore, implying verification of hardware design models in a multidimensional space along with the functional concerns of the target system. However, unlike in the software domain, such a holistic approach remains underdeveloped. The contributions of this paper are a taxonomy of multidimensional hardware verification aspects, a state-of-the-art survey of related research works, and trends enabling the multidimensional verification concept. Further, an initial approach to performing multidimensional verification based on machine learning techniques is evaluated. The importance and challenge of performing multidimensional verification are illustrated by an example case study.

    Formal Verification of a MESI-based Cache Implementation

    Cache coherency is crucial to multi-core systems with a shared-memory programming model. Coherency protocols have been formally verified at the architectural level with relative ease. However, several subtle issues creep into the hardware realization of a cache in a multi-processor environment. The assumption, made in the abstract model, that state transitions are atomic is invalid for the HDL implementation: each transition is composed of many concurrent multi-core operations. As a result, even with a blocking bus, several transient states come into existence. Most modern processors optimize communication with a split-transaction bus, which results in further transient states and race conditions. The design and verification of cache coherency are therefore increasingly complex and challenging. Simulation techniques are insufficient to ensure memory consistency and the absence of deadlock, livelock, and starvation; at best, reaching confidence in functionality with simulation is tediously complex and time-consuming. Formal methods are ideally suited to identifying the numerous race conditions and subtle failures. In this study, we perform formal property verification on the RTL of a multi-core level-1 cache design based on the snooping MESI protocol. We demonstrate full proof of the coherence module in JasperGold, using complexity-reduction techniques through parameterization. We verify that the assumptions needed to constrain the inputs of the stand-alone cache coherence module are satisfied as valid assertions in the instantiation environment. We compare results obtained from formal property verification against a state-of-the-art UVM environment. We highlight the benefits of a synergistic collaboration between simulation and formal techniques. We present formal analysis as a generic toolkit with numerous usage models in the digital design process.
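    As a hedged illustration of the style of properties involved (names and state encoding are invented, not taken from the paper), the following SystemVerilog assertions state the MESI single-writer invariant and an environment assumption that is later re-verified at the instantiation level.

```systemverilog
// Invented names, for illustration only: the flavor of properties and
// environment assumptions used when formally verifying a snooping MESI
// cache. States per cache: 0 = I, 1 = S, 2 = E, 3 = M.
module mesi_props #(parameter int N_CORES = 4) (
  input logic               clk, rst_n,
  input logic [N_CORES-1:0] gnt,              // bus grant, one per core
  input logic [1:0]         state [N_CORES]   // per-cache line state
);
  // Count caches currently in state s.
  function automatic int count_st(input logic [1:0] s);
    count_st = 0;
    foreach (state[i]) if (state[i] == s) count_st++;
  endfunction

  // Single-writer invariant: at most one Modified copy of the line.
  a_single_m: assert property (
    @(posedge clk) disable iff (!rst_n) count_st(2'd3) <= 1);

  // A Modified copy forbids Shared or Exclusive copies elsewhere.
  a_m_excl: assert property (
    @(posedge clk) disable iff (!rst_n)
    count_st(2'd3) == 1 |-> count_st(2'd1) == 0 && count_st(2'd2) == 0);

  // Environment constraint on the stand-alone module, later re-checked
  // as an assertion in the instantiation environment (assume-guarantee).
  m_onehot_gnt: assume property (
    @(posedge clk) disable iff (!rst_n) $onehot0(gnt));
endmodule
```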

    Automated functional coverage driven verification with Universal Verification Methodology

    Abstract. In this Master’s thesis, the suitability of the Universal Verification Methodology (UVM) for digital design verification is studied. A brief look is taken into the methodology’s history, and its unique properties and object-oriented features are presented. Important coverage topics in project planning are discussed, and the two main types of coverage, code and functional coverage, are explained along with the methods by which they are captured. The practical section of this thesis presents the implementation of a monitoring environment and a UVM environment. The monitoring environment includes class-based components that are used to collect functional coverage from an existing SystemVerilog test bench. The UVM environment uses the same monitoring system but a different driving setup to stress the design under test. Coverage and simulation performance values are extracted from all test benches and the data is compared. The results indicate that the UVM environment incorporating constrained-random stimulus is capable of faster simulation run times and better code coverage values; the measured simulation time was up to 26% faster compared to a module-based environment.
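    A minimal sketch of the kind of class-based coverage monitor described above is shown below; the transaction fields and coverage bins are invented for illustration and are not the thesis's actual environment.

```systemverilog
`include "uvm_macros.svh"
import uvm_pkg::*;

// Invented transaction type, for illustration.
class bus_tr extends uvm_sequence_item;
  rand bit [7:0] addr;
  rand bit       kind;     // 0 = read, 1 = write
  `uvm_object_utils(bus_tr)
  function new(string name = "bus_tr"); super.new(name); endfunction
endclass

// Subscriber attached to an existing monitor's analysis port; it only
// samples functional coverage, leaving the original bench untouched.
class cov_sub extends uvm_subscriber #(bus_tr);
  `uvm_component_utils(cov_sub)
  bus_tr tr;
  covergroup cg;
    cp_addr : coverpoint tr.addr { bins lo = {[0:127]}; bins hi = {[128:255]}; }
    cp_kind : coverpoint tr.kind;
    cross cp_addr, cp_kind;   // every kind seen in every address range
  endgroup
  function new(string name, uvm_component parent);
    super.new(name, parent);
    cg = new();
  endfunction
  // Called for every transaction observed on the analysis port.
  function void write(bus_tr t);
    tr = t;
    cg.sample();
  endfunction
endclass
```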

    A High Performance Advanced Encryption Standard (AES) Encrypted On-Chip Bus Architecture for Internet-of-Things (IoT) System-on-Chips (SoC)

    With industry expectations of billions of Internet-connected things, commonly referred to as the IoT, we see a growing demand for high-performance on-chip bus architectures with the following attributes: small scale, low energy, high security, and highly configurable structures for integration, verification, and performance estimation. Our research thus mainly focuses on addressing these key problems and finding the balance among all these requirements, which often work against each other. First, we proposed a low-cost and low-power System-on-Chip (SoC) bus architecture (IBUS) that can frame data transfers differently. The IBUS protocol provides two novel transfer modes, the block and state modes, and is also backward compatible with the conventional linear mode. In order to evaluate bus performance automatically and accurately, we also proposed an evaluation methodology based on the standard circuit design flow. Experimental results show that the IBUS-based design uses the least hardware resources and reduces energy consumption to half that of AMBA Advanced High-Performance Bus (AHB) and Advanced eXtensible Interface (AXI) designs. Additionally, the valid bandwidth of the IBUS-based design is 2.3 and 1.6 times that of the AHB- and AXI-based implementations, respectively. As IoT advances, privacy and security issues become top-tier concerns in addition to the high performance requirements of embedded chips. To leverage the limited resources of tiny chips against the overhead of complex security mechanisms, we further proposed an advanced IBUS architecture providing structural support for the block-based AES algorithm. Our results show that the IBUS-based AES-encrypted design costs less in terms of hardware resources and dynamic energy (60.2%), and achieves higher throughput (1.6×), compared with AXI. Effectively dealing with automation in design and verification for mixed-signal integrated circuits is a critical problem, particularly when the bus architecture is new. Therefore, we further proposed a configurable and synthesizable IBUS design methodology. The flexible structure, together with bus wrappers, direct memory access (DMA), an AES engine, a memory controller, several mixed-signal verification intellectual properties (VIPs), and bus performance models (BPMs), forms the basis for integrated circuit design, allowing engineers to integrate application-specific modules and other peripherals to create complex SoCs.
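    As a rough illustration of how valid bandwidth can be measured in such an evaluation flow (signal names are generic placeholders, not the IBUS interface), a monitor can simply count useful data beats against elapsed cycles:

```systemverilog
// Generic valid-bandwidth monitor in the spirit of a bus performance
// model: valid bandwidth = useful beats / elapsed cycles. The
// beat_valid signal is an invented stand-in for "a useful data beat
// completed this cycle" on whatever bus is being evaluated.
module bw_monitor (
  input logic clk, rst_n,
  input logic beat_valid
);
  longint unsigned cycles, beats;
  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin cycles <= 0; beats <= 0; end
    else begin
      cycles <= cycles + 1;
      if (beat_valid) beats <= beats + 1;
    end
  end
  // Report the ratio at the end of simulation.
  final $display("valid bandwidth = %0f beats/cycle",
                 cycles ? real'(beats) / real'(cycles) : 0.0);
endmodule
```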

    Design and Verification of a Dual Port RAM Using UVM Methodology

    Data-intensive applications such as Deep Learning, Big Data, and Computer Vision have resulted in more demand for on-chip memory storage. Hence, in state-of-the-art Systems-on-Chips (SoCs), memory occupies between 50% and 90% of the die area. Extensive research is being done in the field of memory technology to improve the efficiency of memory packaging. This effort has not always been successful, because densely packed memory structures can experience defects during the fabrication process. Thus, it is critical to test embedded memory modules once they are taped out. Along with testing, functional verification of a module makes sure that the design works as intended. This paper proposes a built-in self-test (BIST) to validate a Dual Port Static RAM module, together with a complete layered test bench to functionally verify the module’s operation. The BIST is designed as a finite state machine and targets most general SRAM faults within a linear time constraint of O(23n). The layered test bench is designed using the Universal Verification Methodology (UVM), a standardized class library that adds re-usability and automation to the existing design verification language, SystemVerilog.
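    For context, a March-style memory test visits every cell in fixed address orders with read/write pairs. The sketch below runs the well-known 10n March C- sequence against a behavioral memory model; the paper's 23n BIST algorithm is longer and realized as an FSM, so this is only an illustration of the principle.

```systemverilog
// Behavioral sketch of a March-style memory test. March C- (10n) is
// shown for brevity; the paper's BIST realizes a longer 23n sequence.
// The mem[] array stands in for the SRAM under test.
module march_sketch #(parameter int N = 16);
  bit mem [N];
  int errors = 0;

  // Read a cell, compare against the expected value, then overwrite it.
  task automatic check_rw(input int i, input bit expect_v, input bit write_v);
    if (mem[i] !== expect_v) errors++;
    mem[i] = write_v;
  endtask

  initial begin
    for (int i = 0; i < N; i++) mem[i] = 0;                 // any order (w0)
    for (int i = 0; i < N; i++) check_rw(i, 0, 1);          // up (r0,w1)
    for (int i = 0; i < N; i++) check_rw(i, 1, 0);          // up (r1,w0)
    for (int i = N-1; i >= 0; i--) check_rw(i, 0, 1);       // down (r0,w1)
    for (int i = N-1; i >= 0; i--) check_rw(i, 1, 0);       // down (r1,w0)
    for (int i = 0; i < N; i++) if (mem[i] !== 0) errors++; // any order (r0)
    $display("March C- done, %0d error(s)", errors);
  end
endmodule
```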

    SystemVerilog Verification of Wishbone-Compliant Serial Peripheral Interface

    Synchronous serial interfaces provide economical on-board communication between the processor, digital-to-analog and analog-to-digital converters, memory, and other building blocks on the chip. A number of Integrated Circuit (IC) manufacturers develop and produce components that are compatible with serial interfaces. Common serial interfaces include the Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), Universal Asynchronous Receiver Transmitter (UART), and Universal Serial Bus (USB). SPI is widely used and holds advantages over other serial interfaces owing to its simplicity, low cost, synchronous clocking, and non-interrupting high-speed data transfer. SPI is the core controller of the design. Wishbone, an open-source hardware computer bus, is selected as the host controller, enabling parallel data exchange for faster communication. Both buses employ a master-slave configuration, which simplifies bus interfacing. This research presents a verification environment in SystemVerilog for the SPI master device. An existing design from OpenCores is re-used, described per the latest specifications in Verilog at the Register Transfer Level (RTL), in conformity with design-reuse methodology. This paper provides an understanding of verification, layered test benches, Object-Oriented Technology (OOT), SystemVerilog, SPI features, advantages, disadvantages, and applications, SPI data transmission and transfer formats, SPI registers, the SPI sub-system and Wishbone-SPI architecture, and the test bench development methodology. The focus is on understanding how OOT and SystemVerilog improve productivity and functional coverage in a verification environment through different constructs, constrained-random techniques, coverage, and assertions. A test bench was developed to verify the SPI master core. Test bench components include a random transaction generator, a Wishbone driver, an SPI master as the design under test, a receiver as the SPI slave, a monitor with tasks to monitor the host and the core, test cases, and a scoreboard to record metrics and assertions and to store expected and actual data.
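    The constrained-random style mentioned above can be illustrated with a small transaction class; the field names and constraints here are invented for illustration and are not taken from the OpenCores design.

```systemverilog
// Invented SPI transaction, sketching constrained-random stimulus.
class spi_tr;
  rand bit [31:0]    data;
  rand int unsigned  char_len;   // bits per transfer
  rand bit           cpol, cpha; // SPI clock polarity / phase

  // Restrict transfer lengths to common sizes.
  constraint c_len  { char_len inside {8, 16, 32}; }
  // Bias toward corner-case data patterns (all-0s / all-1s).
  constraint c_bias {
    data dist {32'h0 := 1, 32'hFFFF_FFFF := 1,
               [32'h1:32'hFFFF_FFFE] :/ 8};
  }
endclass

module tb_snip;
  initial begin
    spi_tr tr = new();
    repeat (100) begin
      if (!tr.randomize()) $fatal(1, "randomization failed");
      // hand tr to the Wishbone driver, which programs the SPI master
    end
  end
endmodule
```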

    Design and Verification Environment for High-Performance Video-Based Embedded Systems

    In this dissertation, a method and a tool to enable the design and verification of computationally demanding embedded vision-based systems are presented. Starting with an executable specification in OpenCV, we provide successive refinements and verification down to a system-on-chip prototype on an FPGA-based smart camera. At each level of abstraction, properties of image processing applications are used along with structural composition to provide a generic architecture that can be automatically verified and mapped to the lower abstraction level. The result is a framework that encapsulates the computer vision library OpenCV at the highest level, integrates Accellera's SystemC/TLM with UVM and QEMU-OS for virtual prototyping, verification, and mapping to the lower levels, the last of which is the FPGA. This relieves hardware designers of time-consuming and error-prone manual implementations, allowing them to focus on other steps of the design process. We also propose a novel streaming interface, called Component Interconnect and Data Access (CIDA), for embedded video designs, along with a formal model and a component composition mechanism to cluster components into logical and operational groups, reducing resource usage and power consumption.
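    The abstract does not detail CIDA itself, so the following is only a generic valid/ready streaming interface of the kind used to compose video pipeline components, with invented names:

```systemverilog
// Generic streaming interface sketch (not CIDA): a valid/ready
// handshake carrying one pixel/word per transfer.
interface stream_if #(parameter int W = 8) (input logic clk);
  logic         valid;   // producer has a word available
  logic         ready;   // consumer can accept it this cycle
  logic [W-1:0] data;

  modport src  (output valid, data, input  ready);
  modport sink (input  valid, data, output ready);

  // A transfer happens exactly when valid && ready; data must be held
  // stable while valid is high and ready is low (common stream rule).
  a_stable: assert property (@(posedge clk)
    valid && !ready |=> valid && $stable(data));
endinterface
```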

    An expert system for checking the correctness of memory systems using simulation and metamorphic testing

    During the last few years, computer performance has reached a turning point where computing power is no longer the only important concern. Thus, the emphasis is shifting from an exclusive focus on optimising the computing system to optimising other systems, such as the memory system. Broadly speaking, testing memory systems entails two main challenges: the oracle problem and the reliable test set problem. The former consists of deciding whether the outputs of a test suite are correct; the latter refers to providing an appropriate test suite for determining the correctness of the system under test. In this paper, we propose an expert system for checking the correctness of memory systems. To face these challenges, our proposed system combines two orthogonal techniques, simulation and metamorphic testing, enabling the automatic generation of appropriate test cases and the decision of whether their outputs are correct. In contrast to conventional expert systems, our system includes a factual database containing the results of previous simulations, and a simulation platform for computing the behaviour of memory systems. The knowledge of the expert is represented in the form of metamorphic relations, that is, properties of the analysed system involving multiple inputs and their outputs. Thus, the main contribution of this work is two-fold: a method to automate the testing process of memory systems, and a novel expert system design focused on increasing the overall performance of the testing process. To show the applicability of our system, we performed a thorough evaluation using 500 memory configurations and 4 different memory management algorithms, which entailed the execution of more than one million simulations. The evaluation used mutation testing, injecting faults into the memory management algorithms. The developed expert system was able to detect over 99% of the critical injected faults, obtaining very promising results and outperforming other standard techniques such as random testing. This work was supported by the Spanish Ministerio de Economía, Industria y Competitividad, Gobierno de España/FEDER (grant numbers DArDOS, TIN2015-65845-C3-1-R and FAME, RTI2018-093608-B-C31) and the Comunidad de Madrid project FORTE under Grant S2018/TCS-4314. The first author is also supported by the Universidad Complutense de Madrid - Santander Universidades grant (CT17/17-CT18/17).
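    To make the notion of a metamorphic relation concrete, here is one plausible relation for memory systems (our illustration, not necessarily one used in the paper): for an LRU-managed fully associative cache, enlarging the capacity can never reduce the number of hits on the same access trace (the LRU stack-inclusion property). A sketch of the follow-up test:

```systemverilog
// Metamorphic-relation check (illustrative, not from the paper): run
// the same trace through two LRU cache models of different capacity;
// the larger cache must score at least as many hits.
module mr_check;
  // Simulate a fully associative LRU cache of capacity 'cap' on 'trace'
  // and return the hit count. Most-recently-used element is at the back.
  function automatic int lru_hits(input int cap, input int trace[]);
    int q[$];
    int hits = 0;
    foreach (trace[i]) begin
      int idx[$] = q.find_first_index(x) with (x == trace[i]);
      if (idx.size()) begin hits++; q.delete(idx[0]); end // hit: refresh
      else if (q.size() == cap) q.delete(0);              // miss: evict LRU
      q.push_back(trace[i]);
    end
    return hits;
  endfunction

  initial begin
    int trace[] = '{1, 2, 3, 1, 4, 2, 1, 3, 4, 2};
    int h_small = lru_hits(2, trace);
    int h_big   = lru_hits(4, trace);
    // Metamorphic relation: the follow-up run must not lose hits.
    if (h_big < h_small) $error("MR violated: possible fault");
    else $display("MR holds: %0d <= %0d hits", h_small, h_big);
  end
endmodule
```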