
    Determining Cases of Scenarios to Improve Coverage in Simulation-based Verification

    Abstract—Functional verification of complex designs is still dominated by simulation-based approaches. In particular, Coverage-driven Verification (CDV) is well established and widely applied in industry. Here, verification gaps in the form of inadequately checked scenarios are identified and closed by generating and applying dedicated stimuli. To ensure good coverage and thus high verification quality, each scenario should be triggered sufficiently often. However, a scenario may be triggered in several different fashions, and this information is hardly available in existing CDV approaches. In this work, we propose an approach that automatically derives this information. Examples and experimental evaluations illustrate how it improves coverage in simulation-based verification.
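    A minimal, illustrative sketch (not the paper's actual algorithm) of the underlying idea: coverage is tracked per scenario and per trigger fashion, so a scenario counted as covered can still reveal which of its fashions were never exercised. The scenario and fashion names below are hypothetical.

```python
# Illustrative sketch only: track not just whether a coverage scenario was hit,
# but the distinct "fashions" (trigger conditions) in which it was hit.
from collections import defaultdict

class ScenarioCoverage:
    def __init__(self, name, fashions):
        self.name = name
        self.fashions = set(fashions)      # all known ways to trigger the scenario
        self.hits = defaultdict(int)       # fashion -> observed hit count

    def sample(self, fashion):
        if fashion in self.fashions:
            self.hits[fashion] += 1

    def report(self):
        missing = self.fashions - set(self.hits)
        covered = len(self.hits) / len(self.fashions)
        return {"scenario": self.name, "coverage": covered,
                "uncovered_fashions": sorted(missing)}

# Hypothetical scenario: a FIFO overflow can be provoked by three interleavings;
# only two of them are observed in this toy simulation run.
cov = ScenarioCoverage("fifo_overflow",
                       ["burst_write", "stalled_read", "reset_during_write"])
for fashion in ["burst_write", "stalled_read", "burst_write"]:
    cov.sample(fashion)
print(cov.report())     # coverage 2/3, "reset_during_write" still untriggered
```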

    Pre-validation of SoC via hardware and software co-simulation

    Abstract. System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is also limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models.

    The thesis focuses on two primary applications of co-simulation. Firstly, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Secondly, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP-block level with actual hardware models, a task previously only possible with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
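    As an illustration of the co-simulation setup described above, the sketch below mimics the bridge between software running on an emulated CPU and an RTL model of one IP block: memory-mapped accesses issued by driver code are forwarded to the hardware model. The class names, register map, and device behavior are hypothetical stand-ins, not QEMU or RTL-simulator APIs.

```python
# Conceptual sketch of the co-simulation bridge; names and registers are invented.
class RtlDeviceModel:
    """Stand-in for a cycle-accurate RTL simulation of one IP block."""
    def __init__(self):
        self.regs = {0x00: 0, 0x04: 0}          # CTRL, STATUS

    def write(self, offset, value):
        self.regs[offset] = value
        if offset == 0x00 and value & 0x1:      # CTRL.START bit set
            self.regs[0x04] |= 0x1              # STATUS.DONE after the "operation"

    def read(self, offset):
        return self.regs.get(offset, 0)

class MmioBridge:
    """Forwards memory-mapped accesses from the emulated CPU to the RTL model."""
    def __init__(self, base, device):
        self.base, self.device = base, device

    def cpu_store(self, addr, value):
        self.device.write(addr - self.base, value)

    def cpu_load(self, addr):
        return self.device.read(addr - self.base)

# Driver-level unit test, running "on" the emulated CPU against the RTL model.
bridge = MmioBridge(0x4000_0000, RtlDeviceModel())
bridge.cpu_store(0x4000_0000, 0x1)              # start the device
assert bridge.cpu_load(0x4000_0004) & 0x1       # poll STATUS.DONE
```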

    Towards more Dependable Verification of Mixed-Signal Systems

    The verification of complex mixed-signal systems is a challenge, especially considering the impact of parameter variations. Besides established approaches such as Monte Carlo or corner-case simulation, a novel semi-symbolic approach has emerged in recent years. In this approach, parameter variations and tolerances are maintained as symbolic ranges during numerical simulation runs by using affine arithmetic. Maintaining parameter variations and tolerances symbolically significantly increases verification coverage. In the following, we give a brief introduction to and overview of research on semi-symbolic simulation of both circuits and systems, and discuss possible applications for system-level verification and optimization.
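    A minimal sketch of the affine-arithmetic representation used in semi-symbolic simulation: each quantity carries a central value plus partial deviations attached to shared noise symbols in [-1, 1], so correlations between parameters are preserved across operations instead of being widened as independent intervals. The resistor example and class are illustrative, not taken from the cited work.

```python
# Minimal affine-arithmetic sketch: value = center + sum(dev_i * eps_i), eps_i in [-1, 1].
class Affine:
    def __init__(self, center, noise=None):
        self.center = center
        self.noise = dict(noise or {})          # noise symbol -> partial deviation

    def __add__(self, other):
        noise = dict(self.noise)
        for sym, dev in other.noise.items():
            noise[sym] = noise.get(sym, 0.0) + dev
        return Affine(self.center + other.center, noise)

    def scale(self, k):
        return Affine(k * self.center, {s: k * d for s, d in self.noise.items()})

    def interval(self):
        radius = sum(abs(d) for d in self.noise.values())
        return (self.center - radius, self.center + radius)

# Two resistances sharing the same process-variation symbol "p":
r1 = Affine(1000.0, {"p": 50.0})    # 1 kOhm +- 5 %
r2 = Affine(2000.0, {"p": 100.0})   # 2 kOhm +- 5 %, correlated with r1
diff = r2 + r1.scale(-1.0)          # r2 - r1 with the correlation preserved
print(diff.interval())              # (950.0, 1050.0); plain intervals would give (850.0, 1150.0)
```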

    Integration of a Digital Built-in Self-Test for On-Chip Memories

    The ability to test on-chip circuitry is essential to ASIC implementations today. However, providing functional tests and verification for on-chip (embedded) memories poses a large number of challenges to the designer. A built-in self-test block that co-exists with the Design Under Test (DUT) is therefore crucial to provide comprehensive, efficient, and robust testing features. The target DUT of this thesis project is a state-of-the-art Ultra Low Power (ULP) dual-port SRAM designed in the ASIC group of the EIT department at Lund University. The thesis starts from system RTL modeling and verification from an earlier project, and then goes through the ASIC design phase in 28 nm FD-SOI technology from STMicroelectronics. All scripts in the ASIC design phase are developed in TCL. The design is implemented with multiple power domains (using the CPF approach and introducing level-shifters at crossing points between domains) and multiple clock sources, making it possible to perform various measurements with high reliability on different flavours of a dual-port SRAM.

    The design dramatically reduces the complexity of verifying and measuring integrated memories. The digital integrated circuit (IC) is developed as an application-specific IC (ASIC) for functional verification of integrated memories and for measuring them in different aspects such as power consumption. The design is automated and can easily be reconfigured in terms of the actions and data required for testing on-chip memories. In other words, it automates and optimizes the generation of what data is stored at which memory location, as well as how that data is treated and interpreted later on. For instance, it refreshes and delivers different operation modes and working patterns to the entire test system in order to fully exercise the integrated memories; this automation is instructed by the stimuli sent to the chip. The pattern generation of the stimuli is automated in MATLAB.

    Due to constant advancements in chip manufacturing technology, more devices are squeezed into the same silicon area. Monitoring the additional internal signals introduced by the increased complexity of the circuits would require more dedicated input/output ports (the physical interface between the chip-internal signals and the outside world), which makes future chip bonding and testing difficult and time-consuming. Additionally, memories usually have a larger number of pins than other circuit blocks, so the method of dealing with so many pins must also be taken into account. A few techniques are therefore adopted in this system to help the designer deal with these issues. Once the ASIC chip has been fabricated and bonded, the on-chip memories can be tested directly on a printed circuit board in a simple and flexible way: once the test instruction input is loaded into the chip, the system updates the system settings and generates the internal configurations (parameters) so that all operations, modes, and instructions related to memory testing are processed automatically.
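    Purely as an illustration of the kind of algorithmic pattern such a memory BIST engine typically applies (the thesis does not specify this particular algorithm), the sketch below is a software reference model of a March C- test sequence with a simple fault-injection example.

```python
# Illustrative reference model of a March C- sequence: ⇕(w0); ⇑(r0,w1); ⇑(r1,w0);
# ⇓(r0,w1); ⇓(r1,w0); ⇕(r0). Not the thesis's BIST implementation.
def march_c_minus(mem_read, mem_write, size):
    """Return a list of (address, expected, got) mismatches."""
    errors = []

    def element(ascending, ops):
        addrs = range(size) if ascending else range(size - 1, -1, -1)
        for a in addrs:
            for op, bit in ops:
                if op == "w":
                    mem_write(a, bit)
                else:
                    got = mem_read(a)
                    if got != bit:
                        errors.append((a, bit, got))

    element(True,  [("w", 0)])
    element(True,  [("r", 0), ("w", 1)])
    element(True,  [("r", 1), ("w", 0)])
    element(False, [("r", 0), ("w", 1)])
    element(False, [("r", 1), ("w", 0)])
    element(False, [("r", 0)])
    return errors

# Toy memory with one stuck-at-1 cell at address 5.
mem = [0] * 64
read  = lambda a: 1 if a == 5 else mem[a]
write = lambda a, v: mem.__setitem__(a, v)
print(march_c_minus(read, write, 64))   # reports the mismatches at address 5
```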

    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all of these defects is not a trivial task, especially in complex systems such as processor cores. Since safety-critical applications do not tolerate failures, such devices must be tested to guarantee correct behavior at any time. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to improve test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system differently from the normal functional mode, passing through unreachable configurations. Alternative solutions that do not violate the functional mode are defined as functional tests.

    In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) which is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have mainly focused on the industrial perspective of SBST. The problems of providing an effective development flow and of providing guidelines for integrating SBST into the available operating systems have been tackled, and results have been provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches; they can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently employed in real electronic control units for automotive products.

    Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been a focus of my research. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, in that case, at identifying the defective elements of the network. My contribution in this field has also been included in the first suite of public-domain benchmark networks.
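    The SBST principle described above can be sketched as follows: targeted operations exercise a functional unit, and the observed results are compacted into a signature that is compared against a golden value. In practice the test program is assembly code executed by the processor under test; the operand set and signature scheme here are illustrative only.

```python
# Conceptual SBST sketch: exercise the unit under test, compact results into a
# signature, compare against a golden reference. All constants are illustrative.
OPERANDS = [(0x0000_0000, 0xFFFF_FFFF), (0xAAAA_AAAA, 0x5555_5555), (0x1234_5678, 0x0000_0001)]

def alu_add(a, b):                    # stand-in for the hardware adder under test
    return (a + b) & 0xFFFF_FFFF

def run_sbst(add):
    sig = 0
    for a, b in OPERANDS:
        r = add(a, b)
        sig = ((sig << 1) | (sig >> 31)) & 0xFFFF_FFFF   # rotate-and-xor compaction
        sig ^= r
    return sig

GOLDEN = run_sbst(alu_add)            # computed once on a known-good model

def faulty_add(a, b):                 # e.g. a stuck-at-0 fault in result bit 3
    return alu_add(a, b) & ~0x8

print(run_sbst(alu_add) == GOLDEN)    # True: fault-free unit passes
print(run_sbst(faulty_add) == GOLDEN) # False: fault detected via signature mismatch
```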

    Enhanced Formal Verification Flow for Circuits Integrating Debugging and Coverage Analysis

    In this paper we briefly review techniques used in formal hardware verification. An advanced flow emerges from integrating two major methodological improvements: debugging support and coverage analysis. With automatic debugging support, the verification engineer can locate the source of a failure; components are identified which explain the discrepancy between the property and the circuit behavior. This method is complemented by an approach to analyze the functional coverage of the proven Bounded Model Checking (BMC) properties. The approach automatically determines whether the property set is complete or not; in the latter case, coverage gaps are returned. Both techniques are integrated in an enhanced verification flow. A running example demonstrates the resulting advantages.
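    The completeness question behind the coverage analysis can be illustrated with a toy example: if, for some input, the property set does not uniquely determine the output, that behavior is unconstrained and a coverage gap is reported. The real flow answers this with BMC; the brute-force check below is only a conceptual sketch with invented properties.

```python
# Toy completeness check: do the properties force a unique output for every input?
from itertools import product

def circuit(a, b):                        # design under verification: 1-bit AND
    return a & b

properties = [
    lambda a, b, y: not (a == 1 and b == 1) or y == 1,   # P1: 1 AND 1 -> 1
    lambda a, b, y: not (a == 0) or y == 0,              # P2: 0 AND b -> 0
    # Note: no property constrains the input combination a == 1, b == 0.
]

# Step 1: all properties hold on the circuit (what BMC would prove).
assert all(p(a, b, circuit(a, b))
           for p in properties for a, b in product([0, 1], repeat=2))

# Step 2: completeness check -- report inputs whose output is not uniquely determined.
gaps = [(a, b) for a, b in product([0, 1], repeat=2)
        if len([y for y in (0, 1) if all(p(a, b, y) for p in properties)]) > 1]
print(gaps)                               # [(1, 0)]: an uncovered scenario
```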

    Optimization of Cell-Aware Test

