58 research outputs found

    Testbench qualification of SystemC TLM protocols through Mutation Analysis

    Transaction-level modeling (TLM) has become the de-facto reference modeling style for system-level design and verification of embedded systems. It allows designers to implement high-level communication protocols for simulations up to 1000x faster than at register-transfer level (RTL). To guarantee interoperability between TLM IP suppliers and users, designers implement TLM communication protocols by relying on a reference standard, such as the OSCI standard for SystemC TLM. The functional correctness of such protocols, as well as their compliance with the reference TLM standard, is usually verified through user-defined testbenches, whose quality and completeness play a key role in an efficient TLM design and verification flow. This article presents a methodology that applies mutation analysis, a technique from the software-testing literature, to measure the quality of testbenches used to verify TLM protocols. In particular, the methodology aims at (i) qualifying the testbenches by considering both the correctness of the TLM protocol and its compliance with a defined standard (i.e., OSCI TLM), (ii) optimizing the simulation time during mutation analysis by avoiding mutation redundancies, and (iii) guiding designers in improving the testbench. Experimental results on benchmarks of different complexity and architectural characteristics are reported to analyze the applicability of the methodology.
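    The article does not include code here; as a rough, hypothetical illustration of mutation analysis applied to a TLM-style protocol model, the plain C++ sketch below (not using the actual SystemC library; all names invented) injects two code mutations into a tiny handshake model and reports how many of them the testbench checks manage to kill.

        // Hypothetical illustration of mutation analysis on a TLM-like handshake model.
        // Plain C++ only; the real methodology operates on SystemC TLM descriptions.
        #include <iostream>
        #include <vector>

        enum class Phase { REQ, ACK, DONE };

        // Reference protocol step: REQ -> ACK -> DONE.
        // 'mutant' selects an injected code mutation (0 = original model).
        Phase next_phase(Phase p, int mutant) {
            switch (p) {
                case Phase::REQ:
                    return (mutant == 1) ? Phase::DONE   // mutation 1: skip the ACK phase
                                         : Phase::ACK;
                case Phase::ACK:
                    return (mutant == 2) ? Phase::ACK    // mutation 2: never complete
                                         : Phase::DONE;
                default:
                    return Phase::DONE;
            }
        }

        // A user-defined testbench: returns true if it detects a protocol violation.
        bool testbench_detects_error(int mutant) {
            Phase p = Phase::REQ;
            p = next_phase(p, mutant);
            if (p != Phase::ACK) return true;            // checks the expected phase ordering
            p = next_phase(p, mutant);
            if (p != Phase::DONE) return true;           // checks completion
            return false;
        }

        int main() {
            std::vector<int> mutants = {1, 2};
            int killed = 0;
            for (int m : mutants)
                if (testbench_detects_error(m)) ++killed;
            std::cout << "mutation score: " << killed << "/" << mutants.size() << "\n";
            // A low score tells the designer which protocol behaviours the testbench misses.
        }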

    A Cross-level Verification Methodology for Digital IPs Augmented with Embedded Timing Monitors

    Smart systems are characterized by the integration, in a single device, of subsystems from different technological domains, namely analog, digital, discrete and power devices, MEMS, and power sources. The challenges emerging from the heterogeneous nature of the whole system, combined with the traditional challenges of digital design, directly impact the performance and propagation delay of the digital components. This article proposes a design approach that enhances the RTL model of a given digital component for integration in smart systems through the automatic insertion of delay sensors, which can detect and correct timing failures. The article then proposes a methodology to verify these added features at system level. The augmented model is abstracted to SystemC TLM, which is automatically injected with mutants (i.e., code mutations) to emulate delays and timing failures. The resulting TLM model is finally simulated to identify timing failures and to verify the correctness of the inserted delay monitors. Experimental results demonstrate the applicability of the proposed design and verification methodology, thanks to an efficient sensor-aware abstraction methodology, by applying the flow to three complex case studies.

    A Cross-level Verification Methodology for Digital IPs Augmented with Embedded Timing Monitors

    Smart systems implement the leading technology advances in the context of embedded devices. Current design methodologies are not suitable for dealing with tightly interacting subsystems of different technological domains, namely analog, digital, discrete and power devices, MEMS, and power sources. The interaction effects between the components, and between the environment and the system, must be modeled and simulated at system level to achieve high performance. Focusing on the digital subsystem, additional design constraints have to be considered as a result of the integration of multi-domain subsystems in a single device. The main digital design challenges, combined with those emerging from the heterogeneous nature of the whole system, directly impact the performance, and hence the propagation delay, of the digital component. In this paper we propose a design approach to enhance the RTL model of a given digital component for integration in smart systems, and a methodology to verify the added features at system level. The design approach consists of "augmenting" the RTL model through the automatic insertion of delay sensors, which are capable of detecting and correcting timing failures. The verification methodology is an automatic flow of two steps: first, the augmented model is abstracted to system level (i.e., SystemC TLM); second, mutants, i.e., code mutations that emulate timing failures, are automatically injected into the abstracted model. Experimental results demonstrate the applicability of the proposed design and verification methodology and the efficiency of the resulting simulation.
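    As a hedged illustration of the second verification step described above, the following plain C++ sketch (hypothetical names, no SystemC) injects a delay mutant into an abstracted operation and checks that a simple embedded delay monitor flags the resulting timing failure while leaving the golden model untouched.

        // Hypothetical sketch of mutant injection to emulate a timing failure,
        // and of a delay monitor that is expected to detect it.
        #include <iostream>

        struct Result {
            int value;
            int latency_ns;   // abstracted propagation delay of the operation
        };

        // Abstracted (TLM-like) model of a combinational operation.
        // inject_delay_mutant emulates a timing failure by stretching the latency.
        Result compute(int a, int b, bool inject_delay_mutant) {
            Result r;
            r.value = a + b;
            r.latency_ns = inject_delay_mutant ? 12 : 7;   // nominal delay is 7 ns
            return r;
        }

        // Embedded delay monitor: flags any operation slower than the clock budget.
        bool delay_monitor(const Result& r, int budget_ns) {
            return r.latency_ns > budget_ns;   // true = timing failure detected
        }

        int main() {
            const int budget_ns = 10;
            for (bool mutant : {false, true}) {
                Result r = compute(3, 4, mutant);
                bool flagged = delay_monitor(r, budget_ns);
                std::cout << (mutant ? "mutant " : "golden ")
                          << (flagged ? "-> failure detected\n" : "-> ok\n");
                // Verification goal: every injected delay mutant must be flagged,
                // while the golden model must not raise false alarms.
            }
        }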

    Standard-konformes Snapshotting für SystemC Virtuelle Plattformen

    The steady increase in the complexity of high-end embedded systems goes along with an increasingly complex design process. We are currently still in a transition phase from hardware-description-language (HDL) based design towards virtual-platform-based design of embedded systems. As design complexity rises faster than developer productivity, a gap forms. Restoring productivity while at the same time managing the increased design complexity can also be achieved by focusing on the development of new tools and design methodologies. In most application areas, high-level modelling languages such as SystemC are used in the early design phases. In modern software development, continuous integration (CI) is used to automatically test whether a submitted piece of code breaks functionality. Applying the CI concept to embedded-system design and testing requires fast build and test execution times from the virtual-platform framework. For this use case, the ability to save a specific state of a virtual platform becomes necessary. Saving and restoring specific states of a simulation requires the ability to serialize all data structures within the simulation models. Improving the frameworks and establishing better methods will only help to narrow the design gap if these changes are introduced with the needs of the engineers and developers in mind; ultimately, it is their productivity that shall be improved. The ability to save the state of a virtual platform enables developers to run longer test campaigns that can even contain randomized test stimuli. If the saved states are modifiable, the developers can inject faulty states into the simulation models. This work contributes an extension to the SoCRocket virtual platform framework to enable snapshotting. The snapshotting extension can be considered a reference implementation, as its use of current SystemC/TLM standards makes it compatible with other frameworks. Furthermore, integrating the UVM SystemC library into the framework enables test-driven development and fast validation of SystemC/TLM models using snapshots. These extensions narrow the design gap by supporting designers, testers and developers in working more efficiently.
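    The abstract contains no code; purely as a sketch of the snapshotting idea, i.e., serializing and later restoring the data structures of a simulation model, here is a minimal plain C++ example with invented names. The real extension targets SystemC/TLM models in the SoCRocket framework and follows the SystemC standard, which this sketch does not use.

        // Hypothetical sketch: saving and restoring the state of a simulation model
        // by serializing all of its data structures. Plain C++, no SystemC.
        #include <cstdint>
        #include <fstream>
        #include <iostream>
        #include <vector>

        struct TimerModelState {
            uint64_t counter = 0;
            bool     irq_pending = false;
            std::vector<uint8_t> regs = std::vector<uint8_t>(16, 0);

            void save(std::ostream& os) const {                 // take a snapshot
                os.write(reinterpret_cast<const char*>(&counter), sizeof counter);
                os.write(reinterpret_cast<const char*>(&irq_pending), sizeof irq_pending);
                os.write(reinterpret_cast<const char*>(regs.data()), regs.size());
            }
            void restore(std::istream& is) {                    // resume from a snapshot
                is.read(reinterpret_cast<char*>(&counter), sizeof counter);
                is.read(reinterpret_cast<char*>(&irq_pending), sizeof irq_pending);
                is.read(reinterpret_cast<char*>(regs.data()), regs.size());
            }
        };

        int main() {
            TimerModelState s;
            s.counter = 123456;          // state reached after a long simulation run
            s.irq_pending = true;

            { std::ofstream f("snapshot.bin", std::ios::binary); s.save(f); }

            TimerModelState restored;    // a later CI run starts from the snapshot
            { std::ifstream f("snapshot.bin", std::ios::binary); restored.restore(f); }

            std::cout << "restored counter = " << restored.counter << "\n";
            // A modifiable snapshot also allows injecting faulty states before resuming.
        }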

    Re-use of tests and arguments for assessing dependable mixed-criticality systems

    The safety assessment of mixed-criticality systems (MCS) is a challenging activity due to system heterogeneity, design constraints and increasing complexity. The foundation for MCSs is the integrated architecture paradigm, where a compact hardware platform comprises multiple execution platforms and communication interfaces to implement concurrent functions with different safety requirements. Besides a computing platform providing adequate isolation and fault-tolerance mechanisms, the development of an MCS application shall also comply with the guidelines defined by the safety standards. A way to lower the overall MCS certification cost is to adopt a platform-based design (PBD) development approach. PBD is a model-based development (MBD) approach in which separate models of logic, hardware and deployment support the analysis of the resulting system properties and behaviour. The PBD development of MCSs benefits from a composition of modular safety properties (e.g. modular safety cases), which support the derivation of mixed-criticality product lines. Validation and verification (V&V) activities claim a substantial effort during the development of programmable electronics for safety-critical applications. In the MCS dependability assessment, the purpose of V&V is to provide evidence supporting the safety claims. The model-based development of MCSs adds more V&V tasks, because additional analyses (e.g., simulations) need to be carried out during the design phase. During the MCS integration phase, hardware-in-the-loop (HiL) plant simulators typically support the V&V campaigns, where test automation and fault injection are key to test repeatability and to thoroughly exercising the safety mechanisms. This dissertation proposes several strategies for re-using V&V artefacts to perform early verification at system level for a distributed MCS, artefacts that are later reused up to the final stages of the development process: test-code re-use to verify the fault-tolerance mechanisms on a functional model of the system combined with non-intrusive software fault injection, model-to-X-in-the-loop (XiL) and code-to-XiL re-use to provide models of the plant and distributed embedded nodes suited to the HiL simulator, and finally an argumentation framework to support the automated composition and staged completion of modular safety cases for dependability assessment, in the context of the platform-based development of mixed-criticality systems relying on the DREAMS harmonized platform.
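    As a loose, hypothetical sketch of two of the ideas above, re-usable test code plus non-intrusive software fault injection on a functional model, the following plain C++ example checks that a simple triple-modular-redundancy voter masks one injected fault; the names and the mechanism are illustrative only, not taken from the dissertation.

        // Hypothetical sketch of the re-use idea: the same test code exercises a
        // fault-tolerance mechanism on a functional model, while faults are injected
        // non-intrusively through a wrapper rather than by editing the model itself.
        #include <functional>
        #include <iostream>

        // Functional model of a replicated sensor channel (triple modular redundancy).
        int voted_read(std::function<int()> a, std::function<int()> b, std::function<int()> c) {
            int x = a(), y = b(), z = c();
            if (x == y || x == z) return x;   // x is part of a majority
            if (y == z) return y;             // x is the faulty replica, mask it
            return x;                         // no majority (double fault): fall back
        }

        int main() {
            auto healthy = [] { return 42; };
            auto faulty  = [] { return 7; };  // injected fault: wrong sensor value

            // Reusable test: the safety requirement is that one faulty replica is masked.
            int golden = voted_read(healthy, healthy, healthy);
            int masked = voted_read(faulty,  healthy, healthy);

            std::cout << (masked == golden ? "fault masked: PASS\n" : "fault not masked: FAIL\n");
            // The same pass/fail check could later be reused against an XiL/HiL setup.
        }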

    Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge.

    Today, design verification is by far the most resource- and time-consuming activity of any new digital integrated circuit development. Within this area, the vast majority of the verification effort in industry relies on simulation platforms, which are implemented either in hardware or in software. A "simulator" includes a model of each component of a design and can simulate its behavior under any input scenario provided by an engineer. Thus, simulators are deployed to evaluate the behavior of a design under as many input scenarios as possible and to identify and debug all incorrect functionality. Two features are critical for simulators to make the validation effort effective: performance and checking/debugging capabilities. A wide range of simulator platforms is available today: at one end of the spectrum are software-based simulators, providing a very rich software infrastructure for checking and debugging the design's functionality, but executing only at 1-10 simulation cycles per second (while actual chips operate at GHz speeds). At the other end of the spectrum are hardware-based platforms, such as accelerators, emulators and even prototype silicon chips, providing performance higher by 4 to 9 orders of magnitude, at the cost of very limited or non-existent checking/debugging capabilities. As a result, simulation-based validation today is crippled: one can have either satisfactory performance on hardware-accelerated platforms or critical checking/debugging infrastructure on software simulators, but not both. This dissertation brings together these two ends of the spectrum by presenting solutions that offer high-performance simulation with effective checking and debugging capabilities. Specifically, it addresses the performance challenge of software simulators by leveraging inexpensive off-the-shelf graphics processors as massively parallel execution substrates and exposing the parallelism inherent in the design model to that architecture. For hardware-based platforms, the dissertation provides solutions that offer enhanced checking and debugging capabilities by abstracting the relevant data to be logged during simulation, so as to minimize the cost of collection, transfer and processing. Altogether, the contributions of this dissertation have the potential to solve the challenge of digital design verification by enabling effective high-performance simulation-based validation.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99781/1/dchatt_1.pd
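    The dissertation itself targets GPUs; as a simplified, hypothetical illustration of the parallelism it exposes, the plain C++ sketch below evaluates a levelized gate-level netlist level by level. Gates within a level are independent, so the inner loop is data-parallel and would map to one GPU thread per gate in the real system.

        // Hypothetical sketch of levelized gate-level simulation: gates are grouped by
        // topological level; gates within a level have no mutual dependencies, so they
        // can be evaluated concurrently.
        #include <cstdint>
        #include <iostream>
        #include <vector>

        enum class Op { AND, OR, XOR, NOT };
        struct Gate { Op op; int in0, in1, out; };   // indices into the net value array

        void evaluate(const std::vector<std::vector<Gate>>& levels, std::vector<uint8_t>& nets) {
            for (const auto& level : levels) {
                // Data-parallel loop: every gate reads nets computed in earlier
                // levels and writes a distinct output net.
                for (const Gate& g : level) {
                    uint8_t a = nets[g.in0], b = nets[g.in1];
                    switch (g.op) {
                        case Op::AND: nets[g.out] = a & b; break;
                        case Op::OR:  nets[g.out] = a | b; break;
                        case Op::XOR: nets[g.out] = a ^ b; break;
                        case Op::NOT: nets[g.out] = !a;    break;
                    }
                }
            }
        }

        int main() {
            // nets: [0]=x, [1]=y, [2]=x&y, [3]=x^y, [4]=(x&y)|(x^y)
            std::vector<uint8_t> nets = {1, 0, 0, 0, 0};
            std::vector<std::vector<Gate>> levels = {
                {{Op::AND, 0, 1, 2}, {Op::XOR, 0, 1, 3}},   // level 1 (independent gates)
                {{Op::OR, 2, 3, 4}},                        // level 2
            };
            evaluate(levels, nets);
            std::cout << "output net = " << int(nets[4]) << "\n";   // prints 1
        }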

    Analog System-on-a-Chip with Application to Biosensors

    This dissertation facilitates the design and fabrication of analog systems-on-a-chip (SoCs). In this work, an analog SoC is developed with application to organic fluid analysis. The device contains a built-in self-test method for performing on-chip analysis of analog macros. The analog system-on-a-chip developed in this dissertation can be used to evaluate the properties of fluids for medical diagnoses. The research described herein covers the development of analog SoC models, an improved set of chemical sensor arrays, a self-contained system-on-a-chip for the determination of fluid properties, and a method of performing on-chip testing of analog SoC sub-blocks.

    RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model

    Inspired by the recent success of large language models (LLMs) such as ChatGPT, researchers have started to explore the adoption of LLMs for agile hardware design, such as generating design RTL from natural-language instructions. However, in existing works the target designs are all relatively simple, small in scale, and proposed by the authors themselves, which makes a fair comparison among different LLM solutions challenging. In addition, many prior works focus only on design correctness, without evaluating the quality of the generated design RTL. In this work, we propose an open-source benchmark named RTLLM for generating design RTL from natural-language instructions. To systematically evaluate the auto-generated design RTL, we summarize three progressive goals, named the syntax goal, the functionality goal, and the design quality goal. This benchmark can automatically provide a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt engineering technique named self-planning, which is shown to significantly boost the performance of GPT-3.5 on the proposed benchmark.
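    RTLLM's actual prompts and scripts live in its repository; purely to illustrate the self-planning idea, first ask the model for a plan, then feed that plan back when asking for the RTL, the following C++ sketch only assembles the two prompts. The prompt wording and sendToLLM() are invented placeholders; no LLM API is called.

        // Hypothetical sketch of "self-planning" prompt construction: stage 1 asks the
        // LLM for an implementation plan, stage 2 asks for the RTL given that plan.
        #include <iostream>
        #include <string>

        std::string sendToLLM(const std::string& prompt) {
            return "<LLM response to: " + prompt.substr(0, 40) + "...>";   // placeholder only
        }

        int main() {
            std::string spec =
                "Design a 16-bit unsigned divider with start/done handshake in Verilog.";

            // Stage 1: planning prompt.
            std::string plan_prompt =
                "You are an RTL designer. Before writing any code, list the submodules, "
                "registers, and FSM states you would use for this design:\n" + spec;
            std::string plan = sendToLLM(plan_prompt);

            // Stage 2: generation prompt, with the model's own plan prepended.
            std::string code_prompt =
                "Following your plan below, write synthesizable Verilog for the design.\n"
                "Plan:\n" + plan + "\nSpecification:\n" + spec;
            std::string rtl = sendToLLM(code_prompt);

            std::cout << rtl << "\n";
            // The generated RTL would then be checked against the benchmark's syntax,
            // functionality, and design-quality goals.
        }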

    Automation scripts for generic synthesis and co-simulation for Xronos generated HDLs

    Growing application areas such as multimedia have pushed technology boundaries across a wide range of application domains. However, the complexity of developing these applications, which involves time-consuming and error-prone tasks, keeps rising along with the complexity of the devices on the market that must support them. ORCC helps reduce the complexity of application development by providing a set of modern tools based on dataflow programming. However, although Xronos-generated HDL has proven useful, it is not optimized for resource utilization, power, and timing, and it does not yet have wide coverage across synthesis tools such as Altera Quartus and Synopsys Design Compiler; that is, the generated code cannot readily be synthesized with other HDL tools. The critical part of this paper is the generation of generic HDL for Altera Quartus or Synopsys Design Compiler from the original HDL generated by ORCC, and the provision of a co-simulation and automation environment for Altera Quartus testbenches within the RVC-CAL framework, by extracting the simulation data available from RVC-CAL so that it can be used to verify data on both ends (RVC-CAL's and Quartus's). In addition, to fully test the generated code, a comparison is made with Xilinx Vivado HLS to benchmark functional tests performed with different HLS tools. This paper presents automation scripts that help obtain better resource, power, and timing results from Xronos-generated HDL and that provide automatic testbenches usable with Xilinx Vivado and other HDL tools.
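    The paper's scripts are not reproduced here; as a rough illustration of the co-simulation check it describes, comparing data captured on the RVC-CAL side with data produced by the synthesized design's testbench, the following plain C++ sketch compares two hypothetical trace files token by token.

        // Hypothetical sketch of the co-simulation check: outputs captured from the
        // RVC-CAL simulation are compared with outputs captured from the HDL testbench.
        #include <fstream>
        #include <iostream>
        #include <string>

        int main() {
            std::ifstream ref("rvc_cal_trace.txt");   // golden values from RVC-CAL
            std::ifstream dut("quartus_trace.txt");   // values from the HDL testbench
            if (!ref || !dut) { std::cerr << "missing trace file\n"; return 1; }

            std::string a, b;
            long token = 0, mismatches = 0;
            while (ref >> a && dut >> b) {             // token-by-token comparison
                ++token;
                if (a != b) {
                    ++mismatches;
                    std::cerr << "mismatch at token " << token << ": " << a << " vs " << b << "\n";
                }
            }
            std::cout << (mismatches == 0 ? "co-simulation PASS\n" : "co-simulation FAIL\n");
            return mismatches == 0 ? 0 : 2;
        }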

    Systematische Transaction-Level-Kommunikations-Modellierung mit SystemC

    An emerging approach to embedded-system design is to assemble systems from a library of hardware and software component models (IP, intellectual property) using a system description language such as SystemC. SystemC allows describing the communication among IPs in terms of abstract operations (transactions). The promise is that with transaction-level modeling (TLM), future systems-on-chip with one billion transistors and more can be composed out of IPs as simply as playing with LEGO bricks. Reality, however, is far from this: each IP vendor promotes its own proprietary interface standard, and the provided design tools lack compatibility, so heterogeneous IPs cannot be integrated efficiently. A novel generic interconnect fabric for TLM is presented which aims at enabling interoperation between models at different levels of abstraction (mixed-mode) and models with different interfaces (heterogeneous components), with as little overhead as possible. A generic, protocol-independent representation of transactions is developed, along with a formalism for abstraction levels. This approach is shown to support systematic simulation of state-of-the-art buses and networks-on-chip such as IBM CoreConnect and PCI Express across several levels of TLM abstraction. A layered simulation framework for SystemC, GreenBus, is developed to examine the proposed concepts. The thesis discusses new implementation techniques for communication modeling with SystemC which outperform existing approaches in terms of flexibility, simulation accuracy, and performance. Based on these techniques, advanced concepts for TLM-based hardware/software co-design and FPGA prototyping are examined. Several experiments and a video-processor case study highlight the efficiency of the approach and show its applicability in a TLM design flow.
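    GreenBus's real transaction container is part of the SystemC/GreenBus code base; the plain C++ sketch below is only a hedged illustration of a generic, protocol-independent transaction representation, in which protocol-specific fields travel as named attributes instead of fixed structure members. All names are invented.

        // Hypothetical sketch of a protocol-independent transaction: the core carries
        // only generic fields, while protocol-specific information (burst type, PCIe
        // packet kind, ...) travels as named attributes that target models may ignore.
        #include <cstdint>
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        struct GenericTransaction {
            enum class Cmd { READ, WRITE } cmd = Cmd::READ;
            uint64_t address = 0;
            std::vector<uint8_t> data;
            std::map<std::string, uint64_t> attributes;   // protocol-specific extensions

            void set_attr(const std::string& k, uint64_t v) { attributes[k] = v; }
            bool get_attr(const std::string& k, uint64_t& v) const {
                auto it = attributes.find(k);
                if (it == attributes.end()) return false;
                v = it->second;
                return true;
            }
        };

        int main() {
            GenericTransaction t;
            t.cmd = GenericTransaction::Cmd::WRITE;
            t.address = 0x40000000;
            t.data = {0xDE, 0xAD, 0xBE, 0xEF};
            t.set_attr("burst_length", 4);      // a bus-specific hint; optional for targets

            uint64_t burst;
            if (t.get_attr("burst_length", burst))
                std::cout << "burst of " << burst << " beats to 0x"
                          << std::hex << t.address << "\n";
            // A cycle-accurate model can interpret the attributes; an untimed model
            // simulating at a higher abstraction level simply ignores them.
        }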