2,889 research outputs found

    From FPGA to ASIC: A RISC-V processor experience

    This work documents a correct design flow using these tools on the Lagarto RISC-V processor and the RTL design considerations that must be taken into account when moving from a design for FPGA to a design for ASIC.

    Analysis and optimization of a debug post-silicon hardware architecture

    The goal of this thesis is to analyze the post-silicon validation hardware infrastructure implemented on multicore systems, taking as an example the Esperanto Technologies SoC, which has thousands of RISC-V processors and targets specific software applications. Based on the conclusions of this analysis, the project then proposes a new post-silicon debug architecture that can fit on any System-on-Chip regardless of its target application or complexity, and that improves on the options available on the market for multicore systems.

    System-on-Chip Design and Test with Embedded Debug Capabilities

    In this project, I started with a System-on-Chip platform with embedded test structures. The baseline platform consisted of a Leon2 CPU, an AMBA on-chip bus, and an Advanced Encryption Standard decryption module. The basic objective of this thesis was to use the embedded reconfigurable logic blocks for post-silicon debug and verification. The System-on-Chip platform was designed at the register transfer level and implemented in a 180-nm IBM process. Test logic instrumentation was done with DAFCA (Design Automation for Flexible Chip Architecture) Inc. pre-silicon tools. The design was then synthesized using the Synopsys Design Compiler and placed and routed using Cadence SoC Encounter. The total transistor count is about 3 million, including 1,400K transistors for the debug module, which serves as an on-chip logic analyzer. The core size of the design is about 4.8 mm x 4.8 mm, and the system operates at 151 MHz. Design verification was done with Cadence NCSim. The controllability and observability of the design's internal signals are greatly increased with the help of the pre-silicon tools, which helps locate bugs that can later be fixed with the post-silicon tools. This can prevent re-spins on several occasions, saving millions of dollars. The post-silicon tools have been used to program assertions and triggers and to inject numerous personalities into the reconfigurable fabric, which has greatly increased the versatility of the circuit.
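
    To make the trigger-and-capture idea concrete, here is a minimal Python sketch of how a programmable trigger might arm an on-chip trace buffer of the kind described above. The class names, pattern/mask scheme, and buffer depth are illustrative assumptions, not details of the DAFCA tools or of this design.

```python
# Minimal sketch of trigger-and-capture behavior in a debug fabric; all
# names and widths are illustrative, not taken from the DAFCA tools.

from collections import deque

class TriggerUnit:
    """Matches sampled bus values against a programmed pattern/mask pair."""
    def __init__(self, pattern: int, mask: int):
        self.pattern = pattern  # value to match on the masked bits
        self.mask = mask        # 1-bits select which signal bits are compared

    def fires(self, sample: int) -> bool:
        return (sample & self.mask) == (self.pattern & self.mask)

class TraceBuffer:
    """Captures samples after the trigger fires, like an on-chip logic analyzer."""
    def __init__(self, trigger: TriggerUnit, depth: int = 8):
        self.trigger = trigger
        self.depth = depth
        self.armed = True
        self.samples = deque(maxlen=depth)

    def clock(self, sample: int) -> None:
        if self.armed and self.trigger.fires(sample):
            self.armed = False  # trigger hit: start capturing from this sample
        if not self.armed and len(self.samples) < self.depth:
            self.samples.append(sample)

# Example: trigger when the low nibble of a bus equals 0xA.
tb = TraceBuffer(TriggerUnit(pattern=0xA, mask=0xF), depth=4)
for value in [0x13, 0x2A, 0x31, 0x44, 0x55, 0x66]:
    tb.clock(value)
print([hex(s) for s in tb.samples])  # ['0x2a', '0x31', '0x44', '0x55']
```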

    The Design of a Debugger Unit for a RISC Processor Core

    Recently, there has been a significant increase in the design complexity of embedded systems, a field often referred to as hardware/software co-design. This complexity arises because hardware and firmware are closely coupled in order to achieve low power, high performance, and low area. Due to these demands, embedded systems consist of multiple interconnected hardware IPs with complex firmware algorithms running on the device. Such designs are often available in bare-metal form, i.e., without an operating system, which makes debugging difficult due to the lack of insight into the system; as a result, the development cycle and time to market are increased. One of the major challenges for bare-metal designs is to capture, effectively and efficiently, the internal data required during debugging or testing in the post-silicon validation stage. Post-silicon validation can leverage technologies such as hardware/software co-verification using hardware accelerators, FPGA emulation, and logic analyzers, which reduce the overall development cycle time. This requires the hardware to be instrumented with features that support debugging. As there is no standard for debugging capabilities and infrastructure, implementations vary from manufacturer to manufacturer and from designer to designer. This work aims to implement the minimum features required for debugging a bare-metal core by instrumenting the hardware with debug support. It takes into consideration that for a single-core bare-metal embedded system silicon area is also a constraint, so there must be a trade-off between the debugging capabilities implemented in hardware and the portions handled in software. The paper discusses debugging approaches developed and implemented on various processor platforms, and implements a new debugging infrastructure by instrumenting the open-source AMBER 25 core with a set of debug features such as breakpoints, current-state read, trace, and memory access. The interface between the hardware system and the host system is designed using a JTAG-standard TAP controller. The resulting design can be used for debugging and testing during the post-silicon verification and validation stages. The design is synthesized using Synopsys Design Compiler targeting a 65 nm technology node, and results are compared for the instrumented and non-instrumented systems.
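
    The host interface mentioned above is a standard IEEE 1149.1 TAP controller, whose behavior is a fixed 16-state machine advanced by the TMS value on each TCK edge. The Python model below encodes that standard state diagram as a behavioral sketch; it illustrates the protocol, not the thesis's actual RTL.

```python
# Behavioral model of the IEEE 1149.1 TAP controller state machine.
# NEXT maps each state to (next state if TMS=0, next state if TMS=1).

NEXT = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",    "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",      "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",      "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",      "Update-DR"),
    "Pause-DR":         ("Pause-DR",      "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",      "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",    "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",      "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",      "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",      "Update-IR"),
    "Pause-IR":         ("Pause-IR",      "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",      "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

class TapController:
    def __init__(self):
        # Five TCK edges with TMS=1 always reach this state from anywhere.
        self.state = "Test-Logic-Reset"

    def clock(self, tms: int) -> str:
        self.state = NEXT[self.state][tms]
        return self.state

# Walking from reset to Shift-DR takes TMS = 0, 1, 0, 0.
tap = TapController()
for tms in (0, 1, 0, 0):
    tap.clock(tms)
print(tap.state)  # Shift-DR
```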

    Security of Systems on Chip

    In recent years, technology has evolved to become more power efficient, more powerful in terms of processing, and smaller in size. This evolution of electronics has led microprocessors and other components to be merged into a single circuit called a System-on-Chip (SoC). A broad comparison between SoCs and microcontrollers, microprocessors, and CPUs leads to the conclusion that an SoC is a single chip that does everything those components do, yet without needing any external parts; SoCs are thus computers just by themselves. Furthermore, SoCs generally have more memory than microcontrollers. Being computers by themselves also allows them to act as servers, so nowadays an SoC may also be regarded as a Server-on-Chip.

    Pre-validation of SoC via hardware and software co-simulation

    System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle; as a result, cooperation between hardware and software development is limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. First, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Second, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP-block level with actual hardware models, a task previously possible only with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
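
    As a rough illustration of the co-simulation idea, the sketch below models an emulated CPU's memory-mapped accesses being forwarded to a hardware model and back. The transaction format, class names, and address map are hypothetical; an actual QEMU-to-RTL link would also need a transport (e.g., a socket) and cycle-accurate scheduling, which are omitted here.

```python
# Hedged sketch of a transaction-level bridge between a CPU emulator and an
# RTL simulation: MMIO accesses from the emulated core are routed into the
# hardware model and the result is returned. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class MmioTransaction:
    write: bool    # True for a store, False for a load
    addr: int      # physical address issued by the emulated CPU
    data: int = 0  # write data (ignored for reads)

class RtlDeviceModel:
    """Stand-in for the cycle-accurate RTL model of one IP block."""
    def __init__(self):
        self.regs = {0x0: 0, 0x4: 0}  # tiny illustrative register file

    def access(self, txn: MmioTransaction) -> int:
        offset = txn.addr & 0xF
        if txn.write:
            self.regs[offset] = txn.data
            return 0
        return self.regs.get(offset, 0xDEADBEEF)

class CoSimBridge:
    """Routes emulator MMIO into the RTL model based on a 4 KiB address window."""
    def __init__(self, base: int, model: RtlDeviceModel):
        self.base, self.model = base, model

    def handle(self, txn: MmioTransaction) -> int:
        assert (txn.addr & ~0xFFF) == self.base, "address outside device window"
        return self.model.access(txn)

# A driver-style unit test running against the "hardware" model:
bridge = CoSimBridge(base=0x1000_0000, model=RtlDeviceModel())
bridge.handle(MmioTransaction(write=True, addr=0x1000_0004, data=0x55))
assert bridge.handle(MmioTransaction(write=False, addr=0x1000_0004)) == 0x55
```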

    The future of computing beyond Moore's Law

    Moore's Law is a techno-economic model that has enabled the information technology industry to roughly double the performance and functionality of digital electronics every two years within a fixed cost, power, and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements and how it affects the options available for continued scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of the discussion meeting issue 'Numerical algorithms for high-performance computational science'.

    Use of Field Programmable Gate Array Technology in Future Space Avionics

    Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible, and fault-tolerant spacecraft control systems. The traditional development paradigm consists of purchasing or fabricating hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system, followed by the purchase and/or development of software. This paradigm has several disadvantages for developing systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of the software involved. Standard bus designs and conventional implementations produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. Hardware Description Languages (HDLs), the recent increases in the performance, density, and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) cores provide the technology for reprogrammable Systems on a Chip (SoC). This technology supports a paradigm better suited to NASA's vision: hardware and software production are melded for more effective development, and both can evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance implemented in hardware instead of software, and such designs can be protected from obsolescence problems in which maintenance is compromised by component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.
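
    The abstract argues that fault tolerance can move from software into reprogrammable hardware. A standard technique for this in radiation-tolerant FPGA designs is triple modular redundancy (TMR); the abstract does not name the specific scheme used, so the sketch below is a generic illustration rather than the AP101S prototype's mechanism.

```python
# Illustrative sketch of triple modular redundancy (TMR): three redundant
# copies of a value are combined by a bitwise 2-of-3 majority voter, so a
# single-event upset in any one copy is masked. Generic example only.

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote across three redundant copies."""
    return (a & b) | (b & c) | (a & c)

# A single-event upset flips one bit in one copy; the voter masks it.
golden = 0b1011_0110
upset  = golden ^ 0b0000_0100  # bit flip in the second copy
assert tmr_vote(golden, upset, golden) == golden
```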

    An Open Core System-on-chip Platform

    The design cycle required to produce a System-on-Chip can be reduced by providing pre-designed built-in features and functions such as configurable I/O, power and ground grids, block RAMs, timing generators, and other embedded intellectual property (IP) blocks. A basic combination of such built-in features is known as a platform. The major objective of this thesis was to design and implement one such System-on-Chip platform using open IP cores targeting the TSMC 0.18 µm CMOS process. The integrated System-on-Chip platform, which contains approximately four million transistors, was synthesized using Synopsys Design Compiler and placed and routed using Cadence First Encounter and Silicon Ensemble. Design verification was done at the pre-synthesis, post-synthesis, and post-layout levels using Mentor Graphics ModelSim. The final layout was imported into Cadence Virtuoso to perform design rule checking. A tutorial was written to enable others to create derivative designs of this platform quickly.