11 research outputs found

    Verification of SD/MMC Controller IP Using UVM

    Get PDF
    Widespread IP reuse in SoC designs has enabled the rapid development of derivative designs. Several hardware block IPs are integrated together to reduce production costs and time-to-fab/time-to-market and to achieve higher levels of productivity. These block IPs must be verified independently before shipping to ensure correct operation and conformance to the protocols they implement. However, since the application of these IPs varies from SoC to SoC, the verification environment must consider the features and functions that are critical for each application, which may mean revamping the entire testbench. Verification consumes a major share of the overall development cycle; thus, verification IPs are created that can be reused with minor modifications to the existing testbench. In this project, an OpenCores IP, the "SD/MMC Card Controller" (written in Verilog), is reused by adding an interrupt line and a card-detect feature, and is verified using the Universal Verification Methodology (UVM). The SD/MMC Card Controller uses Wishbone as the host controller and an SPI master as the core controller. The test environment is layered and reusable: if this IP is redesigned to be controlled by another host controller (AXI, for example), the verification environment can be reused by inserting the bus functional model (BFM) of that host controller. This paper discusses the SD/MMC, Wishbone bus, and SPI protocols, along with the SD/MMC controller and the UVM-based testbench architecture.
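
    The layering the abstract describes can be pictured with a small, language-agnostic sketch. The Python below is illustrative only: the class names, register offsets, and print-based "bus cycles" are hypothetical, not the OpenCores register map or a real UVM agent. It shows why a transaction layer that talks to an abstract host-bus BFM lets a Wishbone BFM be swapped for an AXI one without touching the test stimulus.

    from abc import ABC, abstractmethod

    class HostBusBFM(ABC):
        """Abstract bus functional model for the host-side interface.

        The layered environment talks to the DUT only through this
        interface, so the Wishbone BFM below can be swapped for an AXI
        BFM without touching the sequence/test layers above it.
        """
        @abstractmethod
        def write(self, addr: int, data: int) -> None: ...
        @abstractmethod
        def read(self, addr: int) -> int: ...

    class WishboneBFM(HostBusBFM):
        """Drives Wishbone cycles toward the controller registers."""
        def write(self, addr, data):
            print(f"WB write  addr=0x{addr:02x} data=0x{data:08x}")
        def read(self, addr):
            print(f"WB read   addr=0x{addr:02x}")
            return 0

    class AxiBFM(HostBusBFM):
        """Drop-in replacement if the controller is re-targeted to AXI."""
        def write(self, addr, data):
            print(f"AXI write addr=0x{addr:02x} data=0x{data:08x}")
        def read(self, addr):
            print(f"AXI read  addr=0x{addr:02x}")
            return 0

    class SdCommandLayer:
        """Transaction layer: issues SD/MMC commands via whichever BFM it holds."""
        def __init__(self, bfm: HostBusBFM):
            self.bfm = bfm
        def send_cmd(self, index: int, arg: int):
            # Register offsets here are placeholders, not the real map.
            self.bfm.write(0x04, arg)           # argument register
            self.bfm.write(0x00, index & 0x3F)  # command register

    # The same test stimulus runs unchanged over either bus:
    for bfm in (WishboneBFM(), AxiBFM()):
        SdCommandLayer(bfm).send_cmd(index=0, arg=0)  # CMD0: GO_IDLE_STATE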

    Design and Verification of a DFI-AXI DDR4 Memory PHY Bridge Suitable for FPGA Based RTL Emulation and Prototyping

    Get PDF
    System-on-chip (SoC) designers today emphasize a process that can ensure robust silicon at the first tape-out. Given the complexity of modern SoC chips, there is a compelling need to have suitable runtime software, such as the Linux kernel and the necessary drivers, ready as soon as prototype silicon arrives. Emulation and FPGA prototyping systems are exemplary platforms for running design tests: they are efficient, perform well, and enable early software development. While useful, they do not run at full silicon speed; in fact, an SoC target ported to an FPGA might achieve a clock speed of less than 10 MHz. Although still very useful for testing and software development, this low operating speed creates challenges for connecting to external devices such as DDR SDRAM. In this paper, a DDR PHY Interface (DFI) to Advanced eXtensible Interface (AXI) bridge is designed to support a DDR4 memory subsystem. The bridge module is developed based on the DDR PHY Interface version 5.0 specification; once implemented in an FPGA, it transfers command information and data between the SoC DDR memory controller being prototyped, across the AXI bus, and an FPGA-specific memory controller connected to a DDR SDRAM or other physical memory external to the FPGA. The bridge also communicates with the design under test (DUT) through a synthesizable SCE-MI based infrastructure between the bridge and the logic simulator; SCE-MI provides a direct mechanism to inject specific traffic and monitor the performance of the DFI-AXI DDR4 memory PHY bridge. Both emulation and FPGA prototyping platforms can use this design and its testbench.
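
    As a rough illustration of the bridge's core job, the Python sketch below flattens one DFI-style write into a single AXI write burst. It is a toy model under stated assumptions: the real DFI 5.0 interface spreads commands and data across phases and imposes timing parameters (tphy_wrlat and friends) that the sketch ignores, and all names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class DfiWriteCommand:
        """Simplified stand-in for a DFI write: the real interface
        carries bank/row/column over several phases; here it is
        flattened to one byte address plus the write data."""
        address: int
        data: bytes

    @dataclass
    class AxiWriteBurst:
        awaddr: int
        awlen: int    # beats - 1, per the AXI convention
        awsize: int   # log2(bytes per beat)
        wdata: list   # one data entry per beat

    def dfi_to_axi(cmd: DfiWriteCommand, beat_bytes: int = 8) -> AxiWriteBurst:
        """Map one flattened DFI write onto a single AXI INCR burst.

        Illustrative only: a real bridge must also honor DFI timing
        parameters, reorder phases, and handle reads and responses.
        """
        assert len(cmd.data) % beat_bytes == 0, "pad to whole beats"
        beats = [cmd.data[i:i + beat_bytes]
                 for i in range(0, len(cmd.data), beat_bytes)]
        return AxiWriteBurst(
            awaddr=cmd.address,
            awlen=len(beats) - 1,
            awsize=beat_bytes.bit_length() - 1,
            wdata=beats,
        )

    burst = dfi_to_axi(DfiWriteCommand(address=0x1000, data=bytes(range(16))))
    print(burst.awaddr, burst.awlen, burst.awsize)  # 4096 1 3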

    A Hardware Verification Methodology for an Interconnection Network with fast Process Synchronization

    Full text link
    Shrinking process node sizes allow more and more functionality to be integrated into a single chip design. At the same time, the mask costs to manufacture a new chip increase steadily. Industry can absorb this cost increase by selling more chips, but new, innovative chip designs carry a higher risk, so industry changes only small parts of a chip design between generations to minimize that risk. New, innovative chip designs can therefore only be realized by research institutes, which do not face the same cost restrictions and market pressure as industry. One such innovative research project is EXTOLL, developed by the Computer Architecture Group of the University of Heidelberg. It is a new interconnection network for High Performance Computing that targets the problems of commercially available interconnection networks. EXTOLL is optimized for high bandwidth, low latency, and a high message rate. Low latency and a high message rate in particular are becoming more important for modern interconnection networks: as networks grow, the same computational problem is distributed across more nodes, which leads to finer data granularity and more, smaller messages that the interconnection network has to transport. This thesis addresses the problem of small messages in the interconnection network. It develops a new network protocol, optimized for small messages, that reduces the protocol overhead required to send them. Growing network sizes also introduce a reliability problem, which the developed protocol addresses as well. The finer data granularity likewise increases the need for efficient barrier synchronization; such a hardware barrier synchronization is developed by this thesis, using a new approach that integrates the barrier functionality into the interconnection network itself. The mask costs of manufacturing an ASIC make it difficult for a research institute to build one: a research institute cannot afford a re-spin, so there is pressure to get it right the first time. One approach to avoiding a re-spin is functional verification prior to submission. A complete and comprehensive verification methodology is developed for the EXTOLL interconnection network; thanks to the structured approach, the functional verification can be realized with limited resources within a short time frame. Additionally, the developed verification methodology can support different target technologies for the design with very little overhead.
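
    To make the barrier idea concrete, here is a minimal Python model of a tree barrier of the general kind the abstract alludes to: arrivals are combined on the way up and the release fans out on the way down, so no central node ever sees N individual messages. The topology, names, and semantics are illustrative assumptions, not EXTOLL's actual in-network design.

    class TreeBarrierNode:
        """Toy model of a barrier folded into the network fabric: each
        node reports 'arrived' upward once all of its children and its
        own local process have arrived; the root then broadcasts the
        release back down the tree."""
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.children = []
            self.pending = 1  # the node's own local process
            if parent:
                parent.children.append(self)
                parent.pending += 1
            self.released = False

        def arrive(self):
            """Called when the local process (or a whole child subtree)
            reaches the barrier; arrivals are combined instead of being
            sent individually to a central master."""
            self.pending -= 1
            if self.pending == 0:
                if self.parent:
                    self.parent.arrive()  # one combined arrival goes up
                else:
                    self.release()        # root: everyone has arrived

        def release(self):
            self.released = True
            for child in self.children:   # release fans out downward
                child.release()

    root = TreeBarrierNode("root")
    leaves = [TreeBarrierNode(f"node{i}", parent=root) for i in range(4)]
    for n in leaves:
        n.arrive()  # each leaf's local process hits the barrier
    root.arrive()   # the root's own local process arrives last
    assert all(n.released for n in [root] + leaves)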

    Methodology and Ecosystem for the Design of a Complex Network ASIC

    Full text link
    The performance of HPC systems has risen steadily. While the 10 Petaflop/s barrier was breached in 2011, the next large step into the exascale era is expected sometime between 2018 and 2020, and the EXTOLL project will be an integral part of this venture. Originally designed as an FPGA-based research project, it will make the transition to an ASIC to improve its already excellent performance even further. This transition poses many challenges, which are presented in this thesis. Nowadays it is not enough to look only at single components in a system: EXTOLL is part of a complex ecosystem that must be optimized as a whole, since everything is tightly interwoven and disregarding any aspect can cause the whole system to run with limited performance or even to fail. This thesis examines four different aspects of the design hierarchy and proposes efficient solutions or improvements for each. First, it looks at the design implementation and the differences between FPGA and ASIC design, and introduces a methodology to equip all on-chip memory with ECC logic automatically, without user input and in a transparent way, so that the underlying code that uses the memory does not have to be changed. In the next step the floorplanning process is analyzed, and an iterative solution is worked out based on the physical and logical constraints of the EXTOLL design; in addition, a workflow for collaborative design is presented that allows multiple users to work on the design concurrently. The third part concentrates on the high-speed signal path from the chip to the connector and how it is affected by technological limitations. All constraints are analyzed, and a package layout for the EXTOLL chip is proposed as the optimal solution. The last part develops a cost model for wafer- and package-level test and raises technological concerns that affect the testing methodology. To run testing internally, it proposes the development of a stand-alone test platform able to test packaged EXTOLL chips in every respect.
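
    A common choice for the kind of transparent memory protection described above is a SECDED (single-error-correct, double-error-detect) Hamming code. The Python below is a bit-level model of such a code, offered as an assumption about the flavor of ECC involved; the thesis's automatically generated RTL and its exact code parameters may well differ.

    def secded_encode(data):
        """Encode a list of data bits into a SECDED Hamming codeword.

        Data bits occupy the non-power-of-two positions (1-indexed),
        check bits the power-of-two positions; an overall parity bit is
        appended so double-bit errors are detectable too."""
        m = len(data)
        r = 0
        while (1 << r) < m + r + 1:   # number of check bits needed
            r += 1
        n = m + r
        code = [0] * (n + 1)          # 1-indexed; position 0 unused
        it = iter(data)
        for pos in range(1, n + 1):   # data fills non-power-of-two slots
            if pos & (pos - 1):
                code[pos] = next(it)
        for i in range(r):            # check bit 2^i covers positions with bit i set
            p = 1 << i
            for pos in range(1, n + 1):
                if (pos & p) and pos != p:
                    code[p] ^= code[pos]
        overall = 0
        for pos in range(1, n + 1):
            overall ^= code[pos]
        return code[1:] + [overall]

    def secded_decode(word):
        """Return (data, status): 'ok', 'corrected', or 'uncorrectable'."""
        *body, overall = word
        code = [0] + body
        n = len(body)
        syndrome, parity = 0, overall
        for pos in range(1, n + 1):
            parity ^= code[pos]
            if code[pos]:
                syndrome ^= pos       # XOR of set positions = error location
        if syndrome == 0 and parity == 0:
            status = 'ok'
        elif parity == 1:             # odd parity flip => single-bit error
            if syndrome:
                code[syndrome] ^= 1   # correct it in place
            status = 'corrected'      # (syndrome 0: overall bit itself flipped)
        else:                         # syndrome set but parity even => two errors
            status = 'uncorrectable'
        data = [code[pos] for pos in range(1, n + 1) if pos & (pos - 1)]
        return data, status

    bits = [1, 0, 1, 1, 0, 0, 1, 0]   # one memory word
    word = secded_encode(bits)
    word[3] ^= 1                      # inject a single-bit fault
    assert secded_decode(word) == (bits, 'corrected')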

    Analysis and optimization of a debug post-silicon hardware architecture

    Get PDF
    The goal of this thesis is to analyze the post-silicon validation hardware infrastructure implemented on multicore systems, taking as an example the Esperanto Technologies SoC, which has thousands of RISC-V processors and targets specific software applications. Based on the conclusions of this analysis, the project proposes a new post-silicon debug architecture that can fit on any System-on-Chip regardless of its target application or complexity, and that improves on the options available on the market for multicore systems.

    CIRA annual report FY 2015/2016

    Get PDF
    Reporting period April 1, 2015-March 31, 2016

    Proceedings of the 6th Annual Summer Conference: NASA/USRA University Advanced Design Program

    Get PDF
    The NASA/USRA University Advanced Design Program is a unique program that brings together NASA engineers, students, and faculty from United States engineering schools by integrating current and future NASA space/aeronautics engineering design projects into the university curriculum. The program was conceived in the fall of 1984 as a pilot project to foster engineering design education in the universities and to supplement NASA's in-house efforts in advanced planning for space and aeronautics design. Nine universities and five NASA centers participated in the first year of the pilot project. The study topics cover a broad range of potential space and aeronautics projects that could be undertaken during a 20- to 30-year period beginning with the deployment of the Space Station Freedom, scheduled for the mid-1990s. Both manned and unmanned endeavors are embraced, and the systems approach to the design problem is emphasized.

    Microgravity Science and Applications: Program Tasks and Bibliography for Fiscal Year 1996

    Get PDF
    NASA's Microgravity Science and Applications Division (MSAD) sponsors a program that expands the use of space as a laboratory for the study of important physical, chemical, and biochemical processes. The primary objective of the program is to broaden the value and capabilities of human presence in space by exploiting the unique characteristics of the space environment for research. However, since flight opportunities are rare and flight research development is expensive, a vigorous ground-based research program, from which only the best experiments evolve, is critical to the continuing strength of the program. The microgravity environment affords unique characteristics that allow the investigation of phenomena and processes that are difficult or impossible to study on Earth. The ability to control gravitational effects such as buoyancy-driven convection, sedimentation, and hydrostatic pressures makes it possible to isolate phenomena and make measurements with significantly greater accuracy than can be achieved in normal gravity. Space flight gives scientists the opportunity to study the fundamental states of physical matter (solids, liquids, and gases) and the forces that affect those states. Because the orbital environment allows the treatment of gravity as a variable, research in microgravity leads to a greater fundamental understanding of the influence of gravity on the world around us. With appropriate emphasis, the results of space experiments lead to both knowledge and technological advances that have direct applications on Earth. Microgravity research also provides the practical knowledge essential to the development of future space systems. The Office of Life and Microgravity Sciences and Applications (OLMSA) is responsible for planning and executing research stimulated by the Agency's broad scientific goals. OLMSA's Microgravity Science and Applications Division (MSAD) is responsible for guiding and focusing a comprehensive program, and currently manages its research and development tasks through five major scientific areas: biotechnology, combustion science, fluid physics, fundamental physics, and materials science. FY 1996 was an important year for MSAD. NASA continued to build a solid research community for the coming space station era. During FY 1996, the NASA Microgravity Research Program continued investigations selected from the 1994 combustion science, fluid physics, and materials science NASA Research Announcements (NRAs). MSAD also released an NRA in microgravity biotechnology, with more than 130 proposals received in response; selection of research for funding is expected in early 1997. The principal investigators chosen from these NRAs will form the core of the MSAD research program at the beginning of the space station era. The third United States Microgravity Payload (USMP-3) and the Life and Microgravity Spacelab (LMS) missions yielded a wealth of microgravity data in FY 1996. The USMP-3 mission included a fluids facility and three solidification furnaces, each designed to examine a different type of crystal growth.

    Programming the cerebellum

    Get PDF
    It is argued that large-scale neural network simulations of cerebellar cortex and nuclei, based on realistic compartmental models of the major cell populations, are necessary before the problem of motor learning in the cerebellum can be solved. [HOUK et al.; SIMPSON et al.]

    Bowdoin Orient v.132, no.1-24 (2000-2001)

    Get PDF
    https://digitalcommons.bowdoin.edu/bowdoinorient-2000s/1001/thumbnail.jp