Asynchronous techniques for system-on-chip design
SoC design will require asynchronous techniques, as the large parameter variations across the chip will make it impossible to control delays in clock networks and other global signals efficiently. Initially, SoCs will be globally asynchronous and locally synchronous (GALS). But the complexity of the numerous asynchronous/synchronous interfaces required in a GALS design will eventually lead to entirely asynchronous solutions. This paper introduces the main design principles, methods, and building blocks for asynchronous VLSI systems, with an emphasis on communication and synchronization. Asynchronous circuits whose only delay assumption is that of isochronic forks are called quasi-delay-insensitive (QDI). QDI is used in the paper as the basis for asynchronous logic. The paper discusses asynchronous handshake protocols for communication, the notions of validity/neutrality tests, and the completion tree. Basic building blocks for sequencing, storage, function evaluation, and buses are described, and two alternative methods for the implementation of an arbitrary computation are explained. Issues of arbitration and synchronization play an important role in complex distributed systems and especially in GALS. The two main asynchronous/synchronous interfaces needed in GALS (one based on a synchronizer, the other on a stoppable clock) are described and analyzed.
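As a purely illustrative aside (not drawn from the paper), the sketch below gives a behavioral Verilog model of a two-input Muller C-element, the canonical state-holding gate from which QDI completion trees are typically built, together with a small four-input completion tree. Module and signal names are hypothetical.

// Behavioral model of a two-input Muller C-element: the output takes the
// value of the inputs only when they agree, and holds its state otherwise.
module c_element (
    input  wire a,
    input  wire b,
    output reg  y
);
    always @(a or b) begin
        if (a == b)
            y = a;   // inputs agree: output follows them
        // otherwise y holds its previous value (state-holding behavior)
    end
endmodule

// Four-input completion tree built from three C-elements; 'done' asserts
// (and later de-asserts) only after all four completion signals have done so.
module completion_tree4 (
    input  wire d0, d1, d2, d3,
    output wire done
);
    wire c01, c23;
    c_element u0 (.a(d0),  .b(d1),  .y(c01));
    c_element u1 (.a(d2),  .b(d3),  .y(c23));
    c_element u2 (.a(c01), .b(c23), .y(done));
endmodule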
Quick Start Guide to Verilog
The classical digital design approach (i.e., manual synthesis and minimization of logic) quickly
becomes impractical as systems become more complex. This is the motivation for the modern digital
design flow, which uses hardware description languages (HDL) and computer-aided synthesis/minimization to create the final circuitry. The purpose of this book is to provide a quick start guide to the Verilog
language, which is one of the two most common languages used to describe logic in the modern digital
design flow. This book is intended for anyone who has already learned the classical digital design
approach and is ready to begin learning HDL-based design. This book is also suitable for practicing
engineers who already know Verilog and need a quick reference for syntax and examples of common
circuits. This book assumes that the reader already understands digital logic (i.e., binary numbers,
combinational and sequential logic design, finite state machines, memory, and binary arithmetic basics).
Since this book is designed to accommodate a designer who is new to Verilog, the language is
presented in a manner that builds foundational knowledge first before moving into more complex topics.
As such, Chaps. 1–6 provide a comprehensive explanation of the basic functionality in Verilog to model
combinational and sequential logic. Chapters 7–11 focus on examples of common digital systems such
as finite state machines, memory, arithmetic, and computers. For a reader that is using the book as a
reference guide, it may be more practical to pull examples from Chaps. 7–11, as they use the full
functionality of the language and assume the reader has already gained an understanding of it in
Chaps. 1–6. For a Verilog novice, understanding the history and fundamentals of the language will
help form a comprehensive picture of it; thus it is recommended that the early
chapters be covered in the sequence in which they are written.
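For readers who want a minimal flavor of the combinational and sequential modeling covered in Chaps. 1–6, two generic Verilog fragments are sketched below; they are illustrative examples, not excerpts from the book.

// 2-to-1 multiplexer: basic combinational logic as a continuous assignment.
module mux2 (
    input  wire a,
    input  wire b,
    input  wire sel,
    output wire y
);
    assign y = sel ? b : a;
endmodule

// D flip-flop with synchronous active-high reset: basic sequential logic
// described with an always block sensitive to the rising clock edge.
module dff_sync_rst (
    input  wire clk,
    input  wire rst,
    input  wire d,
    output reg  q
);
    always @(posedge clk) begin
        if (rst)
            q <= 1'b0;
        else
            q <= d;
    end
endmodule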
Quick Start Guide to VHDL
The purpose of a hardware description language (HDL) is to describe digital circuitry using a text-based language. HDLs provide a means to describe large digital systems without the need for schematics, which can become impractical in very large designs. HDLs have evolved to support logic simulation at different levels of abstraction.
Introduction to Logic Circuits & Logic Design with Verilog
The overall goal of this book is to fill a void that has appeared in the instruction of digital circuits over
the past decade due to the rapid abstraction of system design. Up until the mid-1980s, digital circuits
were designed using classical techniques. Classical techniques relied heavily on manual design
practices for the synthesis, minimization, and interfacing of digital systems. Corresponding to this design
style, academic textbooks were developed that taught classical digital design techniques. Around 1990,
large-scale digital systems began being designed using hardware description languages (HDL) and
automated synthesis tools. Broad-scale adoption of this modern design approach spread through the
industry during this decade. Around 2000, hardware description languages and the modern digital
design approach began to be taught in universities, mainly at the senior and graduate level. There
were a variety of reasons that the modern digital design approach did not penetrate the lower levels of
academia during this time. First, the design and simulation tools were difficult to use and overwhelmed
freshman and sophomore students. Second, implementing the designs in a laboratory setting
was infeasible. The modern design tools at the time were targeted at custom integrated circuits, which
are cost- and time-prohibitive to implement in a university setting. Between 2000 and 2005, rapid
advances in programmable logic and design tools allowed the modern digital design approach to be
implemented in a university setting, even in lower-level courses. This allowed students to learn the
modern design approach based on HDLs and prototype their designs in real hardware, mainly field-programmable gate arrays (FPGAs). This spurred an abundance of textbooks to be authored, teaching
hardware description languages and higher levels of design abstraction. This trend has continued until
today. While abstraction is a critical tool for engineering design, the rapid movement toward teaching only
the modern digital design techniques has left a void for freshman- and sophomore-level courses in digital
circuitry. Legacy textbooks that teach the classical design approach are outdated and do not contain
sufficient coverage of HDLs to prepare the students for follow-on classes. Newer textbooks that teach
the modern digital design approach move immediately into high-level behavioral modeling with minimal
or no coverage of the underlying hardware used to implement the systems. As a result, students are not
being provided the resources to understand the fundamental hardware theory that lies beneath the
modern abstraction, such as interfacing, gate-level implementation, and technology optimization.
Students moving too rapidly into high levels of abstraction have little understanding of what is going
on when they click the “compile and synthesize” button of their design tool. This leads to graduates who
can model a breadth of different systems in an HDL but lack depth in how those systems are
implemented in hardware. This becomes problematic when an issue arises in a real design and there
is no foundational knowledge for the students to fall back on in order to debug the problem.
Introduction to Logic Circuits & Logic Design with VHDL
The overall goal of this book is to fill a void that has appeared in the instruction of digital circuits over
the past decade due to the rapid abstraction of system design. Up until the mid-1980s, digital circuits
were designed using classical techniques. Classical techniques relied heavily on manual design
practices for the synthesis, minimization, and interfacing of digital systems. Corresponding to this design
style, academic textbooks were developed that taught classical digital design techniques. Around 1990,
large-scale digital systems began being designed using hardware description languages (HDL) and
automated synthesis tools. Broad-scale adoption of this modern design approach spread through the
industry during this decade. Around 2000, hardware description languages and the modern digital
design approach began to be taught in universities, mainly at the senior and graduate level. There
were a variety of reasons that the modern digital design approach did not penetrate the lower levels of
academia during this time. First, the design and simulation tools were difficult to use and overwhelmed
freshman and sophomore students. Second, implementing the designs in a laboratory setting
was infeasible. The modern design tools at the time were targeted at custom integrated circuits, which
are cost- and time-prohibitive to implement in a university setting. Between 2000 and 2005, rapid
advances in programmable logic and design tools allowed the modern digital design approach to be
implemented in a university setting, even in lower-level courses. This allowed students to learn the
modern design approach based on HDLs and prototype their designs in real hardware, mainly field-
programmable gate arrays (FPGAs). This spurred an abundance of textbooks to be authored, teaching
hardware description languages and higher levels of design abstraction. This trend has continued until
today. While abstraction is a critical tool for engineering design, the rapid movement toward teaching only
the modern digital design techniques has left a void for freshman- and sophomore-level courses in digital
circuitry. Legacy textbooks that teach the classical design approach are outdated and do not contain
sufficient coverage of HDLs to prepare the students for follow-on classes. Newer textbooks that teach
the modern digital design approach move immediately into high-level behavioral modeling with minimal
or no coverage of the underlying hardware used to implement the systems. As a result, students are not
being provided the resources to understand the fundamental hardware theory that lies beneath the
modern abstraction, such as interfacing, gate-level implementation, and technology optimization.
Students moving too rapidly into high levels of abstraction have little understanding of what is going
on when they click the “compile and synthesize” button of their design tool. This leads to graduates who
can model a breadth of different systems in an HDL but lack depth in how those systems are
implemented in hardware. This becomes problematic when an issue arises in a real design and there
is no foundational knowledge for the students to fall back on in order to debug the problem.
Pipeline mode in C-based direct hardware implementation
In this paper, a methodology is presented that enables pipelined operation of hardware blocks created by C-based direct hardware design. The method is embedded into the C-based design methodology worked out by the authors earlier. This pipeline-enabling method is rather flexible and needs no special effort. With the help of a simple state-machine-based entity, blocks of different execution times can build up the pipeline, even with data-dependent durations. A data-spreading technique ensures data consistency. Pipeline sectioning (choosing the right, balanced granularity versus pipelining overhead) is an optimisation matter. Simulation results confirm the correctness of the method.
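The paper's flow starts from C; purely to illustrate the general idea it describes (stages of unequal, even data-dependent, execution time chained through local handshakes rather than a fixed schedule), a generic elastic pipeline-stage register is sketched below in Verilog. The interface and signal names are assumptions, not the authors' entity.

// Skeleton of an elastic pipeline stage: a valid/ready handshake lets stages
// of different (even data-dependent) latency be chained without a shared
// fixed schedule. Signal names are illustrative only.
module pipe_stage #(
    parameter WIDTH = 32
) (
    input  wire             clk,
    input  wire             rst,
    // upstream side
    input  wire             in_valid,
    output wire             in_ready,
    input  wire [WIDTH-1:0] in_data,
    // downstream side
    output reg              out_valid,
    input  wire             out_ready,
    output reg  [WIDTH-1:0] out_data
);
    // Accept a new item whenever the output register is empty or is being
    // drained by the downstream stage in this cycle.
    assign in_ready = !out_valid || out_ready;

    always @(posedge clk) begin
        if (rst) begin
            out_valid <= 1'b0;
        end else begin
            if (in_valid && in_ready) begin
                out_data  <= in_data;   // capture the incoming item
                out_valid <= 1'b1;
            end else if (out_ready) begin
                out_valid <= 1'b0;      // item consumed downstream
            end
        end
    end
endmodule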
Developing Globally-Asynchronous Locally-Synchronous Systems through the IOPT-Flow Framework
Throughout the years, synchronous circuits have increased in size and complexity; consequently, distributing a global clock signal has become a laborious task. Globally-Asynchronous Locally-Synchronous (GALS) systems emerge as a possible solution; however, these new systems require new tools.
The DS-Pnet language formalism and the IOPT-Flow framework aim to support and accelerate the development of cyber-physical systems. To do so, they offer a tool chain that comprises a graphical editor, a simulator and code generation tools capable of generating C, JavaScript and VHDL code. However, DS-Pnets and IOPT-Flow are not yet tuned to handle GALS systems, allowing for partial specification, but not a complete one.
This dissertation proposes extensions to the DS-Pnet language and the IOPT-Flow framework in order to allow development of GALS systems. Additionally, some asynchronous components were created; these form interfaces that allow synchronous blocks within a GALS system to communicate with each other.
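As a generic illustration of the kind of interface such components provide (not one of the dissertation's actual blocks), the sketch below shows a classic two-flop synchronizer for safely moving a single-bit signal between the clock domains of a GALS system; names are illustrative.

// Two-flop synchronizer: the simplest building block of a GALS interface,
// used to pass a single-bit level from one clock domain into another.
module sync_2ff (
    input  wire clk_dst,   // destination-domain clock
    input  wire rst,
    input  wire async_in,  // signal generated in another clock domain
    output wire sync_out   // version safe to use in the destination domain
);
    reg meta, stable;

    always @(posedge clk_dst) begin
        if (rst) begin
            meta   <= 1'b0;
            stable <= 1'b0;
        end else begin
            meta   <= async_in;  // first flop may go metastable
            stable <= meta;      // second flop filters metastability
        end
    end

    assign sync_out = stable;
endmodule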
Design of the Protocol Processor for the ROBUS-2 Communication System
The ROBUS-2 Protocol Processor (RPP) is a custom-designed hardware component implementing the functionality of the ROBUS-2 fault-tolerant communication system. The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault-tolerant integrated modular architecture currently under development at NASA Langley Research Center. ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs), in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, time reference (clock synchronization), and distributed diagnosis (group membership). ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 tolerates internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. ROBUS consists of RPPs connected to each other by a lower-level physical communication network. The RPP has a pipelined architecture, and the design is parameterized in the behavioral and structural domains. The design of the RPP enables the bus to achieve a PE-message throughput that approaches the available bandwidth at the physical layer.
A Scalable Parallel Architecture with FPGA-Based Network Processor for Scientific Computing
This thesis discusses the design and the implementation of an FPGA-based
Network Processor for scientific computing, such as Lattice Quantum Chromodynamics
(LQCD) and fluid-dynamics applications based on Lattice Boltzmann
Methods (LBM). State-of-the-art programs in these (and other similar)
applications have a large degree of available parallelism that can be easily
exploited on massively parallel systems, provided the underlying communication
network has not only high bandwidth but also low latency.
I have designed in detail, built and tested in hardware, firmware and
software an implementation of a Network Processor (NWP), tailored for the most
recent families of multi-core processors. The implementation has been developed
on an FPGA device to easily interface the logic of the NWP with the CPU
I/O sub-system.
In this work I have assessed several ways to move data between the main
memory of the CPU and the I/O sub-system to exploit high data throughput
and low latency, enabling the use of “Programmed Input Output” (PIO),
“Direct Memory Access” (DMA) and “Write Combining” memory settings.
On the software side, I developed and tested a device driver for the Linux
operating system to access the NWP device, as well as a system library to
efficiently access the network device from user-applications.
This thesis demonstrates the feasibility of a network infrastructure that
saturates the maximum bandwidth of the I/O sub-systems available on recent
CPUs, and reduces communication latencies to values very close to those
needed by the processor to move data across the chip boundary.
- …