Compiling Path Expressions into VLSI Circuits
Path expressions were originally proposed by Campbell and Habermann [2] as a mechanism for process synchronization at the monitor level in software. Not unexpectedly, they also provide a notation for specifying the behavior of asynchronous circuits. Motivated by these potential applications, we investigate how to translate path expressions directly into hardware. Our implementation is complicated in the case of multiple path expressions by the need for synchronization on event names that are common to more than one path. Moreover, since events are inherently asynchronous in our model, all of our circuits must be self-timed. Nevertheless, the circuits produced by our construction have area proportional to N*log(N), where N is the total length of the multiple path expression under consideration. This bound holds regardless of the number of individual paths or the degree of synchronization between paths. Furthermore, if the structure of the path expression allows partitioning, the circuit can be laid out in a distributed fashion without additional area overhead.
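As a flavor of what a path expression constrains, here is a minimal software simulation of a single cyclic path, in the spirit of the Campbell-Habermann notation `path request; service; release end`. The event names and the `PathExpression` API are illustrative assumptions, not taken from the paper, which compiles such constraints into self-timed circuits rather than software checks.

```python
# Illustrative sketch only: a software simulation of one cyclic path
# expression. Event names and this API are hypothetical, not the
# paper's construction (which produces self-timed hardware).

class PathExpression:
    """Permits events to fire only in the cyclic order given."""

    def __init__(self, events):
        self.events = events      # e.g. ["request", "service", "release"]
        self.position = 0         # index of the next permitted event

    def can_fire(self, event):
        return event == self.events[self.position]

    def fire(self, event):
        if not self.can_fire(event):
            raise RuntimeError(f"event {event!r} violates the path")
        self.position = (self.position + 1) % len(self.events)

path = PathExpression(["request", "service", "release"])
path.fire("request")
path.fire("service")
assert not path.can_fire("request")   # "release" must come first
path.fire("release")
assert path.can_fire("request")       # the cycle restarts
```

In the multiple-path case discussed in the abstract, an event name shared by several paths would have to satisfy every path's constraint simultaneously, which is the synchronization that complicates the hardware translation.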
Miss Manners: A Specialized Silicon Compiler for Synchronizers
Miss Manners is a synchronizer generator that will produce the layout of a synchronizer given a high-level description. A synchronizer generator is a type of specialized silicon compiler. Synchronizer generators can greatly aid the design of systems that are structured as loosely-coupled networks of autonomous subsystems. Chips that are structured in this way have reduced communication requirements and greater tolerance for transient failures. We describe a language for specifying synchronization requirements and a compiler for translating the language into circuits that enforce the specifications.
A 100-MIPS GaAs asynchronous microprocessor
The authors describe how they ported an asynchronous microprocessor previously implemented in CMOS to gallium arsenide, using a technology-independent asynchronous design technique. They introduce new circuits including a sense amplifier, a completion detection circuit, and a general circuit structure for operators specified by production rules. The authors used and tested these circuits in a variety of designs.
Optimizing construction of scheduled data flow graph for on-line testability
The objective of this work is to develop a new methodology for behavioural synthesis, using a synthesis flow better suited to the scheduling of independent calculations and to non-concurrent online testing. The traditional behavioural synthesis process can be defined as the compilation of an algorithmic specification into an architecture composed of a data path and a controller. This synthesis flow generally involves scheduling, resource allocation, generation of the data path, and controller synthesis. Experiments have shown that optimization started at the high-level synthesis stage improves the performance of the result, yet current tools offer synthesis optimizations only from the RTL level onwards. This justifies the development of an optimization methodology that takes effect from the behavioural specification and accompanies the synthesis process through its various stages. In this paper we propose the use of algebraic properties (commutativity, associativity and distributivity) to transform the readable mathematical formulas of algorithmic specifications into mathematical formulas that can be evaluated efficiently. This effectively reduces the execution time of the scheduled calculations and increases the possibilities for testability.
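A classic instance of the kind of algebraic rewriting the abstract describes, shown here only as a sketch and not as the paper's actual methodology, is Horner's rule: distributivity and associativity turn a naively written polynomial into a nested form that needs fewer multiplications, shortening the evaluation that a scheduler must place into time steps.

```python
# Sketch only (not the paper's tool): distributivity/associativity
# rewrite a*x^3 + b*x^2 + c*x + d, which takes six multiplications
# naively, into Horner form ((a*x + b)*x + c)*x + d, which takes three.

def poly_naive(a, b, c, d, x):
    return a * x * x * x + b * x * x + c * x + d      # 6 multiplications

def poly_horner(a, b, c, d, x):
    return ((a * x + b) * x + c) * x + d              # 3 multiplications

coeffs = (2, -1, 3, 5)
assert poly_naive(*coeffs, x=4) == poly_horner(*coeffs, x=4)
```

Fewer operations in the rewritten formula means fewer resources or fewer scheduling steps, which is how a transformation applied at the behavioural level can pay off before RTL is ever generated.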
Psi: A Silicon Compiler for Very Fast Protocol Processing
Conventional protocol implementations typically fall short, by a few orders of magnitude, of supporting the speeds afforded by high-speed optical transmission media. This protocol processing bottleneck is a key hurdle in taking advantage of the opportunities presented by high-speed communications. This paper describes PSi, a silicon compiler that transforms formal protocol specifications into efficient VLSI implementations. PSi takes advantage of the parallelism intrinsic to a given protocol to achieve very high-speed implementations. Initial application of PSi to IEEE 802.2 (logical link control) leads to processing rates on the order of 10^6 packets per second (p/s). The 802.2 protocol was selected as a benchmark of complexity; light-weight protocols can achieve even higher processing rates, reaching the limits set by chip clock rates (i.e., a packet per cycle). These speeds significantly exceed those typical of software implementations (up to a few hundred p/s) or special hardware-assisted implementations (up to a few thousand p/s). More importantly, at these rates, when the packet size is 10^3-10^4 bits, the protocol throughput of 10^9-10^10 bits/sec reaches the limiting throughput afforded by memory technology. Thus, the protocol processing bottleneck is pushed to the ultimate bounds set by VLSI technologies.
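The throughput figures in the abstract follow from simple arithmetic, checked here as a back-of-the-envelope sketch (the numbers are the abstract's order-of-magnitude claims, not new measurements):

```python
# Back-of-the-envelope check of the abstract's arithmetic: at ~10^6
# packets/s and packet sizes of 10^3 to 10^4 bits, throughput lands
# in the 10^9 to 10^10 bits/s range claimed in the text.

packets_per_second = 10**6
for packet_bits in (10**3, 10**4):
    throughput = packets_per_second * packet_bits
    print(f"{packet_bits} bits/packet -> {throughput:.0e} bits/s")

assert packets_per_second * 10**3 == 10**9
assert packets_per_second * 10**4 == 10**10
```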
Avoiding Latch Formation in Regular Expression Recognizers
Specialized silicon compilers, or module generators, are promising tools for automating the design of custom VLSI chips. In particular, generators for regular language recognizers have many applications. A problem called latch formation, which causes regular expression recognizers to be more complex than they would first appear, is identified. If recognizers are constructed in the most straightforward way from certain regular expressions, they may contain extraneous latches that cause incorrect operation. After identifying the problem, the article presents a source-to-source transformation that converts regular expressions that cause latch formation into expressions that do not. This transformation allows regular expression recognizers to be simpler, smaller, and faster, thus adding to the advantages of specialized silicon compilers.
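The article's actual transformation is not reproduced here; purely as a flavor of what a source-to-source rewrite on regular expressions looks like, the following sketch collapses directly nested Kleene stars on a tiny hypothetical AST. The rewrite preserves the language, since (r*)* denotes the same set of strings as r*, and redundant cycles of this kind are the sort of structure that can turn into unintended feedback paths in a naive hardware construction.

```python
# Flavor-of-the-idea sketch, not the article's transformation: a
# source-to-source rewrite on a minimal regular-expression AST that
# collapses (r*)* into r*, removing a redundant cycle while keeping
# the denoted language unchanged.

from dataclasses import dataclass

@dataclass
class Sym:
    ch: str

@dataclass
class Star:
    body: object

@dataclass
class Cat:
    left: object
    right: object

def simplify(r):
    """Recursively collapse directly nested Kleene stars."""
    if isinstance(r, Star):
        body = simplify(r.body)
        if isinstance(body, Star):        # (r*)*  ->  r*
            return body
        return Star(body)
    if isinstance(r, Cat):
        return Cat(simplify(r.left), simplify(r.right))
    return r

expr = Star(Star(Sym("a")))               # (a*)*
assert simplify(expr) == Star(Sym("a"))   # a*
```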
An embedded language framework for hardware compilation
Various languages have been proposed to describe synchronous hardware at an abstract, yet synthesisable, level. We propose a uniform framework within which such languages can be developed and combined together for simulation, synthesis, and verification. We do this by embedding the languages in Lava, a hardware description language (HDL) itself embedded in the functional programming language Haskell. The approach allows us to easily experiment with new formal languages and language features, and also provides easy access to formal verification tools aiding program verification.
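Lava itself is a Haskell embedding; purely as a loose illustration of the embedded-DSL idea, the sketch below uses Python as a stand-in host language. Gates are ordinary host-language functions, so circuit descriptions can be composed, simulated, and exhaustively checked with the host language's own facilities, which is the kind of leverage the abstract describes. The gate names and structure here are assumptions for illustration, not Lava's API.

```python
# Hypothetical Python analogue of an embedded HDL (Lava itself is in
# Haskell): gates are host-language functions, so circuits compose
# and simulate using nothing but the host language.

def xor2(a, b):
    return a ^ b

def and2(a, b):
    return a & b

def half_adder(a, b):
    """Compose primitive gates into a larger circuit description."""
    return xor2(a, b), and2(a, b)        # (sum, carry)

# Exhaustive simulation doubles as a lightweight verification step,
# feasible here because the input space is tiny.
truth_table = {(a, b): half_adder(a, b) for a in (0, 1) for b in (0, 1)}
assert truth_table[(1, 1)] == (0, 1)     # 1 + 1 = carry, no sum
assert truth_table[(1, 0)] == (1, 0)
```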
Synchronous Digital Circuits as Functional Programs
Functional programming techniques have been used to describe synchronous digital circuits since the early 1980s and have proven successful at describing certain types of designs. Here we survey the systems and formal underpinnings that constitute this tradition. We situate these techniques with respect to other formal methods for hardware design and discuss the work yet to be done.
Concurrent Algebras for VLSI Design
As the size and complexity of VLSI chips increase, designers are beginning to rely more and more on automated chip design systems to help lay out, route, or even design circuits. Silicon compilers convert the functional description of a system into a mask-level design of a chip that implements the system. In order to ease the task of describing the system, and to help analyse and verify its working, the description languages are based on algebraic systems. A typical circuit has a number of actions occurring at any given time, so we use concurrent algebras as the basis for the description languages. In this paper, we survey algebras that enable the description and analysis of concurrent systems. We examine them particularly from the point of view of using them to implement systems in VLSI. We therefore concentrate on the basics of each algebra, and omit features that are not readily implementable, such as recursion. We will look at four algebras: trace theory, path expressions, Milner's calculus of communicating systems (CCS), and an algebra of finite events (CAFE). We choose the first three since each has been used in some form of silicon compiler or other automated hardware design system, and together they demonstrate all the features found in higher-level description systems for hardware. The fourth is an algebra that we are developing to address the problems of describing systems of events of finite duration. In chapter 2 we introduce an informal net notation and the concept of observers, which we use in the next four chapters to describe each algebra briefly. In chapter 7, we compare the algebras in terms of their treatment of independence, the type of parallel composition they use, and the inter-event dependencies they allow. We end by explaining the relative advantages and disadvantages of the algebras in various situations.
We hope that this comparative discussion of the algebras will aid in the design of process description languages to be used in silicon compilers.
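One of the comparison axes the abstract mentions, the type of parallel composition, can be illustrated with trace theory's synchronized interleaving (often called the weave): two traces proceed independently on private events but must agree on events common to both alphabets. The sketch below is illustrative only; the event names and the `weave` function are assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): trace-theoretic parallel
# composition, where traces interleave on private events but must
# synchronize on events shared by both alphabets.

def weave(t1, a1, t2, a2):
    """Yield all interleavings of traces t1, t2 over alphabets a1, a2,
    synchronizing on events that belong to both alphabets."""
    if not t1 and not t2:
        yield ()
        return
    if t1 and t1[0] not in a2:                    # t1 moves alone
        for rest in weave(t1[1:], a1, t2, a2):
            yield (t1[0],) + rest
    if t2 and t2[0] not in a1:                    # t2 moves alone
        for rest in weave(t1, a1, t2[1:], a2):
            yield (t2[0],) + rest
    if t1 and t2 and t1[0] == t2[0] and t1[0] in a1 and t1[0] in a2:
        for rest in weave(t1[1:], a1, t2[1:], a2):
            yield (t1[0],) + rest                 # synchronized step

# "a" and "c" are private events; "s" is shared, so both must agree on it.
results = set(weave(("a", "s"), {"a", "s"}, ("c", "s"), {"c", "s"}))
assert results == {("a", "c", "s"), ("c", "a", "s")}
```

Algebras differ in exactly this kind of choice: CCS synchronizes pairwise on complementary actions, whereas trace theory synchronizes on shared alphabet membership as above, which is one reason the paper compares them along this axis.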