
    From process-oriented functional specifications to efficient asynchronous circuits

    Technical report: A methodology for high-level synthesis and performance optimization of asynchronous circuits is described. A specification language called hopCP, based on a simple extension to classical flow graphs, is introduced. The extension adds expression actions to a flow graph to model the computational aspects of hardware behavior in a purely functional framework; control and communication aspects are modeled explicitly, as in Hoare's CSP. A systematic methodology to synthesize asynchronous circuits from hopCP, based on the notion of a self-timed block, is presented. The compilation methodology based on self-timed blocks, coupled with the functional flavor of hopCP, makes it possible to exploit several optimizations such as quick return, intra-loop pipelining, and speculative evaluation of conditional expressions. The specification language hopCP, the synthesis procedure, and the optimizations are illustrated in the design of an asynchronous iterative multiplier.
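
    Since the abstract above does not show hopCP's concrete syntax, the following Python sketch only mimics the separation it describes: control and communication expressed as CSP-style channel actions, and computation expressed as pure functions (the "expression actions"). The Channel class and the iterative-multiplier process are illustrative assumptions, not hopCP code.

        # Illustrative sketch only: a CSP-flavoured process with pure "expression
        # actions"; hopCP's actual syntax and semantics are not reproduced here.
        import threading, queue

        class Channel:
            """Point-to-point channel; a bounded queue stands in for CSP rendezvous."""
            def __init__(self):
                self._q = queue.Queue(maxsize=1)
            def send(self, value):
                self._q.put(value)
            def receive(self):
                return self._q.get()

        def step(acc, addend, multiplier_bit):
            """Expression action: one pure shift-and-add step."""
            return acc + addend if multiplier_bit else acc

        def iterative_multiplier(operands, result):
            """Control/communication: receive operands, iterate, send the product."""
            while True:
                a, b = operands.receive()
                acc = 0
                for i in range(b.bit_length()):
                    acc = step(acc, a << i, (b >> i) & 1)
                result.send(acc)

        operands, result = Channel(), Channel()
        threading.Thread(target=iterative_multiplier, args=(operands, result),
                         daemon=True).start()
        operands.send((13, 11))
        print(result.receive())   # 143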

    SuperSpike: Supervised learning in multi-layer spiking neural networks

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
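
    The surrogate gradient and three-factor structure described above can be sketched compactly. The snippet below is a minimal, hypothetical NumPy illustration, not the authors' implementation: a leaky integrate-and-fire membrane is driven by input spikes, the non-differentiable spike threshold is replaced by a fast-sigmoid-style surrogate derivative, and the weight update combines a presynaptic trace, the surrogate term, and a fed-back error signal. All parameter names and values (beta, tau_mem, the error signal) are assumptions.

        # Minimal sketch of a surrogate-gradient, three-factor update for one weight
        # onto a deterministic leaky integrate-and-fire (LIF) neuron.  Illustrative
        # only; parameters and the error signal are assumptions, not SuperSpike's
        # exact implementation.
        import numpy as np

        rng = np.random.default_rng(0)
        T, dt = 200, 1e-3                  # time steps, step size (s)
        tau_mem, tau_syn = 10e-3, 5e-3     # membrane / synaptic time constants (s)
        theta, beta = 1.0, 10.0            # firing threshold, surrogate steepness

        def surrogate_grad(u):
            """Fast-sigmoid-style surrogate for d(spike)/d(membrane potential)."""
            return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2

        pre_spikes = (rng.random(T) < 0.05).astype(float)  # presynaptic spike train
        error = 0.1 * rng.standard_normal(T)               # error fed back to the
                                                           # hidden unit (third factor)
        w, u, i_syn, trace, dw = 0.5, 0.0, 0.0, 0.0, 0.0

        for t in range(T):
            i_syn += dt * (-i_syn / tau_syn) + w * pre_spikes[t]  # synaptic current
            u += dt * (-u + i_syn) / tau_mem                      # membrane potential
            if u >= theta:                                        # spike and hard reset
                u = 0.0
            trace += dt * (-trace / tau_syn) + pre_spikes[t]      # presynaptic trace
            # Three factors: error signal x surrogate derivative x presynaptic trace.
            dw += dt * error[t] * surrogate_grad(u) * trace

        w += 1e-2 * dw   # apply the accumulated update once per trial
        print(f"updated weight: {w:.4f}")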

    Area/latency optimized early output asynchronous full adders and relative-timed ripple carry adders

    This article presents two area- and latency-optimized gate-level asynchronous full adder designs based on early output logic. The proposed full adders are constructed using the delay-insensitive dual-rail code and adhere to four-phase return-to-zero handshaking. For an asynchronous ripple carry adder (RCA) constructed using the proposed early output full adders, a relative-timing assumption becomes necessary; the inherent advantages of the relative-timed RCA are: (1) computation with valid inputs, i.e., the forward latency, is data-dependent, and (2) computation with spacer inputs involves a bare minimum constant reverse latency of just one full adder delay, resulting in an optimal cycle time. Across different 32-bit RCA implementations, and in comparison with optimized strong-indication, weak-indication, and early output full adder designs, one of the proposed early output full adders achieves respective latency reductions of 67.8%, 12.3%, and 6.1%, while the other achieves corresponding area reductions of 32.6%, 24.6%, and 6.9%, with practically no power penalty. Further, asynchronous RCAs based on the proposed early output full adders achieve minimum cycle time reductions of 83.4%, 15%, and 8.8% when considering carry propagation over the entire 32-bit RCA width, and maximum cycle time reductions of 97.5%, 27.4%, and 22.4% for a typical carry chain length of 4 full adder stages, compared to the smallest cycle time estimates of various strong-indication, weak-indication, and early output asynchronous RCAs of similar size. All the asynchronous full adders and RCAs were realized using standard cells in a semi-custom design fashion based on a 32/28 nm CMOS process technology.
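
    To make the dual-rail, early output behaviour concrete, the sketch below models a single full adder bit in Python. Each signal is a dual-rail pair in which (0, 0) is the spacer of the four-phase return-to-zero protocol and (1, 0)/(0, 1) encode a valid 1/0. The property modelled is that the carry output can be produced from the addend bits alone whenever they are equal, without waiting for the carry input; this is an illustrative behavioural model under those assumptions, not the article's gate-level design.

        # Dual-rail encoding: a signal is a (true_rail, false_rail) pair.
        # (0, 0) is the spacer; (1, 0) encodes logic 1; (0, 1) encodes logic 0.
        SPACER = (0, 0)

        def encode(bit):
            return (1, 0) if bit else (0, 1)

        def is_valid(dr):
            return dr != SPACER

        def early_output_full_adder(a, b, cin):
            """Return (sum, cout) dual-rail pairs; spacer outputs mean 'not ready yet'.

            Early output: when a == b the carry is killed or generated, so cout is
            produced without waiting for cin (shortening the carry chain on average).
            """
            sum_out, cout = SPACER, SPACER
            if is_valid(a) and is_valid(b):
                av, bv = a[0], b[0]
                if av == bv:                 # kill (0,0) or generate (1,1)
                    cout = encode(av)
                elif is_valid(cin):          # propagate: cout follows cin
                    cout = cin
                if is_valid(cin):
                    sum_out = encode(av ^ bv ^ cin[0])
            return sum_out, cout

        # The carry out appears while cin is still a spacer when a == b:
        print(early_output_full_adder(encode(1), encode(1), SPACER))    # ((0, 0), (1, 0))
        print(early_output_full_adder(encode(1), encode(0), encode(1))) # ((0, 1), (1, 0))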

    Practical advances in asynchronous design and in asynchronous/synchronous interfaces

    Journal article: Asynchronous systems are being viewed as an increasingly viable alternative to purely synchronous systems. This paper gives an overview of the current state of the art in practical asynchronous circuit and system design in four areas: controllers, datapaths, processors, and the design of asynchronous/synchronous interfaces.

    Performance analysis and optimization of asynchronous circuits

    Journal article: Asynchronous (self-timed) circuits are beginning to attract renewed attention as a promising means of dealing with the complexity of modern VLSI designs. However, very few analysis techniques or tools are available for estimating the performance of asynchronous circuits. In this paper we adapt the theory of Generalized Timed Petri nets (GTPN) for analyzing and comparing a wide variety of asynchronous circuits, ranging from purely control-oriented circuits, such as crossbar arbiters, to large asynchronous systems with data-dependent control, such as asynchronous processors. Experiments with the GTPN analyzer are found to track the observed performance of actual asynchronous circuits, offering empirical evidence for the soundness of the modeling approach. Our main contribution is to demonstrate how a quantitative design methodology for asynchronous circuits can be developed based on timed Petri nets.
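
    As a rough illustration of the timed Petri net view of performance analysis, the toy sketch below simulates a two-transition timed Petri net (a stand-in for a request/acknowledge handshake loop) and estimates its cycle time from the firing intervals of one transition. The structure, delays, and deterministic firing policy are assumptions for illustration; this is not the GTPN analyzer described in the paper.

        # Toy timed Petri net: places hold tokens, each transition has a fixed delay.
        # We simulate firings and estimate the steady-state cycle time of the ring.
        import heapq

        # Transition -> (input places, output places, delay in ns).  The structure is
        # a two-stage ring: t1 and t2 pass a single token back and forth.
        transitions = {
            "t1": (["p1"], ["p2"], 2.0),
            "t2": (["p2"], ["p1"], 3.0),
        }
        marking = {"p1": 1, "p2": 0}

        def enabled(name):
            return all(marking[p] >= 1 for p in transitions[name][0])

        time, events, fire_times = 0.0, [], []
        for name in transitions:                     # seed initially enabled firings
            if enabled(name):
                for p in transitions[name][0]:
                    marking[p] -= 1                  # consume tokens at firing start
                heapq.heappush(events, (time + transitions[name][2], name))

        while events and len(fire_times) < 1000:
            time, name = heapq.heappop(events)
            for p in transitions[name][1]:
                marking[p] += 1                      # deposit output tokens
            if name == "t1":
                fire_times.append(time)
            for cand in transitions:                 # schedule newly enabled firings
                if enabled(cand):
                    for p in transitions[cand][0]:
                        marking[p] -= 1
                    heapq.heappush(events, (time + transitions[cand][2], cand))

        gaps = [b - a for a, b in zip(fire_times, fire_times[1:])]
        print(f"estimated cycle time: {sum(gaps) / len(gaps):.2f} ns")  # expect 5.00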

    On digit-recurrence division algorithms for self-timed circuits

    The optimization of algorithms for self-timed (asynchronous) circuits requires specific solutions. Because of the variable-time capabilities of asynchronous circuits, the average computation time should be optimized, not only the worst-case signal propagation. While efficient algorithms and implementations are known for asynchronous addition and multiplication, only straightforward algorithms have been studied for division. This paper compares several digit-recurrence division algorithms in terms of speed, area, and circuit activity (used to estimate power consumption). The comparison is based on simulations of the different operators described at the gate level. This work shows that the best solutions for asynchronous circuits are quite different from those used in synchronous circuits.
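
    For reference, the digit-recurrence divisions being compared follow the classical recurrence w[j+1] = 2*w[j] - q[j+1]*d (here in radix 2), with quotient digits chosen from a redundant set by inspecting a truncated residual. The Python sketch below is the textbook radix-2 case with assumed selection constants and operand ranges, not the specific variants evaluated in the paper.

        # Radix-2 digit-recurrence ("SRT-style") division sketch.
        # Computes q ~= x / d for 0.5 <= d < 1 and |x| < d, with quotient
        # digits in {-1, 0, +1} selected from the shifted residual.
        def srt2_divide(x, d, n_digits=16):
            assert 0.5 <= d < 1.0 and abs(x) < d
            w = x                      # residual, kept within [-d, d]
            q = 0.0
            weight = 0.5               # value of the first quotient digit (2**-1)
            for _ in range(n_digits):
                w2 = 2.0 * w           # shift: 2 * w[j]
                # Textbook radix-2 selection from the (conceptually truncated) residual.
                if w2 >= 0.5:
                    digit = 1
                elif w2 < -0.5:
                    digit = -1
                else:
                    digit = 0
                w = w2 - digit * d     # w[j+1] = 2*w[j] - q[j+1]*d
                q += digit * weight
                weight /= 2.0
            return q

        print(srt2_divide(0.30, 0.75), 0.30 / 0.75)   # ~0.4 after 16 digits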

    Self-Timed Circuit Size Optimization For An Input Data Distribution

    The analysis is based on the logical effort (LE) model. The LE model used in this work was extracted from SPICE simulations for the TSMC 0.18 µm process. The performance and energy dissipation of circuits implemented with this approach are, respectively, 13% and 16% better than those of circuits designed with previously proposed approaches.
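
    For context, the logical effort delay model referenced above estimates an N-stage path delay as D = N * F^(1/N) + P, where the path effort F is the product of the logical effort G, the branching effort B, and the electrical effort H, and P is the total parasitic delay. The sketch below is a generic textbook calculation with assumed per-gate values and load, not the paper's SPICE-calibrated 0.18 µm model.

        # Generic logical-effort path delay estimate (in units of tau, the delay of a
        # parasitic-free inverter driving an identical inverter).  Per-gate logical
        # efforts and parasitics below are the usual textbook values, assumed here.
        from math import prod

        stages = [                      # (logical effort g, parasitic delay p)
            (1.0, 1.0),                 # inverter
            (4.0 / 3.0, 2.0),           # 2-input NAND
            (5.0 / 3.0, 2.0),           # 2-input NOR
        ]
        H = 18.0                        # electrical effort: C_load / C_in of the path
        B = 1.0                         # branching effort (no fan-out branches assumed)

        G = prod(g for g, _ in stages)  # path logical effort
        P = sum(p for _, p in stages)   # total parasitic delay
        N = len(stages)
        F = G * B * H                   # path effort
        f_opt = F ** (1.0 / N)          # optimal effort per stage (equal stage efforts)
        D = N * f_opt + P               # minimum achievable path delay

        print(f"path effort F = {F:.1f}, stage effort = {f_opt:.2f}, delay = {D:.2f} tau")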

    Practical advances in asynchronous design

    Journal article: Recent practical advances in asynchronous circuit and system design have resulted in renewed interest from circuit designers. Asynchronous systems are being viewed as an increasingly viable alternative to globally synchronous system organization. This tutorial presents the current state of the art in asynchronous circuit and system design in three areas. The first section details asynchronous control systems. The second describes a variety of approaches to asynchronous datapaths. The third covers asynchronous and self-timed circuits applied to the design of general-purpose processors.