1,854 research outputs found

    Timing Measurement Platform for Arbitrary Black-Box Circuits Based on Transition Probability

    No full text

    A Powerful Optimization Tool for Analog Integrated Circuits Design

    Get PDF
    This paper presents a new optimization tool for analog circuit design. The proposed tool is based on a robust version of the differential evolution optimization method. Process, temperature, and supply voltage and current corners are taken into account during the optimization, which ensures robust resulting circuits: they usually need no schematic changes and are ready for layout. The tool is implemented directly in the Cadence design environment to achieve a very short setup time for the optimization task. The design automation procedure is further enhanced with an optimization watchdog feature, created to monitor optimization progress and to reduce the search space, producing better designs in a shorter time. The optimization algorithm presented in this paper was successfully tested on several design examples.
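
    As an illustrative aside, the sketch below shows the core of a differential evolution loop (DE/rand/1/bin) with a corner-aware worst-case objective, in the spirit of the tool described above. The circuit evaluation is a hypothetical stand-in; the actual tool drives Cadence simulations across the corners.

        # Minimal differential evolution sketch with a worst-case-over-corners
        # objective. toy_circuit_cost is a hypothetical stand-in for a real
        # circuit simulation at one corner.
        import numpy as np

        rng = np.random.default_rng(0)

        def toy_circuit_cost(x, corner):
            # Stand-in for a circuit performance metric simulated at one corner.
            return float(np.sum((x - corner) ** 2))

        def corner_cost(x, corners):
            # Worst case over all corners, so the optimum is robust.
            return max(toy_circuit_cost(x, c) for c in corners)

        def differential_evolution(corners, dim=4, pop=20, gens=100, F=0.8, CR=0.9):
            X = rng.uniform(-5, 5, size=(pop, dim))        # population of candidate designs
            cost = np.array([corner_cost(x, corners) for x in X])
            for _ in range(gens):
                for i in range(pop):
                    a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
                    mutant = X[a] + F * (X[b] - X[c])      # DE/rand/1 mutation
                    cross = rng.random(dim) < CR           # binomial crossover mask
                    cross[rng.integers(dim)] = True        # keep at least one mutant gene
                    trial = np.where(cross, mutant, X[i])
                    t_cost = corner_cost(trial, corners)
                    if t_cost <= cost[i]:                  # greedy one-to-one selection
                        X[i], cost[i] = trial, t_cost
            best = int(np.argmin(cost))
            return X[best], cost[best]

        corners = [np.array([0.1, -0.2, 0.0, 0.3]), np.array([-0.1, 0.2, 0.1, -0.3])]
        x_best, c_best = differential_evolution(corners)
        print(x_best, c_best)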

    Modeling and Energy Optimization of LDPC Decoder Circuits with Timing Violations

    Full text link
    This paper proposes a "quasi-synchronous" design approach for signal processing circuits, in which timing violations are permitted, but without the need for a hardware compensation mechanism. The case of a low-density parity-check (LDPC) decoder is studied, and a method for accurately modeling the effect of timing violations at a high level of abstraction is presented. The error-correction performance of code ensembles is then evaluated using density evolution while taking into account the effect of timing faults. Following this, several quasi-synchronous LDPC decoder circuits based on the offset min-sum algorithm are optimized, providing a 23%-40% reduction in energy consumption or energy-delay product while achieving the same performance and occupying the same area as conventional synchronous circuits.
    Comment: To appear in IEEE Transactions on Communications
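
    As an illustrative aside, the sketch below implements the offset min-sum check-node update that such decoders are built around, with an optional random sign-flip injection standing in for timing faults. The fault model here is a simplifying assumption for illustration, not the deviation model developed in the paper.

        # Offset min-sum check-node update (assumes nonzero input messages).
        import numpy as np

        rng = np.random.default_rng(1)

        def check_node_update(msgs, offset=0.5, p_fault=0.0):
            # Each outgoing message combines all *other* incoming messages:
            # sign = product of the other signs, magnitude = min of the other
            # magnitudes, reduced by the offset and clipped at zero.
            msgs = np.asarray(msgs, dtype=float)
            signs = np.sign(msgs)
            mags = np.abs(msgs)
            total_sign = np.prod(signs)
            order = np.argsort(mags)                   # order[0] = smallest magnitude
            min1, min2 = mags[order[0]], mags[order[1]]
            out = np.empty_like(mags)
            for i in range(len(mags)):
                m = min2 if i == order[0] else min1    # exclude edge i itself
                out[i] = max(m - offset, 0.0) * total_sign * signs[i]
            # Illustrative timing-fault injection: flip each sign with prob. p_fault.
            flips = np.where(rng.random(len(out)) < p_fault, -1.0, 1.0)
            return out * flips

        print(check_node_update([1.2, -0.4, 2.0, 0.7]))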

    Limits on Fundamental Limits to Computation

    Full text link
    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits.
    Comment: 15 pages, 4 figures, 1 table
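
    As a worked aside, the energy limit such reviews discuss is commonly anchored by Landauer's principle: erasing one bit dissipates at least kT ln 2. The numbers below are standard physics, not results from the paper.

        # Landauer's bound at room temperature.
        from math import log

        k_B = 1.380649e-23                 # Boltzmann constant, J/K
        T = 300.0                          # room temperature, K
        E_min = k_B * T * log(2)           # minimum energy to erase one bit
        print(f"{E_min:.3e} J per bit")    # ~2.87e-21 J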

    Average-case optimized technology mapping of one-hot domino circuits

    Get PDF
    This paper presents a technology mapping technique for optimizing the average-case delay of asynchronous combinational circuits implemented using domino logic and one-hot encoded outputs. The technique minimizes the critical path for common input patterns, at the possible expense of making less common critical paths longer. To demonstrate the application of this technique, we present a case study of a combinational length decoding block, an integral component of an Asynchronous Instruction Length Decoder (AILD) which can be used in Pentium® processors. The experimental results demonstrate that the average-case delay of our mapped circuits can be dramatically lower than the worst-case delay of circuits obtained using conventional worst-case mapping techniques.
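
    As an illustrative aside, the sketch below shows the average-case objective in miniature: weight each input pattern's delay by its occurrence probability and compare mappings by expected delay rather than worst-case delay. The patterns, probabilities, and delays are hypothetical.

        # Expected delay over input patterns vs. worst-case delay.
        def average_delay(delays, probs):
            # probs are pattern occurrence probabilities summing to 1.
            return sum(d * p for d, p in zip(delays, probs))

        probs   = [0.70, 0.25, 0.05]       # common ... rare patterns
        delay_A = [1.0, 1.2, 3.0]          # ns; fast on common, slow on rare
        delay_B = [1.8, 1.8, 1.8]          # ns; uniform worst-case-optimized

        print(average_delay(delay_A, probs))   # 1.15 ns: better on average
        print(average_delay(delay_B, probs))   # 1.80 ns: better worst case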

    Worst-Case Analysis of Electrical and Electronic Equipment via Affine Arithmetic

    Get PDF
    In the design and fabrication of electronic equipment, there are many unknown parameters that significantly affect product performance. Some uncertainties are due to manufacturing process fluctuations, while others are due to environmental factors such as operating temperature, voltage, and various ambient aging stressors. It is desirable to consider these uncertainties to ensure product performance, improve yield, and reduce design cost. Since direct electromagnetic compatibility measurements impact both cost and time-to-market, there has been a growing demand for tools enabling the simulation of electrical and electronic equipment with the effects of system uncertainties included. In this framework, the device response is no longer regarded as deterministic but as a random process. It is traditionally analyzed using Monte Carlo or other sampling-based methods, whose drawback is the large number of samples required to converge, making them time-consuming for practical applications. As an alternative, inherently worst-case approaches such as interval analysis directly provide an estimate of the true bounds of the responses. However, such approaches can yield unnecessarily strict margins that are very unlikely to occur in practice. A more recent technique, affine arithmetic, advances interval-based methods by handling correlated intervals, but it still leads to over-conservatism because it cannot take probability information into account. The objective of this thesis is to improve the accuracy of affine arithmetic and to broaden its application to frequency-domain analysis. We first extend existing literature results to the efficient time-domain analysis of lumped circuits under uncertainty. We then extend basic affine arithmetic to the frequency-domain simulation of circuits. Classical tools for circuit analysis are used within a modified affine framework accounting for complex algebra and uncertainty interval partitioning, enabling accurate and efficient computation of the worst-case bounds of the responses of both lumped and distributed circuits. The performance of the proposed approach is investigated through extensive simulations in several case studies, and the results are compared with the Monte Carlo method in terms of both simulation time and accuracy.
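
    As an illustrative aside, the sketch below implements the linear core of affine arithmetic: each quantity is x0 + sum_i x_i * eps_i with shared noise symbols eps_i in [-1, 1], so correlated terms cancel exactly where plain interval arithmetic would inflate the bounds. This covers linear operations only, not the complex-valued, interval-partitioned framework developed in the thesis.

        # Minimal affine arithmetic over the reals (linear operations only).
        class Affine:
            def __init__(self, center, terms=None):
                self.c = float(center)
                self.t = dict(terms or {})         # noise symbol -> coefficient

            def __add__(self, other):
                t = dict(self.t)
                for k, v in other.t.items():
                    t[k] = t.get(k, 0.0) + v
                return Affine(self.c + other.c, t)

            def __sub__(self, other):
                t = dict(self.t)
                for k, v in other.t.items():
                    t[k] = t.get(k, 0.0) - v
                return Affine(self.c - other.c, t)

            def bounds(self):
                r = sum(abs(v) for v in self.t.values())
                return (self.c - r, self.c + r)

        # R = 1 kOhm +/- 10%, with shared noise symbol "e1":
        R = Affine(1000.0, {"e1": 100.0})
        print((R - R).bounds())    # (0.0, 0.0): correlation preserved exactly
        print((R + R).bounds())    # (1800.0, 2200.0)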

    Synthesis and Optimization of Reversible Circuits - A Survey

    Full text link
    Reversible logic circuits have been historically motivated by theoretical research in low-power electronics as well as practical improvement of bit-manipulation transforms in cryptography and computer graphics. Recently, reversible circuits have attracted interest as components of quantum algorithms, as well as in photonic and nano-computing technologies where some switching devices offer no signal gain. Research in generating reversible logic distinguishes between circuit synthesis, post-synthesis optimization, and technology mapping. In this survey, we review algorithmic paradigms (search-based, cycle-based, transformation-based, and BDD-based) as well as specific algorithms for reversible synthesis, both exact and heuristic. We conclude the survey by outlining key open challenges in the synthesis of reversible and quantum logic, as well as the most common misconceptions.
    Comment: 34 pages, 15 figures, 2 tables
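
    As an illustrative aside, the sketch below shows the two notions most synthesis algorithms build on: a circuit over n bits is reversible iff its truth table is a permutation of {0, ..., 2^n - 1}, and the Toffoli gate (CCNOT) is the standard reversible primitive targeted by synthesis.

        # Reversibility check and a Toffoli (CCNOT) gate on integer-encoded states.
        def is_reversible(table):
            # A function on n-bit values is reversible iff it is a bijection.
            return sorted(table) == list(range(len(table)))

        def toffoli(x, c1, c2, target):
            # Flip bit `target` of integer x iff control bits c1 and c2 are both set.
            if (x >> c1) & 1 and (x >> c2) & 1:
                x ^= 1 << target
            return x

        n = 3
        table = [toffoli(x, 0, 1, 2) for x in range(2 ** n)]
        print(table)                 # [0, 1, 2, 7, 4, 5, 6, 3]
        print(is_reversible(table))  # True: the Toffoli gate is its own inverse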

    A Risk-Managed Steady-State Analysis to Assess the Impact of Power Grid Uncertainties

    Full text link
    Electricity systems are experiencing increased effects of randomness and variability due to emerging stochastic assets. These effects introduce new uncertainties into power systems that can impact system operability and reliability. Existing steady-state methods for assessing system-level operability and reliability are primarily deterministic and therefore ill-suited to capture randomness and variability. This work introduces a probabilistic steady-state analysis, inspired by statistical worst-case circuit analysis, to evaluate the risk of operational violations due to stochastic resources. Compared to parallelized Monte Carlo simulations (MCS), we have seen up to a 24x runtime improvement using our approach, without significant loss of probabilistic accuracy, on a Texas7k low-wind-day test system.
    Comment: Submitted for publication to IEEE Transactions on Power Systems and pending review
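
    As an illustrative aside, the sketch below shows the Monte Carlo baseline the paper compares against: sample the stochastic injections, evaluate the steady state, and estimate the probability of an operational violation. The linear flow model and the limit are hypothetical stand-ins for a real power-flow solve.

        # Monte Carlo estimate of an operating-limit violation probability.
        import numpy as np

        rng = np.random.default_rng(7)

        def line_flow(wind_mw):
            # Hypothetical linear sensitivity of one line flow to wind injection.
            return 80.0 + 0.6 * wind_mw              # MW

        limit_mw = 100.0
        wind = rng.normal(loc=25.0, scale=10.0, size=100_000)   # stochastic resource
        violations = line_flow(wind) > limit_mw
        print(f"P(violation) ~ {violations.mean():.4f}")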

    Tuning for yield: towards predictable deep-submicron manufacturing

    Get PDF
