A method for evaluating upper bound of simultaneous switching gates using circuit partition
Abstract: This paper presents a method for evaluating an upper bound on the number of simultaneously switching gates in combinational circuits. In this method, the original circuit is partitioned into subcircuits, and the upper bound is approximately computed as the sum of the maximum numbers of switching gates over all subcircuits. To increase the accuracy, we adopt an evaluation function that takes into account both the interconnections among subcircuits and the number of generated subcircuits. Experimental results for ISCAS circuits show that the method efficiently evaluates the upper bounds on switching gates.
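As a rough illustration of the additive bound described above, a minimal sketch (assuming a hypothetical max_switching_gates helper that analyzes one subcircuit in isolation) might look like:

```python
# Illustrative sketch only: the upper bound is taken as the sum of the per-subcircuit
# maxima, as described in the abstract. `max_switching_gates` is a hypothetical helper
# that returns the maximum number of simultaneously switching gates for one subcircuit.

def upper_bound_switching(subcircuits, max_switching_gates):
    # Summing each subcircuit's own maximum over-approximates the circuit-wide peak,
    # because no single input can generally drive every subcircuit to its maximum
    # at the same time.
    return sum(max_switching_gates(sc) for sc in subcircuits)
```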
FACTPLA: Functional analysis and the complexity of testing programmable logic array
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A computer aided method for analyzing the testability of Programmable Logic Arrays (PLAs) is described. The method, which is based on a functional verification approach, estimates the complexity of testing a PLA according to the number of single undetectable faults in the array structure. An analytic program (FACTPLA) is developed to predict this complexity without analyzing the topology of the array as such. Thus, the method is technology invariant and depends only on the functionality of the PLA. The program quantitatively evaluates the effects of undetectable faults and produces testability measures that manifest these effects. A testability profile is provided for several PLA examples, and a number of suggestions are made for further research to establish definitively the usefulness of some functional properties for testing.
On Co-Optimization Of Constrained Satisfiability Problems For Hardware Software Applications
Manufacturing technology has permitted exponential growth in transistor count and density. However, making efficient use of the available transistors in a design has become exceedingly difficult. The standard design flow involves synthesis, verification, placement, and routing, followed by the final tape-out of the design. Due to the presence of various undesirable effects such as capacitive crosstalk, supply noise, and high temperatures, verification/validation of the design has become a challenging problem. As a result, achieving good design convergence within the target time may not be possible, because a large number of design iterations is required.
Capacitive crosstalk is one of the major causes of design convergence problems in the deep sub-micron era. With scaling, the number of crosstalk violations has been increasing because of reduced inter-wire distances. Consequently, only the most severe crosstalk faults are fixed pre-silicon, while the rest are tested post-silicon. Testing for capacitive crosstalk involves generating input patterns that can be applied post-silicon to the integrated circuit and comparing the output response. These patterns are generated at the gate/Register Transfer Level (RTL) of abstraction using Automatic Test Pattern Generation (ATPG) tools. In this dissertation, an Integer Linear Programming (ILP) based ATPG technique for maximizing the crosstalk-induced delay increase at the victim net, for multiple-aggressor crosstalk faults, is presented. Moreover, various solutions for pattern generation considering both zero- and unit-delay models are also proposed.
With voltage scaling, power supply switching noise has become one of the leading causes of signal integrity related failures in deep sub-micron designs. Hence, during power supply network design and analysis of power supply switching noise, computation of the peak supply current is an essential step. Traditional peak current estimation approaches involve adding the peak currents associated with all the CMOS gates that are switching in a combinational circuit. However, this approach does not take the Boolean and temporal relationships of the circuit into account. This work presents an ILP based technique for generating an input pattern pair that maximizes the switching supply current of a combinational circuit in the presence of integer gate delays. The input pattern pair generated using the above approach can be applied post-silicon for power droop testing.
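The traditional additive estimate mentioned above can be illustrated with a minimal sketch (peak_current and may_switch are hypothetical callables); the ILP-based technique described in this work tightens this by also encoding the Boolean and temporal relationships that the plain sum ignores:

```python
# Illustrative sketch of the traditional additive peak supply current estimate
# (names are hypothetical). Every gate that may switch contributes its full per-gate
# peak current, with no regard to Boolean or temporal relationships in the circuit.

def traditional_peak_current(gates, peak_current, may_switch):
    """gates: iterable of gate identifiers;
    peak_current(g): peak switching current of gate g;
    may_switch(g): True if gate g can switch for the pattern pair under analysis."""
    return sum(peak_current(g) for g in gates if may_switch(g))
```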
With a high level of integration, Multi-Processor Systems on Chip (MPSoC) feature multiple processor cores and accelerators on the same die, so as to exploit the instruction-level parallelism in the application. For hardware-software co-design, the application programming model is based on a Task Graph, which represents task dependencies and execution/transfer times for the various threads and processes within an application. Mapping an application to an MPSoC traditionally involves representing it in the form of a task graph and employing static scheduling in order to minimize the schedule length. Dynamic system behavior is not taken into consideration during static scheduling, while dynamic scheduling requires knowledge of the task graph at runtime. A run-time task graph extraction heuristic that facilitates dynamic scheduling is also presented here. A novel game theory based approach uses this extracted task graph to perform run-time scheduling in order to minimize the total schedule length.
With the increase in transistor density, power density has gone up substantially. This has led to the generation of regions of very high temperature called Hotspots. Hotspots lead to reliability and performance issues and affect design convergence. In current generation Integrated Circuits (ICs), temperature is controlled by reducing power dissipation using Dynamic Thermal Management (DTM) techniques like frequency and/or voltage scaling. These techniques are reactive in nature and have detrimental effects on performance. Here, a look-ahead based task migration technique is proposed, in order to utilize the multitude of cores available in an MPSoC to eliminate thermal emergencies. Our technique is based on temperature prediction, leveraging a novel wavelet based thermal modeling approach.
Hence, this work addresses several optimization problems that can be reduced to constrained max-satisfiability, involving integer as well as Boolean constraints in the hardware and software domains. Moreover, it provides domain-specific heuristic solutions for each of them.
Synthesis and Optimization of Reversible Circuits - A Survey
Reversible logic circuits have been historically motivated by theoretical
research in low-power electronics as well as practical improvement of
bit-manipulation transforms in cryptography and computer graphics. Recently,
reversible circuits have attracted interest as components of quantum
algorithms, as well as in photonic and nano-computing technologies where some
switching devices offer no signal gain. Research in generating reversible logic
distinguishes between circuit synthesis, post-synthesis optimization, and
technology mapping. In this survey, we review algorithmic paradigms ---
search-based, cycle-based, transformation-based, and BDD-based --- as well as
specific algorithms for reversible synthesis, both exact and heuristic. We
conclude the survey by outlining key open challenges in synthesis of reversible
and quantum logic, as well as the most common misconceptions.
Algorithms for Power Aware Testing of Nanometer Digital ICs
At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits
has become mandatory to catch small delay defects. Now, due to continuous shrinking
of complementary metal oxide semiconductor (CMOS) transistor feature size, power
density grows geometrically with technology scaling. Additionally, power dissipation
inside a digital circuit during the testing phase (for test vectors under all fault models
(Potluri, 2015)) is several times higher than its power dissipation during the normal
functional phase of operation. Due to this, the currents that flow in the power grid during
the testing phase are much higher than what the power grid is designed for (the
functional phase of operation). As a result, during at-speed testing, the supply grid
experiences unacceptable supply IR-drop, ultimately leading to delay failures during
at-speed testing. Since these failures are specific to testing and do not occur during the
functional phase of operation of the chip, they are usually referred to as false
failures, and they reduce the yield of the chip, which is undesirable.
In the nanometer regime, process parameter variations have become a major problem.
Due to the variation in signal delays caused by these process variations, it is important to
perform at-speed testing even for stuck faults, to reduce the test escapes (McCluskey
and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak
power dissipation causing false failures, which was addressed previously in the context of
at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also
becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al.,
1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012;
Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during
at-speed testing can be kept under control by minimizing switching activity during
testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past
for reduction of peak switching activity during at-speed testing of transition/delay faults
in both combinational and sequential circuits. As far as at-speed testing of stuck faults
is concerned, while some techniques were proposed in the past for combinational
circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no such techniques
for sequential circuits. This thesis addresses this open problem. We
propose algorithms for minimization of peak switching activity during at-speed testing
of stuck faults in sequential digital circuits under the combinational state preservation
scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that,
under this CSP-scan architecture, when the test set is completely specified, the peak
switching activity during testing can be minimized by solving the Bottleneck Traveling
Salesman Problem (BTSP). This mapping of the peak test switching activity minimization
problem to BTSP is novel and is proposed here for the first time in the literature.
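To illustrate the flavor of this mapping (not the thesis's exact BTSP formulation), one can measure the switching caused by consecutive test vectors by their Hamming distance and order the vectors so that the largest adjacent distance stays small; the sketch below uses a simple greedy heuristic rather than an exact BTSP solver:

```python
# Illustrative sketch: order fully specified test vectors so that the maximum Hamming
# distance between consecutive vectors (a proxy for peak switching) stays small.
# A greedy nearest-neighbour heuristic is used here purely for illustration.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_bottleneck_order(vectors):
    order = [0]                                   # start from an arbitrary vector
    remaining = set(range(1, len(vectors)))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: hamming(vectors[last], vectors[j]))
        order.append(nxt)
        remaining.remove(nxt)
    peak = max(hamming(vectors[order[i]], vectors[order[i + 1]])
               for i in range(len(order) - 1))
    return order, peak
```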
Usually, as circuit size increases, the percentage of don’t cares in the test set increases.
As a result, test vector ordering for any arbitrary filling of don’t care bits
is insufficient for producing effective reduction in switching activity during testing of
large circuits. Since don’t cares dominate the test sets for larger circuits, don’t care
filling plays a crucial role in reducing switching activity during testing. Taking this
into consideration, we propose an algorithm, XStat, which performs test vector ordering
while preserving the don't care bits in the test vectors; the don't cares are then filled
in an intelligent fashion to minimize input switching activity, which in turn minimizes
switching activity inside the circuit (Girard et al., 1998). Through empirical validation
on benchmark circuits, we show that XStat significantly reduces peak switching activity
during testing.
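A minimal sketch of the two ingredients described above, assuming don't cares are marked 'X' (illustrative only, not the actual XStat algorithm): a distance that ignores don't care positions so vectors can be ordered before filling, and an adjacent fill that copies the previous vector's bit into each 'X' so filled positions cause no input transitions:

```python
# Illustrative sketch only (not the actual XStat algorithm).

def x_aware_distance(a, b):
    # Count only positions where both vectors are specified and disagree;
    # 'X' positions are left free for later filling.
    return sum(1 for x, y in zip(a, b) if x != 'X' and y != 'X' and x != y)

def adjacent_fill(ordered_vectors):
    # Copy the previous (filled) vector's bit into each don't care, so the filled
    # positions contribute no input switching between consecutive vectors.
    filled, prev = [], None
    for v in ordered_vectors:
        bits = []
        for i, c in enumerate(v):
            if c == 'X':
                bits.append(prev[i] if prev is not None else '0')
            else:
                bits.append(c)
        prev = ''.join(bits)
        filled.append(prev)
    return filled
```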
Although XStat is a very powerful heuristic for minimizing peak input-switching-activity,
it does not guarantee optimality. To address this issue, we propose an algorithm
that uses Dynamic Programming to calculate the lower bound for a given sequence
of test vectors, and subsequently uses a greedy strategy for filling don’t cares in this
sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm,
which we refer to as DP-fill in this thesis, provides the globally optimal solution for
minimizing peak input-switching-activity during testing and is the best known in the
literature for this problem. A proof of the optimality of DP-fill is also provided in this thesis.
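As a simple illustration of why such a lower bound exists for a fixed vector sequence (this reuses the x_aware_distance helper from the previous sketch and is not DP-fill itself): no filling can make two consecutive vectors agree at a position where both are already specified and conflict, so the largest per-pair conflict count bounds the peak from below. DP-fill, as described above, uses dynamic programming to compute a lower bound for the whole sequence and a greedy filling that attains it; the sketch below shows only the simpler per-pair bound.

```python
# Illustrative only: a per-pair lower bound on the peak input switching of any filling
# of the don't cares in a fixed test vector sequence (not the DP-fill bound itself).

def peak_lower_bound(ordered_vectors):
    return max(
        x_aware_distance(ordered_vectors[i], ordered_vectors[i + 1])
        for i in range(len(ordered_vectors) - 1)
    )
```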
Fast and precise power prediction for combinatorial circuits considering glitching effects.
The power consumed by a combinational circuit is dictated by the switching activities of all signals associated with the circuit. Analytical approaches, named the MCP and MCPG algorithms, are proposed for calculating signal activities in combinational circuits; the latter considers glitching effects. Both approaches are based on a Markov chain signal model and directly account for correlations present among the signals. The accuracy of the approaches is verified by comparing signal activity values calculated using the proposed approaches with corresponding values produced through simulation studies. Another approach (called the MMCP algorithm) is also proposed to calculate the total transition activity including glitching, and it can be more accurate than the proposed MCPG algorithm. It is also demonstrated that the proposed approaches are computationally efficient.
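As an illustration of the underlying two-state Markov chain signal model (not the MCP/MCPG/MMCP algorithms themselves), the stationary signal probability and the expected number of transitions per clock cycle follow directly from the conditional transition probabilities:

```python
# Illustrative sketch: a signal modelled as a two-state Markov chain with
# p01 = P(next = 1 | now = 0) and p10 = P(next = 0 | now = 1).

def markov_switching_activity(p01, p10):
    p1 = p01 / (p01 + p10)            # stationary probability of the signal being 1
    p0 = 1.0 - p1                     # stationary probability of the signal being 0
    activity = p0 * p01 + p1 * p10    # expected transitions per clock cycle
    return p1, activity

# Example: p01 = 0.2, p10 = 0.3 gives p1 = 0.4 and an activity of 0.24 transitions/cycle.
```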
Design Space Re-Engineering for Power Minimization in Modern Embedded Systems
Power minimization is a critical challenge for modern embedded system design. Recently, due to the rapid increase in system complexity and power density, there is a growing need for power control techniques at various design levels. Meanwhile, due to technology scaling, leakage power has become a significant part of power dissipation in CMOS circuits, and new techniques are needed to reduce it.
As a result, many new power minimization techniques have been proposed, such as voltage islands, gate sizing, multiple supply and threshold voltages, power gating, and input vector control. These design options further
enlarge the design space and make it prohibitively expensive to explore
for the most energy efficient design solution.
Consequently, heuristic algorithms and randomized
algorithms are frequently used to explore the design space, seeking sub-optimal
solutions to meet the time-to-market requirements. These algorithms are
based on the idea of truncating the design space
and restricting the search to a subset of the original design space. While this approach can effectively reduce the runtime of the search, it may also exclude high-quality design solutions and cause design quality degradation.
When the solution to one problem is used as the basis for another problem, such solution quality degradation accumulates. In modern electronic system design, when several such algorithms are used in series to solve problems at different design levels, the final solution can be far from the optimal one.
In my Ph.D. work, I develop a re-engineering methodology to
facilitate exploring the design space of power efficient embedded systems design.
The direct goal is to enhance the performance of existing low power
techniques. The methodology is based on the idea that design quality can be improved
via iteratively "re-shaping" the design space based on the "bad" structures
in the obtained design solutions; the search run-time can be reduced
by guidance from previous explorations. This approach can be
described in three phases: (1) apply the existing techniques to obtain
a sub-optimal solution; (2) analyze the solution and expand the design
space accordingly; and (3) re-apply the technique to re-explore the
enlarged design space.
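A minimal sketch of this three-phase loop, with hypothetical apply_technique, expand_space, and cost callables standing in for a concrete low-power technique and its design space:

```python
# Illustrative sketch of the re-engineering loop (all callables are hypothetical).

def re_engineer(design_space, apply_technique, expand_space, cost, max_iters=10):
    solution = apply_technique(design_space)                 # (1) sub-optimal solution
    for _ in range(max_iters):
        design_space = expand_space(design_space, solution)  # (2) re-shape the space
        candidate = apply_technique(design_space)            # (3) re-explore it
        if cost(candidate) >= cost(solution):
            break                                            # no further improvement
        solution = candidate
    return solution
```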
We apply this methodology at different levels of embedded system design to
minimize power: (i) switching power reduction in sequential logic synthesis;
(ii) gate-level static leakage current reduction; (iii) dual threshold voltage CMOS
circuits design; and (iv) system-level energy-efficient detection scheme for
wireless sensor networks. An extensive set of experiments has been conducted, and the results show that this methodology can effectively enhance the power efficiency of existing embedded system design flows with very little overhead.
Layout optimization in ultra deep submicron VLSI design
As fabrication technology keeps advancing, many deep submicron (DSM) effects have become
increasingly evident and can no longer be ignored in Very Large Scale Integration
(VLSI) design. In this dissertation, we study several deep submicron problems (e.g., coupling
capacitance, antenna effect, and delay variation) and propose optimization techniques
to mitigate these DSM effects in the place-and-route stage of VLSI physical design.
The place-and-route stage of physical design can be further divided into several steps:
(1) Placement, (2) Global routing, (3) Layer assignment, (4) Track assignment, and (5) Detailed
routing. Among them, layer/track assignment assigns major trunks of wire segments
to specific layers/tracks in order to guide the underlying detailed router. In this dissertation,
we have proposed techniques to handle coupling capacitance at the layer/track assignment
stage, the antenna effect at the layer assignment stage, and delay variation at the ECO (Engineering
Change Order) placement stage, respectively. More specifically, at layer assignment, we
have proposed an improved probabilistic model to quickly estimate the amount of coupling
capacitance for timing optimization. Antenna effects are also handled at layer assignment
through a linear-time tree partitioning algorithm. At the track assignment stage, timing is
further optimized using a graph based technique. In addition, we have proposed a novel
gate splitting methodology to reduce delay variation in the ECO placement considering
spatial correlations. Experimental results on benchmark circuits showed the effectiveness
of our approaches.
Research in the effective implementation of guidance computers with large scale arrays Interim report
Functional logic character implementation in breadboard design of NASA modular compute