Yield-driven power-delay-optimal CMOS full-adder design complying with automotive product specifications of PVT variations and NBTI degradations
We present the detailed results of applying mathematical optimization algorithms to transistor sizing in a full-adder cell design, with the goal of maximizing the expected fabrication yield. The approach takes into account all fabrication process parameter variations specified in an industrial PDK, in addition to the operating condition range and NBTI aging. The final design solutions exhibit transistor sizings that depart from intuitive sizing criteria and show dramatic yield improvements, which have been verified by Monte Carlo SPICE analysis.
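As a rough illustration of the yield-driven sizing idea, the sketch below estimates fabrication yield for one candidate sizing vector by Monte Carlo sampling over process variations. The function `simulate_cell`, the toy delay/power models, and the spec constants are illustrative stand-ins for the paper's SPICE/PDK flow, not its actual implementation; an outer optimizer would then search over `widths` to maximize this estimate.

```python
import numpy as np

# Pass/fail specs for the cell (illustrative values, not from the paper).
DELAY_SPEC_PS = 50.0
POWER_SPEC_UW = 2.0

def simulate_cell(widths, pvt_draw):
    """Placeholder for a SPICE run of the full-adder under one sampled
    PVT/NBTI corner; returns (delay_ps, power_uw). A real flow would
    invoke the foundry PDK models here."""
    nominal_delay = 40.0 / np.mean(widths)   # toy model: wider -> faster
    nominal_power = 0.5 * np.sum(widths)     # toy model: wider -> more power
    delay = nominal_delay * (1.0 + 0.1 * pvt_draw[0])
    power = nominal_power * (1.0 + 0.05 * pvt_draw[1])
    return delay, power

def estimated_yield(widths, n_samples=2000, seed=0):
    """Monte Carlo yield estimate for one candidate sizing vector."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(n_samples):
        pvt = rng.normal(size=2)             # sampled process deviations
        delay, power = simulate_cell(widths, pvt)
        if delay <= DELAY_SPEC_PS and power <= POWER_SPEC_UW:
            ok += 1
    return ok / n_samples

print(estimated_yield(np.array([1.0, 1.2, 0.9])))  # yield for one sizing
```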
EffiTest: Efficient Delay Test and Statistical Prediction for Configuring Post-silicon Tunable Buffers
At nanometer manufacturing technology nodes, process variations significantly affect circuit performance. To combat them, post-silicon clock tuning buffers can be deployed to balance timing budgets of critical paths for each individual chip after manufacturing. The challenge of this method is that path delays should be measured for each chip to configure the tuning buffers properly. Current methods for this delay measurement rely on path-wise frequency stepping. This strategy, however, requires too much time from expensive testers. In this paper, we propose an efficient delay test framework (EffiTest) to solve the post-silicon testing problem by aligning path delays using the already-existing tuning buffers in the circuit. In addition, we only test representative paths; the delays of other paths are estimated by statistical delay prediction. Experimental results demonstrate that the proposed method can reduce the number of frequency stepping iterations by more than 94% with only a slight yield loss.

Comment: ACM/IEEE Design Automation Conference (DAC), June 201
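The statistical-prediction step can be pictured with a small regression sketch: pre-silicon Monte Carlo data establishes the correlation between path delays, only a few representative paths are measured post-silicon, and the remaining paths are predicted from them. All names, the choice of representative paths, and the toy correlation model below are assumptions for illustration; the paper's actual estimator and path-selection method may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pre-silicon data: simulated delays of all paths across Monte Carlo
# process samples (rows = samples, cols = paths). The shared term makes
# path delays correlated, which is what enables prediction.
n_samples, n_paths = 500, 20
shared = rng.normal(size=(n_samples, 1))
sim = 1.0 + 0.08 * shared + 0.02 * rng.normal(size=(n_samples, n_paths))

rep = [0, 5, 12]                                  # representative (tested) paths
others = [p for p in range(n_paths) if p not in rep]

# Fit a linear predictor delay_others ~ W @ [delay_rep, 1] from simulation.
X = np.hstack([sim[:, rep], np.ones((n_samples, 1))])
W, *_ = np.linalg.lstsq(X, sim[:, others], rcond=None)

# Post-silicon: measure only the representative paths on one chip, then
# estimate the remaining path delays statistically.
chip = sim[0]                                     # stand-in for one real chip
measured = chip[rep]
predicted = np.append(measured, 1.0) @ W
print(np.max(np.abs(predicted - chip[others])))   # worst-case prediction error
```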
Resource Management Algorithms for Computing Hardware Design and Operations: From Circuits to Systems
The complexity of computing hardware has increased at an unprecedented rate over the last few decades. On the computer-chip level, we have entered the era of multi-/many-core processors made of billions of transistors. With a transistor budget of this scale, many functions are integrated into a single chip. As such, chips today consist of many heterogeneous cores with intensive interaction among these cores. On the circuit level, with the end of Dennard scaling, continuously shrinking process technology has imposed a grand challenge on power density. Circuit variation further exacerbates the problem by consuming a substantial timing margin. On the system level, the rise of warehouse-scale computers and data centers has put resource management into a new perspective. The ability to dynamically provision computation resources in these gigantic systems is crucial to their performance. In this thesis, three resource management algorithms are discussed. The first algorithm assigns adaptivity resources to circuit blocks under a constraint on the overhead; the adaptivity improves the circuit's resilience to variation in a cost-effective way. The second algorithm manages link bandwidth in application-specific Networks-on-Chip; quality of service is guaranteed for time-critical traffic, with an emphasis on power. The third algorithm manages the computation resources of a data center while guarding against ill states of the system; Q-learning is employed to cope with the dynamic nature of the system, and Linear Temporal Logic is leveraged as a tool to describe temporal constraints. All three algorithms are evaluated through various experiments, and the results are compared with several previous works, showing the advantage of our methods.
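As a hedged sketch of the learning component in the third algorithm, the toy tabular Q-learning loop below provisions servers against a fluctuating load and penalizes under-provisioned states. The state/action spaces, reward shape, and constants are invented for illustration, and the simple penalty merely stands in for the thesis's Linear Temporal Logic constraints on ill states.

```python
import numpy as np

# Toy MDP for dynamic resource provisioning: state = current load level,
# action = number of servers kept active. Sizes and rewards are assumptions.
N_LOAD, N_SERVERS = 5, 4
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_LOAD, N_SERVERS))

def step(load, servers):
    """One environment step: penalize energy (too many servers) and
    SLA violations (too few servers for the load, an 'ill state')."""
    reward = -0.2 * servers
    if servers < load:                       # under-provisioned
        reward -= 5.0
    next_load = min(max(load + rng.integers(-1, 2), 0), N_LOAD - 1)
    return next_load, reward

load = 0
for _ in range(20000):
    if rng.random() < EPS:                   # epsilon-greedy exploration
        a = int(rng.integers(N_SERVERS))
    else:
        a = int(np.argmax(Q[load]))
    nxt, r = step(load, a)
    # Standard Q-learning update toward the bootstrapped target.
    Q[load, a] += ALPHA * (r + GAMMA * np.max(Q[nxt]) - Q[load, a])
    load = nxt

print(np.argmax(Q, axis=1))                  # learned servers-per-load policy
```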