8 research outputs found
On Timing Model Extraction and Hierarchical Statistical Timing Analysis
In this paper, we investigate the challenges of applying Statistical Static
Timing Analysis (SSTA) in a hierarchical design flow, where modules supplied by
IP vendors are used to hide design details for IP protection and to reduce the
complexity of design and verification. For the three basic circuit types,
combinational, flip-flop-based, and latch-controlled, we propose methods to
extract timing models that contain interfacing as well as compressed internal
constraints. Using these compact timing models, the runtime of full-chip timing
analysis can be reduced while circuit details from IP vendors are not exposed.
We also propose a method to reconstruct the correlation between modules during
full-chip timing analysis. This correlation cannot be incorporated into the
timing models because it depends on the layout of the corresponding modules in
the chip. In addition, we investigate how to apply the extracted timing models
with the reconstructed correlation to evaluate the performance of the complete
design. Experiments demonstrate that, using the extracted timing models and
reconstructed correlation, full-chip timing analysis can be several times faster
than analyzing the flattened circuit directly, while the accuracy of statistical
timing analysis is still well maintained.
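A core primitive in such statistical timing analyses is the correlation-aware MAX of two Gaussian arrival times, usually handled with Clark's moment-matching approximation. The sketch below is a generic illustration of that primitive, not the paper's extraction or correlation-reconstruction method; the delay numbers are invented.

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clark_max(m1, s1, m2, s2, rho=0.0):
    """Clark's approximation: mean/sigma of max(A, B) for jointly
    Gaussian delays A ~ N(m1, s1^2), B ~ N(m2, s2^2), corr(A, B) = rho."""
    theta = math.sqrt(s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2)
    a = (m1 - m2) / theta                      # assumes theta > 0
    mean = m1 * Phi(a) + m2 * Phi(-a) + theta * phi(a)
    second = ((m1 * m1 + s1 * s1) * Phi(a)
              + (m2 * m2 + s2 * s2) * Phi(-a)
              + (m1 + m2) * theta * phi(a))    # E[max(A, B)^2]
    return mean, math.sqrt(max(second - mean * mean, 0.0))

# two correlated path delays (mean, sigma in ps) merging at a flip-flop
m, s = clark_max(100.0, 5.0, 98.0, 7.0, rho=0.3)
```

The same SUM/MAX pair, applied bottom-up over the timing graph, is what keeps block-based SSTA roughly linear in circuit size.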
Non-invasive IC tomography using spatial correlations
We introduce a new methodology for post-silicon characterization of the gate-level variations in a manufactured Integrated Circuit (IC). The estimated characteristics are based on power and delay measurements that are affected by the process variations. The power (delay) variations are spatially correlated; thus, there exists a basis in which the variations are sparse. This sparse representation suggests using L1-regularization (compressive sensing theory). We show how to use compressive sensing theory to improve post-silicon characterization. We also address the problem by adding spatial constraints directly to the traditional L2-minimization.
The proposed methodology is fast, inexpensive, non-invasive, and applicable to legacy designs. Non-invasive IC characterization has a range of emerging applications, including post-silicon optimization, IC identification, and variation modeling/simulation. The evaluation results on standard benchmark circuits show that, on average, the accuracy of gate-level characteristic estimation can be improved by more than a factor of two using the proposed methods.
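As a stand-alone illustration of the L1-regularized recovery step (the measurement matrix, sparsity level, and parameters below are invented for the example and are not the paper's setup), iterative soft-thresholding (ISTA) recovers a sparse variation vector from far fewer measurements than unknowns:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, iters):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                  # 100 gates, 40 measurements, 5-sparse
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                        # noiseless measurements
x_hat = ista(A, y, lam=0.01, iters=2000)
```

With 40 measurements of 100 unknowns, plain L2 least squares is hopelessly underdetermined; the L1 penalty is what makes the sparse solution recoverable.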
Design Disjunction for Resilient Reconfigurable Hardware
Contemporary reconfigurable hardware devices have the capability to achieve the high performance, power efficiency, and adaptability required to meet a wide range of design goals. With the scaling challenges facing current complementary metal-oxide-semiconductor (CMOS) technology, new concepts and methodologies supporting efficient adaptation to handle reliability issues are becoming increasingly prominent. Reconfigurable hardware, with its ability to realize self-organization features, is expected to play a key role in designing future dependable hardware architectures. However, the exponential increase in the density and complexity of current commercial SRAM-based field-programmable gate arrays (FPGAs) has escalated the overhead associated with dynamic runtime design adaptation. Traditionally, static modular redundancy techniques are used to surmount this limitation; however, they can incur substantial overheads in both area and power requirements. To achieve a better trade-off among performance, area, power, and reliability, this research proposes design-time approaches that enable fine-grained selection of the redundancy level based on target reliability goals and autonomous adaptation to runtime demands. To achieve this goal, three studies were conducted.

First, a graph- and set-theoretic approach, named Hypergraph-Cover Diversity (HCD), is introduced as a preemptive design technique to shift the dominant costs of resiliency to design time. In particular, union-free hypergraphs are exploited to partition the pool of reconfigurable resources into highly separable subsets, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases.
Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform has demonstrated the potential of the proposed fault-tolerance (FT) method to achieve 37.5% area saving and up to 66% reduction in power consumption compared to the frequently used triple modular redundancy (TMR) scheme, while providing superior fault tolerance.

Second, Design Disjunction, based on non-adaptive group testing, is developed to realize a low-overhead fault-tolerant system capable of handling self-testing and self-recovery using runtime partial reconfiguration. Reconfiguration is guided by resource grouping procedures which employ non-linear measurements given by the constructive property of f-disjunctness to extend runtime resilience to a large fault space and realize a favorable range of trade-offs. Disjunct designs are created using the mosaic convergence algorithm, developed such that at least one configuration in the library evades any occurrence of up to d resource faults, where d is lower-bounded by f. Experimental results for a set of MCNC and ISCAS benchmarks have demonstrated f-diagnosability at the individual slice level with an average isolation resolution of 96.4% (94.4%) for f=1 (f=2), while incurring an average critical path delay impact of only 1.49% and an area cost roughly comparable to conventional 2-MR approaches.

Finally, the proposed Design Disjunction method is evaluated as a design-time method to improve timing yield in the presence of large random within-die (WID) process variations for applications with a moderately high production capacity.
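The self-test/decode step of non-adaptive group testing can be illustrated with the standard naive decoder over a disjunct test matrix. The tiny 1-disjunct layout below is invented for this sketch (it is not the mosaic convergence construction): every resource exercised by at least one passing configuration is cleared, and the residue is the isolated fault set.

```python
def decode(tests, outcomes):
    """Naive group-testing decoder for a d-disjunct design: a resource is
    declared faulty unless some fault-free (negative) test contains it."""
    n = max(max(t) for t in tests) + 1
    cleared = set()
    for members, positive in zip(tests, outcomes):
        if not positive:                 # this configuration passed self-test
            cleared.update(members)
    return sorted(set(range(n)) - cleared)

# toy 1-disjunct design over 4 resources: two tests per resource,
# no resource's test set contained in another's
tests = [[0, 1], [2, 3], [0, 2], [1, 3]]
faulty = {2}                             # ground truth, unknown to the decoder
outcomes = [any(r in faulty for r in t) for t in tests]
suspects = decode(tests, outcomes)       # -> [2]
```

Disjunctness is exactly what guarantees the naive decoder is correct for up to d simultaneous faults: no faulty resource can hide behind the test sets of the others.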
Efficient Monte Carlo Based Methods for Variability Aware Analysis and Optimization of Digital Circuits.
Process variability is of increasing concern in modern nanometer-scale CMOS. The
suitability of Monte Carlo based algorithms for efficient analysis and optimization of
digital circuits under variability is explored in this work. Random sampling based Monte
Carlo techniques incur high cost of computation, due to the large sample size required to
achieve target accuracy. This motivates the need for intelligent sample selection
techniques to reduce the number of samples. As these techniques depend on information
about the system under analysis, there is a need to tailor the techniques to fit the specific
application context. We propose efficient smart sampling based techniques for timing and
leakage power consumption analysis of digital circuits. For the case of timing analysis, we
show that the proposed method requires 23.8X fewer samples on average to achieve
accuracy comparable to a random sampling approach for the benchmark circuits studied. It is
further illustrated that the parallelism available in such techniques can be exploited using
parallel machines, especially Graphics Processing Units (GPUs). Here, we show that SH-QMC
implemented on a multi-GPU system is twice as fast as a single STA run on a CPU for the benchmark
circuits considered. Next, we study the possibility of using such information from
statistical analysis to optimize digital circuits under variability, for example to achieve
minimum area on silicon through gate sizing while meeting a timing constraint. Though
several techniques to optimize circuits have been proposed in the literature, it is not clear how
much gain these approaches obtain specifically through the utilization of statistical
information. Therefore, an effective lower bound computation technique is proposed to
enable efficient comparison of statistical design optimization techniques. It is shown that
even techniques which use only limited statistical information can achieve results to
within 10% of the proposed lower bound. We conclude that future optimization research
should shift focus from use of more statistical information to achieving more efficiency
and parallelism to obtain speedups.

Ph.D., Electrical Engineering
University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/78936/1/tvvin_1.pd
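As a generic illustration of why structured sample selection beats plain random sampling (Latin hypercube sampling here stands in for, and does not reproduce, the SH-QMC scheme evaluated in the thesis; the path model is invented), one can estimate a delay quantile with exactly one sample per equal-probability stratum along each variation axis:

```python
import random

def latin_hypercube(n, dims, rng):
    """n points in [0,1)^dims with one point per 1/n stratum on each axis."""
    cols = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)                 # decorrelate strata across axes
        cols.append(col)
    return list(zip(*cols))

def path_delay(u):
    """Toy 3-stage path: nominal 10 units per stage plus a variation term."""
    return sum(10.0 + 2.0 * ui for ui in u)

rng = random.Random(1)
samples = latin_hypercube(1000, 3, rng)
delays = sorted(path_delay(u) for u in samples)
p95 = delays[int(0.95 * len(delays))]    # estimated 95th-percentile delay
```

Because every marginal stratum is hit exactly once, the quantile estimate settles with far fewer samples than independent random draws would need, which is the effect the smart-sampling techniques above exploit.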
CAD Techniques for Robust FPGA Design Under Variability
The imperfections in the semiconductor fabrication process and the uncertainty in the operating environment of VLSI circuits have emerged as critical challenges for the semiconductor industry. These are generally termed process and environmental variations, and they lead to uncertainty in
performance and unreliable operation of the circuits. These problems have been
further aggravated in scaled nanometer technologies due to increased process
variations and reduced operating voltages.
Several techniques have been proposed recently for designing digital VLSI circuits
under variability. However, most of them have targeted ASICs and custom designs.
The flexibility of reconfiguration and the unknown end application of FPGAs
make design under variability different for FPGAs than for
ASICs and custom designs, and the techniques proposed for ASICs and custom designs cannot be directly applied
to FPGAs. An important design consideration is to minimize modifications to the
existing FPGA architecture and circuits, so as to reduce the cost of change.
The focus of this work falls into three principal categories: improving
timing yield under process variations, improving power yield under process variations, and improving the voltage profile
in the FPGA power grid.
The work on timing yield improvement proposes routing architecture enhancements along with CAD techniques to
improve the timing yield of FPGA designs. The work on power yield improvement selects a low-power dual-Vdd FPGA design
as the baseline architecture and proposes CAD techniques to improve the
power yield of FPGAs. A mathematical programming technique is proposed to determine the parameters
of the interconnect buffers, such as the transistor sizes and threshold voltages,
so that leakage variability is minimized under delay constraints.
Two CAD techniques are investigated and proposed to improve the supply voltage profile of
the power grids in FPGAs: the first is a place-and-route technique and the second
is a logic clustering technique, both aimed at reducing IR drops and the spatial variation of the supply voltage in the power grid.
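The buffer-parameter selection described above can be caricatured as a tiny discrete optimization. The delay/leakage models, the size and threshold-voltage grids, and the delay budget below are all invented for illustration, and the thesis formulates this as a mathematical program rather than the brute-force search sketched here:

```python
import math
from itertools import product

SIZES = [1.0, 1.5, 2.0, 3.0]     # candidate relative transistor widths
VTHS = [0.30, 0.35, 0.40]        # candidate threshold voltages (V)
T_MAX = 25.0                     # interconnect delay budget (arbitrary units)

def delay(size, vth):
    """Toy model: upsizing speeds the buffer up, raising Vth slows it down."""
    return 10.0 / size + 50.0 * vth

def leakage(size, vth):
    """Toy subthreshold-style model: leakage grows with size, falls with Vth."""
    return size * math.exp(-vth / 0.026)

feasible = [(s, v) for s, v in product(SIZES, VTHS) if delay(s, v) <= T_MAX]
best = min(feasible, key=lambda sv: leakage(*sv))   # -> (2.0, 0.4)
```

The interesting tension is visible even in this toy: the leakiest knob (size) is also the one that buys back the delay lost to a higher, leakage-saving threshold voltage.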
Design for prognostics and security in field programmable gate arrays (FPGAs).
There is an evolutionary progression of Field Programmable Gate Arrays (FPGAs)
toward more complex and high power density architectures such as Systems-on-
Chip (SoC) and Adaptive Compute Acceleration Platforms (ACAP). Primarily, this is
attributable to the continual transistor miniaturisation and more innovative and
efficient IC manufacturing processes. Concurrently, the degradation mechanism of Bias
Temperature Instability (BTI) has become more pronounced in its
ageing impact. It can weaken the reliability of VLSI devices, FPGAs in particular,
due to their run-time reconfigurability. At the same time, the vulnerability of FPGAs to
device-level attacks in the increasingly hostile cyber and hardware threat environment is
growing rapidly, as the weakened reliability realm opens the door for rogue elements to
intervene. The insertion of highly stealthy and malicious circuitry, called hardware
Trojans, into FPGAs is one such malicious intervention. While
such attacks adversely affect the security of these devices, they
also undermine their reliability substantially. Hitherto, security and reliability have been
treated as two separate entities impacting FPGA health. This has resulted in
fragmented solutions that do not reflect the true state of the FPGA's operational and
functional readiness, thereby making these devices even more prone to hardware attacks.
The recent episodes of Spectre and Meltdown vulnerabilities are some of the key
examples. This research addresses these concerns by adopting an integrated
approach and investigating FPGA security and reliability as two inter-dependent
entities, with an additional dimension of health estimation/prognostics. The design
and implementation of a small footprint frequency and threshold voltage-shift
detection sensor, a novel hardware Trojan, and an online transistor dynamic scaling
circuitry present a viable FPGA security scheme that helps build a strong
microarchitectural level defence against unscrupulous hardware attacks. Augmented
with an efficient kernel-based learning technique for FPGA health
estimation/prognostics, the optimal integrated solution proves to be more
dependable and trustworthy than the prevalent disjointed approach.

Samie, Mohammad (Associate)
PhD in Transport System
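The kernel-based health estimation idea can be sketched with a plain kernel ridge regressor on synthetic degradation data. Everything below (the sensor feature, the BTI-like degradation curve, and the hyperparameters) is invented for illustration and does not reproduce the thesis's model:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between row-vector sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam, gamma):
    """Fit kernel ridge regression; return a predictor for new samples."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, size=(40, 1))      # normalized stress time readings
health = 1.0 - 0.6 * np.sqrt(t[:, 0])        # synthetic BTI-like health score
predict = kernel_ridge_fit(t, health, lam=1e-4, gamma=10.0)
```

A nonparametric kernel model fits such saturating degradation curves without committing to a specific ageing law, which is one reason kernel methods suit prognostics from sensor data.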