Rapid mapping of digital integrated circuit logic gates via multi-spectral backside imaging
Modern semiconductor integrated circuits are increasingly fabricated at
untrusted third party foundries. There now exist myriad security threats of
malicious tampering at the hardware level and hence a clear and pressing need
for new tools that enable rapid, robust and low-cost validation of circuit
layouts. Optical backside imaging offers an attractive platform, but its
limited resolution and throughput cannot cope with the nanoscale sizes of
modern circuitry and the need to image over a large area. We propose and
demonstrate a multi-spectral imaging approach to overcome these obstacles by
identifying key circuit elements on the basis of their spectral response. This
obviates the need to directly image the nanoscale components that define them,
thereby relaxing resolution and spatial sampling requirements by one order of magnitude
and by two to four orders of magnitude, respectively. Our results directly address critical
security needs in the integrated circuit supply chain and highlight the
potential of spectroscopic techniques to address fundamental resolution
obstacles caused by the need to image ever shrinking feature sizes in
semiconductor integrated circuits.
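A minimal sketch of the kind of spectral classification the abstract describes: each pixel of a multi-spectral backside image is assigned to the reference spectrum it most resembles, so gate types can be identified without resolving their nanoscale features. The wavelength bands, reference spectra, and gate names below are invented for illustration and are not taken from the paper.

```python
# Toy spectral gate classification: nearest reference spectrum per pixel.
import numpy as np

# Hypothetical reference reflectance spectra: one row per gate type,
# one column per wavelength band.
GATE_TYPES = ["NAND2", "NOR2", "INV", "FILL"]
reference = np.array([
    [0.42, 0.51, 0.63, 0.58],
    [0.40, 0.55, 0.60, 0.66],
    [0.47, 0.49, 0.70, 0.52],
    [0.30, 0.33, 0.35, 0.36],
])

def classify_pixels(cube: np.ndarray) -> np.ndarray:
    """Assign each pixel of a (H, W, bands) multi-spectral cube to the
    closest reference spectrum (Euclidean distance)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    # Distance from every pixel spectrum to every reference spectrum.
    d = np.linalg.norm(flat[:, None, :] - reference[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(h, w)

# Example: a 2x2 field of view measured at 4 wavelengths.
cube = np.array([[[0.41, 0.52, 0.62, 0.59], [0.46, 0.50, 0.69, 0.53]],
                 [[0.31, 0.32, 0.36, 0.35], [0.39, 0.56, 0.61, 0.65]]])
labels = classify_pixels(cube)
print([[GATE_TYPES[i] for i in row] for row in labels])
```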
An efficient logic fault diagnosis framework based on effect-cause approach
Fault diagnosis plays an important role in improving the circuit design process and the
manufacturing yield. With the increasing number of gates in modern circuits, determining
the source of failure in a defective circuit is becoming more and more challenging.
In this research, we present an efficient effect-cause diagnosis framework for
combinational VLSI circuits. The framework consists of three stages to obtain an accurate
and reasonably precise diagnosis. First, an improved critical path tracing algorithm is
proposed to identify an initial suspect list by backtracing from faulty primary outputs
toward primary inputs. Compared to the traditional critical path tracing approach, our
algorithm is faster and exact. Second, a novel probabilistic ranking model is applied to
rank the suspects so that the most suspicious one will be ranked at or near the top. Several
fast filtering methods are used to prune unrelated suspects. Finally, to refine the diagnosis,
fault simulation is performed on the top suspect nets using several common fault models.
The difference between the observed faulty behavior and the simulated behavior is used to rank each suspect. Experimental results on ISCAS85 benchmark circuits show that this
diagnosis approach is efficient in terms of both memory space and CPU time, and that the
diagnosis results are accurate and reasonably precise.
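A toy sketch of the backtracing stage described above, assuming a simple netlist represented as a fan-in map; the ranking heuristic here (count of failing outputs reached) only stands in for the paper's probabilistic ranking model.

```python
# Backtrace from failing primary outputs through the fan-in cone and
# collect candidate (suspect) nets.  Netlist and ranking are illustrative.
from collections import defaultdict

# net -> list of driver (fan-in) nets; primary inputs have no entry.
FANIN = {
    "y1": ["n1", "n2"],
    "y2": ["n2", "b"],
    "n1": ["a", "b"],
    "n2": ["b", "c"],
}

def backtrace(failing_outputs):
    """Collect every net in the fan-in cone of the failing outputs and
    count how many failing outputs each net can reach (a crude rank)."""
    score = defaultdict(int)
    for out in failing_outputs:
        stack, seen = [out], set()
        while stack:
            net = stack.pop()
            if net in seen:
                continue
            seen.add(net)
            score[net] += 1
            stack.extend(FANIN.get(net, []))
    # Nets reaching more failing outputs are ranked as more suspicious.
    return sorted(score.items(), key=lambda kv: -kv[1])

print(backtrace(["y1", "y2"]))  # nets reaching both failing outputs rank first
```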
Hardware Fault Injection
Hardware fault injection is the widely accepted approach to evaluate the behavior
of a circuit in the presence of faults. Thus, it plays a key role in the design of robust
circuits. This chapter presents a comprehensive review of hardware fault injection
techniques, including physical and logical approaches. The implementation of
effective fault injection systems is also analyzed. Particular emphasis is placed
on the recently developed emulation-based techniques, which can provide great
flexibility along with unprecedented levels of performance. These capabilities
provide a way to tackle the reliability evaluation of complex circuits.
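As a concrete illustration of logical fault injection, the sketch below forces a stuck-at value onto one net of a toy combinational circuit and counts the input vectors that expose it at the output. The circuit and fault list are hypothetical, and real emulation-based flows instrument an FPGA prototype rather than a software model.

```python
# Toy logical fault-injection campaign on y = (a AND b) OR c.
def evaluate(a, b, c, fault=None):
    """Evaluate the circuit.  `fault` is an optional ('net', value) pair
    whose value is forced whenever that net is assigned."""
    def assign(nets, name, value):
        nets[name] = fault[1] if fault and fault[0] == name else value

    nets = {}
    for name, value in (("a", a), ("b", b), ("c", c)):
        assign(nets, name, value)
    assign(nets, "n1", nets["a"] & nets["b"])   # internal AND gate
    assign(nets, "y", nets["n1"] | nets["c"])   # output OR gate
    return nets["y"]

def campaign(fault_list):
    """For each fault, count the input vectors whose faulty output differs
    from the fault-free output (i.e. vectors that detect the fault)."""
    return {fault: sum(evaluate(a, b, c) != evaluate(a, b, c, fault)
                       for a in (0, 1) for b in (0, 1) for c in (0, 1))
            for fault in fault_list}

print(campaign([("n1", 0), ("n1", 1), ("c", 0), ("a", 1)]))
```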
Statistical methods for rapid system evaluation under transient and permanent faults
Traditional solutions for test and reliability do not scale well for modern designs, whose size and complexity increase with every technology generation. Therefore, in order to meet time-to-market requirements as well as acceptable product quality, it is imperative that new methodologies be developed for quickly evaluating a system in the presence of faults. In this research, statistical methods have been employed and implemented to 1) estimate the stuck-at fault coverage of a test sequence and evaluate the given test vector set without the need for complete fault simulation, and 2) analyze design vulnerabilities in the presence of radiation-based (soft) errors. Experimental results show that these statistical techniques can evaluate a system under test orders of magnitude faster than state-of-the-art methods with a small margin of error.
In this dissertation, I have introduced novel methodologies that utilize the information from fault-free simulation and partial fault simulation to predict the fault coverage of a long sequence of test vectors for a design under test. These methodologies are practical for functional testing of complex designs under a long sequence of test vectors, a challenging problem for which industry is currently seeking efficient solutions.
The last part of this dissertation discusses a statistical methodology for a detailed vulnerability analysis of systems under soft errors. This methodology works orders of magnitude faster than traditional fault injection. In addition, it is shown that the vulnerability factors calculated by this method are closer to those of complete fault injection (the ideal way to perform soft error vulnerability analysis) than those obtained by statistical fault injection. Performing such a fast soft error vulnerability analysis is crucial for companies that design and build safety-critical systems.
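A minimal sketch of the statistical idea behind fault-coverage estimation by sampling: simulate only a random subset of the fault list and report the sample detection rate with a confidence interval. The `simulate_fault` stub and the fault-list format are stand-ins for a real fault simulator, not the dissertation's method.

```python
# Statistical fault sampling: estimate coverage from a random sample of faults.
import math
import random

def simulate_fault(fault, test_set):
    """Placeholder: return True if `test_set` detects `fault`."""
    random.seed(hash(fault))          # deterministic toy behaviour
    return random.random() < 0.85     # pretend ~85% of faults are detected

def estimate_coverage(fault_list, test_set, sample_size=1000, z=1.96):
    sample = random.sample(fault_list, min(sample_size, len(fault_list)))
    detected = sum(simulate_fault(f, test_set) for f in sample)
    p = detected / len(sample)
    # Normal-approximation confidence half-width for the sampled proportion.
    half = z * math.sqrt(p * (1 - p) / len(sample))
    return p, half

faults = [f"net{i}/SA{v}" for i in range(100_000) for v in (0, 1)]
p, half = estimate_coverage(faults, test_set=None)
print(f"estimated coverage {p:.3f} +/- {half:.3f} (95% CI)")
```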
Measurement-based quantum computation with qubit and continuous-variable systems
Quantum computers offer impressive computational speed-ups over their present-day (classical) counterparts. In the measurement-based model, quantum computation is driven by single-site measurements on a large entangled quantum state known as a cluster state. This thesis explores extensions of the measurement-based model for quantum computation in qubit and continuous-variable systems. Within the qubit setting, we consider the task of characterizing how well a small-scale measurement-based quantum device can perform logic gates. We adapt a pre-existing scheme known as randomized benchmarking into the setting of measurement-based quantum computation on a one-dimensional cluster state. A key feature of randomized benchmarking is that it uses random sequences of gates. We show how the intrinsic randomness of measurement-based quantum computation can be harnessed when implementing these random sequences. Within the continuous-variable setting, we consider optical cluster states that can be generated with current technology. We propose a compact method for generating universal cluster states based on optical-parametric-oscillator technology. We consider how finite squeezing effects manifest in computation and show that pre-existing measurement-based protocols are suboptimal. We propose new measurement-based protocols that have better noise properties, compactness, and circuit flexibility. As an application, we introduce a measurement-based method for implementing interferometry. In this model, the finite squeezing noise can be dealt with as a photon-loss process. Building further on this work, we investigate the resource requirements of a measurement-based boson-sampling device, proving simultaneous efficiency in time, space, and squeezing (energy) resources. These results offer new insights into how to build, use, and characterize a measurement-based quantum computer.
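For context, standard randomized-benchmarking analysis fits the measured survival probability against sequence length to an exponential decay. The sketch below shows that fit with made-up data points and the usual single-qubit error-per-gate conversion; it is a generic illustration, not the thesis's measurement-based protocol.

```python
# Fit randomized-benchmarking data to A*p**m + B and extract an average error.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B

# Hypothetical measurements: sequence lengths and survival probabilities.
lengths = np.array([1, 2, 4, 8, 16, 32, 64])
survival = np.array([0.98, 0.97, 0.94, 0.90, 0.82, 0.70, 0.55])

(A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=(0.5, 0.99, 0.5),
                         bounds=([0, 0, 0], [1, 1, 1]))
d = 2                                   # single-qubit Hilbert-space dimension
avg_error = (1 - p) * (d - 1) / d       # standard RB error-per-gate formula
print(f"decay p = {p:.4f}, average gate error = {avg_error:.4f}")
```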
Quantum leakage detection using a model-independent dimension witness
Users of quantum computers must be able to confirm they are indeed
functioning as intended, even when the devices are remotely accessed. In
particular, if the Hilbert space dimension of the components is not as
advertised -- for instance, if the qubits suffer leakage -- errors can ensue and
protocols may be rendered insecure. We refine the method of delayed vectors,
adapted from classical chaos theory to quantum systems, and apply it remotely
on the IBMQ platform -- a quantum computer composed of transmon qubits. The
method witnesses, in a model-independent fashion, dynamical signatures of
higher-dimensional processes. We present evidence, under mild assumptions, that
the IBMQ transmons suffer state leakage, with a value no larger than
under a single qubit operation. We also estimate the number
of shots necessary for revealing leakage in a two-qubit system.
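A rough sketch of the delayed-vectors idea, under the simplifying assumption that a d-dimensional system's measured time series makes delay vectors of length d²+1 (nearly) linearly dependent, so a clearly non-zero smallest singular value of the delay matrix would hint at leakage. The data, vector length, and lack of any statistical threshold are illustrative, not the paper's analysis.

```python
# Build delay vectors from a measured time series and inspect singular values.
import numpy as np

def delay_matrix(series, dim):
    """Stack delay vectors of length `dim` taken from `series`."""
    n = len(series) - dim + 1
    return np.array([series[i:i + dim] for i in range(n)])

def smallest_singular_value(series, d=2):
    # For a qubit (d = 2), use delay vectors of length d**2 + 1 = 5.
    M = delay_matrix(np.asarray(series, dtype=float), d * d + 1)
    return np.linalg.svd(M, compute_uv=False)[-1]

# Hypothetical expectation values after k applications of the same operation
# (replace with data returned from the remote device).
series = [0.00, 0.31, 0.58, 0.80, 0.93, 0.99, 0.95, 0.83, 0.63, 0.38, 0.10]
print(f"smallest singular value: {smallest_singular_value(series):.3e}")
```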
Efficient verification/testing of system-on-chip through fault grading and analog behavioral modeling
This dissertation presents several cost-effective production test solutions using fault grading, together with mixed-signal design verification cases enabled by analog behavioral modeling. Although the latest System-on-Chip (SoC) designs are getting denser, faster, and more complex, the manufacturing technology is dominated by subtle defects introduced by small-scale technology. Thus, SoCs require more mature testing strategies. By performing various types of testing, better-quality SoCs can be manufactured, but test resources are too limited to accommodate all those tests. To create the most efficient production test flow, any redundant or ineffective tests need to be removed or minimized.
Chapter 3 proposes a new method of test data volume reduction that combines the nonlinear property of a feedback shift register (FSR) with dictionary coding. Instead of using the nonlinear FSR for the actual hardware implementation, the test set expanded by nonlinear expansion is used as the one-column test set, which provides a large reduction in test data volume. The experimental results show that the combined method reduces the total test data volume and increases the fault coverage, although the larger number of test patterns increases the total test time.
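A toy sketch of the dictionary-coding half of this scheme: frequent fixed-width slices of the test data are stored once and replaced by short indices, while rare slices stay literal behind a prefix bit. The slice width, dictionary size, and example patterns are arbitrary, and the nonlinear-FSR expansion is not modeled here.

```python
# Toy dictionary coding of test-data slices.
from collections import Counter

def dictionary_encode(patterns, width=4, dict_size=4):
    slices = [p[i:i + width] for p in patterns for i in range(0, len(p), width)]
    dictionary = [s for s, _ in Counter(slices).most_common(dict_size)]
    index_bits = max(1, (dict_size - 1).bit_length())
    bits = 0
    for s in slices:
        if s in dictionary:
            bits += 1 + index_bits          # prefix '1' + dictionary index
        else:
            bits += 1 + width               # prefix '0' + literal slice
    return dictionary, bits

patterns = ["00110011", "00111100", "11000011", "00110000"]
dictionary, coded_bits = dictionary_encode(patterns)
original_bits = sum(len(p) for p in patterns)
print(dictionary, f"{original_bits} -> {coded_bits} bits")
```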
Chapter 4 addresses the whole process of functional fault grading. Fault grading has always been a "desire-to-have" flow because it can bring significant value for cost saving and yield analysis. However, it is very hard to perform fault grading on a complex, large-scale SoC. A commercial tool called Z01X is used as the fault grading platform; the whole fault grading process is coordinated, and each detailed step is executed. Simulation-based functional fault grading identifies the quality of the given functional tests against static faults and transition delay faults (TDFs). With the structural tests and functional tests, functional fault grading can indicate how to achieve the same test coverage with minimal test time. Relative to the time and resources consumed by fault grading, its contribution to test time savings may not appear very promising, but the fault grading data can be reused for yield analysis and test flow optimization. For the final production testing, confident decisions on functional test selection can be made based on the fault grading results.
Chapter 5 addresses the challenges of Package-on-Package (POP) testing. Because POP devices have pins on both the top and the bottom of the package, the increased number of test pins requires more test channels to detect packaging defects. Boundary scan chain testing is used to detect continuity defects by relying on leakage current from the power supply. The proposed test scheme does not require direct test channels on the top pins. Based on the counting algorithm, a minimal number of test cycles is generated, and the test achieves full coverage for any combination of pin-to-pin short defects on the top pins of the POP package. The experimental results show roughly a tenfold increase in leakage current from a short defect. The scheme can also be expanded to multi-site testing with fewer test channels for high-volume production.
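A minimal sketch of a counting-style test sequence for pin-to-pin shorts, assuming each top pin is simply driven with its index in binary over ceil(log2(N)) cycles so that any two pins see opposite values in at least one cycle, which would pull measurable leakage current through a short. The chapter's actual algorithm and leakage measurement may differ in detail.

```python
# Counting-sequence test vectors for pin-to-pin short detection.
from itertools import combinations
from math import ceil, log2

def counting_vectors(num_pins):
    cycles = max(1, ceil(log2(num_pins)))
    # vectors[c][p] is the value driven on pin p during cycle c.
    return [[(p >> c) & 1 for p in range(num_pins)] for c in range(cycles)]

def covers_all_pairs(vectors):
    num_pins = len(vectors[0])
    return all(any(v[i] != v[j] for v in vectors)
               for i, j in combinations(range(num_pins), 2))

vectors = counting_vectors(16)
print(len(vectors), "test cycles,", "all pairs covered:", covers_all_pairs(vectors))
```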
Fault grading is applied within different structural test categories in Chapter 6. Stuck-at faults can be considered as TDFs having infinite delay. Hence, the TDF Automatic Test Pattern Generation (ATPG) tests can detect both TDFs and stuck-at faults. By removing the stuck-at faults being detected by the given TDF ATPG tests, the tests that target stuck-at faults can be reduced, and the reduced stuck-at fault set results in fewer stuck-at ATPG patterns. The structural test time is reduced while keeping the same test coverage. This TDF grading is performed with the same ATPG tool used to generate the stuck-at and TDF ATPG tests.
To expedite the mixed-signal design verification of a complex SoC, analog behavioral modeling methods and strategies are addressed in Chapter 7, and case studies of detailed verification with an actual mixed-signal design are addressed in Chapter 8. Analog modeling effort can enhance verification quality for a mixed-signal design with shorter turnaround time, and it enables compatible integration of mixed-signal design cores into the SoC. The modeling process may also reveal potential design errors or incorrect testbench setup, which minimizes unnecessary debugging time for quality devices.
Two mixed-signal design cases were verified using the analog models. The first is a fully hierarchical digital-to-analog converter (DAC) model: silicon mismatches caused by process variation are modeled and inserted into the DAC model, and the calibration algorithm for the DAC is successfully verified by model-based simulation at the full DAC level. When the mismatch amount is increased beyond the calibration capability of the DAC, the simulation results show increased calibration error with some outliers. This verification method can identify the saturation range of the DAC and predict the device yield under process variation.
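As an illustration of this style of behavioral modeling, the sketch below builds a thermometer-coded DAC whose unit elements carry random mismatch and reports the resulting worst-case INL. The resolution, mismatch sigma, and metric are assumptions for the sketch, not the verified design's parameters.

```python
# Behavioural unary DAC model with random element mismatch; report INL.
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 6
N_ELEMENTS = 2**N_BITS - 1
# Each unit element deviates from its nominal 1-LSB weight by ~0.5% (1-sigma).
weights = 1.0 + rng.normal(0.0, 0.005, N_ELEMENTS)

def dac_output(code: int) -> float:
    """Sum the mismatched unit elements selected by a thermometer code."""
    return weights[:code].sum()

codes = np.arange(N_ELEMENTS + 1)
outputs = np.array([dac_output(c) for c in codes])
# INL: deviation from the end-point-fit straight line, in LSBs.
ideal = outputs[0] + (outputs[-1] - outputs[0]) * codes / codes[-1]
inl = outputs - ideal
print(f"worst-case INL = {np.abs(inl).max():.3f} LSB")
```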
The second case is a phase-locked loop (PLL) design, also verified using analog models. Both an open-loop PLL model and a closed-loop PLL model are presented. Quick bring-up of the open-loop PLL model provides low simulation overhead for the widely used PLLs in the SoC and allows design verification of the upper-level design that uses the PLL-generated clocks to start early. An accurate closed-loop PLL model is implemented for a DCO-based PLL design, and mixed simulation with analog models and schematic designs enables flexible analog verification. Only the analog design block under focus is kept as the schematic design, and the rest of the analog design is replaced by the analog model. This scaled-down SPICE simulation runs about 10 to 100 times faster than a full-scale SPICE simulation. The analog model of the focused block is compared with the scaled-down SPICE simulation result, and the quality of the model is iteratively enhanced. Hence, the analog model enables both compatible integration and flexible analog design verification.
This dissertation contributes to reducing test time and enhancing test quality, and it helps to set up efficient production testing flows. Depending on the size and performance of the circuit under test (CUT), proper testing schemes can maximize the efficiency of production testing. The topics covered in this dissertation can be used in optimizing the test flow and selecting the final production tests to achieve maximum test capability. In addition, the strategies and benefits of the analog behavioral modeling techniques that I implemented are presented, and actual verification cases show the effectiveness of analog modeling for better-quality SoC products.