Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet which began as a research
experiment was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited
abstractions and modularity, especially for the control and management planes,
thus requiring for every new need a new protocol built from scratch. This led
to an unwieldy ossified Internet architecture resistant to any attempts at
formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence
of interest in applying formal methods to the specification, verification, and
synthesis of networking protocols and applications. In this paper, we present a
self-contained tutorial of the formidable amount of work that has been done in
formal methods, and a survey of its applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
Evolution of Test Programs Exploiting a FSM Processor Model
Microprocessor testing is becoming a challenging task, due to the increasing complexity of modern architectures. Nowadays, most architectures are tackled with a combination of scan chains and Software-Based Self-Test (SBST) methodologies. Among SBST techniques, evolutionary feedback-based ones prove effective in microprocessor testing; their main disadvantage, however, is the considerable time required to generate suitable test programs. A novel evolutionary-based approach, able to appreciably reduce the generation time, is presented. The proposed method exploits a high-level representation of the architecture under test and a dynamically built Finite State Machine (FSM) model to assess fault coverage without resorting to time-expensive simulations on low-level models. Experimental results on an OpenRISC processor show that the resulting test set achieves nearly complete fault coverage of the targeted fault model.
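To make the feedback loop concrete, here is a minimal Python sketch of an evolutionary generator whose fitness is coverage of an abstract FSM model rather than low-level fault simulation; the toy instruction set, next-state function, and GA parameters are illustrative assumptions, not details from the paper.

```python
import random

# Toy illustration (not the paper's tool): evolve test programs whose
# fitness is the number of distinct FSM transitions they fire on a
# high-level model, avoiding time-expensive low-level fault simulation.

OPS = ["ADD", "SUB", "AND", "OR", "XOR", "SHL"]  # hypothetical ISA subset

def random_program(length=8):
    return [(random.choice(OPS), random.randrange(8), random.randrange(8))
            for _ in range(length)]

def fsm_coverage(program):
    """Run the program on an abstract FSM model and count the
    transitions it fires (stand-in for the dynamically built FSM)."""
    state, seen = 0, set()
    for op, a, b in program:
        nxt = hash((state, op, a ^ b)) % 16   # toy next-state function
        seen.add((state, nxt))
        state = nxt
    return len(seen)

def mutate(program):
    child = list(program)
    i = random.randrange(len(child))
    child[i] = (random.choice(OPS), random.randrange(8), random.randrange(8))
    return child

population = [random_program() for _ in range(20)]
for generation in range(50):
    population.sort(key=fsm_coverage, reverse=True)
    parents = population[:5]                  # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fsm_coverage)
print("best coverage:", fsm_coverage(best))
```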
Test Generation Based on CLP
Functional ATPGs based on simulation are fast,
but generally, they are unable to cover corner cases, and
they cannot prove untestability. On the contrary, functional
ATPGs exploiting formal methods, being exhaustive,
cover corner cases, but they tend to suffer from the state
explosion problem when adopted to verify large designs.
In this context, we have defined a functional ATPG
that relies on the joint use of pseudo-deterministic simulation
and Constraint Logic Programming (CLP), to
generate high-quality test sequences for solving complex
problems. Thus, the advantages of both simulation-based
and static-based verification techniques are preserved, while
their respective drawbacks are limited. In particular, CLP,
a form of constraint programming in which logic programming
is extended with concepts from constraint satisfaction, is
well suited to joint use with simulation: information learned
during design exploration by simulation can be effectively
exploited to guide the search of a CLP solver toward DUV
areas not yet covered. CLP techniques are employed in
different phases of the test generation procedure.
The ATPG framework is composed of three functional
ATPG engines working on three different models of the
same DUV: the hardware description language (HDL)
model of the DUV, a set of concurrent EFSMs extracted
from the HDL description, and a set of logic constraints
modeling the EFSMs. The EFSM paradigm has been selected
since it allows a compact representation of the DUV
state space that limits the state explosion problem typical
of more traditional FSMs. The first engine is random-based,
the second is transition-oriented, and the last is
fault-oriented.
The test generation is guided by means of transition coverage and fault coverage. In particular, 100% transition
coverage is desired as a necessary condition for fault
detection, while the bit coverage functional fault model
is used to evaluate the effectiveness of the generated test
patterns by measuring the related fault coverage.
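As a rough illustration of the bit-coverage idea (each bit of each variable forced stuck-at-0/1, with a fault counted as detected when any output of the test sequence differs from the fault-free run), here is a hedged Python sketch; the 4-bit DUV behavior and the input sequence are invented for the example.

```python
# Rough sketch of the bit-coverage functional fault model: each bit of
# each variable is forced stuck-at-0/1; a fault is detected when any
# output of the test sequence differs from the fault-free run.
# The 4-bit DUV behavior and input sequence are invented.

def dut(a, b):
    return (a + b) & 0xF               # placeholder for the HDL model

def run(sequence, fault=None):
    outs = []
    for a, b in sequence:
        if fault:
            var, bit, value = fault    # ('a' or 'b', bit index, 0 or 1)
            if var == 'a':
                a = a | (1 << bit) if value else a & ~(1 << bit)
            else:
                b = b | (1 << bit) if value else b & ~(1 << bit)
        outs.append(dut(a, b))
    return outs

sequence = [(3, 5), (9, 9), (15, 1)]
golden = run(sequence)
faults = [(v, i, s) for v in "ab" for i in range(4) for s in (0, 1)]
detected = [f for f in faults if run(sequence, f) != golden]
print(f"bit coverage: {len(detected)}/{len(faults)} faults detected")
```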
A random engine is first used to explore the DUV state
space by performing a simulation-based random walk. This
allows us to quickly fire easy-to-traverse (ETT) transitions
and, consequently, to quickly cover easy-to-detect (ETD)
faults. However, most hard-to-traverse (HTT) transitions
generally remain uncovered.
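The random walk can be pictured with the following minimal Python sketch over an invented three-state EFSM, where one guard makes a transition hard to traverse; the states, guards, and 8-bit input are assumptions for illustration only.

```python
import random

# Minimal sketch of a simulation-based random walk over an EFSM.
# The three-state EFSM, its guards, and the input width are invented.

def guard_easy(regs, pi):
    return True                        # easy-to-traverse (ETT)

def guard_hard(regs, pi):
    return regs["r"] == 42             # hard-to-traverse (HTT)

def set_r(regs, pi):
    regs["r"] = pi                     # update function: r := pi

def keep(regs, pi):
    pass                               # update function: no change

EFSM = {
    "S0": [(guard_easy, set_r, "S1")],
    "S1": [(guard_easy, keep, "S0"), (guard_hard, keep, "S2")],
    "S2": [],
}

def random_walk(steps=1000):
    state, regs, fired = "S0", {"r": 0}, set()
    for _ in range(steps):
        pi = random.randrange(256)     # random primary input value
        enabled = [t for t in EFSM[state] if t[0](regs, pi)]
        if not enabled:                # dead end (S2 here): stop
            break
        guard, update, target = random.choice(enabled)
        update(regs, pi)
        fired.add((state, target))
        state = target
    return fired

# ETT edges are hit almost immediately; the HTT edge into S2 fires
# only when the previous input happened to be 42, and may stay uncovered.
print("transitions fired:", sorted(random_walk()))
```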
Thus, a transition-oriented engine is applied to
cover the remaining HTT transitions by exploiting a
learning/backjumping-based strategy.
The ATPG works on a special kind of EFSM, called
SSEFSM, whose transitions present the most uniformly
distributed probability of being activated, and which can be
effectively integrated with CLP, since it allows the ATPG to
invoke the constraint solver when moving between EFSM states.
A CLP-based strategy is adopted to deterministically
generate test vectors that satisfy the guards of the EFSM
transitions selected to be traversed. Given a transition of
the SSEFSM, the solver is required to generate suitable
values for the primary inputs (PIs) that enable the SSEFSM
to traverse that transition.
Moreover, backjumping, also known as non-chronological
backtracking, is a backtracking strategy that rolls back
from an unsuccessful situation directly to the cause of
the failure. Thus,
the transition-oriented engine deterministically backjumps
to the source of failure when a transition, whose guard
depends on previously set registers, cannot be traversed.
It then modifies the EFSM configuration to satisfy the
condition on the registers and returns to the target state
to activate the transition.
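A simplified sketch of the backjumping step follows, assuming a toy path encoding where each entry either writes a register or tests it; `solve_guard` stands in for the constraint solver, and the whole representation is illustrative rather than the tool's actual data structures.

```python
# Simplified sketch of backjumping (non-chronological backtracking):
# when a guarded transition fails on a register value, jump straight
# back to the step that last wrote the register, fix it, and resume.

def solve_guard(required_value):
    """Stand-in for the CLP solver: pick an input making r == required."""
    return required_value

def traverse(path):
    regs, last_writer, log = {"r": 0}, {}, []
    step = 0
    while step < len(path):
        kind, arg = path[step]
        if kind == "set":                     # transition writing r
            regs["r"] = arg
            last_writer["r"] = step
        elif kind == "need" and regs["r"] != arg:
            cause = last_writer.get("r", 0)   # the failure's real cause
            path[cause] = ("set", solve_guard(arg))
            log.append(f"backjump to step {cause}")
            step = cause                      # skip unrelated steps
            continue
        step += 1
    return log, regs

# Step 2 needs r == 42; backjump lands directly on step 1, which wrote r.
print(traverse([("set", 7), ("set", 3), ("need", 42)]))
```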
The transition-oriented engine generally allows us to
achieve 100% transition coverage. However, 100% transition
coverage does not guarantee that all DUV corner cases are
explored; thus, some hard-to-detect (HTD) faults can escape
detection, preventing the achievement of 100% fault coverage.
Therefore, the CLP-based fault-oriented engine is finally
applied to focus on the remaining HTD faults.
The CLP solver is used to deterministically search for
sequences that propagate the HTD faults observed, but not
detected, by the random and the transition-oriented engines.
The fault-oriented engine needs a CLP-based representation
of the DUV, and some searching functions to generate
test sequences. The CLP-based representation is automatically
derived from the SSEFSM models according to defined rules
that follow the syntax of the ECLiPSe CLP solver. This is
not a trivial task, since modeling the evolution in time of
an EFSM with logic constraints differs substantially from
modeling the same behavior in a traditional hardware
description language. First, the concept of time steps is
introduced to model the SSEFSM evolution over time via CLP.
Then, logical variables and constraints are modeled to
represent the enabling functions and update functions of
the SSEFSM.
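The time-step unrolling can be sketched as follows. The work targets the ECLiPSe CLP solver; as a stand-in, this Python sketch uses the z3 solver to show the same idea: one copy of each state, register, and input variable per time step, with enabling and update functions encoded as constraints linking step t to step t+1. The tiny EFSM is invented for illustration.

```python
from z3 import Int, Solver, Or, And, sat

# Sketch: unroll an invented three-state EFSM over K time steps as
# constraints (the paper uses ECLiPSe CLP; z3 is a stand-in here).
K = 4
s = Solver()
state = [Int(f"state_{t}") for t in range(K + 1)]
r     = [Int(f"r_{t}")     for t in range(K + 1)]
pi    = [Int(f"pi_{t}")    for t in range(K)]

s.add(state[0] == 0, r[0] == 0)              # reset configuration
for t in range(K):
    s.add(pi[t] >= 0, pi[t] < 256)
    s.add(Or(
        # S0 -> S1: always enabled; update function r := pi
        And(state[t] == 0, state[t + 1] == 1, r[t + 1] == pi[t]),
        # S1 -> S0: always enabled; r unchanged
        And(state[t] == 1, state[t + 1] == 0, r[t + 1] == r[t]),
        # S1 -> S2: enabling function r == 42 (a hard-to-traverse guard)
        And(state[t] == 1, r[t] == 42, state[t + 1] == 2, r[t + 1] == r[t]),
        # S2 self-loop, so shorter witnesses also satisfy the bound
        And(state[t] == 2, state[t + 1] == 2, r[t + 1] == r[t]),
    ))

s.add(state[K] == 2)                         # goal: traverse the HTT edge
if s.check() == sat:
    m = s.model()
    print([m[v] for v in pi])                # PI sequence reaching S2
```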
Formal tools that exhaustively search for a solution frequently
run out of resources when the state space to be analyzed
is too large. The same happens to the CLP solver when it is
asked to find a propagation sequence on large sequential
designs. Therefore, we have defined a set of strategies
that prune the search space and manage the complexity of
the problem for the solver.
Efficient verification/testing of system-on-chip through fault grading and analog behavioral modeling
This dissertation presents several cost-effective production test solutions using fault grading and mixed-signal design verification cases enabled by analog behavioral modeling. Although the latest System-on-Chip (SoC) designs are getting denser, faster, and more complex, the manufacturing technology is dominated by subtle defects introduced by small-scale process technology. Thus, SoCs require more mature testing strategies. By performing various types of testing, better-quality SoCs can be manufactured, but test resources are too limited to accommodate all those tests. To create the most efficient production test flow, any redundant or ineffective tests need to be removed or minimized.
Chapter 3 proposes a new method of test data volume reduction that combines the nonlinear property of the feedback shift register (FSR) with dictionary coding. Instead of using the nonlinear FSR for actual hardware implementation, the test set expanded by nonlinear expansion is used as the one-column test set, providing a large reduction in test data volume. The experimental results show that the combined method reduced the total test data volume and increased the fault coverage. Due to the increased number of test patterns, total test time increased.
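As a hedged illustration of the dictionary-coding half only (the nonlinear FSR expansion is specific to the chapter and not reproduced here), this toy Python sketch replaces frequent test-data words with short indices; the pattern data and field widths are invented.

```python
from collections import Counter
from math import ceil, log2

# Toy sketch of dictionary coding for test data: frequently occurring
# test-data words are replaced by short dictionary indices, each word
# carrying a one-bit coded/raw flag. Values below are invented.

test_set = ["1010", "1010", "0111", "1010", "0111", "0001"]

freq = Counter(test_set)
dictionary = [w for w, _ in freq.most_common(2)]      # 2-entry dictionary
index_bits = ceil(log2(len(dictionary))) + 1          # +1 coded/raw flag

encoded_bits = 0
for word in test_set:
    if word in dictionary:
        encoded_bits += index_bits                    # coded: flag + index
    else:
        encoded_bits += 1 + len(word)                 # raw: flag + literal

original_bits = sum(len(w) for w in test_set)
print(f"{original_bits} -> {encoded_bits} bits")
```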
Chapter 4 addresses the whole process of functional fault grading. Fault grading has always been a "desire-to-have" flow because it can bring significant value for cost saving and yield analysis. However, it is very hard to perform fault grading on a complex, large-scale SoC. A commercial tool called Z01X is used as the fault grading platform; the whole fault grading process is coordinated, and each detailed step is executed. Simulation-based functional fault grading identifies the quality of the given functional tests against static faults and transition delay faults. With the structural tests and functional tests, functional fault grading can indicate how to achieve the same test coverage while spending minimal test time. Compared with the time and resources consumed by fault grading, the contribution to test time savings may not seem very promising, but the fault grading data can be reused for yield analysis and test flow optimization. For final production testing, confident decisions on functional test selection can be made based on the fault grading results.
Chapter 5 addresses the challenges of Package-on-Package (POP) testing. Because POP devices have pins on both the top and the bottom of the package, the increased number of test pins requires more test channels to detect packaging defects. Boundary scan chain testing is used to detect continuity defects by relying on leakage current from the power supply. The proposed test scheme does not require direct test channels on the top pins. Based on the counting algorithm, a minimal number of test cycles is generated, and the test achieved full coverage for any combination of pin-to-pin short defects on the top pins of the POP package. The experimental results show roughly a tenfold increase in leakage current from a shorted defect. The scheme can also be expanded to multi-site testing with fewer test channels for high-volume production.
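The counting-style test can be sketched as follows, under the classical assumption that each top pin is assigned a unique binary code and each test cycle drives one bit position of every code; with N pins this needs only ceil(log2 N) cycles, and any two shorted pins have codes differing in at least one bit, which the cycle driving that bit exposes. The pin count and encoding details are illustrative, not the dissertation's exact scheme.

```python
from math import ceil, log2

# Toy counting-sequence generator: pin p gets code p; test cycle c
# drives bit c of every pin's code. Illustrative only.
def counting_vectors(num_pins):
    bits = ceil(log2(num_pins)) if num_pins > 1 else 1
    codes = [[(p >> b) & 1 for b in range(bits)] for p in range(num_pins)]
    return [[codes[p][c] for p in range(num_pins)] for c in range(bits)]

for cycle, vector in enumerate(counting_vectors(8)):
    print(f"cycle {cycle}: {vector}")   # 8 pins -> only 3 test cycles
```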
Fault grading is applied within different structural test categories in Chapter 6. Stuck-at faults can be considered TDFs having infinite delay; hence, the TDF Automatic Test Pattern Generation (ATPG) tests can detect both TDFs and stuck-at faults. By removing the stuck-at faults already detected by the given TDF ATPG tests, the tests that target stuck-at faults can be reduced, and the reduced stuck-at fault set results in fewer stuck-at ATPG patterns. The structural test time is reduced while keeping the same test coverage. This TDF grading is performed with the same ATPG tool used to generate the stuck-at and TDF ATPG tests.
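The fault-set reduction amounts to a set difference, sketched below in Python with invented fault names; a real flow would read the fault lists from the ATPG tool.

```python
# Sketch of the fault-set reduction described above: stuck-at faults
# already detected by the TDF pattern set are dropped before stuck-at
# ATPG. Fault names and detection data are invented for illustration.

tdf_detected_stuck_at = {"U1/A sa0", "U1/A sa1", "U3/Z sa0"}
full_stuck_at_list    = {"U1/A sa0", "U1/A sa1", "U2/B sa0",
                         "U3/Z sa0", "U3/Z sa1"}

remaining = full_stuck_at_list - tdf_detected_stuck_at
print(f"stuck-at targets: {len(full_stuck_at_list)} -> {len(remaining)}")
print(sorted(remaining))   # only these need dedicated stuck-at patterns
```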
To expedite the mixed-signal design verification of a complex SoC, analog behavioral modeling methods and strategies are addressed in Chapter 7, and case studies of detailed verification with actual mixed-signal designs are addressed in Chapter 8. Analog modeling effort can enhance verification quality for a mixed-signal design with less turnaround time, and it enables compatible integration of the mixed-signal design cores into the SoC. The modeling process may reveal potential design errors or incorrect testbench setups, minimizing unnecessary debugging time for quality devices.
I verified two mixed-signal design cases using the analog models. A fully hierarchical digital-to-analog converter (DAC) model is implemented; silicon mismatches caused by process variation are modeled and inserted into the DAC model, and the calibration algorithm for the DAC is successfully verified by model-based simulation at the full DAC level. When the mismatch amount is increased beyond the calibration capability of the DAC, the simulation results show increased calibration error with some outliers. This verification method can identify the saturation range of the DAC and predict the yield of devices under process variation.
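A behavioral DAC model with injected mismatch can be as simple as the following Python sketch: a binary-weighted DAC whose element weights carry Gaussian errors, checked against the ideal endpoint line. The resolution and the 1% mismatch sigma are assumptions for illustration, not the dissertation's figures.

```python
import random

# Behavioral sketch of a binary-weighted DAC with per-element mismatch
# (Gaussian, sigma relative to the unit element). The 8-bit resolution
# and 1% sigma are hypothetical values for illustration only.

BITS = 8
SIGMA = 0.01   # 1% unit-element mismatch (assumed)

weights = [(2 ** b) * (1 + random.gauss(0, SIGMA)) for b in range(BITS)]

def dac(code):
    """Sum the mismatched element weights selected by the input code."""
    return sum(w for b, w in enumerate(weights) if (code >> b) & 1)

# Crude INL-style check against the ideal endpoint straight line
lsb = dac(2 ** BITS - 1) / (2 ** BITS - 1)
inl = [dac(c) - c * lsb for c in range(2 ** BITS)]
print(f"worst-case INL: {max(abs(e) for e in inl):.3f} LSB")
```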
I also verified a phase-locked loop (PLL) design case using the analog models. Both an open-loop PLL model and a closed-loop PLL model are presented. Quick bring-up of the open-loop PLL model provides low simulation overhead for the widely used PLLs in the SoC and enables early design verification of the upper-level designs that use the PLL-generated clocks. An accurate closed-loop PLL model is implemented for a DCO-based PLL design, and mixed simulation with analog models and schematic designs enables flexible analog verification. Only the analog design block under focus is kept as a schematic design, while the rest of the analog design is replaced by analog models. This scaled-down SPICE simulation runs about 10 to 100 times faster than a full-scale SPICE simulation. The analog model of the focused block is compared with the scaled-down SPICE simulation result, and the quality of the model is iteratively enhanced. Hence, the analog model enables both compatible integration and flexible analog design verification.
This dissertation contributes to reducing test time and enhancing test quality, and helps establish efficient production testing flows. Depending on the size and performance of the circuit under test (CUT), proper testing schemes can maximize the efficiency of production testing. The topics covered in this dissertation can be used to optimize the test flow and select the final production tests that achieve maximum test capability. In addition, the strategies and benefits of the analog behavioral modeling techniques that I implemented are presented, and actual verification cases show the effectiveness of analog modeling for better-quality SoC products.
Persistent monitoring of digital ICs to verify hardware trust
The specialization of the semiconductor industry has resulted in a global Integrated Circuit (IC) supply chain that is susceptible to hardware Trojans: malicious circuitry that is embedded into the chip during the design cycle. This nefarious attack could compromise the mission-critical systems that implement these devices. While a trusted domestic IC supply chain exists with resources such as the Trusted Foundry Program, it is highly desirable to utilize the high yield, fast turnaround time, low cost, and leading-edge technology of the global IC supply chain. Research into the verification of hardware trust has made significant progress in recent years but is still far from a single, comprehensive solution. Most proposed solutions are one-time methods that attempt to detect hardware Trojans during the verification stage of the IC development process. While this is a desirable solution, it is not realistic given the current limitations of hardware Trojan detection techniques. We propose a more comprehensive solution that involves the persistent verification of hardware trust in the field, in addition to several one-time methods implemented during IC verification. We define a persistent verification framework that uses a few ICs from a secure process flow to persistently monitor and verify the operation of several untrusted ICs from the global supply chain. This allows the system integrator to realize the benefits of the global IC supply chain while maintaining the integrity of the system. We develop a system monitor that filters the IO of untrusted digital ICs for a set of patterns, which we refer to as digital signal signatures, to verify the operation of the devices.
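The filtering idea can be sketched in a few lines of Python: the monitor compares observed IO transactions against a whitelist of known-good signatures and flags anything else; the transaction encoding and signature set are invented for the example.

```python
# Sketch of the IO-filtering idea: a monitor scans the untrusted IC's
# IO stream for known-good "digital signal signatures" and flags any
# traffic that matches no signature. Patterns here are invented.

SIGNATURES = {"0xA5 READ", "0xA5 WRITE", "0x3C STATUS"}  # hypothetical

def monitor(io_stream):
    alerts = []
    for cycle, transaction in enumerate(io_stream):
        if transaction not in SIGNATURES:
            alerts.append((cycle, transaction))   # possible Trojan traffic
    return alerts

stream = ["0xA5 READ", "0xA5 WRITE", "0xDE EXFIL", "0x3C STATUS"]
print(monitor(stream))   # -> [(2, '0xDE EXFIL')]
```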
Doctor of Philosophy
The design of integrated circuits (ICs) requires exhaustive verification and a thorough test mechanism to ensure the functionality and robustness of the circuit. This dissertation employs the theory of relative timing, which enables designers to create designs with significant power and performance advantages over traditional clocked designs. Research has been carried out to enable the relative timing approach to be supported by commercial electronic design automation (EDA) tools. This allows asynchronous and sequential designs to be created using commercial CAD tools. However, two very significant holes in the flow exist: the lack of support for timing verification and for manufacturing test. Relative timing (RT) utilizes circuit delay to enforce and measure event sequencing in a circuit design. Asynchronous circuits can optimize the power-performance product by adjusting the circuit timing. A thorough analysis of the timing characteristics of each and every timing path is required to ensure the robustness and correctness of RT designs; all timing paths have to conform to the circuit timing constraints. This dissertation addresses back-end design robustness by validating full cyclical path timing verification with static timing analysis and by implementing design for testability (DFT). Circuit reliability and correctness are necessary for the technology to become commercially ready. In this study, scan chains, a commercial DFT implementation, are applied to burst-mode RT designs. In addition, a novel testing approach is developed along with scan chains to achieve over 90% fault coverage on two fault models: the stuck-at fault model and the delay fault model. This work evaluates the cost of DFT and its coverage trade-off, then determines the best implementation. Designs such as a 64-point fast Fourier transform (FFT) design, an I2C design, and a mixed-signal design are built to demonstrate the power, area, and performance advantages of the relative timing methodology, and they are used as a platform for developing back-end robustness. Results are verified by performing post-silicon timing validation and test. This work strengthens the overall relative-timed circuit flow, its reliability, and its testability.