Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge
Today, design verification is by far the most resource- and time-consuming activity in any new digital integrated circuit development. Within this area, the vast majority of the verification effort in industry relies on simulation platforms, implemented either in hardware or in software. A "simulator" includes a model of each component of a design and can simulate its behavior under any input scenario an engineer provides. Simulators are thus deployed to evaluate the behavior of a design under as many input scenarios as possible, and to identify and debug all incorrect functionality. Two simulator features are critical for the validation effort to be effective: performance and checking/debugging capabilities. A wide range of simulator platforms is available today. At one end of the spectrum are software-based simulators, which provide a very rich software infrastructure for checking and debugging the design's functionality but execute at only 1-10 simulation cycles per second (while actual chips operate at GHz speeds). At the other end are hardware-based platforms, such as accelerators, emulators, and even prototype silicon chips, which provide performance 4 to 9 orders of magnitude higher at the cost of very limited or non-existent checking/debugging capabilities. As a result, simulation-based validation today is crippled: one can have either satisfactory performance on hardware-accelerated platforms or critical checking/debugging infrastructure on software simulators, but not both.
This dissertation brings together these two ends of the spectrum by presenting solutions that offer high-performance simulation with effective checking and debugging capabilities. Specifically, it addresses the performance challenge of software simulators by leveraging inexpensive off-the-shelf graphics processors as massively parallel execution substrates and exposing the parallelism inherent in the design model to that architecture. For hardware-based platforms, the dissertation provides solutions that offer enhanced checking and debugging capabilities by abstracting the relevant data to be logged during simulation, so as to minimize the cost of collection, transfer, and processing. Altogether, the contributions of this dissertation have the potential to solve the challenge of digital design verification by enabling effective high-performance simulation-based validation.
PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99781/1/dchatt_1.pd
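As a rough illustration of the parallelism the dissertation exposes to the GPU, the sketch below simulates a levelized gate netlist in which every gate within a level is independent of the others. NumPy vectorization over a batch of input scenarios stands in for the GPU's parallel lanes; the netlist format, gate names, and stimuli are invented for the example and are not the dissertation's actual simulator.

    import numpy as np

    # Hypothetical netlist: each gate is (output, op, input_a, input_b),
    # topologically levelized so that every gate in a level depends only on
    # earlier levels -- gates within a level can be evaluated concurrently,
    # which is the parallelism a GPU substrate exploits.
    LEVELS = [
        [("n1", "AND", "a", "b"), ("n2", "OR", "c", "d")],  # level 0
        [("n3", "XOR", "n1", "n2")],                        # level 1
    ]

    def simulate(levels, stimuli):
        """Evaluate the netlist over a whole batch of input scenarios at once."""
        ops = {"AND": np.logical_and, "OR": np.logical_or, "XOR": np.logical_xor}
        nets = {k: np.asarray(v, dtype=bool) for k, v in stimuli.items()}
        for level in levels:             # sequential across levels
            for out, op, a, b in level:  # independent within a level
                nets[out] = ops[op](nets[a], nets[b])
        return nets

    # Four input scenarios simulated in one vectorized pass.
    out = simulate(LEVELS, {"a": [0, 1, 1, 1], "b": [0, 1, 0, 1],
                            "c": [0, 0, 1, 0], "d": [1, 0, 0, 0]})
    print(out["n3"].astype(int))  # -> [1 1 1 1]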
Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021
The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.
Efficient verification/testing of system-on-chip through fault grading and analog behavioral modeling
This dissertation presents several cost-effective production test solutions using fault grading, together with mixed-signal design verification cases enabled by analog behavioral modeling. Although the latest System-on-Chip (SoC) designs are getting denser, faster, and more complex, manufacturing is increasingly dominated by subtle defects introduced by small-scale process technology; SoCs therefore require more mature testing strategies. Performing many types of tests yields better-quality SoCs, but test resources are too limited to accommodate them all. To create the most efficient production test flow, any redundant or ineffective tests must be removed or minimized.
Chapter 3 proposes a new method of test data volume reduction that combines the nonlinear property of the feedback shift register (FSR) with dictionary coding. Instead of using the nonlinear FSR in an actual hardware implementation, the test set expanded by nonlinear expansion is used as the one-column test set, providing a large reduction in test data volume. The experimental results show that the combined method reduced the total test data volume and increased the fault coverage, although the larger number of test patterns increases total test time.
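A hedged sketch of the dictionary-coding half of this scheme (the nonlinear FSR expansion is omitted, and the slice width and patterns are invented for illustration): slices of the test set that repeat are stored once in a dictionary and replaced by short indices.

    from collections import Counter

    def dictionary_encode(patterns, slice_width=8):
        """Split each pattern into fixed-width slices and code every slice as
        an index into a dictionary of the distinct slices seen."""
        slices = [p[i:i + slice_width]
                  for p in patterns for i in range(0, len(p), slice_width)]
        dictionary = [s for s, _ in Counter(slices).most_common()]
        index = {s: i for i, s in enumerate(dictionary)}
        return dictionary, [index[s] for s in slices]

    tests = ["0110011001100110", "0110011011111111"]
    dic, enc = dictionary_encode(tests)
    # Four slices reference only two dictionary entries; volume shrinks
    # whenever slices repeat across patterns, as in typical scan test sets.
    print(len(dic), enc)  # -> 2 [0, 0, 0, 1]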
Chapter 4 addresses the whole process of functional fault grading. Fault grading has always been a "desire-to-have" flow because it can deliver significant value for cost saving and yield analysis; however, it is very hard to perform on a complex, large-scale SoC. A commercial tool called Z01X is used as the fault-grading platform, and the whole fault-grading process is coordinated and each step executed in detail. Simulation-based functional fault grading measures the quality of the given functional tests against static faults and transition delay faults (TDFs). Combined with the structural tests, functional fault grading can indicate how to achieve the same test coverage while spending minimal test time. Relative to the time and resources fault grading consumes, the test-time savings alone may not look very promising, but the fault-grading data can be reused for yield analysis and test flow optimization, and confident decisions on functional test selection for final production testing can be made based on the fault-grading results.
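One way to read "achieve the same test coverage while spending minimal test time" is as a weighted set-cover problem over the fault-grading results. The toy selection below uses a greedy heuristic with invented test names, fault IDs, and runtimes; it is not the Z01X flow itself, only a sketch of the selection step it enables.

    def select_tests(fault_sets, runtimes):
        """Greedy weighted set cover: fault_sets maps test -> detected faults,
        runtimes maps test -> seconds of tester time."""
        remaining = set().union(*fault_sets.values())
        chosen = []
        while remaining:
            # Pick the test with the best detected-faults-per-second payoff.
            best = max(fault_sets,
                       key=lambda t: len(fault_sets[t] & remaining) / runtimes[t])
            if not fault_sets[best] & remaining:
                break  # leftover faults are undetected by every given test
            chosen.append(best)
            remaining -= fault_sets[best]
        return chosen

    graded = {"func_a": {1, 2, 3}, "func_b": {3, 4}, "func_c": {1, 2, 3, 4}}
    seconds = {"func_a": 5.0, "func_b": 2.0, "func_c": 9.0}
    print(select_tests(graded, seconds))  # -> ['func_b', 'func_a']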
Chapter 5 addresses the challenges of Package-on-Package (PoP) testing. Because PoP devices have pins on both the top and the bottom of the package, the additional test pins require more test channels to detect packaging defects. Boundary scan chain testing is used to detect these continuity defects by relying on leakage current from the power supply. The proposed test scheme does not require direct test channels on the top pins. Based on the counting algorithm, a minimal number of test cycles is generated, and the test achieves full coverage of any combination of pin-to-pin short defects on the top pins of the PoP package. The experimental results show roughly 10 times higher leakage current from a shorted defect. The scheme can also be expanded to multi-site testing with fewer test channels for high-volume production.
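The abstract does not spell out the counting algorithm, but a common counting-sequence scheme consistent with it assigns each top pin a distinct binary codeword of ceil(log2 n) bits and drives one bit position per cycle; any two shorted pins then disagree in at least one cycle, with one driving high and the other low, so the short draws detectable leakage. The sketch below generates such patterns (pin counts and indexing are illustrative, not taken from the chapter).

    import math

    def counting_patterns(num_pins):
        """pattern[c][p] is the logic value driven on pin p during cycle c."""
        cycles = math.ceil(math.log2(num_pins))
        return [[(pin >> c) & 1 for pin in range(num_pins)]
                for c in range(cycles)]

    for cycle, vec in enumerate(counting_patterns(8)):
        print(f"cycle {cycle}: {vec}")
    # 8 pins need only 3 cycles; every pin pair differs in some cycle, so all
    # pin-to-pin shorts among the top pins are covered.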
Chapter 6 applies fault grading across different structural test categories. Stuck-at faults can be considered TDFs having infinite delay; hence, the TDF Automatic Test Pattern Generation (ATPG) tests can detect both TDFs and stuck-at faults. By removing the stuck-at faults already detected by the given TDF ATPG tests, the set of targeted stuck-at faults shrinks, and the reduced fault set yields fewer stuck-at ATPG patterns. Structural test time is reduced while keeping the same test coverage. This TDF grading is performed with the same ATPG tool used to generate the stuck-at and TDF ATPG tests.
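The fault-set arithmetic behind this reduction is simple; a minimal sketch with invented fault names, assuming the grading step reports which stuck-at faults the TDF patterns already detect:

    def reduce_stuck_at(stuck_at_faults, detected_by_tdf_tests):
        """Faults the TDF patterns already catch need no stuck-at patterns."""
        return stuck_at_faults - detected_by_tdf_tests

    stuck_at = {"u1/A:sa0", "u1/A:sa1", "u2/Z:sa0", "u3/B:sa1"}
    graded_out = {"u1/A:sa0", "u2/Z:sa0"}       # reported by TDF grading
    print(sorted(reduce_stuck_at(stuck_at, graded_out)))
    # -> ['u1/A:sa1', 'u3/B:sa1']: only these still need stuck-at ATPG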
To expedite the mixed-signal design verification of complex SoCs, analog behavioral modeling methods and strategies are addressed in Chapter 7, and case studies of detailed verification with actual mixed-signal designs are addressed in Chapter 8. Analog modeling effort can enhance verification quality for a mixed-signal design with shorter turnaround time, and it enables compatible integration of mixed-signal design cores into the SoC. The modeling process may also reveal potential design errors or incorrect testbench setup, minimizing unnecessary debugging time for quality devices.
I verified two mixed-signal design cases using the analog models. First, a fully hierarchical digital-to-analog converter (DAC) model is implemented, silicon mismatches caused by process variation are modeled and injected into it, and the DAC's calibration algorithm is successfully verified by model-based simulation at the full DAC level. When the mismatch is increased beyond the calibration capability of the DAC, the simulation results show increased calibration error with some outliers. This verification method can identify the saturation range of the DAC and predict the yield of devices under process variation.
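A minimal behavioral sketch in the same spirit (the DAC architecture, mismatch sigma, and trim range below are assumed for illustration, not taken from the dissertation): element mismatch is injected into a unary DAC model, and a calibration trim that saturates reproduces the outlier behavior described above.

    import numpy as np

    rng = np.random.default_rng(1)
    BITS = 6
    # Unary (thermometer-coded) elements with 2% sigma process mismatch.
    elements = 1.0 + rng.normal(0.0, 0.02, size=2**BITS - 1)

    def inl(weights):
        """Worst-case deviation of the transfer curve from an ideal ramp, in LSB."""
        actual = np.concatenate([[0.0], np.cumsum(weights)])
        ideal = np.arange(weights.size + 1) * weights.mean()
        return np.max(np.abs(actual - ideal))

    # Calibration trims each element toward nominal but saturates at +/-3%;
    # elements with mismatch beyond the trim range remain as outliers, the
    # saturation effect the model-based simulation is meant to expose.
    trim = np.clip(1.0 - elements, -0.03, 0.03)
    print(f"INL: {inl(elements):.4f} LSB before, "
          f"{inl(elements + trim):.4f} LSB after calibration")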
Second, phase-locked loop (PLL) design cases were verified using analog models, covering both an open-loop and a closed-loop PLL model. Quick bring-up of the open-loop PLL model provides low simulation overhead for the widely used PLLs in the SoC and enables early design verification of the upper-level designs that use the PLL-generated clocks. An accurate closed-loop PLL model is implemented for a DCO-based PLL design, and mixed simulation of analog models with schematic designs enables flexible analog verification. Only the analog block under focus is kept as a schematic design, while the rest of the analog design is replaced by the analog model; this scaled-down SPICE simulation runs about 10 to 100 times faster than a full-scale SPICE simulation. The analog model of the focused block is compared against the scaled-down SPICE simulation result, and the quality of the model is iteratively improved. The analog model thus enables both compatible integration and flexible analog design verification.
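A loose illustration of a closed-loop behavioral PLL model (the gains, frequencies, and linear DCO tuning law are invented; the dissertation's DCO-based model is far more detailed): simple difference equations per reference cycle keep simulation orders of magnitude cheaper than transistor-level SPICE.

    # 25 MHz reference, /40 feedback -> 1 GHz target; linear DCO tuning model.
    f_ref, n_div = 25e6, 40
    f_free, k_dco = 0.8e9, 0.5e9        # free-running freq, gain in Hz/unit
    kp, ki = 0.4, 0.08                  # PI loop-filter gains (assumed)
    integ, ctrl = 0.0, 0.0

    for cycle in range(300):            # one update per reference cycle
        f_dco = f_free + k_dco * ctrl   # DCO output frequency
        err = (f_ref - f_dco / n_div) / f_ref   # normalized frequency error
        integ += ki * err               # integral path holds the lock point
        ctrl = kp * err + integ         # PI loop-filter output
    print(f"settled DCO frequency: {f_dco/1e9:.4f} GHz")  # -> ~1.0000 GHz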
This dissertation contributes to reducing test time and enhancing test quality, and it helps establish efficient production test flows. Depending on the size and performance of the circuit under test (CUT), proper testing schemes can maximize the efficiency of production testing. The topics covered in this dissertation can be used to optimize the test flow and select the final production tests that achieve maximum test capability. In addition, the strategies and benefits of the analog behavioral modeling techniques that I implemented are presented, and actual verification cases show the effectiveness of analog modeling for better-quality SoC products.
Electrical and Computer Engineering
Kiel Declarative Programming Days 2013
This report contains the papers presented at the Kiel Declarative Programming Days 2013, held in Kiel (Germany) during September 11-13, 2013. The Kiel Declarative Programming Days 2013 unified the following events:
* 20th International Conference on Applications of Declarative Programming and Knowledge Management (INAP 2013)
* 22nd International Workshop on Functional and (Constraint) Logic Programming (WFLP 2013)
* 27th Workshop on Logic Programming (WLP 2013)
All these events center around declarative programming, an advanced paradigm for modeling and solving complex problems. These specification and implementation methods have attracted increasing attention over the last decades, e.g., in the domains of databases and natural language processing, for modeling and processing combinatorial problems, and for the high-level programming of complex, in particular knowledge-based, systems.
Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022
The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.
Towards Automated Security Validation for Hardware Designs
Hardware provides the foundation of trust for computer systems. Defects in hardware designs routinely cause vulnerabilities that are exploitable by malicious software and compromise the security of the entire system. While mature hardware validation tools exist, they were primarily designed for checking functional correctness; how to systematically detect security-critical defects remains an open and challenging question. In this dissertation, I develop formal methods and practical tools for automated hardware security validation. To identify and develop security-critical properties for hardware designs, I created SCIFinder, a methodology that leverages known vulnerabilities to mine and learn security invariants. I show that security vulnerabilities, together with machine learning techniques, can yield a set of security properties that detect both known and unknown security bugs in the OR1200 processor. I also propose a method to develop security-critical properties by leveraging existing ones, embodied in a tool, Transys, that translates security properties across similar or different versions of hardware designs; I demonstrate that translating security properties across AES hardware, RSA hardware, and RISC processors is feasible and lightweight. Given the security properties, I developed Coppelia to validate the security of hardware designs, proposing a hardware-oriented backward symbolic execution strategy to find violations and generate exploit programs. I successfully generated exploits for known security bugs on the OR1200 processor, and discovered and generated exploit programs for 4 previously unknown bugs across two different processors and architectures.
Doctor of Philosophy
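A toy rendering of the invariant-mining idea (the real SCIFinder methodology uses known vulnerabilities plus machine learning; the signal names, traces, and candidate properties here are invented): keep the candidate properties that hold on clean traces yet fail on a buggy one.

    # Clean traces (no exploit) and a buggy trace exercising the flaw.
    traces_clean = [
        {"supervisor": 1, "write_spr": 1},
        {"supervisor": 1, "write_spr": 0},
        {"supervisor": 0, "write_spr": 0},
    ]
    traces_buggy = [{"supervisor": 0, "write_spr": 1}]  # user-mode SPR write

    candidates = [
        ("write_spr -> supervisor",
         lambda s: s["supervisor"] or not s["write_spr"]),
        ("never write_spr", lambda s: not s["write_spr"]),
    ]

    # Keep properties that hold on every clean trace yet flag the buggy one.
    invariants = [name for name, holds in candidates
                  if all(holds(s) for s in traces_clean)
                  and not all(holds(s) for s in traces_buggy)]
    print(invariants)  # -> ['write_spr -> supervisor']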
Realizability of embedded controllers: from hybrid models to correct implementations
An embedded controller is a reactive device (e.g., a suitable combination of hardware and software components) that is embedded in a dynamical environment and has to react to environment changes in real time.
Embedded controllers are widely adopted in many contexts of modern life, from automotive to avionics, from consumer electronics to medical equipment. Notably, the correctness of such controllers is crucial. When designing and verifying an embedded controller, the need often arises to model not only the controller but also its surrounding environment. The resulting system is hybrid, because it includes both discrete-event (i.e., controller) and continuous-time (i.e., environment) processes whose dynamics cannot be characterized faithfully using either a discrete or a continuous model alone. Systems of this kind are called cyber-physical systems (CPS) or hybrid systems.
Different types of models may be used to describe hybrid systems, each focusing on different objectives: detailed models are excellent for simulation but not suitable for verification, while high-level models are excellent for verification but not convenient for refinement, and so forth. Among these models, hybrid automata (HA) [8, 77] have been proposed as a powerful formalism for the design, simulation, and verification of hybrid systems. In particular, a hybrid automaton represents discrete-event processes by means of finite state machines (FSMs), whereas continuous-time processes are represented by real-valued variables whose dynamics are specified by ordinary differential equations (ODEs) or their generalizations (e.g., differential inclusions).
Unfortunately, when the high-level model of the hybrid system is a hybrid automaton, several difficulties must be overcome to automate the refinement phase of the design flow, because of the classical semantics of hybrid automata. Hybrid automata can be considered perfect and instantaneous devices: they adopt a notion of time, and of evaluation of continuous variables, based on dense sets of values (usually R, i.e., the reals). Thus, they can sample the state (i.e., the value assignments of the variables) of the hybrid system at any instant in a dense set such as R≥0. Further, they are capable of instantaneously evaluating guard constraints and reacting to incoming events by changing the operating mode of the hybrid system without any delay. While these aspects are convenient at the modeling level, any model of an embedded controller that relies on such precision and instantaneity for its correctness cannot be implemented by any hardware/software device, no matter how fast it is. In other words, the controller is unrealizable, i.e., unimplementable.
This thesis proposes a complete methodology and a framework for deriving, from hybrid automata proven correct in the hybrid domain, correct realizable models of embedded controllers and the related discrete implementations. In a realizable model, the controller samples the state of the environment at periodic discrete time instants, typically fixed by the clock frequency of the processor implementing the controller. The state of the environment consists of the current values of the relevant variables as observed by the sensors. These values are digitized with finite precision and reported to the controller, which may decide to switch the operating mode of the environment. In that case, the controller generates suitable output signals that, once transmitted to the actuators, effect the desired change in the operating mode. It is worth noting that the sensors report the current values of the variables, and the actuators effect changes in the rates of evolution of the variables, with bounded delays.
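A minimal sketch of this realizable-controller view, with assumed numbers (the sampling period, quantization step, guard threshold, dynamics, and one-tick actuator delay are all invented): the plant evolves continuously, but the controller only sees quantized samples at clock ticks, so it overshoots the ideal guard by a bounded amount that the refined model must be proven to tolerate.

    DT = 0.05        # controller sampling period in seconds (assumed)
    QUANT = 0.1      # sensor quantization step (assumed)
    THRESH = 10.0    # ideal guard: switch mode when x >= 10

    x, mode, pending = 0.0, "fill", None
    for tick in range(400):
        x += (2.0 if mode == "fill" else -1.0) * DT  # plant integrates continuously
        if pending is not None:                      # actuator acts one tick late
            mode, pending = pending, None
            print(f"switched at x = {x:.2f} (ideal guard at {THRESH})")
            break
        sensed = round(x / QUANT) * QUANT            # finite-precision sample
        if mode == "fill" and sensed >= THRESH:
            pending = "drain"
    # The ideal hybrid automaton switches exactly at x == 10; the realizable
    # controller overshoots by up to one period plus quantization error.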