Plug & Test at System Level via Testable TLM Primitives
With the evolution of Electronic System Level (ESL) design methodologies, we are experiencing an extensive use of Transaction-Level Modeling (TLM). TLM is a high-level approach to modeling digital systems in which the details of communication among modules are separated from those of the implementation of the functional units. This paper represents a first step toward the automatic insertion of testing capabilities at the transaction level through the definition of testable TLM primitives. The use of testable TLM primitives should help designers easily obtain testable transaction-level descriptions, implementing what we call a "Plug & Test" design methodology. The proposed approach is intended to work with both hardware and software implementations. In particular, this paper focuses on the design of a testable FIFO communication channel to show how designers are given the freedom to trade off complexity, testability level, and cost.
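As a rough, purely illustrative sketch of the underlying idea (not the paper's actual primitives; all names here are invented), the following C++ fragment separates a FIFO channel's transaction-level interface from its implementation and adds a built-in self-test hook in the "Plug & Test" spirit:

```cpp
// Illustrative sketch only: a FIFO "channel" whose communication interface is
// separated from its implementation, extended with a simple self-test hook.
// Names (FifoIf, TestableFifo, self_test) are hypothetical, not the paper's API.
#include <cstddef>
#include <cstdint>
#include <deque>
#include <iostream>

// Abstract transaction-level interface: callers see only put/get, not the storage.
struct FifoIf {
    virtual bool put(uint32_t word) = 0;
    virtual bool get(uint32_t& word) = 0;
    virtual ~FifoIf() = default;
};

// One possible implementation, augmented with a test primitive.
class TestableFifo : public FifoIf {
    std::deque<uint32_t> buf_;
    std::size_t depth_;
public:
    explicit TestableFifo(std::size_t depth) : depth_(depth) {}

    bool put(uint32_t w) override {
        if (buf_.size() >= depth_) return false;   // full
        buf_.push_back(w);
        return true;
    }
    bool get(uint32_t& w) override {
        if (buf_.empty()) return false;            // empty
        w = buf_.front();
        buf_.pop_front();
        return true;
    }

    // "Plug & Test" style hook: a transaction-level self-test that exercises
    // the channel with a walking-one pattern and checks first-in/first-out order.
    bool self_test() {
        for (std::size_t i = 0; i < depth_; ++i)
            if (!put(1u << (i % 32))) return false;
        for (std::size_t i = 0; i < depth_; ++i) {
            uint32_t w = 0;
            if (!get(w) || w != (1u << (i % 32))) return false;
        }
        return true;
    }
};

int main() {
    TestableFifo fifo(8);
    std::cout << "FIFO self-test " << (fifo.self_test() ? "passed" : "failed") << "\n";
}
```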
RT-level fast fault simulator
This paper presents a new fast fault simulation technique for computing fault propagation through High-Level Primitives (HLPs). Reduced Ordered Ternary Decision Diagrams (ROTDDs) are used to describe the HLP modules. The technique is implemented in the HTDD RT-level fault simulator, which is evaluated on several ITC'99 benchmarks. The experiments confirm the hypothesis that the coverage of physical failures achieved by a test set can be predicted with high accuracy when the RTL fault model takes into account the optimization strategies used by the applied CAE system.
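To give a flavour of the ternary reasoning involved (a toy sketch only, not the HTDD/ROTDD data structure), the following C++ fragment propagates a three-valued signal (0, 1, X) through a simple high-level primitive:

```cpp
// Illustrative ternary-logic sketch: shows how a third value 'X' (unknown)
// propagates through a simple primitive, which is the kind of evaluation a
// ternary decision diagram encodes compactly. Not the HTDD implementation.
#include <iostream>

enum class T { ZERO, ONE, X };   // ternary signal value

// Ternary AND: 0 dominates; the result is X if any remaining input is X.
T tand(T a, T b) {
    if (a == T::ZERO || b == T::ZERO) return T::ZERO;
    if (a == T::X    || b == T::X)    return T::X;
    return T::ONE;
}

// Ternary OR: 1 dominates.
T tor(T a, T b) {
    if (a == T::ONE || b == T::ONE) return T::ONE;
    if (a == T::X   || b == T::X)   return T::X;
    return T::ZERO;
}

const char* str(T v) { return v == T::ZERO ? "0" : v == T::ONE ? "1" : "X"; }

int main() {
    // A small primitive y = (a AND b) OR c, evaluated with an unknown on 'b'.
    T a = T::ONE, b = T::X, c = T::ZERO;
    std::cout << "y = " << str(tor(tand(a, b), c)) << "\n";  // prints X
}
```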
A Cross-level Verification Methodology for Digital IPs Augmented with Embedded Timing Monitors
Smart systems are characterized by the integration into a single device of subsystems from different technological domains: analog, digital, discrete and power devices, MEMS, and power sources. The challenges emerging from this heterogeneity, combined with the traditional challenges of digital design, directly impact the performance and propagation delay of the digital components. This article proposes a design approach that enhances the RTL model of a given digital component intended for integration into a smart system with automatically inserted delay sensors, which can detect and correct timing failures. The article then proposes a methodology to verify these added features at system level. The augmented model is abstracted to SystemC TLM and automatically injected with mutants (i.e., code mutations) that emulate delays and timing failures. The resulting TLM model is finally simulated to identify timing failures and to verify the correctness of the inserted delay monitors. Experimental results on three complex case studies demonstrate the applicability of the proposed design and verification methodology, thanks to an efficient sensor-aware abstraction technique.
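A minimal sketch of the mutation idea, in plain C++ rather than SystemC TLM and with invented names, might look as follows: a compile-time mutant adds an extra delay to a transaction, and a simple monitor flags the deadline violation:

```cpp
// Minimal sketch of the mutation idea only (hypothetical names, plain C++ rather
// than SystemC TLM): a mutant adds an extra delay to a transaction, and a simple
// monitor flags the transaction when the accumulated delay violates a deadline.
#include <cstdint>
#include <iostream>

struct Transaction {
    uint32_t data;
    uint64_t delay_ns;   // delay annotation accumulated along the transaction path
};

// Compile-time switch standing in for an automatically injected mutant.
#ifndef INJECT_DELAY_MUTANT
#define INJECT_DELAY_MUTANT 1
#endif

void process(Transaction& tr) {
    tr.delay_ns += 40;               // nominal processing latency
#if INJECT_DELAY_MUTANT
    tr.delay_ns += 25;               // mutant: extra delay emulating a slow path
#endif
}

// Embedded "timing monitor": detects transactions that miss the deadline.
bool timing_ok(const Transaction& tr, uint64_t deadline_ns) {
    return tr.delay_ns <= deadline_ns;
}

int main() {
    Transaction tr{0xCAFEu, 0};
    process(tr);
    std::cout << "delay = " << tr.delay_ns << " ns, "
              << (timing_ok(tr, 50) ? "OK" : "timing failure detected") << "\n";
}
```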
Innovative Techniques for Testing and Diagnosing SoCs
Our everyday welfare relies on the continued functioning of many electronic devices, most of which embed integrated circuits that keep getting cheaper and smaller while offering improved features. Nowadays, microelectronics can integrate a complete computer, with CPU, memories, and even GPUs, on a single die: a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, where they must be tested thoroughly to comply with reliability standards, in particular the ISO 26262 functional safety standard for road vehicles.
The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The proposed approaches, in the order in which they appear in this thesis, are the following:
1. Embedded Memory Diagnosis: Memories are dense and complex circuits that are susceptible to design and manufacturing errors, so it is important to understand how faults occur in the memory array. In practice, the logical and physical array representations differ because of design optimizations, collectively known as scrambling, that enhance the device. This part proposes an accurate memory diagnosis flow supported by a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, perform cumulative analysis, and produce a final fault-model hypothesis (a minimal descrambling sketch is shown after this list). Several SRAM failing syndromes, gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics, were analyzed as case studies. The tool displayed the defects virtually, and the results were confirmed by photographs taken with a microscope.
2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional: the former usually rely on embedded test modules targeting manufacturing defects and are effective only before the component is shipped to the customer; the latter can be applied in mission mode with minimal impact on performance, but are penalized by long generation times. Functional test patterns can, however, serve different goals in functional mission mode. Part III of this PhD thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, each targeting a different test purpose:
a. Functional Stress Patterns: suitable for maximizing functional stress during operational-life tests and burn-in screening, for an optimal characterization of device reliability.
b. Functional Power-Hungry Patterns: suitable for determining the functional peak power, used to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device overkill while delivering high test coverage.
c. Software-Based Self-Test (SBST) Patterns: combine the potential of structural patterns with that of functional ones, allowing their periodic execution in mission mode. In addition, an external hardware module communicating with the devised SBST is proposed; it increases fault coverage by 3% by testing critical Hardly Functionally Testable Faults not covered by conventional SBST patterns.
In the above approaches, an automatic functional test pattern generation flow exploiting an evolutionary algorithm, which maximizes metrics related to stress, power, and fault coverage, is employed to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% with respect to older methodologies, while significantly increasing the targeted metrics.
3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are useful for generating structural patterns, for testing and activating mitigation techniques, and for validating robust hardware and software applications. GPGPUs are known for fast parallel computation and are used in high-performance computing and advanced driver assistance systems, where reliability is a key concern. However, GPGPU manufacturers do not release their design descriptions, so commercial fault injectors cannot operate on a GPGPU model, leaving costly radiation tests as the only available resource. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into the memory elements of a real GPGPU. It exploits a software debugger and the C-CUDA grammar to select fault locations and apply bit-flip operations to program variables (a minimal bit-flip sketch is shown after this list). The goal is to validate robust parallel algorithms by studying fault propagation or by activating any redundancy mechanisms they embed. The effectiveness of the tool was evaluated on two robust applications: redundant parallel matrix multiplication and a floating-point Fast Fourier Transform.
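The descrambling step of the memory-diagnosis flow (item 1 above) can be pictured with the following toy C++ sketch; the scrambling rule used here is hypothetical, since real schemes are design-specific and proprietary:

```cpp
// Hypothetical descrambling sketch (actual scrambling schemes are design-specific
// and proprietary): maps a failing logical bit address to a physical row/column
// so that failing cells can be located and clustered in the physical array.
#include <cstdint>
#include <iostream>

struct Cell { uint32_t row; uint32_t col; };

// Toy scrambling rule: column bits are XOR-ed with the two lowest row bits.
Cell logical_to_physical(uint32_t addr, uint32_t cols_log2) {
    uint32_t col = addr & ((1u << cols_log2) - 1);
    uint32_t row = addr >> cols_log2;
    uint32_t phys_col = col ^ (row & 0x3);    // hypothetical descrambling rule
    return {row, phys_col};
}

int main() {
    // Map a few failing logical addresses reported by the test to physical cells;
    // cumulative analysis over many syndromes then looks for physical clustering.
    for (uint32_t addr : {0x1A2u, 0x1A6u}) {
        Cell c = logical_to_physical(addr, 4);
        std::cout << "logical 0x" << std::hex << addr
                  << " -> physical (row " << std::dec << c.row
                  << ", col " << c.col << ")\n";
    }
}
```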
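The bit-flip injection of item 3 is sketched below in plain C++ on a host variable (the thesis flow drives a real GPGPU through a debugger on C-CUDA code); the fault model shown, a single bit-flip in an intermediate value detected by a fault-free golden run, is only illustrative:

```cpp
// Bit-flip injection sketch in plain C++ (the thesis flow drives a real GPGPU
// through a debugger; here the same idea is shown on a host variable): flip one
// bit of an intermediate value and compare against a golden run, mimicking how
// fault propagation or a redundancy mechanism would be observed.
#include <cstdint>
#include <cstring>
#include <iostream>

// Flip bit 'bit' of a float by manipulating its raw representation.
float flip_bit(float v, unsigned bit) {
    uint32_t raw;
    std::memcpy(&raw, &v, sizeof raw);
    raw ^= (1u << bit);
    std::memcpy(&v, &raw, sizeof raw);
    return v;
}

// Dot product with an optional fault injected into one partial product.
float dot(const float* a, const float* b, int n, int faulty_idx, unsigned bit) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        float p = a[i] * b[i];
        if (i == faulty_idx) p = flip_bit(p, bit);   // injected fault
        acc += p;
    }
    return acc;
}

int main() {
    float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
    float golden = dot(a, b, 4, -1, 0);              // fault-free run
    float faulty = dot(a, b, 4, 2, 30);              // flip bit 30 of one product
    std::cout << "golden=" << golden << " faulty=" << faulty
              << (golden == faulty ? " (fault masked)" : " (fault detected)") << "\n";
}
```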
High-level verification flow for a high-level synthesis-based digital logic design
Abstract. High-level synthesis (HLS) is a method for generating a register-transfer level (RTL) hardware description of a digital logic design from high-level languages such as C/C++/SystemC or MATLAB. The performance and productivity benefits of HLS stem from the untimed, high-abstraction-level input languages. Another advantage is that design and verification can focus on the features and high-level architecture instead of the low-level implementation details.
The goal of this thesis was to define and implement a high-level verification (HLV) flow for an HLS design written in C++. The HLV flow takes advantage of the performance and productivity of C++ as opposed to hardware description languages (HDL) and minimises the required RTL verification work. The HLV flow was implemented in the case study of the thesis. The HLS design was verified in a C++ verification environment, and Catapult Coverage was used for pre-HLS coverage closure. Post-HLS verification and coverage closure were done in a Universal Verification Methodology (UVM) environment. The C++ tests used in pre-HLS coverage closure were reimplemented in UVM to obtain a high initial RTL coverage without manual RTL code analysis. The pre-HLS C++ design was integrated as a predictor into the UVM testbench to verify the equivalence of the C++ and RTL models and to speed up post-HLS coverage closure.
Results of the case study show that the HLV flow is feasible to implement in practice. The flow provides significant performance and productivity gains for verification in the C++ domain compared to UVM. The UVM reimplementation of a somewhat incomplete set of pre-HLS tests (reaching 90.60% C++ code coverage), together with formal exclusions, resulted in an initial post-HLS coverage of 96.90%. The C++ predictor implementation was a valuable tool in post-HLS coverage closure. A total of four weeks of coverage work in the pre- and post-HLS phases was required to reach 99% RTL coverage. This total does not include the time required to build the C++ and UVM verification environments.
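The predict-and-compare step enabled by the C++ predictor can be illustrated with the standalone C++ sketch below; the actual flow embeds the model inside the UVM testbench, and the reference function, stimuli, and responses used here are invented:

```cpp
// Standalone sketch of the predictor/scoreboard idea (hypothetical names and a
// trivial reference function): the pre-HLS C++ model predicts the expected
// output for each stimulus and is compared against the sampled RTL response.
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

// Pre-HLS reference model: a saturating 8-bit adder stands in for the design.
uint8_t predictor(uint8_t a, uint8_t b) {
    unsigned s = unsigned(a) + unsigned(b);
    return s > 255 ? 255 : uint8_t(s);
}

int main() {
    // Stimuli driven into the RTL and the responses a monitor would collect.
    std::vector<std::pair<uint8_t, uint8_t>> stimuli = {{10, 20}, {200, 100}, {0, 255}};
    std::vector<uint8_t> rtl_responses = {30, 255, 255};

    // Scoreboard: compare RTL responses against the predictor's expectations.
    int mismatches = 0;
    for (std::size_t i = 0; i < stimuli.size(); ++i) {
        uint8_t expected = predictor(stimuli[i].first, stimuli[i].second);
        if (expected != rtl_responses[i]) {
            ++mismatches;
            std::cout << "Mismatch at stimulus " << i << "\n";
        }
    }
    std::cout << (mismatches ? "C++/RTL divergence found\n" : "C++ and RTL agree\n");
}
```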
Functional Verification of Digital Systems Using Meta-Heuristic Algorithms
Technological trends such as autonomous vehicles, home automation, connected cars, and the IoT are based on high-capacity integrated systems or application-specific integrated circuits, and they demand ever more complex devices. New techniques are therefore needed to design more secure systems with a shorter time to market. In this context, verification is one of the largest costs of the manufacturing stage and the most expensive part of the design process. To reduce the time and cost of verification, artificial intelligence techniques based on optimizing the coverage of behavioral areas have been proposed. In this chapter, we describe the main techniques used in the functional verification of digital systems of medium complexity, focusing especially on meta-heuristic algorithms such as particle swarm optimization and genetic algorithms. Several results are presented and compared, and the remaining areas of opportunity are described.
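As an illustration of how such a meta-heuristic can drive coverage, the following toy C++ particle swarm optimization treats a made-up coverage figure as the fitness of two stimulus parameters; in a real flow the fitness would come from simulator coverage reports:

```cpp
// Toy particle swarm optimization sketch: particles encode two stimulus knobs
// and a hypothetical fitness function stands in for functional coverage reported
// by a simulator. Only the optimization loop is illustrated, not a testbench.
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

// Stand-in for "coverage (%)" as a function of two stimulus parameters.
double coverage(double x, double y) {
    return 100.0 - (std::pow(x - 3.0, 2) + std::pow(y + 1.0, 2));
}

double frand(double lo, double hi) { return lo + (hi - lo) * std::rand() / RAND_MAX; }

int main() {
    struct Particle { double x, y, vx, vy, bx, by, bf; };
    std::vector<Particle> swarm(20);
    double gx = 0, gy = 0, gf = -1e9;                    // global best
    for (auto& p : swarm) {
        p.x = frand(-10, 10); p.y = frand(-10, 10);
        p.vx = p.vy = 0;
        p.bx = p.x; p.by = p.y; p.bf = coverage(p.x, p.y);
        if (p.bf > gf) { gf = p.bf; gx = p.x; gy = p.y; }
    }
    for (int it = 0; it < 100; ++it) {
        for (auto& p : swarm) {
            // Standard PSO velocity update: inertia + cognitive + social terms.
            p.vx = 0.7 * p.vx + 1.5 * frand(0, 1) * (p.bx - p.x) + 1.5 * frand(0, 1) * (gx - p.x);
            p.vy = 0.7 * p.vy + 1.5 * frand(0, 1) * (p.by - p.y) + 1.5 * frand(0, 1) * (gy - p.y);
            p.x += p.vx; p.y += p.vy;
            double f = coverage(p.x, p.y);
            if (f > p.bf) { p.bf = f; p.bx = p.x; p.by = p.y; }
            if (f > gf)   { gf = f;  gx = p.x;  gy = p.y; }
        }
    }
    std::cout << "best coverage " << gf << "% at (" << gx << ", " << gy << ")\n";
}
```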
Driving the Network-on-Chip Revolution to Remove the Interconnect Bottleneck in Nanoscale Multi-Processor Systems-on-Chip
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes that allow the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, is often called a System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC).
MPSoC design brings to the foreground a large number of challenges, one of the most prominent being the design of the chip interconnect. With the number of on-chip blocks currently in the tens, and quickly approaching the hundreds, the question of how best to provide on-chip communication resources is clearly felt.
Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they involve also help greatly in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline, and several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best trade-off among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate, and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to assess the physical implementation of NoCs to evaluate their suitability for next-generation designs and their area and power costs.
This dissertation performs a design-space exploration of network-on-chip architectures in order to point out the trade-offs associated with the design of each individual network building block and with the design of the network topology as a whole. The design-space exploration is preceded by a comparative analysis of state-of-the-art interconnect fabrics, both among themselves and against early network-on-chip prototypes. The ultimate objective is to point out the key advantages that NoC realizations provide with respect to state-of-the-art communication infrastructures, and the challenges that must be overcome to make this new interconnect technology a reality. Among the latter, technology-related challenges are emerging that call for dedicated design techniques at all levels of the design hierarchy, in particular leakage power dissipation and the containment of process variations and their effects. The achievement of the above objectives was enabled by a NoC simulation environment for cycle-accurate modelling and simulation and by a back-end facility for studying the effects of NoC physical implementation. All the results provided by this work have been validated on actual silicon layouts.
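A toy C++ sketch of the kind of design-space pruning such tools automate is given below; the mesh configurations and first-order latency/area models are hypothetical:

```cpp
// Toy design-space sweep (hypothetical cost models only): enumerate a few 2D-mesh
// NoC configurations and rank them by a weighted latency/area score, illustrating
// the kind of pruning a real NoC design tool automates.
#include <algorithm>
#include <iostream>
#include <vector>

struct Config { int mesh_x, mesh_y, flit_bits, buf_depth; double score; };

int main() {
    std::vector<Config> space;
    for (int x : {2, 4})
      for (int y : {2, 4})
        for (int flit : {32, 64})
          for (int depth : {2, 4, 8}) {
            // Hypothetical first-order models: latency falls with buffering and
            // flit width; area grows with router count, link width, buffer depth.
            double latency = 10.0 * (x + y) / 2.0 - 0.5 * depth - 0.02 * flit;
            double area    = x * y * (0.01 * flit + 0.05 * depth);
            space.push_back({x, y, flit, depth, latency + 2.0 * area});
          }
    std::sort(space.begin(), space.end(),
              [](const Config& a, const Config& b) { return a.score < b.score; });
    const Config& best = space.front();
    std::cout << "best: " << best.mesh_x << "x" << best.mesh_y << " mesh, "
              << best.flit_bits << "-bit flits, buffer depth " << best.buf_depth
              << ", score " << best.score << "\n";
}
```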