An approach to Measure Transition Density of Binary Sequences for X-filling based Test Pattern Generator in Scan based Design
Computation of switching activity and transition density is an essential stage of dynamic power estimation and testing-time reduction. The study of the switching activities, transition densities, and weighted switching activities of pseudo-random binary sequences generated by linear feedback shift registers and feed-forward shift registers plays a crucial role in design approaches for built-in self-test, cryptosystems, secure scan designs, and other applications. This paper proposes an approach to find transition densities, which play an important role in choosing a test pattern generator. We analyze conventional and proposed designs using our approach, and this work also reports the testing times of benchmark circuits. The outcome of this paper is presented in the form of an algorithm, theorems with proofs, and analysis tables that support these results. The proposed algorithm reduces switching activity and testing time by up to 51.56% and 84.61%, respectively.
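For intuition, transition density here is the fraction of adjacent bit positions in a sequence that differ. Below is a minimal sketch, not the paper's algorithm: it measures the transition density of a bit stream produced by a small Fibonacci LFSR whose feedback polynomial is chosen purely for illustration.

```python
# A minimal sketch (not the paper's algorithm): estimating the transition
# density of a pseudo-random bit sequence produced by an LFSR.
# The 4-bit tap positions below are illustrative, not taken from the paper.

def lfsr_bits(seed, taps, n):
    """Yield n output bits of a Fibonacci LFSR with the given tap positions."""
    state = seed
    width = max(taps)
    for _ in range(n):
        yield state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (width - 1))

def transition_density(bits):
    """Fraction of adjacent bit pairs that differ: D = transitions / (len - 1)."""
    bits = list(bits)
    transitions = sum(b1 != b2 for b1, b2 in zip(bits, bits[1:]))
    return transitions / (len(bits) - 1)

# x^4 + x^3 + 1 (maximal-length, period 15) -- example polynomial only
print(transition_density(lfsr_bits(seed=0b1001, taps=[4, 3], n=15)))
```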
Fault-Independent Test-Generation for Software-Based Self-Testing
Software-based self-test (SBST) is widely used in both manufacturing and in-the-field testing of processor-based devices and Systems-on-Chip. Unfortunately, the stuck-at fault model is increasingly inadequate to match the new and different types of defects in the most recent semiconductor technologies, while the explicit and separate targeting of every fault model in SBST is cumbersome due to the high complexity of the test-generation process, the lack of automation tools, and the high CPU intensity of the fault-simulation process. Moreover, defects in advanced semiconductor technologies are not always covered by the most commonly used fault models, so the probability of defect escapes increases even more. To overcome these shortcomings we propose the first fault-independent SBST method. The proposed method is almost fully automated, offers high coverage of non-modeled faults by means of a novel SBST-oriented probabilistic metric, and is very fast because it omits the time-consuming test-generation and fault-simulation processes. Extensive experiments on the OpenRISC OR1200 processor show the advantages of the proposed method.
Efficient verification/testing of system-on-chip through fault grading and analog behavioral modeling
This dissertation presents several cost-effective production test solutions using fault grading, together with mixed-signal design verification cases enabled by analog behavioral modeling. Although the latest Systems-on-Chip (SoCs) are getting denser, faster, and more complex, manufacturing technology is increasingly dominated by subtle defects introduced by small-scale technology, so SoCs require more mature testing strategies. By performing various types of testing, better-quality SoCs can be manufactured, but test resources are too limited to accommodate all of those tests. To create the most efficient production test flow, any redundant or ineffective tests must be removed or minimized.
Chapter 3 proposes a new method of test data volume reduction that combines the nonlinear property of the feedback shift register (FSR) with dictionary coding. Instead of using the nonlinear FSR for an actual hardware implementation, the test set expanded by nonlinear expansion is used as one-column test sets, providing a large reduction ratio for the test data volume. The experimental results show that the combined method reduces the total test data volume and increases the fault coverage, although the larger number of test patterns increases the total test time.
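As a rough illustration of the dictionary-coding half of the method (the nonlinear FSR expansion is omitted), the sketch below encodes fixed-width test-data slices against a small dictionary of the most frequent slices. The slice width, dictionary size, and flag-bit format are illustrative assumptions, not the chapter's actual parameters.

```python
# A minimal sketch of dictionary-based test-data compression: frequent
# n-bit slices are replaced by short dictionary indices, and all other
# slices are emitted verbatim behind a flag bit.
from collections import Counter

def dictionary_encode(slices, dict_size=4):
    """Return (dictionary, encoded stream) for a list of equal-width bit strings."""
    dictionary = [w for w, _ in Counter(slices).most_common(dict_size)]
    index_bits = max(1, (dict_size - 1).bit_length())
    encoded = []
    for s in slices:
        if s in dictionary:
            # flag '0' + short dictionary index
            encoded.append('0' + format(dictionary.index(s), f'0{index_bits}b'))
        else:
            # flag '1' + uncompressed slice
            encoded.append('1' + s)
    return dictionary, encoded

slices = ['0000', '1111', '0000', '1010', '0000', '1111']
d, enc = dictionary_encode(slices)
original = sum(len(s) for s in slices)
compressed = sum(len(c) for c in enc)
print(d, enc, f'{original} -> {compressed} bits')
```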
Chapter 4 addresses the whole process of functional fault grading. Fault grading has always been a "desire-to-have" flow because it can bring significant value for cost saving and yield analysis; however, it is very hard to perform fault grading on a complex, large-scale SoC. A commercial tool called Z01X is used as the fault-grading platform, the whole fault-grading process is coordinated, and each detailed execution step is performed. Simulation-based functional fault grading identifies the quality of the given functional tests against static faults and transition delay faults (TDFs). With the structural tests and functional tests, functional fault grading can indicate how to achieve the same test coverage while spending minimal test time. Compared to the time and resources consumed by fault grading, the contribution to test-time saving may not look very promising, but the fault-grading data can be reused for yield analysis and test flow optimization. For final production testing, confident decisions on functional test selection can be made based on the fault-grading results.
Chapter 5 addresses the challenges of Package-on-Package (POP) testing. Because POP devices have pins on both the top and the bottom of the package, the increased number of test pins requires more test channels to detect packaging defects. Boundary scan chain testing is used to detect those continuity defects by relying on leakage current from the power supply. The proposed test scheme does not require direct test channels on the top pins. Based on the counting algorithm, a minimal number of test cycles is generated, and the test achieves full coverage for any combination of pin-to-pin short defects on the top pins of the POP package. The experimental results show roughly a tenfold increase in leakage current from a short defect. The scheme can also be expanded to multi-site testing with fewer test channels for high-volume production.
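The classic counting-sequence idea gives a feel for why so few cycles suffice: give every pin a distinct binary codeword and drive one bit of the codeword per test cycle, so any two shorted pins disagree in at least one cycle and ceil(log2 n) cycles cover all pin pairs. Whether the chapter uses exactly this encoding is an assumption; the sketch below only illustrates the counting principle.

```python
# A minimal sketch of counting-sequence test generation for pin-to-pin
# short detection: pin p is assigned codeword p, applied one bit per cycle.
from math import ceil, log2

def counting_vectors(n_pins):
    """Return the list of per-cycle drive vectors for n_pins pins."""
    cycles = max(1, ceil(log2(n_pins)))
    # bit c of pin p's codeword is driven in cycle c
    return [[(p >> c) & 1 for p in range(n_pins)] for c in range(cycles)]

for cycle, vec in enumerate(counting_vectors(8)):
    print(f'cycle {cycle}: {vec}')  # 3 cycles suffice for 8 pins
```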
Fault grading is applied within different structural test categories in Chapter 6. Stuck-at faults can be considered TDFs with infinite delay; hence, the TDF Automatic Test Pattern Generation (ATPG) tests detect both TDFs and stuck-at faults. By removing the stuck-at faults already detected by the given TDF ATPG tests, the set of target stuck-at faults can be reduced, and the reduced fault set results in fewer stuck-at ATPG patterns. Structural test time is reduced while keeping the same test coverage. This TDF grading is performed with the same ATPG tool used to generate the stuck-at and TDF ATPG tests.
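Operationally, this grading step reduces to a set difference over fault lists. A minimal sketch, assuming fault lists are available as sets of identifier strings (the naming scheme here is made up):

```python
# A minimal sketch of the fault-grading step described above: stuck-at
# faults already detected by the TDF pattern set are removed before
# stuck-at ATPG, shrinking the stuck-at target list.
def remaining_stuckat_targets(stuckat_faults, tdf_detected_as_stuckat):
    """Stuck-at faults still needing dedicated patterns after TDF grading."""
    return stuckat_faults - tdf_detected_as_stuckat

stuckat = {'U1/A:sa0', 'U1/A:sa1', 'U2/Y:sa0', 'U3/B:sa1'}
covered_by_tdf = {'U1/A:sa0', 'U2/Y:sa0'}
print(remaining_stuckat_targets(stuckat, covered_by_tdf))
# smaller target list -> fewer stuck-at ATPG patterns, same total coverage
```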
To expedite the mixed-signal design verification of a complex SoC, analog behavioral modeling methods and strategies are addressed in Chapter 7, and case studies of detailed verification with actual mixed-signal designs are addressed in Chapter 8. Analog modeling effort can enhance verification quality for a mixed-signal design with a shorter turnaround time, and it enables compatible integration of mixed-signal design cores into the SoC. The modeling process may also reveal potential design errors or incorrect testbench setup, minimizing unnecessary debugging time for quality devices.
Two mixed-signal design cases were verified using the analog models. A fully hierarchical digital-to-analog converter (DAC) model is implemented, silicon mismatches caused by process variation are modeled and inserted into the DAC model, and the calibration algorithm for the DAC is successfully verified by model-based simulation at the full DAC level. When the mismatch amount is increased beyond the calibration capability of the DAC, the simulation results show increased calibration error with some outliers. This verification method can identify the saturation range of the DAC and predict the yield of devices under process variation.
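A minimal sketch of the mismatch-insertion idea, assuming a unary current-steering architecture with Gaussian element mismatch; the element count and sigma are illustrative choices, and the thesis's actual model is hierarchical rather than this flat toy.

```python
# A minimal sketch of inserting process-variation mismatch into a
# behavioral DAC model: each unit element gets a random weight error,
# and per-code DNL is evaluated against the ideal 1-LSB step.
import random

def dac_output(code, weights):
    """Ideal-plus-mismatch output: sum of the first `code` element weights."""
    return sum(weights[:code])

n_elements = 255                      # 8-bit unary DAC (assumed)
sigma = 0.02                          # 2% element mismatch (assumed)
weights = [1 + random.gauss(0, sigma) for _ in range(n_elements)]

# DNL per code step, in LSBs, against the ideal step of 1
dnl = [dac_output(c + 1, weights) - dac_output(c, weights) - 1
       for c in range(n_elements)]
print(f'max |DNL| = {max(abs(d) for d in dnl):.4f} LSB')
```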
A phase-locked loop (PLL) design case was also verified using the analog model. Both an open-loop PLL model and a closed-loop PLL model are presented. Quick bring-up of the open-loop PLL model provides low simulation overhead for the widely used PLLs in the SoC and allows design verification of the upper-level design using the PLL-generated clocks to start early. An accurate closed-loop PLL model is implemented for a DCO-based PLL design, and mixed simulation with analog models and schematic designs enables flexible analog verification. Only the analog design block of interest is kept as a schematic design, while the rest of the analog design is replaced by the analog model. This scaled-down SPICE simulation then runs about 10 to 100 times faster than a full-scale SPICE simulation. The analog model of the block of interest is compared with the scaled-down SPICE simulation result, and the quality of the model is iteratively enhanced. Hence, the analog model enables both compatible integration and flexible analog design verification.
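A minimal sketch of the open-loop modeling idea: the locked PLL is replaced by an ideal multiplied clock, so upper-level verification can start without simulating loop dynamics. The function name, frequencies, and the optional jitter term are all illustrative assumptions.

```python
# A minimal sketch of an open-loop PLL behavioral model: the output clock
# is an ideal f_ref * mult clock, optionally with Gaussian edge jitter.
import random

def open_loop_pll_edges(f_ref, mult, n_edges, rj_sigma=0.0):
    """Rising-edge timestamps of an ideally locked output clock at f_ref*mult."""
    period = 1.0 / (f_ref * mult)
    return [i * period + random.gauss(0, rj_sigma) for i in range(n_edges)]

edges = open_loop_pll_edges(f_ref=25e6, mult=40, n_edges=4)  # 1 GHz clock
print([f'{t * 1e9:.3f} ns' for t in edges])
```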
This dissertation contributes to reducing test time and enhancing test quality, and it helps to set up efficient production testing flows. Depending on the size and performance of the circuit under test (CUT), proper testing schemes can maximize the efficiency of production testing. The topics covered in this dissertation can be used to optimize the test flow and to select the final production tests that achieve maximum test capability. In addition, the strategies and benefits of the implemented analog behavioral modeling techniques are presented, and actual verification cases show the effectiveness of analog modeling for better-quality SoC products.
Towards an embedded board-level tester: study of a configurable test processor
The demand for electronic systems with more features, higher performance, and lower power consumption increases continuously. This is a real challenge for design and test engineers, because they have to deal with electronic systems of ever-increasing complexity while keeping production and test costs low and meeting critical time-to-market deadlines. For a test engineer working at the board level, this means that manufacturing defects must be detected as soon as possible and at a low cost. However, classical test techniques are not sufficient for testing modern printed circuit boards, and in the worst case they cannot be used at all. This is mainly due to modern packaging technologies, high device density, and the high operating frequencies of modern printed circuit boards, which lead to very long test times, low fault coverage, and high test costs. This dissertation addresses these issues and proposes an FPGA-based test approach for printed circuit boards. The concept is based on a configurable test processor that is temporarily implemented in the on-board FPGA and provides the corresponding mechanisms to communicate with external test equipment and with co-processors implemented in the FPGA. This embedded test approach provides the flexibility to implement test functions either in the external test equipment or in the FPGA. In this manner, tests are executed at speed, increasing the fault coverage; test times are reduced; and the test system can be adapted automatically to the properties of the FPGA and the devices located on the board. An essential part of the FPGA-based test approach is the development of the test processor. This dissertation discusses the required properties of the processor and shows that adaptation to the specific test scenario plays a very important role in the optimization. For this purpose, the test processor is equipped with configuration parameters at the instruction-set-architecture and microarchitecture levels. Additionally, an automatic generation process for the test system and for the computation of some of the processor's configuration parameters is proposed. The automatic generation process uses as input a model known as the device under test model (DUT-M). To evaluate the entire FPGA-based test approach and the viability of a processor for testing printed circuit boards, the developed test system is used to test interconnections to two different devices: a static random-access memory (SRAM) and a liquid crystal display (LCD). Experiments were conducted to determine the resource utilization of the processor and the FPGA-based test system, and to measure test time when different test functions are implemented in the external test equipment or in the FPGA. The results show that the introduced approach is suitable for testing printed circuit boards and that the test processor represents a realistic alternative for testing at the board level.
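As a loose illustration of how a DUT-M could drive processor configuration, the sketch below derives two plausible parameters (data-path width and address-register width) from a hypothetical device description. The DUT-M field names, the derivation rules, and the interface widths are all assumptions, not the dissertation's actual model format.

```python
# A hypothetical sketch: sizing test-processor configuration parameters
# from device-under-test model (DUT-M) entries. Field names and rules
# are illustrative only.
from dataclasses import dataclass

@dataclass
class DutModel:
    name: str
    data_width: int    # width of the device data interface, in bits
    addr_width: int    # width of the device address interface, in bits

def derive_processor_config(duts):
    """Size the processor's data path and address registers to the widest DUT."""
    return {
        'datapath_width': max(d.data_width for d in duts),
        'addr_reg_width': max(d.addr_width for d in duts),
    }

# the two board-level DUTs used in the evaluation (widths are assumed)
board = [DutModel('SRAM', data_width=16, addr_width=18),
         DutModel('LCD',  data_width=8,  addr_width=1)]
print(derive_processor_config(board))
```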
Enabling low cost test and tuning of difficult-to-measure device specifications: application to DC-DC converters and high speed devices
Low-cost test and tuning methods for difficult-to-measure specifications are presented in this research from the following perspectives. 1) "Safe" test and self-tuning for power converters: to avoid the risk of damaging the device under test (DUT) during conventional load/line regulation measurements on a power converter, a "safe" alternate test structure is developed in which the power converter (boost/buck converter) is placed in a different mode of operation during the alternative test (light switching load) as opposed to the standard test (heavy switching load), preventing damage to the DUT during manufacturing test. Based on the alternative test structure, self-tuning methods for both boost and buck converters are also developed in this thesis. In addition, to make these test structures suitable for on-chip built-in self-test (BIST) applications, a special sensing circuit has been designed and implemented. Stability analysis filters and appropriate models are also implemented to predict the DUT's electrical stability condition during test and to further predict the values of the tuning knobs needed for the tuning process. 2) High-bandwidth RF signal generation: up-conversion has been widely used in high-frequency RF signal generation, but mixer nonlinearity results in signal distortion that is difficult to eliminate with such methods. To address this problem, a framework for low-cost, high-fidelity wideband RF signal generation is developed in this thesis. Depending on the band-limited target waveform, the input data for a system of two interleaved digital-to-analog converters (DACs) is optimized by a matrix-model-based algorithm so as to minimize the distortion between one of its image replicas in the frequency domain and the target RF waveform within a specified signal bandwidth. The approach is used to demonstrate how interferers with specified frequency characteristics can be synthesized at low cost for interference testing of RF communication systems. The frameworks presented in this thesis have a significant impact in enabling low-cost test and tuning of difficult-to-measure device specifications for power converters and high-speed devices.
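A minimal sketch of the matrix-model idea, under simplifying assumptions: the in-band spectrum of the DAC drive is written as a linear map `A @ x`, and the input `x` is solved by least squares so that the chosen spectral bins match a target. Here `A` is a plain DFT restricted to assumed target bins; the thesis's actual matrix model additionally captures the two-DAC interleaving and analog imperfections, which are omitted.

```python
# A minimal sketch of matrix-model-based input optimization: choose a
# real DAC drive x whose spectrum matches a target at selected bins, by
# least squares over stacked real/imaginary DFT constraints.
import numpy as np

n = 256                                # DAC samples per block (illustrative)
k_band = np.arange(60, 70)             # bins of the wanted band (assumed)

# desired in-band spectrum of the target waveform (toy example)
target = np.exp(1j * np.linspace(0, np.pi, len(k_band)))

# DFT rows at the target bins: spectrum[k] = (A @ x)[k] for input x
A = np.exp(-2j * np.pi * np.outer(k_band, np.arange(n)) / n)

# stack real/imag parts so the solved DAC input is real-valued
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([target.real, target.imag])
x, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)

err = np.abs(np.fft.fft(x)[k_band] - target)
print(f'max in-band distortion: {err.max():.2e}')
```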
Proceedings of the XIII Spanish Meeting on Cryptology and Information Security (RECSI XIII): Alicante, September 2-5, 2014
If we had to choose a set of keywords to define today's society, the term information would undoubtedly be one of the most representative. We live in a world characterized by a continuous flow of information, in which Information and Communication Technologies (ICT) and social networks play a prominent role. The Information Society generates a great variety of data in digital form, and protecting these data against unauthorized access and use is the main objective of what we know as Information Security. While Cryptology is a basic technological tool, devoted to the development and analysis of systems and protocols that guarantee data security, the spectrum of technologies involved in protecting information is broad and spans several disciplines. One characteristic of this science is its rapid and constant evolution, driven in part by the continuous advances in computing, especially over recent decades. Systems, protocols, and tools generally considered secure today will cease to be so in a more or less near future, which makes it essential to develop new tools that efficiently guarantee the necessary levels of security. The Spanish Meeting on Cryptology and Information Security (RECSI) is the reference Spanish scientific conference in the field of Cryptology and ICT Security, where the leading researchers in this discipline, from Spain and other countries, periodically gather to share their most recent research results. The thirteenth edition will be held from September 2 to 5, 2014 in the city of Alicante, organized by the Cryptology and Computational Security group of the Universidad de Alicante. Previous editions took place in Palma de Mallorca (1991), Madrid (1992), Barcelona (1994), Valladolid (1996), Torremolinos (1998), Santa Cruz de Tenerife (2000), Oviedo (2002), Leganés (2004), Barcelona (2006), Salamanca (2008), Tarragona (2010), and San Sebastián (2012).