Simulating the effects of logic faults in implementation-level VITAL-compliant models
Simulation-based fault injection is a well-known technique to assess the dependability of hardware designs specified using hardware description languages (HDLs). Although logic faults are usually introduced in models defined at the register transfer level (RTL), the most accurate results can be obtained by considering implementation-level models, which reflect the actual structure and timing of the circuit. These models consist of a list of interconnected technology-specific components (macrocells), provided by vendors and annotated with post-place-and-route delays. Macrocells described in the very high speed integrated circuit HDL (VHDL) should also comply with the VHDL Initiative Towards Application Specific Integrated Circuit Libraries (VITAL) standard to be interoperable across standard simulators. However, the rigid architecture imposed by VITAL means that fault injection procedures applied at RTL cannot be used straightforwardly. This work identifies a set of generic operations on VITAL-compliant macrocells that are then used to define how to accurately simulate the effects of common logic fault models. The generality of this proposal is supported by the definition of a platform-specific fault injection procedure based on these operations. Three embedded processors, implemented using the Xilinx toolchain and the SIMPRIM library of macrocells, are considered as a case study, which exposes the gap between robustness assessments performed at RTL and at implementation level.

This work has been partially funded by the Ministerio de Economia, Industria y Competitividad of Spain under grant agreement no. TIN2016-81075-R, and the "Programa de Ayudas de Investigacion y Desarrollo" (PAID) of Universitat Politecnica de Valencia.

Tuzov, I.; De-Andrés-Martínez, D.; Ruiz, JC. (2019). Simulating the effects of logic faults in implementation-level VITAL-compliant models. Computing 101(2):77-96. https://doi.org/10.1007/s00607-018-0651-4
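To make the flavor of these operations concrete, the following minimal VHDL saboteur is a sketch of the classical injection mechanism used at RTL (the entity name, ports, and generic are illustrative assumptions, not taken from the paper): placed between a macrocell output and its fanout, it imposes a stuck-at value while an injection control is asserted.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical saboteur: transparent in normal operation, it overrides
-- the driven value with a stuck-at level while 'inject' is asserted.
entity saboteur is
  generic (STUCK_AT : std_logic := '0');  -- fault value to impose
  port (
    d_in   : in  std_logic;   -- driven by the macrocell output
    inject : in  std_logic;   -- fault activation control
    d_out  : out std_logic    -- seen by the original fanout
  );
end entity saboteur;

architecture rtl of saboteur is
begin
  d_out <= STUCK_AT when inject = '1' else d_in;
end architecture rtl;

It is precisely this kind of structural modification that the rigid VITAL architecture complicates, which is why the paper defines equivalent operations directly on VITAL-compliant macrocells.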
Dynamic Partial Reconfiguration for Dependable Systems
Moore’s law has served as a goal and motivation for consumer electronics manufacturers over the last decades. The resulting growth in the processing power of consumer electronics devices has been achieved mainly through cost reduction and technology shrinking. However, shrinking physical geometries affects the dependability of electronic devices, making them more sensitive both to soft errors, such as Single Event Transients (SETs) and Single Event Upsets (SEUs), and to hard (permanent) faults, e.g. due to aging effects.
Accordingly, safety-critical systems often rely on the adoption of old technology nodes, even though these imply longer design times with respect to consumer electronics. In fact, functional safety requirements are increasingly pushing industry to develop innovative methodologies for designing highly dependable systems with the required diagnostic coverage. On the other hand, the adoption of commercial off-the-shelf (COTS) devices has begun to be considered for safety-related systems, due to real-time requirements, the need to implement computationally hungry algorithms, and lower design costs. In this field, the FPGA market share has constantly increased thanks to FPGAs' flexibility and low non-recurring engineering costs, which make them suitable for a set of safety-critical applications with low production volumes.
The work presented in this thesis addresses new dependability issues in modern reconfigurable systems, exploiting their special features, in particular Dynamic Partial Reconfiguration, to take proper counteractions with low impact on performance.
On Borrowed Time -- Preventing Static Power Side-Channel Analysis
In recent years, static power side-channel analysis attacks have emerged as a
serious threat to cryptographic implementations, overcoming state-of-the-art
countermeasures against side-channel attacks. The continued down-scaling of
semiconductor process technology, which results in an increase of the relative
weight of static power in the total power budget of circuits, will only improve
the viability of static power side-channel analysis attacks. Yet, despite the
threat posed, limited work has been invested into mitigating this class of
attack. In this work we address this gap. We observe that static power
side-channel analysis relies on stopping the target circuit's clock over a
prolonged period, during which the circuit holds secret information in its
registers. We propose Borrowed Time, a countermeasure that hinders an
attacker's ability to leverage such clock control. Borrowed Time detects a
stopped clock and triggers a reset that wipes any registers containing
sensitive intermediates, whose leakages would otherwise be exploitable. We
demonstrate the effectiveness of our countermeasure by performing practical
Correlation Power Analysis attacks under optimal conditions against an AES
implementation on an FPGA target with and without our countermeasure in place.
In the unprotected case, we can recover the entire secret key using traces from
1,500 encryptions. Under the same conditions, the protected implementation
successfully prevents key recovery even with traces from 1,000,000 encryptions.
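As a rough illustration of the detection principle (not the authors' actual circuit), the VHDL sketch below assumes a free-running reference oscillator: it counts reference cycles since the last target-clock edge and asserts a wipe signal once an assumed idle threshold is exceeded.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical clock-stop monitor: a free-running reference clock counts
-- how long the target clock has been idle; past THRESHOLD cycles, 'wipe'
-- is asserted so that sensitive registers can be cleared.
entity clock_stop_monitor is
  generic (THRESHOLD : natural := 16);  -- assumed idle limit, in ref_clk cycles
  port (
    ref_clk    : in  std_logic;  -- independent free-running oscillator
    target_clk : in  std_logic;  -- the (attacker-controllable) system clock
    wipe       : out std_logic   -- reset request for sensitive registers
  );
end entity clock_stop_monitor;

architecture rtl of clock_stop_monitor is
  signal sync     : std_logic_vector(2 downto 0) := (others => '0');
  signal idle_cnt : unsigned(7 downto 0) := (others => '0');
begin
  process (ref_clk)
  begin
    if rising_edge(ref_clk) then
      sync <= sync(1 downto 0) & target_clk;  -- synchronise and remember
      if sync(2) /= sync(1) then              -- target clock still toggling
        idle_cnt <= (others => '0');
      elsif idle_cnt < THRESHOLD then
        idle_cnt <= idle_cnt + 1;
      end if;
    end if;
  end process;

  wipe <= '1' when idle_cnt >= THRESHOLD else '0';
end architecture rtl;

The threshold trades false positives during legitimate low-activity periods (e.g. clock gating) against the window during which register contents remain exposed.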
Hardware, Software and Data Analysis Techniques for SRAM-Based Field Programmable Gate Array Circuits
This work presents a built, tested, and demonstrated test structure that is low-cost, flexible, and reusable for robust radiation experimentation, aimed primarily at investigating memory, in this case SRAMs and SRAM-based FPGAs. The space environment can induce many kinds of failures due to radiation effects, and these failures result in a loss of money, time, intelligence, and information. Evaluating technologies for potential failures requires a detailed test methodology and an associated structure. In this solution, an FPGA board was used as the controller platform, with multiple VHDL circuit controllers and data collection and reporting modules. The structure was demonstrated by programming an SRAM-based FPGA board as the device under test (DUT) with various types of adders, counters, and RAM modules. The controllers, hardware, and data collection operations were tested and validated by irradiating the DUT with gamma radiation from a Co-60 source at the Ohio State University Nuclear Reactor. The test structure is easily modified to allow a broad range of experiments on the same DUT. In addition, it is easily adaptable to other memory types, such as DRAM, Flash RAM, and MRAM; these additions are discussed further in this document. The system fits in a backpack and costs less than $1000.
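To give an idea of what one such VHDL controller module might look like, the sketch below (entirely illustrative; names and widths are assumptions, not from the work) runs a golden counter in lock-step with a counter programmed into the DUT and tallies mismatches observed during irradiation.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical checker: a local golden counter mirrors a counter module
-- on the DUT; every disagreement is counted as an upset and later
-- reported by the data collection modules.
entity dut_checker is
  port (
    clk       : in  std_logic;
    rst       : in  std_logic;
    dut_count : in  unsigned(15 downto 0);  -- readback from the DUT
    error_cnt : out unsigned(15 downto 0)   -- reported to the host
  );
end entity dut_checker;

architecture rtl of dut_checker is
  signal golden : unsigned(15 downto 0) := (others => '0');
  signal errors : unsigned(15 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        golden <= (others => '0');
        errors <= (others => '0');
      else
        golden <= golden + 1;            -- reference behaviour
        if dut_count /= golden then      -- upset observed on the DUT
          errors <= errors + 1;
        end if;
      end if;
    end if;
  end process;
  error_cnt <= errors;
end architecture rtl;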
A design concept for radiation hardened RADFET readout system for space applications
Instruments for measuring the absorbed dose and dose rate under radiation exposure, known as radiation dosimeters, are indispensable in space missions. They are composed of radiation sensors that generate a current or voltage response when exposed to ionizing radiation, and of processing electronics for computing the absorbed dose and dose rate. Among the wide range of existing radiation sensors, Radiation Sensitive Field Effect Transistors (RADFETs) have unique advantages for absorbed dose measurement and a proven record of successful exploitation in space missions. It has also been shown that RADFETs may be used for dose rate monitoring. In that regard, we propose a unique design concept that supports the simultaneous operation of a single RADFET as both an absorbed dose and a dose rate monitor. This reduces the cost of implementation, since the need for other types of radiation sensors can be minimized or eliminated. For processing the RADFET's response we propose a readout system composed of an analog signal conditioner (ASC) and a self-adaptive multiprocessing system-on-chip (MPSoC). The soft error rate of the MPSoC is monitored in real time with embedded sensors, allowing autonomous switching between three operating modes (high-performance, de-stress, and fault-tolerant) according to the application requirements and radiation conditions.
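A minimal sketch of the mode-switching idea, assuming an 8-bit soft-error-rate estimate from the embedded sensors and made-up thresholds (none of these names or values come from the paper):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical mode selector: maps a sensed soft-error-rate estimate to
-- one of the three operating modes named in the abstract.
entity mode_selector is
  port (
    clk     : in  std_logic;
    ser_est : in  unsigned(7 downto 0);         -- soft-error-rate estimate
    mode    : out std_logic_vector(1 downto 0)  -- 00 perf, 01 de-stress, 10 fault-tolerant
  );
end entity mode_selector;

architecture rtl of mode_selector is
  constant LOW_SER  : unsigned(7 downto 0) := to_unsigned(10, 8);  -- assumed
  constant HIGH_SER : unsigned(7 downto 0) := to_unsigned(50, 8);  -- assumed
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if ser_est < LOW_SER then
        mode <= "00";   -- high-performance
      elsif ser_est < HIGH_SER then
        mode <= "01";   -- de-stress
      else
        mode <= "10";   -- fault-tolerant
      end if;
    end if;
  end process;
end architecture rtl;

In practice such a selector would add hysteresis so the system does not oscillate between modes when the estimate hovers around a threshold.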
The response of fuzzy electronics to ionizing radiation
Small satellites such as CubeSats operate under environmental constraints that are outside typical commercial specifications, including the need to operate over an extended temperature range and during exposure to ionizing radiation. Nevertheless, commercial technologies are being implemented in CubeSat spacecraft because of the low-cost, low-power, and space-saving requirements often achievable with advanced microelectronics [1]. Due to its flexibility and ability to handle uncertainty, fuzzy logic is viable for satellite control while meeting the strict design requirements of a CubeSat. This work evaluates the response of fuzzy control logic to ionizing radiation and compares that response to the response of conventional systems. Fuzzy logic operates on multiple truth values that vary within the range of 0 to 1, as opposed to Boolean logic’s precise, two-valued system. Fuzzy systems utilize “if-then” rules defined over membership functions, which allow terms such as “moderately” or “slightly” to be used, permitting flexibility within the system. As such, fuzzy logic shows promise in robotics and mechanical control systems due to its ability to handle uncertainty and non-linearity. Fuzzy logic electronics are thus a candidate for small satellite control mechanisms, creating the potential for radiation-hardened control systems that take advantage of the low power and space savings achievable with modern electronics technologies. A common effect of ionizing radiation is single event effects (SEEs), which generally result in erroneous transient behavior following the interaction of single ionizing particles with semiconductors. Little is known about the response of fuzzy logic systems to such effects. This work aims to evaluate the effects of SEEs on a fuzzy logic small satellite attitude controller, describe the mechanisms of vulnerability, and compare the response to that of standard controller designs.
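As a small, self-contained illustration of a membership function in hardware (a sketch under an assumed 8-bit fixed-point scaling, not taken from the work itself), the following VHDL entity evaluates a triangular membership function where 255 encodes full membership (1.0):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical triangular membership function in 8-bit fixed point:
-- degree = max(0, PEAK - |x - CENTER|), so membership falls off
-- linearly with distance from CENTER and saturates at zero.
entity triangle_mf is
  generic (
    CENTER : integer := 128;  -- input value of full membership
    PEAK   : integer := 255   -- fixed-point encoding of truth value 1.0
  );
  port (
    x      : in  unsigned(7 downto 0);
    degree : out unsigned(7 downto 0)
  );
end entity triangle_mf;

architecture rtl of triangle_mf is
begin
  process (x)
    variable dist : integer;
  begin
    dist := abs (to_integer(x) - CENTER);
    if dist >= PEAK then
      degree <= (others => '0');              -- outside the support: 0.0
    else
      degree <= to_unsigned(PEAK - dist, 8);  -- linear ramp toward 1.0
    end if;
  end process;
end architecture rtl;

A rule such as "if error is slightly positive then torque is moderately negative" then reduces to min/max combinations of such degrees, which is what makes the controller's radiation response interesting to compare against crisp Boolean logic.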
Timing speculation and adaptive reliable overclocking techniques for aggressive computer systems
Computers have changed our lives beyond our own imagination in the past several decades. The continued and progressive advancements in VLSI technology and numerous micro-architectural innovations have played a key role in the design of spectacular low-cost, high-performance computing systems that have become omnipresent in today's technology-driven world. Performance and dependability have become key concerns as these ubiquitous computing machines continue to drive our everyday life. Every application has unique demands, as applications run in diverse operating environments. Dependable, aggressive and adaptive systems improve efficiency in terms of speed, reliability and energy consumption.
Traditional computing systems run at a fixed clock frequency, which is determined by taking into account the worst-case timing paths, operating conditions, and process variations. Timing speculation based reliable overclocking advocates going beyond worst-case limits to achieve best performance while not avoiding, but detecting and correcting a modest number of timing errors. The success of this design methodology relies on the fact that timing critical paths are rarely exercised in a design, and typical execution happens much faster than the timing requirements dictated by worst-case design methodology. Better-than-worst-case design methodology is advocated by several recent research pursuits, which exploit dependability techniques to enhance computer system performance.
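One well-known way to realize such detection is the Razor approach; the VHDL sketch below is a generic illustration of that idea under assumed names, not circuitry taken from this thesis. A shadow flip-flop samples the same data on a delayed clock that always meets worst-case timing, and any disagreement with the speculatively clocked main flip-flop flags a timing error.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical Razor-style error-detecting register: the main flip-flop
-- runs on the aggressive clock, the shadow flip-flop on a delayed clock;
-- a mismatch between the two requests pipeline recovery.
entity speculative_reg is
  port (
    clk         : in  std_logic;  -- aggressive (overclocked) clock
    clk_delayed : in  std_logic;  -- delayed copy, worst-case safe
    d           : in  std_logic;
    q           : out std_logic;
    timing_err  : out std_logic   -- asserted when speculation failed
  );
end entity speculative_reg;

architecture rtl of speculative_reg is
  signal main_q, shadow_q : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      main_q <= d;               -- speculative capture
    end if;
  end process;

  process (clk_delayed)
  begin
    if rising_edge(clk_delayed) then
      shadow_q <= d;             -- late, always-correct capture
    end if;
  end process;

  q          <= main_q;
  timing_err <= main_q xor shadow_q;  -- disagreement flags a timing error
end architecture rtl;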
In this dissertation, we address different aspects of timing speculation based adaptive reliable overclocking schemes, and evaluate their role in the design of low-cost, high performance, energy efficient and dependable systems. We identify various control knobs in the design that can be favorably controlled to ensure different design targets.
As part of this research, we extend the SPRIT3E (Superscalar PeRformance Improvement Through Tolerating Timing Errors) framework, and characterize the extent of application-dependent performance acceleration achievable in superscalar processors by scrutinizing the various parameters that impact operation beyond worst-case limits. We study the limitations imposed by short-path constraints on our technique, and present ways to exploit them to maximize performance gains. We analyze the sensitivity of our technique's adaptiveness by exploring the necessary hardware requirements for dynamic overclocking schemes. Experimental analysis based on SPEC2000 benchmarks running on a SimpleScalar Alpha processor simulator, augmented with error rate data obtained from hardware simulations of a superscalar processor, is presented.
Even though reliable overclocking guarantees functional correctness, it leads to higher power consumption. As a consequence, reliable overclocking that does not take on-chip temperatures into account will reduce the lifetime reliability of the chip. In this thesis, we analyze how reliable overclocking impacts the on-chip temperature of a microprocessor and evaluate the effects of overheating, due to such reliable dynamic frequency tuning mechanisms, on the lifetime reliability of these systems. We then evaluate the effect of thermal throttling, a technique that clamps the on-chip temperature below a predefined value, on system performance and reliability. Our study shows that a reliably overclocked system with dynamic thermal management achieves a 25% performance improvement while lasting 14 years when operated below 353 K.
Over the past five decades, technology scaling, as predicted by Moore's law, has been the bedrock of semiconductor technology evolution. The continued downscaling of CMOS technology to deep sub-micron gate lengths has been the primary reason for its dominance in today's omnipresent silicon microchips. Even as the transition to the next technology node is indispensable, the initial cost and time associated with doing so present a non-level playing field for competitors in the semiconductor business. As part of this thesis, we evaluate the capability of speculative reliable overclocking mechanisms to maximize performance at a given technology level. We evaluate its competitiveness when compared to technology scaling, in terms of performance, power consumption, energy and energy-delay product. We present a comprehensive comparison for integer and floating point SPEC2000 benchmarks running on a simulated Alpha processor at three different technology nodes in normal and enhanced modes. Our results suggest that adopting reliable overclocking strategies will help skip a technology node altogether, or remain competitive in the market while porting to the next technology node.
Reliability has become a serious concern as systems embrace nanometer technologies. In this dissertation, we propose a novel fault-tolerant aggressive system that combines soft error protection and timing error tolerance. We replicate both the pipeline registers and the pipeline stage combinational logic. The replicated logic receives its inputs from the primary pipeline registers while writing its output to the replicated pipeline registers. The organization of redundancy in the proposed Conjoined Pipeline system supports overclocking; provides concurrent error detection and recovery for soft errors, intermittent faults, and timing errors; and flags permanent silicon defects. The fast recovery process requires no checkpointing and takes three cycles. Back-annotated post-layout gate-level timing simulations, using 45nm technology, of a conjoined two-stage arithmetic pipeline and a conjoined five-stage DLX pipeline processor with forwarding logic show that our approach, even under a severe fault injection campaign, achieves near-100% fault coverage and an average performance improvement of about 20% when dynamically overclocked.
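A one-stage VHDL sketch of the conjoined arrangement just described, with a trivial increment standing in for the stage logic (all names illustrative assumptions): the replica computes from the same primary-register input and writes its own register, and a comparator flags divergence for recovery.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical conjoined stage slice: stage logic and pipeline register
-- are duplicated; the replica reads from the primary pipeline register
-- upstream and writes the replica register; 'mismatch' starts recovery.
entity conjoined_stage is
  port (
    clk      : in  std_logic;
    d        : in  unsigned(7 downto 0);  -- from the previous primary register
    q        : out unsigned(7 downto 0);  -- primary register output
    mismatch : out std_logic              -- soft/timing error detected
  );
end entity conjoined_stage;

architecture rtl of conjoined_stage is
  signal primary_q, replica_q : unsigned(7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      primary_q <= d + 1;  -- primary copy of the stage logic
      replica_q <= d + 1;  -- replicated copy of the same logic
    end if;
  end process;

  q        <= primary_q;
  mismatch <= '0' when primary_q = replica_q else '1';
end architecture rtl;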
Observation mechanisms for in-field software-based self-test
When electronic systems are used in safety-critical applications, as in the space, avionic, automotive or biomedical areas, a very low probability of failures due to faults of any kind must be maintained. Standards and regulations play
a significant role, forcing companies to devise and adopt solutions able to achieve
predefined targets in terms of dependability. Different techniques can be used to
reduce fault occurrence or to minimize the probability that those faults produce
critical failures (e.g., by introducing redundancy).
Unfortunately, most of these techniques have a severe impact on the cost of
the resulting product and, in some cases, the probability of failures is too large
anyway. Hence, a solution commonly used in several scenarios consists in periodically
performing a test able to detect the occurrence of any fault before it produces
a failure (in-field test). This solution is normally based on forcing the processor
inside the Device Under Test to execute a properly written test program, which is
able to activate possible faults and to make their effects visible in some observable
locations. This approach is also called Software-Based Self-Test, or SBST.
Compared with testing in an end-of-manufacturing scenario, in-field testing
has strong limitations in terms of access to the system inputs and outputs
because Design for Testability structures and testing equipment are usually not
available. As a consequence, there are fewer opportunities to activate the faults
and to observe their effects.
This reduced observability particularly affects the ability to detect performance
faults, i.e. faults that modify the timing but not the final value of computations.
This kind of fault is hard to detect by only observing the final content of predefined memory locations, which is the usual test result observation method used in-field.
Initially, the present work was focused on fault tolerance techniques against
transient faults induced by ionizing radiation, the so called Single Event Upsets
(SEUs). The main contribution of this early stage of the thesis lies in the experimental
validation of the feasibility of achieving a safe system by using an
architecture that combines task-level redundancy with already available IP cores,
thus minimizing the development time. Task execution is replicated and Memory
Protection is used to guarantee that any SEU can affect at most one of the replicas. A proof-of-concept implementation was developed and validated
using fault injection. Results outline the effectiveness of the architecture, and the
overhead analysis shows that the proposed architecture is effective in reducing the
resource occupation with respect to N-modular redundancy, at an affordable cost
in terms of application execution time.
The main part of the thesis is focused on in-field software-based self-test of
permanent faults. A set of observation methods exploiting existing or ad-hoc
hardware is proposed, aimed at obtaining a better coverage, in particular of performance
faults. An extensive quantitative evaluation of the proposed methods
is presented, including a comparison with the observation methods traditionally
used in end of manufacturing and in-field testing.
Results show that the proposed methods are a good complement to the traditionally
used final memory content observation. Moreover, they show that an
adequate combination of these complementary methods achieves nearly the same fault coverage as continuously observing all the processor
outputs, which is an observation method commonly used for production test but
usually not available in-field.
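One classical ad-hoc observation mechanism in this spirit is a multiple-input signature register (MISR) that compacts the values appearing on observed outputs, cycle by cycle, into a signature checked once the test ends. The VHDL sketch below is an illustrative assumption, not the thesis's actual hardware:

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical 8-bit MISR: each cycle the observed outputs are folded
-- into an LFSR state (taps for x^8 + x^6 + x^5 + x^4 + 1); at the end
-- of the test the signature is compared against a golden value.
entity misr8 is
  port (
    clk       : in  std_logic;
    rst       : in  std_logic;
    enable    : in  std_logic;                     -- observe while testing
    data_in   : in  std_logic_vector(7 downto 0);  -- tapped outputs
    signature : out std_logic_vector(7 downto 0)
  );
end entity misr8;

architecture rtl of misr8 is
  signal s : std_logic_vector(7 downto 0) := (others => '0');
begin
  process (clk)
    variable fb : std_logic;
  begin
    if rising_edge(clk) then
      if rst = '1' then
        s <= (others => '0');
      elsif enable = '1' then
        fb := s(7) xor s(5) xor s(4) xor s(3);   -- feedback taps
        s  <= (s(6 downto 0) & fb) xor data_in;  -- shift and fold in
      end if;
    end if;
  end process;
  signature <= s;
end architecture rtl;

Because a MISR observes every cycle rather than only the final memory content, it can also catch performance faults that change when, not whether, a value is produced.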
A very interesting by-product of what is described above is a detailed description
of how to compute the fault coverage achieved by functional in-field tests
using a conventional fault simulator, a tool that is usually applied in an end of
manufacturing testing scenario.
Finally, another relevant result in the testing area is a method to detect permanent
faults inside the cache coherence logic integrated in each cache controller
of a multi-core system, based on the concurrent execution of a test program by
the different cores in a coordinated manner. By construction, the method achieves
full fault coverage of the static faults in the addressed logic.