31 research outputs found

    Cross layer reliability estimation for digital systems

    Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes reaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive re-design cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time; however, it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost.

    One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency, or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated circuit performance and energy efficiency is a top concern. Attention should therefore be paid to tailoring reliability-improvement techniques to the requirements of each system, yielding cost-effective solutions that favor the success of the product on the market.

    Cross-layer reliability is one of the most promising approaches to achieve this goal. Cross-layer techniques take into account the interactions between the layers composing a complex system (i.e., the technology, hardware, and software layers) to implement efficient cross-layer fault mitigation mechanisms. Fault tolerance mechanisms are implemented at different layers, from the technology up to the software layer, exploiting the inherent capability of each layer to mask lower-level faults. For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools able to precisely assess the reliability level of a selected design early in the design cycle. Accurate and early reliability estimates would enable the exploration of the system design space and the optimization of multiple constraints such as performance, power consumption, cost, and reliability.

    This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors, and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains.
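    To make the cross-layer estimation idea concrete, the following minimal sketch (not from the thesis; all rates and masking probabilities are hypothetical) derates a raw technology-level fault rate by the probability that each higher layer masks a fault before it becomes a user-visible failure:

        # Minimal sketch of cross-layer failure-rate estimation.
        # All numbers are illustrative assumptions, not measured data.

        raw_fault_rate = 1e-6  # raw faults per hour at the technology layer

        # Per-layer masking probabilities, as a cross-layer evaluation tool
        # might obtain them from fault injection at each abstraction level.
        masking = {
            "circuit": 0.60,            # electrical / latching-window masking
            "microarchitecture": 0.70,  # fault hits an idle or dead structure
            "software": 0.50,           # fault overwritten or output-invariant
        }

        failure_rate = raw_fault_rate
        for layer, p_mask in masking.items():
            failure_rate *= 1.0 - p_mask  # only unmasked faults propagate up

        print(f"Estimated system-level failure rate: {failure_rate:.2e}/h")

    Under these assumed numbers, the three layers together mask 94% of raw faults, which is exactly the kind of inherent masking a cross-layer evaluation flow tries to quantify rather than conservatively ignore.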

    Study of the effects of SEU-induced faults on a pipeline protected microprocessor

    This paper presents a detailed analysis of the behavior of a novel fault-tolerant 32-bit embedded CPU, as compared to a default (non-fault-tolerant) implementation of the same processor, during a fault injection campaign of single and double faults. The fault-tolerant processor tested is characterized by per-cycle voting of the microarchitectural and flop-based architectural states, redundancy at the pipeline level, and a distributed voting scheme. Its fault-tolerant behavior is characterized for three different workloads from the automotive application domain. The study proposes statistical methods for both the single and dual fault injection campaigns and demonstrates the fault-tolerance capability of both processors in terms of fault latencies, the probability of fault manifestation, and the behavior of latent faults.
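    The statistical side of such campaigns can be illustrated with the standard finite-population sample-size formula used in statistical fault injection (e.g., Leveugle et al., DATE 2009). The sketch below is a generic illustration with assumed parameters, not necessarily this paper's exact method:

        # Minimal sketch: how many fault injections are needed to estimate a
        # fault-manifestation probability within a chosen error margin.
        import math

        def required_injections(N, e=0.01, t=2.576, p=0.5):
            """N: fault population size (e.g., flip-flops x clock cycles)
            e: margin of error; t: z-score (2.576 ~ 99% confidence)
            p: a-priori proportion estimate (0.5 is the worst case)"""
            return math.ceil(N / (1 + e**2 * (N - 1) / (t**2 * p * (1 - p))))

        # Example: 10k flip-flops over 1M cycles, 1% error, 99% confidence.
        print(required_injections(N=10_000 * 1_000_000))  # about 16,600

    The key property is that the required sample size saturates as the population grows, which is what makes exhaustive injection unnecessary even for single- and double-fault spaces of astronomical size.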

    New Techniques for On-line Testing and Fault Mitigation in GPUs

    The abstract is in the attachment.

    Low-cost and efficient fault detection and diagnosis schemes for modern cores

    Continuous improvements in transistor scaling, together with microarchitectural advances, have made possible the widespread adoption of high-performance processors across all market segments. However, the growing reliability threats induced by technology scaling and by the complexity of designs are challenging the production of cheap yet robust systems. Soft error trends are worrying, especially for combinational logic: parity and ECC codes are becoming insufficient as combinational logic turns into the dominant source of soft errors. Furthermore, experts are warning about the need to also address intermittent and permanent faults during processor runtime, as increasing temperatures and device variations will accelerate inherent aging phenomena. These challenges especially threaten the commodity segments, which impose requirements that existing fault tolerance mechanisms cannot meet. Current techniques based on redundant execution were devised in a time when high penalties were accepted for the sake of high reliability levels. Novel lightweight techniques are therefore needed to enable fault protection in the mass market segments.

    The complexity of designs is also making post-silicon validation extremely expensive. Validation costs exceed design costs, and the number of discovered bugs is growing, both during validation and once products hit the market. Fault localization and diagnosis are the biggest bottlenecks, magnified by huge detection latencies, limited internal observability, and costly server farms to generate test outputs.

    This thesis explores two directions to address some of the critical challenges introduced by unreliable technologies and by the limitations of current validation approaches. We first explore mechanisms for comprehensively detecting multiple sources of failures in modern processors during their lifetime (including transient, intermittent, and permanent faults, as well as design bugs). Our solutions embrace a paradigm where fault tolerance is built by exploiting high-level microarchitectural invariants that are reusable across designs, rather than by relying on re-execution or ad-hoc block-level protection. To do so, we decompose the basic functionalities of processors into high-level tasks and propose three novel runtime verification solutions that, combined, enable global error detection: a computation/register dataflow checker, a memory dataflow checker, and a control flow checker. The techniques use the concept of end-to-end signatures and allow designers to adjust the fault coverage to their needs by trading off area, power, and performance. Our fault injection studies reveal that our methods provide high coverage levels while causing significantly lower performance, power, and area costs than existing techniques.

    This thesis then extends the applicability of the proposed error detection schemes to the validation phases. We present a fault localization and diagnosis solution for the memory dataflow that combines our error detection mechanism, a new low-cost logging mechanism, and a diagnosis program. Selected internal activity is continuously traced and kept in a memory-resident log whose capacity can be expanded to suit validation needs. The solution can catch undiscovered bugs, reducing the dependence on simulation farms that compute golden outputs. Upon error detection, the diagnosis algorithm analyzes the log to automatically locate the bug and to determine its root cause.
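    As an illustration of the end-to-end signature idea behind the control flow checker, the sketch below follows the classic software signature scheme (in the spirit of CFCSS); the block names, signature values, and example control-flow graph are illustrative assumptions, not the thesis's actual design:

        # Minimal sketch of signature-based control flow checking.
        SIG = {"entry": 0b0011, "loop": 0b0101, "exit": 0b1001}

        # Signature differences for each legal edge, fixed at compile time:
        # d(u -> v) = SIG[u] XOR SIG[v].
        DIFF = {
            ("entry", "loop"): SIG["entry"] ^ SIG["loop"],
            ("loop", "loop"): SIG["loop"] ^ SIG["loop"],
            ("loop", "exit"): SIG["loop"] ^ SIG["exit"],
        }

        def run_checker(trace):
            """Replay executed blocks, updating the runtime signature g."""
            g = SIG[trace[0]]                     # start at entry signature
            for prev, cur in zip(trace, trace[1:]):
                g ^= DIFF.get((prev, cur), 0xFF)  # an unknown edge poisons g
                if g != SIG[cur]:                 # mismatch: illegal transfer
                    return f"control flow error entering '{cur}'"
            return "trace OK"

        print(run_checker(["entry", "loop", "loop", "exit"]))  # trace OK
        print(run_checker(["entry", "exit"]))                  # error caught

    A hardware checker performs the same XOR update on every control transfer, so detection costs one register and one comparison rather than full re-execution.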
    Our evaluations show that very high localization coverage and diagnosis accuracy can be obtained at very low performance and area costs. The net result is a simplification of current debugging practices, which are extremely manual, time-consuming, and cumbersome. Altogether, the integrated solutions proposed in this thesis enable the industry to deliver more reliable and correct processors as technology evolves into more complex designs and more vulnerable transistors.
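    To illustrate the logging-plus-diagnosis flow for the memory dataflow, here is a minimal sketch; the record format, the bounded in-memory log, and the replay-based check are assumptions for illustration, not the thesis's actual structures:

        # Minimal sketch: memory-resident activity log plus a replay-based
        # diagnosis pass. Fields and checks are illustrative assumptions.
        from collections import deque

        class ActivityLog:
            def __init__(self, capacity=1024):         # capacity is tunable
                self.entries = deque(maxlen=capacity)  # oldest entries evicted

            def record(self, cycle, op, addr, value):
                self.entries.append((cycle, op, addr, value))

        def diagnose(log):
            """Replay logged stores/loads; report the first bad load."""
            shadow = {}
            for cycle, op, addr, value in log.entries:
                if op == "store":
                    shadow[addr] = value
                elif op == "load" and shadow.get(addr, value) != value:
                    return (f"cycle {cycle}: load @0x{addr:x} saw {value}, "
                            f"expected {shadow[addr]}")
            return "no memory dataflow violation found"

        log = ActivityLog(capacity=8)
        log.record(10, "store", 0x80, 7)
        log.record(12, "load", 0x80, 7)   # consistent load
        log.record(15, "load", 0x80, 9)   # corrupted value is flagged
        print(diagnose(log))

    Keeping the log in ordinary memory is what lets its capacity grow during validation while staying cheap in the field, and the replay pass localizes the first violating access instead of merely signaling that something failed.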