
    Novel fault tolerant Multi-Bit Upset (MBU) Error-Detection and Correction (EDAC) architecture

    For the certification of airborne safety-critical applications, different techniques are used to prevent failures in electronic equipment. Hardware failures caused by the solar radiation present at typical flight altitudes, such as SEU (Single Event Upset) and MCU (Multiple Bit Upset), flip the bits that hold the information stored in memory. These failures occur, for example, in the configuration memory of an FPGA, where all of its functionality is defined. Protection techniques classically require redundancy, which increases cost, component count, memory size, and weight. During the development of airborne safety-critical applications, a set of standards and design recommendations (ABD100, RTCA DO-160, IEC 62395, etc.) is generally followed, together with protection techniques against SEUs and MCUs. These techniques rely on specific technology processes such as hardened memories, error detection and correction (EDAC) codes, software redundancy, Triple Modular Redundancy (TMR), or system-level solutions. This thesis focuses on minimizing, and even eliminating, the effects of the SEUs and MCUs that occur in airborne electronics as a consequence of exposure to radiation from non-charged particles (such as neutrons), which is intensified at typical flight altitudes. The flight criticality of certain systems requires them to be fault tolerant, that is, to guarantee correct operation even when a failure occurs within them, which makes solutions such as the one presented in this thesis relevant to the industrial sector. The thesis begins with a description of the physics of the radiation incident on aircraft and an analysis of its effects on semiconductor-based avionics components, which lead to the generation of SEUs and MCUs. This analysis allows the proposed correction procedures to be properly sized and optimized. The thesis then proposes a correction system for SEU and MCU faults that satisfies the fault-tolerance requirement while minimizing both the level of redundancy and the complexity of the correction codes. Redundancy is minimized by the proposed concept of Hardwired Seed Bits (HSB), in which the essential information is reduced to a few seed bits that are immune to radiation. The required correction codes are reduced to single-error correction thanks to the concept of a virtual distance between bits, which makes it possible to correct multiple simultaneous errors (MCUs) with simple single-error-correcting codes. An example application is the implementation of fault-tolerant protection of the SRAM of an FPGA: not only is the information held in the memory protected, but the protection function itself, stored in the same SRAM, is also self-protected. The system is therefore able to regenerate itself after an SEU or even an MCU, regardless of the SRAM area hit by the radiation.
Furthermore, this is achieved with simple codes such as parity and Hamming correction, minimizing the computational resources devoted to system supervision tasks. Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática. Committee: Luis Alfonso Entrena Arrontes (President), Pedro Reviriego Vasallo (Secretary), Mª Luisa López Vallejo (Member)
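
    The interleaving idea described above (the virtual distance between bits) can be sketched in a few lines of Python. The sketch below is illustrative only, assuming a plain Hamming(7,4) code and a simple column-wise memory layout rather than the thesis's actual HSB architecture: because physically adjacent cells belong to different logical words, a multi-bit upset that flips a burst of neighbouring cells corrupts at most one bit per codeword, which a single-error-correcting code can repair.

```python
def hamming74_encode(d):
    """Encode 4 data bits as the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def hamming74_correct(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]          # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]          # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]          # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3         # 1-indexed position of a single error
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def interleave(codewords):
    """Store bit b of every word before bit b+1 of any word, so neighbouring
    memory cells always belong to different logical codewords."""
    n_words, n_bits = len(codewords), len(codewords[0])
    return [codewords[w][b] for b in range(n_bits) for w in range(n_words)]

def deinterleave(memory, n_words, n_bits=7):
    return [[memory[b * n_words + w] for b in range(n_bits)] for w in range(n_words)]

# An MCU flipping three adjacent cells hits three different words, so each
# word still contains only a single error and is fully recovered.
words = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]]
memory = interleave([hamming74_encode(w) for w in words])
for cell in (5, 6, 7):                      # burst of adjacent upsets
    memory[cell] ^= 1
recovered = [hamming74_correct(c) for c in deinterleave(memory, len(words))]
assert recovered == words
```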

    Calcul sur architecture non fiable

    Although circuits could in theory be manufactured error-free using worst-case design methodologies, at a very high cost, they remain susceptible to transient faults caused by radiation, temperature sensitivity, and other effects. An error-resilient design, in contrast, relieves the manufacturing process from variability issues and thereby saves material cost. Since variability and transient upsets worsen as fabrication processes evolve and feature sizes shrink, the need for robust design is pressing. This thesis addresses the problem of designing on unreliable circuits, and its contributions are fourfold. First, a fault-tolerance method offering fast error correction with low-cost redundancy is presented. Second, a judicious two-dimensional criterion is introduced to estimate the reliability and the hardware efficiency of a circuit. Third, a general-purpose model provides low-redundancy error resilience for contemporary logic systems as well as future nanoelectronic architectures. Finally, a decoder resilient to its own internal transient faults is designed

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It presents the most prominent reliability concerns from today's point of view and recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, from the physical level all the way up to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; Describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; Explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems

    Conception, optimisation, et vérification formelle de techniques de tolérance aux fautes pour circuits

    Technology shrinking and voltage scaling increase the risk of fault occurrences in digital circuits. To address this challenge, engineers use fault-tolerance techniques to mask or, at least, to detect faults. These techniques are especially needed in safety-critical domains (e.g., aerospace, medical, nuclear), where ensuring circuit functionality and fault tolerance is crucial. However, the verification of functional and fault-tolerance properties is a complex problem that cannot be solved with simulation-based methodologies, due to the need to check a huge number of executions and fault-occurrence scenarios. Likewise, optimizing the overheads imposed by fault-tolerance techniques requires proof that the circuit keeps its fault-tolerance properties after the optimization. In this work, we propose a verification-based optimization of existing fault-tolerance techniques, as well as the design of new techniques and their formal verification using theorem proving. We first investigate how some majority voters can be removed from Triple Modular Redundant (TMR) circuits without violating their fault-tolerance properties. The developed methodology clarifies how to take into account the circuit's native error-masking capabilities, which may exist due to the structure of its combinational part or due to the way the circuit is used and communicates with the surrounding device. Second, we propose a family of time-redundant fault-tolerance techniques expressed as automatic circuit transformations. They require fewer hardware resources than TMR alternatives and can easily be integrated into EDA tools. The transformations are based on the novel idea of dynamic time redundancy, which allows the redundancy level to be changed "on the fly" without interrupting the computation. Time redundancy can therefore be enabled only in critical situations (e.g., above the Earth's poles, where the radiation level is increased), during the processing of crucial data (e.g., the encryption of selected data), or during critical processes (e.g., a satellite computer reboot). Third, by merging dynamic time redundancy with a micro-checkpointing mechanism, we have created a double-time-redundancy transformation capable of masking transient faults. Our technique makes the recovery procedure transparent, and the circuit's input/output behavior remains unchanged even under faults. Due to the complexity of this method and the need to provide full assurance of its fault-tolerance capabilities, we have formally certified the technique using the Coq proof assistant. The developed proof methodology can be applied to certify other fault-tolerance techniques implemented through circuit transformations at the netlist level
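
    The contrast between the two redundancy styles described above can be illustrated behaviourally. The following sketch is only an illustration at the level of pure Python functions rather than netlist transformations, and the function names and retry policy are assumptions, not the thesis's actual constructions: TMR pays for three hardware copies plus a voter, while double-time redundancy runs the same computation twice and, on a mismatch, rolls back and re-executes instead of adding a third copy.

```python
def majority3(a, b, c):
    """Bit-wise majority vote of three integer-encoded outputs (the TMR voter)."""
    return (a & b) | (a & c) | (b & c)

def run_tmr(compute, x):
    """Triple Modular Redundancy: three redundant executions, one vote."""
    return majority3(compute(x), compute(x), compute(x))

def run_double_time(compute, x, max_retries=3):
    """Double-time redundancy: execute twice on the same input; a mismatch
    signals a transient fault, so re-execute (the hardware version instead
    rolls back to a micro-checkpoint) rather than paying for a third copy."""
    for _ in range(max_retries):
        first, second = compute(x), compute(x)
        if first == second:
            return first
    raise RuntimeError("mismatch persists: the fault does not look transient")
```

    The thesis's dynamic scheme changes the time-redundancy level on the fly; in this sketch that simply corresponds to the caller deciding, per input, how many executions to run and compare.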

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault-isolation scheme is employed which neither requires test vectors nor suspends computational throughput, but instead observes the value of a health metric based on runtime inputs. The deterministic flow of the fault-isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by the hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria
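
    The priority-escalation idea can be sketched at a high level. The sketch below is a loose illustration under assumed inputs, not the actual PURE algorithm: a runtime health metric is observed per reconfigurable region, degraded regions are demoted, and the healthiest remaining regions are escalated to the highest-priority signal processing functions (the ME, DCT, and FIR names are reused from the abstract purely as example labels).

```python
def reassign_regions(functions, regions, health, threshold=0.9):
    """Give the healthiest reconfigurable regions to the most critical functions.

    functions: list of (name, priority) pairs, higher priority = more critical
    regions:   list of region identifiers
    health:    dict mapping region -> health metric in [0, 1] observed at runtime
    """
    healthy = [r for r in regions if health[r] >= threshold]
    degraded = [r for r in regions if health[r] < threshold]
    # Healthiest first; degraded regions are only handed out once nothing
    # better is left (they get the lowest-priority work or stay idle).
    pool = sorted(healthy, key=lambda r: health[r], reverse=True) + degraded
    placement = {}
    for name, _priority in sorted(functions, key=lambda f: f[1], reverse=True):
        if pool:
            placement[name] = pool.pop(0)
    return placement

# Example: the ME engine outranks the DCT core, which outranks the FIR filter.
print(reassign_regions(
    functions=[("ME", 3), ("DCT", 2), ("FIR", 1)],
    regions=["R0", "R1", "R2"],
    health={"R0": 0.55, "R1": 0.99, "R2": 0.95},
))  # {'ME': 'R1', 'DCT': 'R2', 'FIR': 'R0'}
```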

    Self-healing concepts involving fine-grained redundancy for electronic systems

    The start of the digital revolution came with the metal-oxide-semiconductor field-effect transistor (MOSFET) in 1959, followed by massive integration onto silicon dies by means of constant down-scaling of individual components. Digital systems for certain applications require fault tolerance against faults caused by temporary or permanent effects. The most widely used technique is triple modular redundancy (TMR) in conjunction with a majority voter, which is regarded as a passive fault mitigation strategy. Design by functional resilience has been applied to circuit structures for increased fault tolerance and towards self-diagnostics that trigger self-healing. The focus of this thesis is therefore to develop new design strategies for fault detection and mitigation at the transistor, gate, and cell design levels. The research described in this thesis makes three contributions. The first contribution is based on adding fine-grained transistor-level redundancy to logic gates in order to accomplish stuck-at fault tolerance. The objective is to realise maximum fault masking for a logic gate with a minimum of added redundant transistors. In the case of non-maskable stuck-at faults, the gate structure generates an intrinsic indication signal that is suitable for autonomous self-healing functions. As a result, logic circuitry utilising this design is able to differentiate between gate faults and faults occurring in inter-gate connections. This distinction between fault types can then be used to trigger selective self-healing responses. The second contribution is a logic matrix element which applies the three core redundancy concepts of spatial, temporal, and data redundancy. This logic structure is composed of quad-modular redundant structures and is capable of selective fault masking and localisation depending on the fault type at the cell level; it is referred to as a spatiotemporal quadded logic cell (QLC) structure. This QLC structure has the capability of cellular self-healing. Through the combination of fault-tolerance and masking logic features, the QLC achieves fault behaviour equal to existing quadded logic designs while using only 33.3% of the equivalent transistor resources. The inherent self-diagnosing feature of the QLC is capable of identifying individual faulty cells and can trigger self-healing features. The final contribution focuses on the conversion of finite state machines (FSMs) into memory to achieve better state-transition timing, minimal memory utilisation, and fault protection compared to common FSM designs. A novel implementation based on content-addressable memory (CAM) is used to achieve this. The FSM is further enhanced by building it from the logic gates of the first contribution, thereby achieving stuck-at fault resilience. By applying cross-data parity checking, the FSM gains single-bit fault detection and correction
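
    The final contribution's FSM-in-memory idea can be illustrated with a small sketch. This is a behavioural illustration under simple assumptions, not the thesis's hardware design: the transition table is held as a content-addressable map from (state, input) to next state, and each stored entry carries a parity bit so that a single flipped bit in an entry is detected (the thesis additionally corrects errors via cross-data parity, which this sketch omits).

```python
def parity(value):
    """Even parity over the bits of a non-negative integer."""
    return bin(value).count("1") & 1

class CamFsm:
    """FSM whose transition table lives in a content-addressable store,
    with one parity bit per stored entry for single-bit error detection."""

    def __init__(self, transitions, start):
        # transitions: dict mapping (state, symbol) -> next_state (small ints)
        self.table = {key: (nxt, parity(nxt)) for key, nxt in transitions.items()}
        self.state = start

    def step(self, symbol):
        nxt, stored_parity = self.table[(self.state, symbol)]
        if parity(nxt) != stored_parity:
            raise RuntimeError("stored transition corrupted: parity mismatch")
        self.state = nxt
        return self.state

# A two-state toggle machine: input 1 flips the state, input 0 keeps it.
fsm = CamFsm({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}, start=0)
assert [fsm.step(s) for s in (1, 1, 0, 1)] == [1, 0, 0, 1]
```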

    Flexible error handling for embedded real time systems

    Due to advances in semiconductor fabrication that lead to shrinking geometries and lowered supply voltages, transient fault rates will increase significantly for future semiconductor generations [Int13]. To cope with transient faults, error detection and correction is mandatory; however, additional resources are required for its implementation. This is a serious problem in embedded systems development, since embedded systems possess only a limited number of resources, such as processing time, memory, and energy. To address this problem, a software-based flexible error handling approach is proposed in this dissertation. The goal of flexible error handling is to decide if, how, and when errors have to be corrected. By applying this approach, deadline misses are reduced by up to 97% for the considered video decoding benchmark. Furthermore, it is shown that the approach is able to cope with very high error rates of nearly 50 errors per second
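
    The "if, how, and when" decision at the heart of the approach can be sketched as a simple policy. The sketch is purely illustrative and its inputs are assumptions (the dissertation derives the error classification and timing budgets from its own analyses): an error that barely affects the output is ignored, an important error is corrected only if the remaining slack before the deadline can absorb the correction cost, and otherwise a cheaper degraded repair is attempted.

```python
def handle_error(error_is_critical, correction_cost_ms, slack_ms, cheap_cost_ms):
    """Decide whether and how to correct a detected transient error under a
    real-time deadline (illustrative policy, not the dissertation's exact one)."""
    if not error_is_critical:
        return "ignore"            # e.g. a slightly wrong pixel in a decoded frame
    if correction_cost_ms <= slack_ms:
        return "full_correction"   # enough slack remains before the deadline
    if cheap_cost_ms <= slack_ms:
        return "degraded_repair"   # approximate fix that still meets the deadline
    return "drop"                  # skip the work rather than miss the deadline

# Example: an important error, 4 ms to fully correct, only 3 ms of slack left.
print(handle_error(True, correction_cost_ms=4.0, slack_ms=3.0, cheap_cost_ms=1.5))
# -> 'degraded_repair'
```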

    Minimization of Vote Operations for Soft Error Detection in DMR Design with Error Correction by Operation Re-Execution

    No full text

    Computational imaging and automated identification for aqueous environments

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2011. Sampling the vast volumes of the ocean requires tools capable of observing from a distance while retaining the detail necessary for biology and ecology, a task well suited to optical methods. Algorithms that work with existing SeaBED AUV imagery are developed, including habitat classification with bag-of-words models and multi-stage boosting for rockfish detection. Methods for extracting images of fish from videos of longline operations are demonstrated. A prototype digital holographic imaging device is designed and tested for quantitative in situ microscale imaging. Theory to support the device is developed, including particle noise and the effects of motion. A Wigner-domain model provides optimal settings and optical limits for spherical and planar holographic references. Algorithms to extract the information from real-world digital holograms are created. Focus metrics are discussed, including a novel focus detector using local Zernike moments. Two methods for estimating lateral positions of objects in holograms without reconstruction are presented, by extending a summation kernel to spherical references and by using a local frequency signature from a Riesz transform. A new metric for quickly estimating object depths without reconstruction is proposed and tested. An example application, quantifying oil droplet size distributions in an underwater plume, demonstrates the efficacy of the prototype and algorithms. Funding was provided by NOAA Grant #5710002014, NOAA NMFS Grant #NA17RJ1223, NSF Grant #OCE-0925284, and NOAA Grant #NA10OAR417008
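
    As one illustration of the kind of processing involved, the sketch below reconstructs an in-line hologram over a sweep of candidate depths and keeps the sharpest plane. It is a generic depth-by-focus example under assumed parameters, using angular-spectrum propagation and a simple gradient-variance sharpness score as a stand-in; it is not the thesis's local-Zernike-moment detector or its reconstruction-free depth metric.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (angular spectrum method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(0.0, 1.0 / wavelength**2 - FX**2 - FY**2)  # drop evanescent waves
    H = np.exp(2j * np.pi * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def sharpness(intensity):
    """Focus score: variance of the intensity gradient magnitude."""
    gy, gx = np.gradient(intensity)
    return float(np.var(np.hypot(gx, gy)))

def estimate_depth(hologram, wavelength, dx, z_candidates):
    """Reconstruct at each candidate depth and return the depth of best focus."""
    scores = [sharpness(np.abs(angular_spectrum(hologram, wavelength, dx, z)) ** 2)
              for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]

# Example with assumed parameters: 658 nm laser, 9 um pixels, 256x256 hologram.
hologram = np.random.rand(256, 256)          # placeholder for a recorded hologram
depths = np.linspace(0.01, 0.10, 19)         # candidate depths, 10 mm to 100 mm
print(estimate_depth(hologram, wavelength=658e-9, dx=9e-6, z_candidates=depths))
```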

    Probing The Cosmic Microwave Background Radiation With Actpol: A Millimeter-Wavelength, Polarization-Sensitive Receiver For The Atacama Cosmology Telescope

    In this dissertation manuscript, we document the design, development, characterization, and scientific application of next-generation millimeter-wavelength imaging technologies, conducted under the auspices of a NASA Space Technology Research Fellowship (NSTRF-11) grant, based at the University of Pennsylvania and completed under the advisement of Professor Mark J. Devlin. NASA's Science Mission Directorate, supported by recommendations from the National Research Council (NRC) Decadal Survey, has placed the development of mission-enabling technologies for future-generation, seminal orbital platforms probing the nature of the early universe and cosmic inflation at the forefront of directives in space technology research. To work toward the rapid achievement of these directives, we highlight considerations for the design and integration of ACTPol, a new receiver for the Atacama Cosmology Telescope (ACT), capable of making millimeter-wavelength, polarization-sensitive observations of the Cosmic Microwave Background (CMB) at arcminute angular scales. ACT is a six-meter telescope located in northern Chile, dedicated to enhancing our understanding of the structure and evolution of the early universe by direct measurement of the CMB. We will focus first on the manner by which the integrated millimeter-wavelength imaging technologies of ACTPol, together with critical upgrades deployed to the ACT telescope superstructure and site, will enable the instrument to address a myriad of high-priority topics in experimental cosmology. We will then consider the design of the first ACTPol 150 GHz detector array package, which, along with a second 150 GHz array package, a multichroic array package with simultaneous 90 GHz and 150 GHz sensitivity, and associated optomechanical subsystems, comprises the ACTPol focal plane and, ultimately, the receiver. Each of these detector array packages resides behind a set of normal-incidence, high-purity silicon reimaging optics with a novel anti-reflective coating geometry, the development flow of which will be detailed. As a root design system, the 150 GHz polarimeter array package consists of 1044 transition-edge sensor (TES) bolometers used to measure the response of 522 feedhorn-coupled polarimeters, which enable characterization of the linear orthogonal polarization of incident CMB radiation. The polarimeters are arranged in three hexagonal and three semi-hexagonal silicon wafer stacks, mechanically coupled to an octakaidecagonal, monolithic corrugated silicon feedhorn array (~140 mm diameter). Readout of the TES polarimeters is achieved using time-division SQUID multiplexing (TDM). The three polarimeter array packages comprising the ACTPol focal plane, together with the associated optical and cryomechanical elements of the fully integrated ACTPol receiver, are cooled via a custom-designed, field-deployable dilution refrigerator (DR) providing a 100 mK bath temperature to the detectors, which have a target Tc of 150 mK. Given the unique cryomechanical constraints associated with this large-scale monolithic superconducting focal plane, we address the design considerations necessary for integration with the optical and cryogenic elements of the ACTPol receiver. The ACTPol receiver deployment and early operational protocol development are highlighted, and receiver laboratory and on-site operational optical, cryogenic, and detector performance characterization is considered.
ACTPol early scientific operational results are then detailed, including CMB polarization measurements between multipoles ℓ = 200 and ℓ = 9000, measurements of galaxy clusters via the Sunyaev-Zel'dovich effect, and the first set of maps and associated analysis of ACTPol first-season galactic field observations. Consideration is also given to work underway toward the realization of Advanced ACTPol, a next-generation receiver for ACT, with the enhanced capability to measure large-angular-scale regimes to probe inflationary cosmology. Finally, an outlook is provided in which lessons learned from the development of a distributed portfolio of ACTPol and Advanced ACTPol receiver technologies will impact the realization of both near-term global-scale development efforts in experimental cosmology, including the CMB Stage-IV program, and, ultimately, a future-generation seminal CMB inflationary probe satellite platform