    Using Fine Grain Approaches for highly reliable Design of FPGA-based Systems in Space

    Nowadays, the use of SRAM-based FPGAs in space missions is increasingly considered because of their flexibility and reprogrammability. A key challenge is the devices' sensitivity to radiation effects, which has increased in modern architectures due to smaller CMOS structures. This work proposes fault-tolerance methodologies based on a fine-grained view of modern reconfigurable architectures. The focus is on mitigating SEUs in SRAM-based FPGAs, which can otherwise lead to critical failures.

    Characterization of Interconnection Delays in FPGAs Due to Single Event Upsets and Mitigation

    The unrelenting demand for electronic components of ever-diminishing feature size has raised new challenges over the years. Among them, more advanced memory and microprocessor semiconductors are being used in avionic systems, where they exhibit substantial susceptibility to cosmic-radiation phenomena. One of the main implications of cosmic rays, first observed in orbiting satellites, is the single-event effect (SEE). Atmospheric radiation raises several concerns regarding the safety and reliability of avionics equipment, particularly for systems that involve field-programmable gate arrays (FPGAs). SRAM-based FPGAs, an attractive solution for implementing complex systems in the aeronautic sector, are very susceptible to SEEs, in particular the single-event upset (SEU). An SEU is the change of state of a bistable element (i.e., a bit-flip) caused by an energetic ion, proton, or neutron. This effect is non-destructive and can be fixed by rewriting the affected part of the SRAM.
    Potential delay changes (DCs) induced by SEUs affecting the routing configuration memory have recently been confirmed, but their characterization has not been addressed thus far in the literature. DCs induced by SEUs can affect the functionality of logic circuits by disturbing the timing of critical paths. One objective of this thesis is therefore to characterize DCs in SRAM-based FPGAs caused by transient ionizing radiation. The experimentally observed DCs are presented, and circuit-level models of those DCs are proposed. The circuits involved in delay propagation are reverse-engineered by precisely modeling internal blocks inside the FPGA and running simulations. The results show the root causes of the DCs and are in good agreement with experimental delay measurements. The proposed circuit-level models are, to the best of our knowledge, the first work to confirm and explain combinational delay changes in FPGAs. In the second part of this thesis, the design of a delay-monitor circuit for DC detection is investigated. This monitor detects delay changes on critical sections of a circuit and made it possible to observe, experimentally, cumulative DCs on FPGA interconnects. On this basis, a mitigation technique for DCs that avoids triple modular redundancy (TMR) is proposed, minimizing system downtime. A method is also proposed to decrease the clock frequency after DC detection without interrupting operation.
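    The abstract stops short of implementation detail; as a rough, hypothetical illustration of the guard-band reasoning behind such a delay monitor (all names and numbers below are invented, not the author's circuit), a minimal Python sketch:

```python
# Hypothetical sketch of the guard-band idea behind a delay monitor: flag a
# delay change (DC) when a critical path eats into the timing margin, then
# fall back to a slower clock instead of relying on TMR. Illustrative only.

def dc_detected(measured_delay_ns: float, clock_period_ns: float,
                guard_ns: float = 1.0) -> bool:
    """A DC is flagged once the path delay enters the guard band."""
    return measured_delay_ns > clock_period_ns - guard_ns

def next_clock_period(current_ns: float, available=(10.0, 20.0)) -> float:
    """Switch to the next slower available clock without stopping."""
    slower = [p for p in available if p > current_ns]
    return min(slower) if slower else current_ns

# Example: an SEU in the routing configuration adds 3 ns to an 8 ns path.
period = 10.0            # assumed 100 MHz system clock
delay = 8.0 + 3.0        # assumed post-upset path delay
if dc_detected(delay, period):
    period = next_clock_period(period)   # degrade to 50 MHz, keep running
assert period == 20.0
```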

    Development and Qualification of an FPGA-Based Multi-Processor System-on-Chip On-Board Computer for LEO Satellites

    Doctoral dissertation, Kyushu Institute of Technology (degree number: 工博甲第374号; awarded September 26, 2014). Chapter 1: Introduction || Chapter 2: Background and Literature Review || Chapter 3: Multi-Processor System-on-Chip On-Board Computer Design || Chapter 4: Space and Time Redundancy Trade-offs || Chapter 5: Radiation and Fault Injection Testing || Chapter 6: Thermal Vacuum Testing || Chapter 7: Results and Discussion || Chapter 8: Conclusion and Future Perspectives || References
    Developing small satellites for scientific and commercial purposes has grown rapidly over the last decade, and the future is expected to bring still more challenging services and designs to fulfill the growing need for space-based services. Nevertheless, a big challenge remains in developing cost-effective, highly efficient small satellites with acceptable reliability and power consumption adequate for the mission's capabilities. This challenge mandates the use of recent developments in digital design techniques and technologies to strike the required balance between four basic parameters: 1) cost, 2) performance, 3) reliability, and 4) power consumption. This balance becomes even more stringent and harder to reach as satellite mass decreases significantly, since mass reduction puts strict constraints on the power system in terms of solar panels and batteries. That creates the need to miniaturize the design of the subsystems as much as possible, which can be viewed as the fifth parameter in the design-balance dilemma. At the Kyushu Institute of Technology, Japan, we are investigating the use of SRAM-based Field Programmable Gate Arrays (FPGAs) in building 1) high-performance, 2) low-cost, 3) moderate-power-consumption, and 4) highly reliable Multi-Processor System-on-Chip (MPSoC) On-Board Computers (OBCs) for future space missions and applications. This research investigates how commercial-grade SRAM-based FPGAs would perform in space and how to mitigate the effects of the space environment on them. Our methodology was to follow a formal design procedure for the OBC according to the space-environment requirements and then qualify the design through extensive testing. We developed the MPSoC OBC with 4 complete embedded processor systems. Inter-Processor Communication (IPC) takes place through hardware First-In-First-Out (FIFO) mailboxes. One processor acts as the system master controller, which monitors operation and controls the reset and restoration of the system in case of faults; the other three processors form a Triple Modular Redundancy (TMR) fault-tolerance architecture with each other. We used Dynamic Partial Reconfiguration (DPR) to scrub the configuration memory frames and correct any faults. The system is implemented using a commercial-grade Virtex-5 LX50 FPGA from Xilinx. The research also qualifies the design under ground-simulated space-environment conditions. We tested the implemented MPSoC OBC in Thermal Vacuum Chambers (TVC) at the Center of Nano-Satellite Testing (CeNT) at the Kyushu Institute of Technology. We also irradiated the design with an accelerated proton beam at 65 MeV, with fluxes of 10e06 and 3e06 particles/cm²/s, at the Takasaki Advanced Radiation Research Institute (TARRI). The TVC test results showed that the FPGA design exceeded the limits of normal operation for the commercial-grade package at about 105 °C.
    Therefore, we mitigated the package using: 1) a heat sink, 2) dynamic temperature management through operating-frequency reduction from 100 MHz to 50 MHz, and 3) reconfiguration to reduce the number of working processors from 4 to 2 by replacing the space-redundancy TMR with time-redundancy TMR during the sunlit portion of the orbit. The mitigation proved efficient, reducing the temperature from 105 °C to about 66 °C when the heat sink, frequency reduction, and reconfiguration techniques were used together. The radiation and fault-injection tests showed that mitigating the FPGA configuration frames through scrubbing is efficient when Single-Bit Upsets (SBUs) are recorded. Multiple-Bit Upsets (MBUs) are not well mitigated by scrubbing with the Single Error Correction, Double Error Detection (SECDED) technique, and the FPGA needs to be fully reset and reloaded when MBUs are detected in its configuration frames. However, since MBUs occur far more rarely in space than SBUs, we consider SECDED scrubbing very efficient in decreasing the soft-error rate and increasing the likelihood of an error-free bitstream. The reliability was shown to be 0.9999 with continuous scrubbing at a period of 7.1 ms between complete scans of the FPGA bitstream. In the proton radiation tests we developed a new technique to estimate the static cross-section using internal scrubbing only, without an external monitoring, control, and scrubbing device. Fault injection was used to estimate the dynamic cross-section as a cost-effective alternative to estimating it through radiation testing. The research proved through detailed testing that the 65 nm commercial-grade SRAM-based FPGA can be used in future space missions. The MPSoC OBC design achieved an adequate balance between performance, power, mass, and reliability requirements. Extensive testing and carefully crafted mitigation techniques were the key to verifying and validating the MPSoC OBC design. In-orbit validation through a scientific demonstration mission would be the next step for future research.
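    As a loose sketch of the scrub-versus-reload policy this abstract describes (SECDED corrects SBUs in place during a scrub pass; an MBU forces a full reset and reload), the following Python fragment is illustrative only, with invented data structures:

```python
# Illustrative model (not the thesis code) of SECDED-based scrubbing over
# configuration frames: SBUs are corrected in place, MBUs trigger a reload.
from enum import Enum, auto

class FrameStatus(Enum):
    CLEAN = auto()
    SBU = auto()    # single-bit upset: correctable by SECDED
    MBU = auto()    # multi-bit upset: detectable but not correctable

def scrub_pass(frames):
    """One scan over all frames; returns True if a full reload is needed."""
    needs_reload = False
    for i, status in enumerate(frames):
        if status is FrameStatus.SBU:
            frames[i] = FrameStatus.CLEAN   # SECDED corrects in place
        elif status is FrameStatus.MBU:
            needs_reload = True             # SECDED can only detect this
    return needs_reload

frames = [FrameStatus.CLEAN, FrameStatus.SBU, FrameStatus.CLEAN]
assert scrub_pass(frames) is False          # SBU silently corrected
frames[1] = FrameStatus.MBU
assert scrub_pass(frames) is True           # full reset and reload required
```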

    Dynamic partial reconfiguration management for high performance and reliability in FPGAs

    Modern Field-Programmable Gate Arrays (FPGAs) are no longer used to implement small “glue logic” circuitries. The high density of reconfigurable logic resources in today’s FPGAs enables the implementation of large systems in a single chip. FPGAs are highly flexible devices; their functionality can be altered by simply loading a new binary file into their configuration memory. While the flexibility of FPGAs is comparable to that of General-Purpose Processors (GPPs), in the sense that different functions can be performed using the same hardware, the performance gain that can be achieved using FPGAs can be orders of magnitude higher, as FPGAs offer the ability to customise parallel computational architectures. Dynamic Partial Reconfiguration (DPR) allows for changing the functionality of certain blocks on the chip while the rest of the FPGA is operational. DPR has sparked the interest of researchers in exploring new computational platforms where computational tasks are off-loaded from a main CPU and executed using dedicated reconfigurable hardware accelerators configured on demand at run-time. By having a battery of custom accelerators which can be swapped in and out of the FPGA at runtime, a higher computational density can be achieved compared to static systems where the accelerators are bound to fixed locations within the chip. Furthermore, the ability to relocate these accelerators across several locations on the chip allows for the implementation of adaptive systems which can mitigate emerging faults in the FPGA chip when operating in harsh environments. By porting the appropriate fault-mitigation techniques to such computational platforms, the advantages of FPGAs can be harnessed in applications in space and military electronics, where FPGAs are usually seen as unreliable devices due to their sensitivity to radiation and extreme environmental conditions. In light of the above, this thesis investigates the deployment of DPR as: 1) a method for enhancing performance by efficient exploitation of the FPGA resources, and 2) a method for enhancing the reliability of systems intended to operate in harsh environments. Achieving optimal performance in such systems requires an efficient internal configuration management system to manage the reconfiguration and execution of the reconfigurable modules in the FPGA. In addition, the system needs to support “fault-resilience” features by integrating parameterisable fault detection and recovery capabilities to meet the reliability standards of fault-tolerant applications. This thesis addresses all the design and implementation aspects of an Internal Configuration Manager (ICM) which supports a novel bitstream relocation model to enable the placement of relocatable accelerators across several locations on the FPGA chip. In addition to supporting all the configuration capabilities required to implement a Reconfigurable Operating System (ROS), the proposed ICM also supports a novel multiple-clone configuration technique which allows for cloning several instances of the same hardware accelerator at the same time, resulting in much shorter configuration time compared to traditional configuration techniques. A fault-tolerant (FT) version of the proposed ICM which supports a comprehensive fault-recovery scheme is also introduced in this thesis. The proposed FT-ICM is designed with a much smaller area footprint compared to Triple Modular Redundancy (TMR) hardening techniques while keeping a comparable level of fault-resilience.
    The capabilities of the proposed ICM system are demonstrated with two novel applications. The first application demonstrates a proof-of-concept reliable FPGA server solution used for executing encryption/decryption queries. The proposed server deploys bitstream relocation and modular redundancy to mitigate both permanent and transient faults in the device. It also deploys a novel Built-In Self-Test (BIST) diagnosis scheme, specifically designed to detect emerging permanent faults in the system at run-time. The second application is a data-mining application where DPR is used to increase the computational density of a system that implements the Frequent Itemset Mining (FIM) problem.
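    A hedged sketch of the multiple-clone idea — one relocatable partial bitstream instantiated into several identical-footprint regions in a single pass — might look as follows; the relocation step and all data structures below are invented stand-ins, not the ICM's actual API:

```python
# Toy model of multiple-clone configuration: patch one relocatable partial
# bitstream for each compatible region instead of re-fetching it per clone.
# The "address patch" is a placeholder for real frame-address rewriting.

def relocate(bitstream: bytes, region_base: int) -> bytes:
    """Stand-in for retargeting a partial bitstream at another region
    with an identical resource footprint."""
    header = region_base.to_bytes(4, "big")   # toy frame-address patch
    return header + bitstream[4:]

def configure_clones(bitstream: bytes, regions: list[int]) -> list[bytes]:
    """Clone the same accelerator into every listed region."""
    return [relocate(bitstream, base) for base in regions]

accelerator = bytes(64)   # placeholder partial bitstream
clones = configure_clones(accelerator, regions=[0x0040, 0x0080, 0x00C0])
assert len(clones) == 3   # three instances from one stored bitstream
```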

    Dynamic reconfiguration frameworks for high-performance reliable real-time reconfigurable computing

    The sheer hardware-based computational performance and programming flexibility offered by reconfigurable hardware like Field-Programmable Gate Arrays (FPGAs) make them attractive for applications that require high performance, availability, reliability, real-time processing, and high efficiency. Fueled by fabrication-process scaling, modern reconfigurable devices come with ever greater quantities of on-chip resources, allowing a more complex variety of applications to be developed. The trend is now that technology giants like Microsoft, Amazon, and Baidu embrace reconfigurable computing devices like FPGAs to meet their critical computing needs. In addition, the capability to autonomously reprogramme these devices in the field is being exploited for reliability in application domains like aerospace, defence, military, and nuclear power stations. In such applications, real-time computing is important and is often a necessity for reliability. As such, applications and algorithms resident on these devices must be implemented with sufficient consideration for real-time processing and reliability. Often, to manage a reconfigurable hardware device as a computing platform for a multiplicity of homogeneous and heterogeneous tasks, reconfigurable operating systems (ROSes) have been proposed to give a software look to hardware-based computation. The key requirements of a ROS include partitioning, task scheduling and allocation, task configuration or loading, and inter-task communication and synchronization. Existing ROSes have met these requirements to varying extents. However, they are limited in reliability, especially regarding the flexibility of placing the hardware circuits of tasks on the device’s chip area, a problem arising largely from the partitioning approaches used. Indeed, this problem is deeply rooted in the static nature of on-chip inter-communication among tasks, which hampers the flexibility of runtime task relocation for reliability. This thesis proposes enabling frameworks for reliable, available, real-time, efficient, secure, and high-performance reconfigurable computing by providing techniques and mechanisms for reliable runtime reconfiguration and for dynamic inter-circuit communication and synchronization on reconfigurable hardware. This work provides task-configuration infrastructures for reliable reconfigurable computing. Key features, especially reliability-enabling functionalities which have been given little or no attention in the state of the art, are implemented. These features include internal register read and write for device diagnosis, a configuration-operation abort mechanism, and tightly integrated selective-area scanning, which aims to optimize access to the device’s reconfiguration port for both task loading and error mitigation. In addition, this thesis proposes a novel reliability-aware inter-task communication framework that exploits the availability of dedicated clocking infrastructures in a typical FPGA to provide inter-task communication and synchronization. The clock buffers and networks of an FPGA use dedicated routing resources, distinct from the general routing resources. As such, deploying these dedicated resources for communication sidesteps the restriction of static routes and allows a better relocation of circuits for reliability purposes.
    For evaluation, a case study that uses a NASA/JPL spectrometer data-processing application is employed to demonstrate the improved reliability brought about by the implemented configuration controller and the reliability-aware dynamic communication infrastructure. It is observed that up to 74% time saving can be achieved for selective-area error mitigation when compared to state-of-the-art vendor implementations. Moreover, an improvement in overall system reliability is observed when the proposed dynamic communication scheme is deployed in the data-processing application. Finally, one area of reconfigurable computing that has received insufficient attention is security. Considering the nature of the applications which now turn to reconfigurable computing to accelerate compute-intensive processes, a high premium is placed on security, not only of the device but also of the applications, from loading to runtime execution. To address these concerns, a novel secure and efficient task-configuration technique for task relocation is also investigated, providing configuration-time savings of up to 32% or 83%, depending on the device, and resource-usage savings in excess of 90% compared to the state of the art.
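    The time saving from selective-area scanning follows from processing only the frames of the affected region rather than the whole device. A back-of-the-envelope Python sketch with invented frame counts (the thesis's own figure, up to 74%, comes from its actual device and vendor baseline):

```python
# Rough arithmetic behind selective-area error mitigation: scan time scales
# with the number of configuration frames touched. Numbers are illustrative.

FRAMES_FULL_DEVICE = 20_000   # assumed total configuration frames
FRAMES_REGION = 1_200         # assumed frames in one reconfigurable region
FRAME_TIME_US = 5.0           # assumed time to read/check/repair one frame

full_scan_ms = FRAMES_FULL_DEVICE * FRAME_TIME_US / 1000
selective_ms = FRAMES_REGION * FRAME_TIME_US / 1000
saving = 1 - selective_ms / full_scan_ms
print(f"selective-area mitigation saves {saving:.0%} of scan time")
# With these toy numbers the saving is 94%; real savings depend on region
# size, port bandwidth, and controller overhead.
```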

    Towards the development of flexible, reliable, reconfigurable, and high-performance imaging systems

    Current FPGAs can implement large systems because of the high density of reconfigurable logic resources in a single chip. FPGAs combine flexibility and high performance in the same platform, compared to other platforms such as General-Purpose Processors (GPPs) and Application-Specific Integrated Circuits (ASICs). The flexibility of modern FPGAs is further enhanced by the Dynamic Partial Reconfiguration (DPR) feature, which allows the functionality of part of the system to be changed while other parts keep operating. Because of these features, FPGAs have become an important platform for digital image processing: they fulfil the need for efficient, flexible platforms that execute imaging tasks reliably with low power, high performance, and high flexibility. The use of FPGAs as accelerators for image processing outperforms most current solutions. Current FPGA solutions can load the parts of an imaging application that need high computational power onto dedicated reconfigurable hardware accelerators while the other parts run on a traditional platform, increasing system performance. Moreover, the DPR feature enhances the flexibility of image processing further by swapping accelerators in and out at run-time, and the use of fault-mitigation techniques enables imaging applications to operate in harsh environments even though FPGAs are sensitive to radiation and extreme conditions. The aim of this thesis is to present a platform for efficient implementations of imaging tasks. The research uses FPGAs as the key component of this platform and uses the concept of DPR to increase performance and flexibility, to reduce power dissipation, and to expand the range of possible imaging applications. In this context, it proposes the use of FPGAs to accelerate the Image Processing Pipeline (IPP) stages, the core part of most imaging devices. The thesis presents a number of novel concepts. The first is the use of the FPGA hardware environment and the DPR feature to increase parallelism and achieve high flexibility, while also increasing performance and reducing power consumption and area utilisation. Based on this concept, the following implementations are presented in this thesis: an implementation of the Adams-Hamilton demosaicing algorithm for camera colour interpolation, which exploits FPGA parallelism to outperform equivalent implementations, and an implementation of Automatic White Balance (AWB), another IPP stage, which employs the DPR feature to demonstrate the aforementioned novelty aspects. Another novel concept, presented in chapter 6, uses the DPR feature to develop a flexible imaging system that requires less logic and can be implemented in small FPGAs; the system can be employed as a template for any imaging application. Moreover, this thesis discusses a novel reliable version of the imaging system that adopts techniques including scrubbing, Built-In Self-Test (BIST), and Triple Modular Redundancy (TMR) to detect and correct errors using the Internal Configuration Access Port (ICAP) primitive. These techniques exploit the datapath-based nature of the implemented imaging system to improve the system's overall reliability.
    The thesis presents a proposal for integrating the imaging system with the Robust Reliable Reconfigurable Real-Time Heterogeneous Operating System (R4THOS) to get the best out of the system. The proposal shows the suitability of the proposed DPR imaging system for use as part of the core system of autonomous cars because of its flexibility. These novel works are presented in a number of publications, as listed in section 1.3 of this thesis.
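    As an illustrative aside (not the thesis design), the error-masking behaviour that TMR lends to a datapath stage such as an IPP block can be sketched in a few lines of Python; the stage function and pixel values are invented:

```python
# Toy model of TMR on a datapath stage: three replicas process each pixel
# and a bitwise majority voter masks a single faulty replica's output.

def vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three replica outputs."""
    return (a & b) | (a & c) | (b & c)

def tmr_stage(stage, pixel: int) -> int:
    """Run one datapath stage three times and vote on the results."""
    return vote(stage(pixel), stage(pixel), stage(pixel))

gamma = lambda p: min(255, int((p / 255) ** 0.5 * 255))  # toy IPP stage
assert tmr_stage(gamma, 128) == gamma(128)   # fault-free case
assert vote(200, 200, 13) == 200             # one corrupted replica masked
```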

    Contributions to the fault tolerance of soft-core processors implemented in SRAM-based FPGA systems

    239 p.
    Thanks to advances in design and fabrication technologies, electronic circuits have reached high levels of integration. It is now possible to implement complete, complex systems within a single device, incorporating a wide variety of elements: processors, oscillators, phase-locked loops (PLLs), interfaces, ADC and DAC converters, memory modules, etc. This design concept is commonly known as a SoC (System-on-Chip). One of the platforms for implementing such systems that is gaining the most importance is the FPGA (Field Programmable Gate Array). Historically, the most common platform for hosting SoCs has been the ASIC (Application-Specific Integrated Circuit), owing to its low power consumption and high performance. However, the costly development and fabrication process of ASICs makes them profitable only for mass production. FPGAs, by contrast, being configurable devices, offer the possibility of implementing custom designs at a much lower cost. Moreover, continuous advances in FPGA technology are allowing FPGAs to compete with ASICs in terms of power consumption, level of integration, and efficiency. Certain FPGA technologies, such as SRAM and Flash, possess a characteristic that makes them especially interesting for many designs: reconfigurability. This capability, which can even be exercised autonomously, allows the implemented hardware design to be completely changed simply by loading a configuration file, called a bitstream, into the FPGA. Reconfiguration can even modify one part of the circuit configured in the FPGA fabric while the rest of the implemented circuit remains untouched. This is known as dynamic partial reconfiguration, and it allows a single chip to host numerous hardware designs that can be loaded on demand. Thanks to reconfigurability, FPGAs offer numerous advantages: design personalization, the ability to adapt at runtime to respond to changes or correct errors, obsolescence mitigation, differentiation, lower design costs, and reduced time-to-market. FPGA-based SoCs pave the way toward a new concept of hardware/software integration, allowing electronic-system designers to embed processors in their designs to benefit from their computing capacity. As a result, a significant portion of the electronics industry makes use of FPGA technology across a wide range of fields, for example consumer electronics and entertainment, medicine, and the space, avionics, automotive, and military industries. Existing FPGA technologies offer two ways of using embedded processors: hard-core and soft-core processors. Hard-core processors are discrete processors integrated into the same chip as the FPGA. They generally offer high operating frequencies and greater predictability in terms of performance and area usage, but their hardware design cannot be altered for customization. A soft-core processor, on the other hand, is a hardware description, in an HDL such as VHDL or Verilog, of a processor that can be synthesized and implemented on an FPGA.
    Soft-core processors are usually based on existing hardware designs and are compatible with their instruction sets; many are distributed as IP cores (Intellectual Property cores). IP cores offer pre-designed, tested soft-core processors that, depending on the case, may be commercial, free, or under other types of licenses. By their nature, soft-core processors can be customized for optimal adaptation to specific designs. They also make it possible to integrate as many processors into a design as desired, provided that sufficient logic resources are available. Another important advantage is that, thanks to dynamic partial reconfiguration, the processor can be added to the design only when needed, thereby saving logic resources and power. One of the biggest problems with devices based on SRAM or Flash technologies, as is the case for FPGAs, is that they are especially sensitive to the effects produced by energetic particles from cosmic radiation (such as protons, neutrons, alpha particles, and other heavy ions), known as SEEs (Single Event Effects). These effects can cause different kinds of failures in a system, from negligible faults to truly serious faults that compromise system functionality. Correct system operation becomes especially relevant for high-cost technologies or those in which human lives are at stake, for example in fields such as rail transport, automotive, avionics, and the aerospace industry. Depending on various factors, SEEs may cause transient operational faults, logic-state changes, or permanent damage to the device. A permanent physical fault is called a hard error, whereas a fault that affects the circuit only momentarily is called a soft error. Soft errors are the most frequent SEEs and affect commercial terrestrial applications as well as aeronautic and aerospace applications (with higher incidence in the latter). The exact contribution of this type of fault to the error rate depends on the specific design of each circuit, but it is generally assumed that around 90% of the error rate is due to faults in memory elements (latches, flip-flops, and memory cells). Soft errors can affect both the logic circuit and the bitstream loaded into the FPGA's configuration memory. Because of its large size, the configuration memory is the most likely to be affected by an SEE. The problems generated by these effects reaffirm the importance of the concept of fault tolerance. Fault tolerance is a property of digital systems that guarantees a certain quality of operation in the presence of faults: the system must withstand the effects of those faults and operate correctly at all times. Therefore, to achieve a robust design, it is necessary to guarantee the functionality of the circuits and to ensure the safety and dependability of critical applications that may be compromised by SEEs.
    When dealing with SEEs, one option is to use specific technologies focused on fault tolerance, such as antifuse FPGAs; another is to use commercial technology combined with fault-tolerance techniques. The latter option is gaining importance due to the lower price and higher performance of commercial FPGAs. Hardening techniques are generally applied during the design phase. A large number of techniques exist, and they can be combined with one another. The prevalent techniques rely on some form of redundancy, whether hardware, software, temporal, or information redundancy. Each type of technique has different advantages and drawbacks and targets different types of SEEs and their effects. Among redundancy techniques, hardware redundancy is the most widely used; it is based on replicating the module to be hardened, so that each replica is fed the same input and the outputs are compared to detect discrepancies. This redundancy can be implemented at different levels. In general terms, a higher level of hardware redundancy implies greater robustness, but it also increases resource usage. This increase in FPGA resource usage means fewer resources available for the design, higher power consumption, more elements susceptible to being affected by an SEE, and, generally, a reduction in the maximum frequency achievable by the design. For this reason, the most commonly used levels of hardware redundancy are dual, known as DMR (Dual Modular Redundancy), and triple, or TMR (Triple Modular Redundancy). DMR minimizes the number of redundant resources but cannot identify the failed module, since it can only detect that an error has occurred; it must therefore be combined with additional techniques. DMR applied to processors is called lockstep and is usually combined with checkpoint and rollback-recovery techniques. Checkpointing consists of periodically saving the context (the contents of registers and memories) at instants identified as correct. Once a fault has been detected and repaired, rollback recovery can then be used to load the last correct saved context. The disadvantages of these strategies are the time required by both techniques (checkpoint and rollback recovery) and the need for additional elements (such as auxiliary memories to store the context). TMR, on the other hand, can identify the failed module by majority voting: if, after comparing the three outputs, one of them presents a different state, the other two are assumed to be correct. This allows the system to continue operating correctly (as a DMR system) even when one of the modules is out of service. In any case, TMR only masks errors; it does not correct them. One of the most notable disadvantages of this technique is that it increases resource usage by more than 300%. It is also possible for the discrepant output to be the correct one (and the other two incorrect), although this case is quite improbable. One problem that has not been analyzed in depth in the literature is the synchronization of soft-core processors in TMR systems (or systems with a higher level of redundancy).
    The problem is that, if one of the processors is disabled after a fault and the system continues running with the remaining processors, then once the failed processor has been repaired it must synchronize its context to the new state of the system. A fairly common practice when implementing redundant systems is to combine them with the technique known as scrubbing. This technique, based on dynamic partial reconfiguration, consists of periodically overwriting the bitstream with an appropriately stored error-free copy. It makes it possible to correct the errors that are merely masked by hardening techniques such as hardware redundancy. This error-free copy usually omits the bitstream bits corresponding to user memory, updating only the bits related to the FPGA configuration; for this reason the technique is also known as configuration scrubbing. Throughout the consulted literature, a gap has been detected regarding techniques that propose scrubbing strategies for user memory. With the aim of proposing innovative alternatives in the field of fault tolerance for soft-core processors, this research work develops several techniques and design flows for handling user data through the bitstream, making it possible to autonomously read, write, or copy the contents of registers or of memories implemented in block RAMs. Likewise, a range of proposals has been developed both for lockstep strategies and for the synchronization of TMR systems, several of which make use of the techniques developed for handling user memory through the bitstream. These latter techniques share the goal of minimizing resource usage with respect to traditional strategies. Similarly, two additional alternatives based on those techniques are proposed: a scrubbing scheme for user memories, and a scheme for recovering information from memories implemented in block RAMs whose interfaces have been disabled by SEEs. All the proposals have been validated in hardware on a Xilinx FPGA, from the leading manufacturer of reconfigurable devices, providing results on the impact of the proposed techniques in terms of resource usage, power consumption, and maximum achievable frequencies.
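    As a rough illustration of the lockstep-with-checkpoint/rollback scheme summarized above (an invented Python model, not the thesis implementation), two cores execute the same steps, a mismatch flags an error, and the context saved at the last checkpoint is restored before re-executing:

```python
# Toy DMR lockstep model: compare contexts after each step; on mismatch,
# roll both cores back to the last known-good checkpoint and re-execute.
import copy

class Core:
    def __init__(self):
        self.regs = {"acc": 0, "pc": 0}   # minimal processor context
    def step(self, value, faulty=False):
        self.regs["acc"] += value + (1 if faulty else 0)  # fault = bit error
        self.regs["pc"] += 1

core_a, core_b = Core(), Core()
checkpoint = (copy.deepcopy(core_a.regs), copy.deepcopy(core_b.regs))

for i, value in enumerate([3, 5, 7]):
    core_a.step(value)
    core_b.step(value, faulty=(i == 1))        # inject an SEU on core B
    if core_a.regs != core_b.regs:             # lockstep comparison
        core_a.regs, core_b.regs = map(copy.deepcopy, checkpoint)
        core_a.step(value)                     # rollback recovery, then
        core_b.step(value)                     # re-execute the lost step
    else:
        checkpoint = (copy.deepcopy(core_a.regs),
                      copy.deepcopy(core_b.regs))   # new known-good context

assert core_a.regs == core_b.regs
```

    Note how DMR alone only detects the discrepancy; identifying which core failed is what the checkpoint/rollback combination (or a third replica, as in TMR) supplies.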