14 research outputs found

    Abstraction techniques for the analysis and mitigation of radiation-induced effects

    Get PDF
    The main objective of this thesis is to develop techniques that can be used to analyze and mitigate the effects of radiation-induced soft errors in industrial-scale integrated circuits. To achieve this goal, several methods have been developed based on analyzing the design at higher levels of abstraction. These techniques address both sequential and combinatorial SER.

    Fault-injection simulations remain the primary method for analyzing the effects of soft errors. In this thesis, techniques which significantly speed up fault-injection simulations are presented. Soft errors in flip-flops are typically mitigated by selectively replacing the most critical flip-flops with hardened implementations. Selecting an optimal set to harden is a compute-intensive problem, and the second contribution consists of a clustering technique which significantly reduces the number of fault injections required to perform selective mitigation.

    In terrestrial applications, the effect of soft errors in combinatorial logic has been fairly small. It is known that this effect is growing, yet there exist few techniques which can quickly estimate the extent of combinatorial SER for an entire integrated circuit. The third contribution of this thesis is a hierarchical approach to combinatorial soft error analysis.

    Systems-on-chip are often developed by re-using design blocks that come from multiple sources. In this context, there is a need to develop and exchange reliability models. The final contribution of this thesis consists of an application-specific modeling language called RIIF (Reliability Information Interchange Format). This language is able to model how faults at the gate level propagate up to the block and chip level. Work is underway to standardize the RIIF modeling language as well as to extend it beyond modeling of radiation-induced failures.

    In addition to the main axis of research, some tangential topics were studied in collaboration with other teams. One of these consisted in the development of a novel approach for protecting ternary content-addressable memories (TCAMs), a special type of memory important in networking applications. The second supplemental project resulted in an algorithm for quickly generating approximate redundant logic which can protect combinatorial networks against permanent faults. Finally, an approach for reducing the detection time for errors in the configuration RAM of Field-Programmable Gate Arrays (FPGAs) was outlined.

    Radiation effects can cause failures in integrated circuits. When a subatomic particle deposits charge in the sensitive regions of a transistor, it produces a current pulse. This pulse can then flip a bit, or propagate through a network of combinatorial logic before being sampled by a downstream flip-flop. Depending on the state of the circuit at the moment the particle strikes, and depending on the application, this may or may not produce an observable failure. Among radiation-induced events, only a small portion generates failures, so determining this fraction is essential for predicting the reliability of the system. Indeed, there are many reasons why a perturbation might be masked, and it is moreover sometimes difficult to specify what constitutes an error. On top of this, integrated circuits contain billions of transistors.

    As is often the case in computer-aided design, hierarchical approaches and abstraction techniques make it possible to find solutions, and this thesis therefore proposes several new techniques for analyzing radiation effects. The first technique accelerates fault-injection simulations by detecting when a fault has been removed from the system, allowing the simulation to be stopped. The second technique groups the elements of a circuit with similar functions into sets; analysis can then be performed at the level of these sets, identifying the most critical ones, which therefore need to be hardened. Computation time is thereby greatly reduced. The third technique analyzes the effects of transient faults in combinatorial circuits: the sensitivity of cells to transient faults, as well as the masking effects within frequently used blocks, can be computed in advance, and these models can then be combined to analyze the sensitivity of large circuits. The final contribution of this thesis consists of the definition of a new modeling language called RIIF (Reliability Information Interchange Format). This language describes the fault rates of simple components as a function of their operating environment. These simple components can then be combined, making it possible to model the propagation of their faults into system-level failures. Moreover, the use of a standard language facilitates the exchange of reliability data between industrial partners.

    Beyond the main contributions, this thesis also addresses techniques for protecting ternary content-addressable memories (TCAMs), to which classical protection approaches (error-correcting codes) do not directly apply. One of the proposed techniques uses a data structure that can detect, statistically, when a lookup result is incorrect; the detection probability can be controlled through the number of bits allocated to this structure. Another technique uses a built-in current sensor (BICS) to direct a background process straight to the region affected by an error. The final contribution is an algorithm for synthesizing combinatorial logic that protects combinatorial circuits against transient faults.

    Taken together, these techniques facilitate the analysis of errors caused by radiation effects in integrated circuits, in particular for very large circuits composed of blocks from multiple vendors. Techniques for better selecting which flip-flops to harden and approaches for protecting TCAMs were also studied.
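
    The first contribution stops a fault-injection run as soon as the injected fault can no longer affect the system. A minimal sketch of that early-termination idea follows, assuming a generic simulator interface; `Simulator`, its toy next-state function, and the names used here are illustrative stand-ins, not the thesis's actual tooling:

```python
# Sketch of early-terminated fault injection: run a golden (fault-free)
# and a faulty simulation in lockstep, and stop as soon as their state
# re-converges, i.e. the injected upset has been masked and can no
# longer cause an observable failure.

import random

class Simulator:
    """Toy stand-in for an RTL/gate-level simulator."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.flops = [0] * 64            # architectural state (flip-flops)

    def step(self):
        # Placeholder next-state function: each flip-flop is ANDed with a
        # pseudo-random input bit, which can mask an injected upset. A real
        # simulator would evaluate the circuit netlist instead.
        self.flops = [b & self.rng.getrandbits(1) for b in self.flops]

    def flip(self, index):
        self.flops[index] ^= 1           # inject a single-event upset

def run_injection(seed, flop_index, max_cycles=10_000):
    golden, faulty = Simulator(seed), Simulator(seed)
    faulty.flip(flop_index)              # inject at cycle 0
    for cycle in range(max_cycles):
        golden.step()
        faulty.step()
        if golden.flops == faulty.flops:
            return ("masked", cycle)     # fault erased from state: stop early
    return ("not_masked", max_cycles)

print(run_injection(seed=42, flop_index=3))
```

    Because most injected faults are masked within a few cycles, terminating each run at the convergence point rather than after a fixed horizon is what yields the speed-up over exhaustive-length simulation.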

    Software-Driven and Virtualized Architectures for Scalable 5G Networks

    Full text link
    In this dissertation, we argue that it is essential to rearchitect 4G cellular core networks, sitting between the Internet and the radio access network, to meet the scalability, performance, and flexibility requirements of 5G networks. Today, there is a growing consensus among operators and the research community that software-defined networking (SDN), network function virtualization (NFV), and mobile edge computing (MEC) paradigms will be the key ingredients of next-generation cellular networks. Motivated by these trends, we design and optimize three core network architectures, SoftMoW, SoftBox, and SkyCore, for different network scales, objectives, and conditions. SoftMoW provides global control over nationwide core networks with the ultimate goal of enabling new routing and mobility optimizations. SoftBox attempts to enhance policy enforcement in statewide core networks to enable low-latency, signaling-efficient, and customized services for mobile devices. SkyCore is aimed at realizing a compact core network for citywide UAV-based radio networks that are going to serve first responders in the future. Network slicing techniques make it possible to deploy these solutions on the same infrastructure in parallel. To better support mobility and provide verifiable security, these architectures can use an addressing scheme that separates network locations and identities with self-certifying, flat, and non-aggregatable address components. To benefit the proposed architectures, we designed a high-speed and memory-efficient router, called Caesar, for this type of addressing scheme.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/146130/1/moradi_1.pd
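
    Self-certifying, flat identifiers are typically derived by hashing a node's public key, so the binding between an identity and its key can be checked without a PKI lookup. A minimal sketch of that general construction follows; the SHA-256 hash, the 160-bit truncation, and the random stand-in key are assumptions for illustration, not the dissertation's exact design:

```python
# Sketch of a self-certifying, flat identifier: the identity is a hash
# of the node's public key, so any party holding the key can verify the
# binding locally. Hash choice and truncation length are illustrative.

import hashlib
import os

def make_identity(public_key: bytes, bits: int = 160) -> bytes:
    """Derive a flat (non-aggregatable) identifier from a public key."""
    digest = hashlib.sha256(public_key).digest()
    return digest[: bits // 8]

def verify_identity(identity: bytes, public_key: bytes) -> bool:
    """Self-certification check: re-hash the key and compare."""
    return make_identity(public_key, bits=len(identity) * 8) == identity

# Example: a stand-in 32-byte public key (a real deployment would use,
# e.g., an Ed25519 verification key rather than random bytes).
pk = os.urandom(32)
ident = make_identity(pk)
assert verify_identity(ident, pk)
print(ident.hex())
```

    Because such an identifier is a hash output, it carries no topological structure and cannot be aggregated by prefix, which is precisely why the dissertation pairs the scheme with a memory-efficient router design like Caesar.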

    Build framework and runtime abstraction for partial reconfiguration on FPGA SoCs

    Get PDF
    Growth in edge computing has increased the requirement for edge systems to process larger volumes of real-time data, such as with image processing and machine learning, which are increasingly demanding of computing resources. Offloading tasks to the cloud provides some relief, but is network-dependent, high-latency, and expensive. Alternative architectures such as GPUs provide higher-performance acceleration for this type of data processing, but trade processing performance for an increase in power consumption. Another option is the Field-Programmable Gate Array (FPGA): a flexible matrix of logic that can be configured by a designer to provide a highly optimised computation path for incoming data. There are drawbacks: the FPGA design process is complex, the domain is dissimilar to software, and the tools require bespoke expertise. A designer must also manage the hardware-to-software paradigm introduced when the FPGA is tightly coupled with a general-purpose processor. Advanced features, such as the ability to partially reconfigure (PR) specific regions of the FPGA, further increase this complexity. This thesis presents theory and demonstration of custom frameworks and tools for increasing abstraction and simplifying control over PR applications. We present mechanisms for networked PR: a mechanism for bypassing the traditional software networking stack to trigger PR with reduced latency and increased determinism. We developed a build framework for automating the end-to-end PR design process for Linux-based systems, as well as an abstracted runtime for managing the resulting applications. Finally, we expand on this work and present a high-level abstraction for PR on cyber-physical systems, with a demonstration using the Robot Operating System. This work is released as open-source contributions, designed to enable future PR research.
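
    On Linux-based FPGA SoCs, partial reconfiguration at runtime is commonly driven through the kernel's fpga_manager sysfs interface. A minimal sketch of such a runtime step follows; the device path, the `flags` attribute, and the value 1 marking a partial bitstream follow Xilinx-style kernels and vary by platform, so everything here is an assumption rather than the thesis's actual framework:

```python
# Sketch of a minimal PR runtime action on a Linux FPGA SoC. The partial
# bitstream must already be installed under /lib/firmware; writing its
# name to the fpga_manager `firmware` attribute triggers reconfiguration.
# Attribute names and semantics are kernel/vendor dependent.

from pathlib import Path

FPGA_MGR = Path("/sys/class/fpga_manager/fpga0")   # assumed device path

def load_partial_bitstream(firmware_name: str) -> None:
    """Program one reconfigurable region with a partial bitstream."""
    (FPGA_MGR / "flags").write_text("1")            # 1 = partial (Xilinx-style)
    (FPGA_MGR / "firmware").write_text(firmware_name)
    state = (FPGA_MGR / "state").read_text().strip()
    if state != "operating":
        raise RuntimeError(f"FPGA manager in unexpected state: {state}")

# Example: swap the accelerator in a reconfigurable region (name is hypothetical).
# load_partial_bitstream("edge_filter_rp0.bin")
```

    An abstracted runtime of the kind the thesis describes would wrap steps like this behind application-level operations, hiding the sysfs details from the software developer.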

    GPU Accelerated protocol analysis for large and long-term traffic traces

    Get PDF
    This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time-consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability, whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively-parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution, and adjusts processing at runtime to avoid redundant memory transactions and unnecessary computation through warp-voting. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to speed up access to packet data held in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side IO (600 MB/s) in all tests. GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. Results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
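
    The multi-match semantics at the heart of the classifier, where every matching filter is reported rather than only the first, can be illustrated with a simplified CPU analogue. The sketch below is not the CUDA implementation or the GPF+ DSL; it merely shows gated, multi-match predicate evaluation over raw packet bytes, with offsets that assume untagged Ethernet followed by IPv4:

```python
# Simplified CPU analogue of multi-match filter predicate evaluation.
# The real system compiles a high-level DSL into massively parallel CUDA
# kernels; this sketch only illustrates the multi-match semantics.

import struct

FILTERS = {
    # EtherType at bytes 12-13 of an untagged Ethernet frame.
    "ipv4": lambda pkt: struct.unpack_from("!H", pkt, 12)[0] == 0x0800,
    # IPv4 protocol byte sits at offset 14 (Ethernet) + 9 = 23.
    "tcp":  lambda pkt: len(pkt) > 23 and pkt[23] == 6,
    "udp":  lambda pkt: len(pkt) > 23 and pkt[23] == 17,
}

def classify(pkt: bytes) -> list[str]:
    """Return every matching filter (multi-match, not first-match)."""
    matches = []
    if FILTERS["ipv4"](pkt):              # higher-layer checks gated on IPv4
        matches.append("ipv4")
        for name in ("tcp", "udp"):
            if FILTERS[name](pkt):
                matches.append(name)
    return matches

# Example: a fabricated frame marked as IPv4 carrying TCP.
frame = bytearray(64)
frame[12:14] = b"\x08\x00"                # EtherType = IPv4
frame[23] = 6                             # IP protocol = TCP
print(classify(bytes(frame)))             # ['ipv4', 'tcp']
```

    On the GPU, each thread evaluates such predicates over its own packet, and warp votes let a warp skip protocol layers that none of its packets contain, which is the runtime adjustment the abstract refers to.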

    18th SC@RUG 2020 proceedings 2020-2021

    Get PDF
