9 research outputs found

    Yield Enhancement of Digital Microfluidics-Based Biochips Using Space Redundancy and Local Reconfiguration

    Full text link
    As microfluidics-based biochips become more complex, manufacturing yield will have significant influence on production volume and product cost. We propose an interstitial redundancy approach to enhance the yield of biochips that are based on droplet-based microfluidics. In this design method, spare cells are placed in the interstitial sites within the microfluidic array, and they replace neighboring faulty cells via local reconfiguration. The proposed design method is evaluated using a set of concurrent real-life bioassays. Comment: Submitted on behalf of EDAA (http://www.edaa.com/)
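    The local-reconfiguration idea (a faulty primary cell borrows a spare placed at an interstitial site between cells) can be sketched roughly as below. The grid layout, the four-neighbour assumption, and the helper names are illustrative, not the authors' exact scheme.

```python
# Hypothetical sketch of interstitial-spare replacement (not the paper's exact algorithm).
# Primary cells sit at integer coordinates; spares sit at half-integer interstitial sites,
# each adjacent to the four primary cells around it.

def interstitial_spares(rows, cols):
    """Enumerate spare sites between 2x2 groups of primary cells."""
    return {(r + 0.5, c + 0.5) for r in range(rows - 1) for c in range(cols - 1)}

def repair(faulty_cell, free_spares):
    """Replace a faulty primary cell with an adjacent, unused interstitial spare."""
    r, c = faulty_cell
    candidates = [(r - 0.5, c - 0.5), (r - 0.5, c + 0.5),
                  (r + 0.5, c - 0.5), (r + 0.5, c + 0.5)]
    for spare in candidates:
        if spare in free_spares:
            free_spares.remove(spare)
            return spare          # droplet operations are remapped onto this spare
    return None                   # no local spare left: this fault is not repairable

spares = interstitial_spares(4, 4)
print(repair((1, 2), spares))     # -> (0.5, 1.5)
```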

    Fault tolerant methods for reliability in FPGAs

    Full text link

    Evaluation of a Field Programmable Gate Array Circuit Reconfiguration System

    Get PDF
    This research implements a circuit reconfiguration system (CRS) to reconfigure a field programmable gate array (FPGA) in response to a faulty configurable logic block (CLB). It is assumed that the location of the fault is known, and the CLB is moved according to one of four replacement methods: column left, column right, row up, and row down. Partial reconfiguration of the FPGA is done through the Joint Test Action Group (JTAG) port to produce the desired logic block movement. The time required to accomplish the reconfiguration is measured for each method in both clear and congested areas of the FPGA. The measured data indicate that there is no consistently better replacement method, regardless of the circuit congestion or location within the FPGA. Thus, given a specific location in the FPGA, there is no preferred replacement method that will result in the lowest reconfiguration time.
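    The four replacement directions amount to simple coordinate moves of the faulty CLB before partial reconfiguration is applied over JTAG. The helper below is a hypothetical illustration of that remapping, not the thesis's actual CRS code; bounds checking against the device's CLB array is left to the caller.

```python
# Hypothetical mapping from a faulty CLB location to its replacement location for the
# four movement strategies studied (column left/right, row up/down).

MOVES = {
    "column_left":  (0, -1),
    "column_right": (0, +1),
    "row_up":       (-1, 0),
    "row_down":     (+1, 0),
}

def replacement_location(faulty_row, faulty_col, method):
    dr, dc = MOVES[method]
    return faulty_row + dr, faulty_col + dc

print(replacement_location(10, 17, "row_up"))   # -> (9, 17)
```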

    Security Aspects of Printed Electronics Applications

    Get PDF
    Printed electronics (PE) is an emerging technology that is used complementarily to conventional electronics. Its unique features have driven strong market growth, from $6 billion in 2010 to $41 billion in 2019 and an estimated $153 billion in 2027. Printed electronics combines additive manufacturing technologies with functional inks to fabricate electronic components from a variety of materials directly at the point of use, cost-efficiently and in an environmentally friendly way. The substrates used can be flexible, lightweight, transparent, large-area, or implantable. This allows printed electronics to realize (still) visionary applications such as smart packaging, disposable electronics, smart labels, or digital skin. To advance printed electronics technologies, most optimizations have focused primarily on increasing production yield, reliability, and performance. In recent years, however, the security aspects of hardware platforms have also moved increasingly into the foreground. Since applications realized in printed electronics can provide vital functionality involving sensitive user data, for example in implanted devices and smart patches for health monitoring, security flaws and missing product trust in the manufacturing chain lead to serious and severe problems. Furthermore, because of the characteristic features of printed electronics, such as additive manufacturing processes, large feature sizes, few layers, and a limited number of production steps, printed hardware is inherently vulnerable to hardware-based attacks such as reverse engineering, counterfeiting, and hardware Trojans. Moreover, adopting countermeasures from conventional technologies is unsuitable and inefficient, as they would introduce extreme overheads into the low-cost fabrication of printed electronics. For this reason, this work provides a technology-specific assessment of hardware-level threats and their countermeasures in the form of resource-constrained hardware primitives to protect the production chain and the functionality of printed electronics applications. The first contribution of this dissertation is a proposed approach for designing printed Physical Unclonable Functions (pPUFs), which provide security keys enabling several security-related countermeasures such as authentication and fingerprinting. In addition, we optimize the multi-bit pPUF design to reduce the area requirement of a 16-bit key generator by 31%. We also develop an analysis framework for pPUFs based on Monte Carlo simulations, with which we can perform simulation- and fabrication-based analyses. Our results show that the pPUFs possess the properties required for successful use in security applications, such as signature uniqueness and sufficient robustness. The fabricated pPUFs could be operated down to supply voltages as low as 0.5 V.
In the second contribution of this work, we present a compact design of a printed true random number generator (pTRNG) that can generate unpredictable keys for cryptographic functions and random authentication challenges. The pTRNG design mitigates process variations using a tuning method for printed resistors, enabled by the individual configurability of printed circuits, so that the generated bits depend only on random noise and thus exhibit truly random behavior. The simulation results suggest that the overall process variation of the TRNG is improved by a factor of 110, and the random bitstream generated by the TRNGs passed the National Institute of Standards and Technology statistical test suite. Here, too, our characterization results of the fabricated TRNGs show that they operate at supply voltages ranging from several volts down to only 0.5 V. The third contribution of this dissertation is the description of the unique characteristics of circuit design and fabrication in printed electronics, which differ greatly from conventional technologies and therefore require a novel reverse engineering (RE) methodology. To this end, we present a robust RE methodology based on supervised learning algorithms for printed circuits in order to demonstrate their vulnerability to RE attacks. The RE results show that the presented methodology can be applied to numerous printed circuits without much complexity or expensive tools. The final contribution of this work is a proposed concept for a one-time programmable printed lookup table (pLUT), which can realize arbitrary digital functions and supports countermeasures such as camouflaging, split manufacturing, and watermarking to prevent hardware-level attacks. A comparison of the proposed pLUT concept with existing solutions shows that the pLUT requires less area and has lower worst-case delay and power consumption. To verify the configurability of the presented pLUT, it was simulated, fabricated, and programmed using inkjet-printed electrically conductive ink to successfully realize logic gates such as XNOR, XOR, and AND. The simulation and characterization results confirm the successful functionality of the pLUT at operating voltages of only 1 V.
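    A Monte Carlo analysis of PUF uniqueness of the kind mentioned typically compares the responses of many simulated device instances via their pairwise Hamming distance, which should be close to 50%. The sketch below uses a generic threshold-comparator response model as a stand-in for the printed pPUF cells; all parameters are assumptions, not the dissertation's circuit model.

```python
# Minimal Monte Carlo sketch of PUF uniqueness (mean inter-chip Hamming distance).
# The response model (fixed per-cell random offsets compared against a threshold) is
# a generic stand-in for a real printed pPUF cell.
import random

def simulate_chip(n_bits, sigma=0.1):
    # Each chip instance gets fixed random per-cell offsets from process variation.
    offsets = [random.gauss(0.0, sigma) for _ in range(n_bits)]
    return [1 if o > 0 else 0 for o in offsets]

def hamming_fraction(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

random.seed(0)
chips = [simulate_chip(16) for _ in range(200)]
pairs = [(i, j) for i in range(len(chips)) for j in range(i + 1, len(chips))]
mean_hd = sum(hamming_fraction(chips[i], chips[j]) for i, j in pairs) / len(pairs)
print(f"mean inter-chip Hamming distance: {mean_hd:.3f}  (ideal: 0.5)")
```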

    Characterisation and mitigation of long-term degradation effects in programmable logic

    No full text
    Reliability has always been an issue in silicon device engineering, but until now it has been managed by the carefully tuned fabrication process. In the future the underlying physical limitations of silicon-based electronics, plus the practical challenges of manufacturing with such complexity at such a small scale, will lead to a crunch point where transistor-level reliability must be forfeited to continue achieving better productivity. Field-programmable gate arrays (FPGAs) are built on state-of-the-art silicon processes, but it has been recognised for some time that their distinctive characteristics put them in a favourable position over application-specific integrated circuits in the face of the reliability challenge. The literature shows how a regular structure, interchangeable resources and an ability to reconfigure can all be exploited to detect, locate, and overcome degradation and keep an FPGA application running. To fully exploit these characteristics, a better understanding is needed of the behavioural changes seen in the resources that make up an FPGA under ageing. Modelling is an attractive approach to this, and in this thesis the causes and effects of three important degradation mechanisms are explored. All are shown to have an adverse effect on FPGA operation, but their characteristics present novel opportunities for ageing mitigation. Any modelling exercise is built on assumptions, so an empirical method is developed for investigating ageing on hardware with an accelerated-life test. Here, experiments show that timing degradation due to negative-bias temperature instability is the dominant process in the technology considered. Building on simulated and experimental results, this work also demonstrates a variety of methods for increasing the lifetime of FPGA lookup tables. The pre-emptive measure of wear-levelling is investigated in particular detail, and it is shown by experiment how different reconfiguration algorithms can result in a significant reduction in the rate of degradation.
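    Wear-levelling of this kind relies on periodically relocating active logic so that stress is spread over otherwise idle resources. The rotation policy sketched below is a hypothetical illustration of that idea, not one of the reconfiguration algorithms evaluated in the thesis.

```python
# Hypothetical round-robin wear-levelling sketch: a design occupying k LUT sites is
# periodically rotated through a larger pool of n physical sites so that ageing
# stress accumulates evenly instead of concentrating on one fixed placement.

def wear_level_schedule(n_sites, k_used, n_epochs):
    """Yield, for each reconfiguration epoch, the set of physical LUT sites in use."""
    for epoch in range(n_epochs):
        start = (epoch * k_used) % n_sites
        yield [(start + i) % n_sites for i in range(k_used)]

stress = [0] * 8                          # accumulated stress-epochs per physical site
for active in wear_level_schedule(8, 4, 6):
    for site in active:
        stress[site] += 1                 # one epoch of stress on each active site
print(stress)                             # -> [3, 3, 3, 3, 3, 3, 3, 3]
```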

    Physically-Adaptive Computing via Introspection and Self-Optimization in Reconfigurable Systems.

    Full text link
    Digital electronic systems typically must compute precise and deterministic results, but in principle have flexibility in how they compute. Despite the potential flexibility, the overriding paradigm for more than 50 years has been based on fixed, non-adaptive integrated circuits. This one-size-fits-all approach is rapidly losing effectiveness now that technology is advancing into the nanoscale. Physical variation and uncertainty in component behavior are emerging as fundamental constraints and leading to increasingly sub-optimal fault rates, power consumption, chip costs, and lifetimes. This dissertation proposes methods of physically-adaptive computing (PAC), in which reconfigurable electronic systems sense and learn their own physical parameters and adapt with fine granularity in the field, leading to higher reliability and efficiency. We formulate the PAC problem and provide a conceptual framework built around two major themes: introspection and self-optimization. We investigate how systems can efficiently acquire useful information about their physical state and related parameters, and how systems can feasibly re-implement their designs on-the-fly using the information learned. We study the role not only of self-adaptation, where the above two tasks are performed by an adaptive system itself, but also of assisted adaptation using a remote server or peer. We introduce low-cost methods for sensing regional variations in a system, including a flexible, ultra-compact sensor that can be embedded in an application and implemented on field-programmable gate arrays (FPGAs). An array of such sensors, with only 1% total overhead, can be employed to gain useful information about circuit delays, voltage noise, and even leakage variations. We present complementary methods of regional self-optimization, such as finding a design alternative that best fits a given system region. We propose a novel approach to characterizing local, uncorrelated variations. Through in-system emulation of noise, previously hidden variations in transient fault susceptibility are uncovered. Correspondingly, we demonstrate practical methods of self-optimization, such as local replacement, informed by the introspection data. Forms of physically-adaptive computing are strongly needed in areas such as communications infrastructure, data centers, and space systems. This dissertation contributes practical methods for improving PAC costs and benefits, and promotes a vision of resourceful, dependable digital systems at unimaginably fine physical scales. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78922/1/kzick_1.pd
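    Regional self-optimization of the kind described, picking for each region the design alternative that best fits what the embedded sensors report, reduces to a small selection problem. The sensor-derived delay factors, the alternatives, and the cost numbers below are all illustrative assumptions rather than values from the dissertation.

```python
# Hypothetical sketch of regional self-optimization: embedded sensors give each
# region a delay scaling factor, and the system picks, per region, the lowest-power
# design alternative whose critical path still meets timing there.

CLOCK_PERIOD_NS = 5.0
ALTERNATIVES = [                      # (name, nominal critical path in ns, power in mW)
    ("low_power",   4.8, 10.0),
    ("balanced",    4.2, 14.0),
    ("high_margin", 3.5, 20.0),
]

def pick_variant(region_delay_factor):
    """Lowest-power alternative whose scaled delay fits the clock period."""
    feasible = [(power, name) for name, delay, power in ALTERNATIVES
                if delay * region_delay_factor <= CLOCK_PERIOD_NS]
    return min(feasible)[1] if feasible else None   # None: relocate or derate instead

for region, factor in [("R0", 1.00), ("R1", 1.10), ("R2", 1.50)]:
    print(region, "->", pick_variant(factor))       # low_power, balanced, None
```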

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    Get PDF
    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 × 10⁻⁶, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10⁻⁶. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing. The model is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and produce more accurate results.
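    Yield figures of this kind come from yield models of redundant structures: a block survives as long as its spares can absorb the failed sub-blocks. The binomial row-sparing calculation below is a generic illustration of how such a number can be computed; the parameters are made up and it is not the dissertation's CAM-cache model.

```python
# Generic redundancy-yield sketch: a block works if at most `spares` of its `units`
# sub-blocks fail, where each sub-block fails independently with the probability
# implied by its device count. All parameters are illustrative.
from math import comb

def unit_fail_prob(devices_per_unit, p_device):
    """Probability that a sub-block contains at least one failed device."""
    return 1.0 - (1.0 - p_device) ** devices_per_unit

def block_yield(units, spares, devices_per_unit, p_device):
    """P(at most `spares` failed sub-blocks) under independent failures."""
    q = unit_fail_prob(devices_per_unit, p_device)
    return sum(comb(units, k) * q**k * (1 - q)**(units - k) for k in range(spares + 1))

# e.g. 1024 rows of 10,000 devices each, 32 spare rows, device failure prob 3e-6
print(f"{block_yield(1024, 32, 10_000, 3e-6):.4f}")
```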

    The yield enhancement of field-programmable gate arrays

    No full text
    The fine granularity and reconfigurable nature of field-programmable gate arrays (FPGAs) suggest that defect-tolerant methods can be readily applied to these devices in order to increase their maximum economic sizes through increased yield. This paper identifies the inability to contain faults within single cells and the need for fast reconfiguration as the key obstacles to obtaining a significant increase in yield. Monte Carlo defect modeling of the photolithographic layers of VLSI FPGAs is used as a foundation for the yield modeling of various defect-tolerant architectures. Results suggest that a medium-grain architecture is the best solution, offering a substantial increase in size without significant side effects. This architecture is shown to produce greater gate densities than the alternative approach to realizing ultra-large-scale FPGAs: multichip modules.
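    Monte Carlo yield modelling of a defect-tolerant architecture can be approximated by scattering defects over an array at some density and asking whether the spares of a given granularity can cover them. The grid size, defect density, and row-sparing repair rule below are assumptions for illustration; the paper's layer-by-layer photolithographic defect model is considerably more detailed.

```python
# Illustrative Monte Carlo yield sketch: a Poisson-distributed number of defects lands
# uniformly on the rows of a cell array, and a chip counts as repairable if no more
# rows are hit than there are spare rows.
import math
import random

def mc_yield(rows, mean_defects, spare_rows, trials=10_000):
    """Fraction of simulated chips repairable with row-level sparing."""
    good = 0
    for _ in range(trials):
        # Draw a Poisson-distributed defect count (Knuth's multiplication method).
        n, p, limit = 0, 1.0, math.exp(-mean_defects)
        while p > limit:
            n += 1
            p *= random.random()
        n -= 1
        # Each defect lands in a uniformly random row of the cell array.
        hit_rows = {random.randrange(rows) for _ in range(n)}
        if len(hit_rows) <= spare_rows:
            good += 1
    return good / trials

random.seed(1)
print(f"estimated yield: {mc_yield(rows=64, mean_defects=3.0, spare_rows=4):.3f}")
```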