Robust low-power digital circuit design in nano-CMOS technologies
Device scaling has resulted in large-scale integrated, high-performance, low-power, and low-cost systems. However, the move towards sub-100 nm technology nodes has increased variability in device characteristics due to large process variations. Variability has severe implications for digital circuit design: it causes timing uncertainties in combinational circuits, degrades the yield and reliability of memory elements, and increases power density due to the slow scaling of supply voltage. Conventional design methods add large pessimistic safety margins to mitigate the increased variability; however, they incur large power and performance losses, as the combination of worst cases occurs very rarely.
In-situ monitoring of timing failures provides an opportunity to dynamically tune safety margins in proportion to on-chip variability, which can significantly reduce power and performance losses. We demonstrated in simulation two delay sensor designs that detect impending timing failures and can be coupled with compensation techniques such as voltage scaling, body biasing, or frequency scaling to avoid actual timing failures. Our simulation results using 45 nm and 32 nm BSIM4 technology models indicate a significant reduction in total power consumption under temperature and statistical variations. Future work involves using dual sensing to avoid unnecessary voltage scaling, which incurs a speed loss.
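A minimal sketch of such a sensor-driven compensation loop, assuming voltage scaling as the compensation knob (the delay model, guard band, and step sizes below are invented for illustration; the abstract does not specify a control law or sensor interface):

```python
# Hypothetical sensor-driven adaptive voltage scaling loop (toy model).

def critical_path_delay(vdd, temp_c):
    """Toy alpha-power-law delay: delay grows as Vdd drops or T rises."""
    vth = 0.35 + 0.0005 * (temp_c - 25)   # assumed threshold-voltage drift
    return 1.0 / (vdd - vth) ** 1.3       # arbitrary fitted exponent

def run_control_loop(clock_period=2.4, vdd=1.0, guard=0.1):
    for temp_c in (25, 45, 65, 85, 65, 45):   # temperature sweep
        slack = clock_period - critical_path_delay(vdd, temp_c)
        if slack < guard:                  # sensor fires *before* a violation
            vdd = min(vdd + 0.05, 1.2)     # compensate: raise Vdd one step
        elif slack > 3 * guard:            # ample margin: scale Vdd down
            vdd = max(vdd - 0.05, 0.7)     # ...to save power
        print(f"T={temp_c:3d}C  Vdd={vdd:.2f}V  slack={slack:+.2f}ns")

run_control_loop()
```

The pre-error property is the point: because the sensor trips while positive slack remains (`slack < guard`), the controller can raise Vdd before a real setup violation occurs, rather than provisioning a static worst-case margin.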
SRAM caches are among the first victims of increased process variation and require handcrafted design to meet area, power, and performance requirements. We have proposed novel six-transistor (6T), seven-transistor (7T), and eight-transistor (8T) SRAM cells that enable variability-tolerant and low-power SRAM cache designs. Increased sense-amplifier offset voltage, caused by device mismatch arising from high variability, increases the delay and power consumption of SRAM designs. We have proposed two novel design techniques that reduce offset-voltage-dependent delays, providing a high-speed, low-power SRAM design. Increasing leakage currents in nano-CMOS technologies pose a major challenge to low-power, reliable design. We have investigated a novel segmented supply voltage architecture to reduce the leakage power of SRAM caches, since they occupy the bulk of the total chip area and power. Future work involves developing leakage reduction methods for combinational logic designs, including SRAM peripherals.
Dependable Embedded Systems
This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems that have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, from the physical level all the way to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. The book provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization for achieving error resiliency in complex, future many-core systems.
Embracing Low-Power Systems with Improvement in Security and Energy-Efficiency
As economies around the world increasingly rely on computing systems, the global energy demand for computing is rising rapidly. Additionally, the boom in AI-based applications and services has already driven the spread of specialized computing hardware architectures for AI (accelerators). A large share of research in industry and academia focuses on providing energy efficiency to all kinds of power-hungry computing architectures. This dissertation adds to these efforts.
Aggressive voltage underscaling of chips is one of the effective low-power paradigms for providing energy efficiency. This dissertation identifies and addresses the reliability and performance problems associated with this paradigm and introduces novel energy-efficient approaches. Specifically, the properties of a low-power security primitive have been improved, and higher performance has been unlocked in an AI accelerator (Google TPU) in an aggressively voltage-underscaled environment. In addition, novel power-saving opportunities have been unlocked by characterizing the usage pattern of a baseline TPU through rigorous mathematical analysis.
Probabilistic design for emerging memory and nanometer-scale logic
As semiconductor technology has scaled down, the impact of stochastic behavior in very-large-scale integrated (VLSI) circuits has become an ever more important concern. This dissertation investigates two distinct classes of problems that require the use of probabilistic methods and models: (1) modeling and exploiting stochastic behavior in advanced memory technologies; (2) probabilistic modeling of faults due to on-chip voltage variation.
This dissertation first investigates the unique physics-level stochasticity of spin-transfer torque magnetic RAM (STT-RAM). The write process of STT-RAM is stochastic: specifically, the write time of a bitcell varies significantly. The worst-case approach, which uses the longest write pulse duration, guarantees a successful write; however, it introduces significant energy overhead due to excessive margins, since the average write pulse duration is far shorter than the worst-case pulse duration. This dissertation develops novel circuit techniques that exploit the stochastic properties of the STT-RAM write operation for energy savings, moving away from the worst-case approach to dynamic strategies while maintaining the required low error rate. The first contribution is a variable energy write (VEW) architecture that effectively exploits the wide distribution of write times to greatly reduce energy via a mechanism that checks the instantaneous state of the bitcell and deactivates the write current once the correct value has been registered. The second contribution is a multiple attempt write (MAW) strategy that exploits the asymptotic temporal stochastic independence of repeated switching events to achieve a dramatic reduction in energy. The proposed architectures are evaluated using a compact STT-RAM cell model. Analysis indicates that VEW reduces write energy by 94.7% with approximately 1% relative area overhead under an efficient design methodology, compared with conventional designs relying on the worst-case approach. MAW reduces overall write energy by 94.6% with approximately 0.05% relative area overhead.
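To see why MAW can save so much energy, consider a back-of-the-envelope model (the exponential switching-time tail is an illustrative assumption; the dissertation evaluates a compact STT-RAM cell model instead):

```latex
% Illustrative model: exponential switching-time tail, independent attempts.
\[
  \Pr[T_{\mathrm{sw}} > t] = e^{-t/\tau}
  \;\Longrightarrow\;
  T_{\mathrm{wc}} = \tau \ln(1/\epsilon)
  \quad\text{(one pulse sized for write-error rate } \epsilon\text{)},
\]
\[
  p = 1 - e^{-t_p/\tau}, \qquad
  \mathbb{E}[\#\text{attempts}] = \frac{1}{p}, \qquad
  \mathbb{E}[T_{\mathrm{MAW}}] = \frac{t_p}{1 - e^{-t_p/\tau}} .
\]
```

For a write-error target of epsilon = 10^-9, the single pulse must last about 20.7 tau, whereas short verified pulses of width t_p = tau give an expected write time of only about 1.58 tau; since write energy scales with current-on time, this is the intuition behind savings on the order of 90% and above.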
This dissertation then addresses the problem of probabilistically modeling faults due to on-chip voltage variations. Power supply voltage variation can increase gate delay, resulting in timing faults on near-critical paths. These low-level faults ultimately propagate to the architecture and application levels, often leading to critical system failures. Developing an accurate fault model and injection tool that generates and propagates faults from the circuit to the gate level is important for accurately predicting the resulting system failures. This is challenging because the model needs to accurately capture the physical characteristics at the circuit level that determine the likelihood of a fault and use that information to guide the injection with the proper probability. At the same time, the analysis and fault injections need to be computationally manageable to allow analysis of realistic systems under realistic workloads. Conventional fault models rely either on Monte Carlo sampling or on time-consuming runtime simulation using the worst-case voltage drop. To overcome the overheads of runtime circuit-level simulation, a novel two-phase approach is proposed. The main idea is that circuit characterization can be done before simulation; the result of pre-characterization is then used at runtime via a form of look-up to achieve gate-level efficiency. The two-phase methodology is time-efficient but may require substantial memory unless the look-up tables are carefully optimized. This dissertation also develops fault probability estimation based on a workload-specific voltage distribution rather than a fixed worst-case voltage. The proposed methodology is implemented on an OpenSPARC design targeting a 32 nm technology node. Analysis indicates that the proposed fault modeling and injection flow reduces runtime overhead by 24x compared with the previously best-known gate-level fault simulator while maintaining circuit-level accuracy.
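A schematic sketch of the two-phase flow (path names, table contents, and droop values below are invented; the real pre-characterization comes from circuit-level analysis of the actual design):

```python
# Sketch of the two-phase idea: expensive circuit-level characterization
# runs once, offline; gate-level simulation then only does cheap table
# lookups per cycle.
import bisect, random

# Phase 1 (offline): for each near-critical path, tabulate fault
# probability as a function of supply-voltage droop (values made up).
droop_grid = [0.00, 0.05, 0.10, 0.15, 0.20]           # volts of droop
fault_prob_table = {
    "path_alu_carry": [0.0, 0.0, 1e-6, 1e-3, 0.05],
    "path_lsu_tag":   [0.0, 0.0, 0.0,  1e-5, 2e-3],
}

def lookup_fault_prob(path, droop):
    """Phase 2 (runtime): piecewise-linear interpolation into the table."""
    probs = fault_prob_table[path]
    i = min(bisect.bisect_right(droop_grid, droop), len(droop_grid) - 1)
    lo, hi = droop_grid[i - 1], droop_grid[i]
    frac = (droop - lo) / (hi - lo)
    return probs[i - 1] + frac * (probs[i] - probs[i - 1])

def inject_faults(droop_trace):
    """Per simulated cycle, flip a path's output with the looked-up prob."""
    for cycle, droop in enumerate(droop_trace):
        for path in fault_prob_table:
            if random.random() < lookup_fault_prob(path, droop):
                print(f"cycle {cycle}: timing fault injected on {path}")

inject_faults([0.02, 0.08, 0.17, 0.12])  # workload-specific droop samples
```

The per-cycle cost is a table lookup plus interpolation instead of a circuit-level simulation, which is where the gate-level efficiency comes from; the memory cost grows with the number of characterized paths and grid points, hence the need to optimize the tables.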
Reliable Low-Power High Performance Spintronic Memories
Following Moore's Law, the chip industry has achieved explosive growth over the past five decades. This has likewise driven an exponential increase in the demand for memory components, which in turn leads to memory-dominated chips in today's computer systems. However, traditional on-chip memory technologies such as Static Random Access Memories (SRAMs), Dynamic Random Access Memories (DRAMs), and flip-flops pose challenges in terms of scalability, power dissipation, and reliability. These very challenges, together with the overwhelming demand for higher performance and higher integration density of on-chip memory, motivate researchers to search for new non-volatile memory technologies. Emerging spintronic memory technologies such as Spin Orbit Torque (SOT) and Spin Transfer Torque (STT) have received great attention in recent years because they offer a number of advantages, including non-volatility, scalability, high endurance, CMOS compatibility, and immunity to soft errors. In spintronics, the spin of an electron represents its information. A datum is stored as the magnitude of a resistance, which can be changed by applying a polarized current to the storage medium. These memory devices tackle the static power problem both through their leakage-free nature and through their normally-off/instant-on behavior. Nevertheless, other problems, such as high access latency and energy consumption, must still be solved before they can find widespread adoption. Meeting these challenges requires new computing paradigms, architectures, and design philosophies.
The high access latency of spintronic technology stems from a comparatively long switching duration that exceeds that of conventional SRAM. Moreover, owing to the stochastic switching behavior of the memory cell and the influence of process variation, a non-negligible time window is required during which a constant write current is driven through the bitcell to guarantee switching; this causes high energy consumption. The read operation likewise requires a considerable time window, again due to process variation. Added to this are various reliability problems, including read disturbance and degradation issues such as Time Dependent Dielectric Breakdown (TDDB). These reliability problems are in turn attributable to the longer switching times, which imply read and write currents applied over longer periods. To turn spintronics into a viable candidate for on-chip memory systems, it is therefore necessary to reduce both the energy and the latency while increasing reliability.
In this dissertation, we present design strategies that address the challenges of cache, register file, and flip-flop design, following a cross-layer approach. For caches, we developed several circuit-level techniques, which are evaluated at both the memory-architecture level and the system level with respect to energy consumption, performance gains, and reliability improvements. We develop a self-termination technique for both the read and the write operations of caches that dynamically detects the completion of the respective operation; once completion is detected, the read or write operation is stopped immediately to save energy. In addition, the self-termination technique limits the duration of current flow through the memory cell, which reduces the occurrence of TDDB during write operations and of read disturbance during read operations. To improve write latency, we boost the write current at the bitcell in order to accelerate the magnetic switching process. To accommodate register-file-specific requirements, we further designed a multiport memory architecture that exploits a unique property of the SOT cell to perform read and write operations simultaneously. Read/write conflicts can thus be resolved at the bitcell level, which translates into a much simpler multiport register file architecture.
In addition to the memory techniques, we also present two flip-flop architectures. The first is a non-volatile, non-shadow flip-flop architecture that uses the storage cell as an active component. It allows the supply voltage to be switched off and on instantly and is therefore particularly well suited to aggressive power gating. Overall, the proposed flip-flop design exhibits timing characteristics similar to those of conventional CMOS flip-flops while at the same time allowing a significant reduction in static power consumption compared to non-volatile shadow flip-flops. The second is a fault-tolerant flip-flop architecture that is resilient to various defects and faults. The effectiveness of all the presented techniques is demonstrated through extensive circuit-level simulations, further substantiated by detailed system-level evaluations. Altogether, we developed several techniques that achieve substantial improvements in the performance, energy, and reliability of spintronic on-chip memories such as caches, register files, and flip-flops.
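A behavioral toy model of the self-terminated write (Monte Carlo over an assumed lognormal switching-time distribution; all numbers are illustrative, not circuit-level results from this dissertation):

```python
# Toy Monte-Carlo comparison of fixed-pulse vs self-terminated writes
# for a stochastic-switching bitcell (illustrative numbers only).
import random

T_WORST = 10.0      # ns, fixed write pulse sized for the slowest switch
SENSE_STEP = 0.5    # ns, granularity at which completion is detected

def switching_time():
    """Stochastic switching delay; a lognormal spread stands in for
    process variation plus thermally induced randomness."""
    return random.lognormvariate(0.8, 0.5)   # median ~2.2 ns

def energies(n=100_000):
    fixed, self_term = 0.0, 0.0
    for _ in range(n):
        t_sw = min(switching_time(), T_WORST)
        fixed += T_WORST                     # current flows the full window
        # self-termination: current stops at the first sense point >= t_sw
        done = (t_sw // SENSE_STEP + 1) * SENSE_STEP
        self_term += min(done, T_WORST)
    print(f"avg current-on time (ns): fixed={fixed/n:.2f} "
          f"self-terminated={self_term/n:.2f}")

energies()
```

Even this crude model shows the mechanism: the fixed scheme always pays for the worst-case window, while self-termination pays only until completion is sensed, which also shortens the current-stress window responsible for TDDB and read disturbance.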
Cross-Layer Resiliency Modeling and Optimization: A Device to Circuit Approach
The never-ending demand for higher performance and lower power consumption pushes the VLSI industry to scale technology down further. However, further downscaling at the nanoscale leads to major challenges. Reduced reliability is one of them, arising from multiple sources, e.g., runtime variations, process variation, and transient errors. The objective of this thesis is to tackle unreliability with a cross-layer approach from the device level up to the circuit level.
Design for prognostics and security in field programmable gate arrays (FPGAs).
There is an evolutionary progression of Field Programmable Gate Arrays (FPGAs) toward more complex and high power density architectures such as Systems-on-Chip (SoC) and Adaptive Compute Acceleration Platforms (ACAP). Primarily, this is attributable to continual transistor miniaturisation and ever more innovative and efficient IC manufacturing processes. Concurrently, the degradation mechanism of Bias Temperature Instability (BTI) has become more pronounced in its ageing impact. It can weaken the reliability of VLSI devices, FPGAs in particular, owing to their run-time reconfigurability. At the same time, the vulnerability of FPGAs to device-level attacks in the growing cyber and hardware threat environment is multiplying, as the weakened reliability realm opens the door for rogue elements to intervene. The insertion of highly stealthy and malicious circuitry, called hardware Trojans, into FPGAs is one such malicious intervention. While such attacks adversely affect the security of these devices, they also substantially undermine their reliability. Hitherto, security and reliability have been treated as two separate entities affecting FPGA health. This has resulted in fragmented solutions that do not reflect the true state of the FPGA's operational and functional readiness, thereby making FPGAs even more prone to hardware attacks. The recent episodes of the Spectre and Meltdown vulnerabilities are key examples. This research addresses these concerns by adopting an integrated approach and investigating FPGA security and reliability as two interdependent entities, with the additional dimension of health estimation/prognostics. The design and implementation of a small-footprint frequency and threshold-voltage-shift detection sensor, a novel hardware Trojan, and an online transistor dynamic scaling circuit present a viable FPGA security scheme that helps build a strong microarchitectural-level defence against unscrupulous hardware attacks. Augmented with an efficient kernel-based learning technique for FPGA health estimation/prognostics, the optimal integrated solution proves to be more dependable and trustworthy than the prevalent disjointed approach.
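The abstract does not detail the kernel-based learner, but a minimal sketch of the general idea, assuming kernel ridge regression from degradation-sensor features to a health index (the feature choices, values, and hyperparameters below are invented):

```python
# Sketch of kernel-based health estimation: kernel ridge regression
# mapping degradation-sensor readings to a health index (assumed
# formulation; not the thesis's exact learner).
import numpy as np

def rbf_kernel(A, B, gamma=0.05):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Training snapshots: [frequency shift %, Vth shift %], labelled with a
# health index in [0, 1] (1 = fresh). All values are made up.
X = np.array([[0.0, 0.0], [1.5, 2.0], [3.0, 4.5], [5.0, 7.0], [8.0, 11.0]])
y = np.array([1.00, 0.85, 0.65, 0.40, 0.15])

lam = 1e-3                                    # ridge regularisation
alpha = np.linalg.solve(rbf_kernel(X, X) + lam * np.eye(len(X)), y)

def health(freq_shift_pct, vth_shift_pct):
    """Predict the health index for a new sensor reading."""
    x = np.array([[freq_shift_pct, vth_shift_pct]])
    return float(rbf_kernel(x, X) @ alpha)

print(health(2.0, 3.0))   # e.g. a partially aged FPGA region
```

A kernel method is a plausible fit here because BTI-induced frequency/Vth drift is a smooth nonlinear function of stress, and training sets from accelerated-ageing measurements tend to be small.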
System-level management of hybrid memory hierarchies (Gestión de jerarquías de memoria híbridas a nivel de sistema)
Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadoras y Automática, and KU Leuven, Arenberg Doctoral School, Faculty of Engineering Science, defended on 11/05/2017. In electronics and computer science, the term 'memory' generally refers to devices used to store information in appliances ranging from PCs to hand-held devices, smart appliances, etc. Primary/main memory is used for storage systems that operate at high speed (i.e., RAM). Primary memory is often associated with addressable semiconductor memory, i.e., integrated circuits consisting of silicon-based transistors, used for example as primary memory but also for other purposes in computers and other digital electronic devices. Secondary/auxiliary memory, in comparison, provides program and data storage that is slower to access but offers larger capacity; examples include external hard drives, portable flash drives, CDs, and DVDs. These devices and media must be plugged in or inserted into a computer in order to be accessed by the system. Since secondary storage technology is not always connected to the computer, it is commonly used for backing up data, and the term 'storage' is often used to describe it. Secondary memory stores a large amount of data at a lower cost per byte than primary memory, making secondary storage about two orders of magnitude less expensive than primary storage. There are two main types of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are Flash memory (sometimes used as secondary, sometimes as primary computer memory) and ROM/PROM/EPROM/EEPROM memory (used for firmware such as boot programs). Examples of volatile memory are primary memory (typically dynamic RAM, DRAM) and fast CPU cache memory (typically static RAM, SRAM, which is fast but energy-consuming and offers lower memory capacity per area unit than DRAM). Non-volatile memory technologies in Si-based electronics date back to the 1990s. Flash memory is widely used in consumer electronic products such as cell phones and music players, and NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory have been proposed to replace conventional Flash memories. The rapid increase of leakage currents in silicon CMOS transistors with scaling poses a big challenge for the integration of SRAM memories, as does their susceptibility to read/write failures in low-power schemes. As a result, over the past decade there has been an extensive pooling of time, resources, and effort towards developing emerging memory technologies like Resistive RAM (ReRAM/RRAM), STT-MRAM, Domain Wall Memory, and Phase Change Memory (PRAM). Emerging non-volatile memory technologies promise to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets including digital cameras, cell phones, and portable music players. These new memory technologies combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the non-volatility of Flash memory, and so become very attractive as another possibility for future memory hierarchies.
Research on these Non-Volatile Memory (NVM) technologies has matured over the last decade, and these NVMs are now being explored thoroughly as viable replacements for conventional SRAM-based memories, even for the higher levels of the memory hierarchy. Many other new classes of emerging memory technologies, such as transparent and plastic, three-dimensional (3-D), and quantum dot memories, have also gained tremendous popularity in recent years...
Ferroelectric field-effect transistors (FeFETs) are considered one of the most promising alternatives for replacing both Flash (due to their higher density) and DRAM (due to their higher speed), but they are still at a very early stage of development. There are other, somewhat more mature technologies in the field of resistive RAM, notably ReRAM (or RRAM), STT-RAM, Domain Wall Memory, and Phase Change Memory (PRAM)...
Managing and Leveraging Variations and Noise in Nanometer CMOS
Advanced CMOS technologies have enabled high-density designs at the cost of a complex fabrication process. Variation in oxide thickness and Random Dopant Fluctuation (RDF) lead to variation in the transistor threshold voltage Vth. The photolithography processes currently used to print ever-decreasing critical dimensions result in variation in transistor channel length and width. A related challenge in nanometer CMOS is on-chip random noise: with decreasing threshold and operating voltages, and increasing operating temperatures, CMOS devices in advanced technologies are more sensitive to random on-chip noise.
In this thesis, we explore novel circuit techniques to manage the impact of process variation in nanometer CMOS technologies. We also analyze the impact of on-chip noise on CMOS circuits and propose techniques to leverage or manage that noise depending on the application. The True Random Number Generator (TRNG) is an interesting cryptographic primitive that leverages on-chip noise to generate random bits; however, it is highly sensitive to process variation. We explore novel metastability circuits that alleviate the impact of variations while leveraging on-chip noise sources such as random thermal noise and Random Telegraph Noise (RTN) to generate high-quality random bits. We develop stochastic models for metastability-based TRNG circuits to analyze the impact of variation and noise. These stochastic models are used to analyze and compare low-power, energy-efficient, and lightweight post-processing techniques targeted at low-power applications such as System on Chip (SoC) and RFID. We also propose variation-aware circuit calibration techniques to increase reliability, and we extend this technique to the more generic application of designing Post-Si Tunable (PST) clock buffers to increase parametric yield in the presence of process variation. Apart from the one-time variation due to the fabrication process, transistors undergo constant changes in threshold voltage due to aging/wear-out effects and RTN. Process variation affects conventional sensors and introduces inaccuracies during measurement. We present a lightweight wear-out sensor that is tolerant to process variation and provides fine-grained wear-out sensing; a similar circuit is designed to sense fluctuations in transistor threshold voltage due to RTN. Although thermal noise and RTN are leveraged in applications like TRNGs, they affect the stability of sensitive circuits like Static Random Access Memory (SRAM). We analyze the impact of on-chip noise on the Bit Error Rate (BER) and post-Si test coverage of SRAM cells.
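As a concrete example of the kind of lightweight post-processing compared in the thesis, here is von Neumann debiasing applied to a made-up biased raw source (a real metastability TRNG's raw statistics, and the thesis's chosen post-processing, may differ):

```python
# Von Neumann debiasing: a classic lightweight post-processing step.
# The raw-bit model below is an invented biased source standing in for
# a process-variation-skewed metastability TRNG.
import random

def raw_trng_bits(n, p_one=0.6):
    """Biased raw source: variation skews the TRNG away from 50/50."""
    return [1 if random.random() < p_one else 0 for _ in range(n)]

def von_neumann(bits):
    """Map bit pairs 01 -> 0 and 10 -> 1; discard 00 and 11.
    Output is unbiased whenever pairs are independent, at the cost of rate."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

raw = raw_trng_bits(100_000)
post = von_neumann(raw)
print(f"raw bias:  {sum(raw)/len(raw):.3f}")
print(f"post bias: {sum(post)/len(post):.3f}  rate: {len(post)/len(raw):.2f}")
```

The corrector removes bias exactly when bit pairs are independent, at the price of throughput (about p(1-p) output bits per raw bit, roughly 0.24 here), which is why rate, power, and simplicity have to be weighed against one another when comparing such schemes for SoC and RFID budgets.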