
    Four-injector variability modeling of FinFET predictive technology models

    The usual way of modeling variability through threshold-voltage shift and drain-current amplification is becoming inaccurate as new sources of variability appear in sub-22nm devices. In this work we apply the four-injector approach to variability modeling in the simulation of SRAMs with predictive technology models from the 20nm down to the 7nm node. We show that the SRAMs, designed following the ITRS roadmap, exhibit stability metrics at least 20% higher than those predicted by the classical variability modeling approach. The classical approach is also pessimistic in its speed estimates and underestimates leakage when sub-threshold slope and DIBL mismatch, and their correlations with threshold voltage, are not considered.
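
    The abstract above contrasts the classical two-parameter approach with a four-injector one in which sub-threshold slope and DIBL mismatch are sampled alongside threshold voltage and current factor, including their correlations. The sketch below illustrates, under assumed sigma values and correlation coefficients (not taken from the paper), how such correlated injectors might be drawn per transistor for a Monte Carlo SRAM simulation.

```python
# Minimal sketch of four-injector variability sampling for Monte Carlo SRAM
# simulation. The sigma values and correlation coefficients below are
# placeholders, not figures from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Injectors per transistor: threshold-voltage shift (V), drain-current
# amplification factor (relative), sub-threshold slope shift (mV/dec),
# DIBL shift (mV/V).
sigmas = np.array([0.030, 0.05, 4.0, 10.0])          # assumed standard deviations
corr = np.array([[ 1.0, -0.4,  0.3,  0.5],           # assumed correlation matrix
                 [-0.4,  1.0, -0.2, -0.3],
                 [ 0.3, -0.2,  1.0,  0.4],
                 [ 0.5, -0.3,  0.4,  1.0]])
cov = np.outer(sigmas, sigmas) * corr

def sample_injectors(n_transistors, n_samples):
    """Draw correlated (dVth, dBeta, dSS, dDIBL) tuples for every transistor."""
    return rng.multivariate_normal(np.zeros(4), cov,
                                   size=(n_samples, n_transistors))

# A 6T bitcell has six transistors; draw 10k Monte Carlo samples of the cell.
samples = sample_injectors(6, 10_000)
print(samples.shape)   # (10000, 6, 4)
```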

    Design and Analysis of Robust Low Voltage Static Random Access Memories.

    Static Random Access Memory (SRAM) is an indispensable part of most modern VLSI designs and dominates silicon area in many applications. In scaled technologies, maintaining high SRAM yield becomes more challenging because SRAMs are particularly vulnerable to process variations due to 1) the minimum-sized devices used in SRAM bitcells and 2) the large array sizes. At the same time, low-power design is a key focus throughout the semiconductor industry. Since low-voltage operation is one of the most effective ways to reduce power consumption, owing to the quadratic relationship between supply voltage and energy, lowering the minimum operating voltage (Vmin) of SRAM has gained significant interest. This thesis presents four different approaches to designing and analyzing robust low-voltage SRAM: SRAM analysis method improvement, SRAM bitcell development, SRAM peripheral optimization, and advanced device selection. We first describe a novel yield estimation method for bit-interleaved, voltage-scaled 8-T SRAMs. Instead of the traditional trade-off between write and read, the trade-off between write and half-select disturb is analyzed. This analysis also yields a method for finding an appropriate Write Word-Line (WWL) pulse width that maximizes yield. Second, a low-leakage 10-T SRAM with a speed-compensation scheme is proposed. During the sleep mode of a sensor application, the SRAM retaining data cannot be shut down, so minimizing SRAM leakage is important. This work adopts several leakage reduction techniques while compensating for the resulting performance loss. Third, an adaptive write architecture for low-voltage 8-T SRAMs is proposed. By adaptively modulating the WWL pulse width and voltage level, it is possible to achieve low power consumption while maintaining high yield without excessive performance degradation. Finally, low-power circuit design based on heterojunction tunneling transistors (HETTs) is discussed. HETTs have a steep subthreshold swing that is beneficial for low-voltage operation. Device modeling and the design of HETT-based logic and SRAM are presented.
    Ph.D., Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91569/1/daeyeonk_1.pd
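
    The first contribution above trades write failures against half-select disturbs to pick a Write Word-Line pulse width. The toy sketch below illustrates that idea with placeholder failure-probability models and an assumed interleaving factor; it is not the statistical method of the thesis, only a hedged illustration of the trade-off.

```python
# Toy sketch of the write vs. half-select-disturb trade-off used to pick a
# Write Word-Line (WWL) pulse width. Both failure-probability models and the
# interleaving factor are illustrative assumptions.
import numpy as np

pulse_widths = np.linspace(0.2, 3.0, 141)      # candidate WWL pulse widths (ns), assumed range

def p_write_fail(tw_ns):
    # Write failures become less likely as the pulse lengthens (assumed model).
    return np.exp(-6.0 * tw_ns)

def p_half_select_disturb(tw_ns):
    # Half-selected cells in a bit-interleaved word are exposed for the whole
    # pulse, so disturb probability grows with pulse width (assumed model).
    return 2e-4 * tw_ns ** 2

cells_per_word = 64                            # assumed bit-interleaving factor
p_cell_fail = p_write_fail(pulse_widths) + p_half_select_disturb(pulse_widths)
word_yield = (1.0 - p_cell_fail) ** cells_per_word

best = pulse_widths[np.argmax(word_yield)]
print(f"best WWL pulse width ~ {best:.2f} ns, word yield ~ {word_yield.max():.4f}")
```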

    Using pMOS Pass-Gates to Boost SRAM Performance by Exploiting Strain Effects in Sub-20-nm FinFET Technologies

    Strained fins are one of the techniques used to improve devices as their size keeps shrinking in new nanoscale nodes. In this paper, we use a 14nm predictive technology in which pMOS mobility is significantly improved when those devices are built on top of long, uncut fins, while nMOS devices present the opposite behavior due to the combination of strains. We explore the possibility of boosting circuit performance in repetitive structures where long, uncut fins can be exploited to increase the impact of fin strain. In particular, pMOS pass-gates are used in 6T complementary SRAM cells (CSRAM) with reinforced pull-ups. These cells are simulated under process variability and compared to the regular SRAM. We show that, when layout-dependent effects are considered, the CSRAM design provides 10% to 40% faster access time while keeping the same area, power, and stability as a regular 6T SRAM cell. The conclusions also apply to 8T SRAM cells. The CSRAM cell also presents increased reliability in technologies whose nMOS devices have more mismatch than their pMOS transistors.

    Ultra Low Power Digital Circuit Design for Wireless Sensor Network Applications

    New research in the field of wireless sensor networks opens the door to new and innovative products and solutions. Biomedical applications are among the areas with the greatest potential, and significant investments are being made today to use this technology to make medical diagnostics more efficient, while also enabling remote diagnostics based on wireless sensor nodes integrated into a "health network". The goal is to improve quality of service and reduce costs while giving users a better quality of life through increased security and the ability to spend as much time as possible in their own homes, avoiding unnecessary hospital visits and admissions. To make this a reality, sensor electronics must use as little energy as possible so that adequate battery lifetime is achieved even with very small batteries. In his dissertation "Ultra Low Power Digital Circuit Design for Wireless Sensor Network Applications", PhD candidate Farshad Moradi has focused on new solutions for the design of energy-efficient digital circuits. The dissertation presents new solutions for both arithmetic and combinational circuits, and it also studies new static memory elements (SRAM) and alternative memory architectures. It further examines the challenges that arise when silicon technology is scaled down in step with microprocessor development and proposes solutions that help make circuit designs more robust and scalable with respect to this trend. The main conclusions of the work are that, by introducing new design techniques, it is possible to reduce energy consumption while at the same time improving robustness and technology scalability. The research was carried out in collaboration with Purdue University and funded by the Research Council of Norway (Norges ForskningsrÄd) through the FRINAT project "Micropower Sensor Interface in Nanometer CMOS Technology".

    Design of Variation-Tolerant Circuits for Nanometer CMOS Technology: Circuits and Architecture Co-Design

    Aggressive scaling of CMOS technology in sub-90nm nodes has created huge challenges. Variations due to fundamental physical limits, such as random dopant fluctuation (RDF) and line edge roughness (LER), are increasing significantly with technology scaling. In addition, manufacturing tolerances in process technology are not scaling at the same pace as the transistor channel length due to process control limitations (e.g., sub-wavelength lithography). Therefore, within-die process variations worsen with successive technology generations. These variations have a strong impact on the maximum clock frequency and leakage power of any digital circuit, and can also result in functional yield losses in variation-sensitive digital circuits (such as SRAM). Moreover, in nanometer technologies, digital circuits show an increased sensitivity to process variations due to low-voltage operation requirements, which are aggravated by the strong demand for lower power consumption and cost while achieving higher performance and density. It is therefore not surprising that the International Technology Roadmap for Semiconductors (ITRS) lists variability as one of the most challenging obstacles for IC design in the nanometer regime.
    To facilitate variation-tolerant design, we study the impact of random variations on the delay variability of a logic gate and derive simple, scalable statistical models to evaluate delay variations in the presence of within-die variations. This work provides new design insight and highlights the importance of accounting for the effect of input slew on delay variations, especially at lower supply voltages. The derived models are simple, scalable, and bias-dependent, and require only the knowledge of easily measurable parameters. This makes them useful in early design exploration, circuit/architecture optimization, and technology prediction (especially for low-power and low-voltage operation). The derived models are verified against Monte Carlo SPICE simulations in an industrial 90nm technology.
    Random variations in nanometer technologies are among the most important design considerations. This is especially true for SRAM, due to the large variations in bitcell characteristics. SRAM bitcells typically have the smallest device sizes on a chip and therefore show the largest sensitivity to the different sources of variation. With the drastic increase in memory densities, lower supply voltages, and higher variations, statistical simulation methodologies become imperative to estimate memory yield and optimize performance and power. In this research, we present a methodology for statistical simulation of SRAM read-access yield, which is tightly related to SRAM performance and power consumption. The proposed flow accounts for the impact of bitcell read current variation, sense amplifier offset distribution, timing window variation, and leakage variation on functional yield. The methodology overcomes the pessimism inherent in the conventional worst-case design techniques used in SRAM design. The proposed statistical yield estimation methodology allows early yield prediction in the design cycle, which can be used to trade off performance and power requirements for SRAM. The methodology is verified using measured silicon yield data from a 1Mb memory fabricated in an industrial 45nm technology.
    Embedded SRAM dominates modern SoCs, and there is a strong demand for SRAM with lower power consumption that still achieves high performance and high density. However, in the presence of large process variations, SRAMs are expected to consume more power to ensure correct read operation and meet yield targets. We propose a new architecture that significantly reduces array switching power for SRAM. The proposed architecture combines built-in self-test (BIST) and digitally controlled delay elements to reduce the wordline pulse width while ensuring correct read operation, hence reducing switching power. A new statistical simulation flow was developed to evaluate the power savings of the proposed architecture. Monte Carlo simulations using a 1Mb SRAM macro from an industrial 45nm technology were used to examine the power reduction achieved by the system. The proposed architecture reduces the array switching power significantly and shows large power savings, especially as the chip-level memory density increases. For a 48Mb memory density, a 27% reduction in array switching power can be achieved for a read-access yield target of 95%. In addition, the proposed system provides larger power savings as process variations increase, which makes it a very attractive solution for 45nm and below technologies.
    In addition to its impact on bitcell read current, the increase of local variations in nanometer technologies strongly affects SRAM cell stability. In this research, we propose a novel single-supply-voltage read-assist technique to improve the SRAM static noise margin (SNM). The proposed technique precharges different parts of the bitlines to VDD and GND and uses charge sharing to precisely control the bitline voltage, which improves bitcell stability. In addition to improving SNM, the proposed technique also reduces memory access time. Moreover, it requires only one supply voltage and hence eliminates the need for large-area voltage shifters. The proposed technique has been implemented in the design of a 512kb memory fabricated in a 45nm technology. Results show improvements in SNM and the read operation window, which confirms the effectiveness and robustness of this technique.
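
    The read-access yield methodology described above combines bitcell read current, sense-amplifier offset, timing window, and leakage variation. The following Monte Carlo sketch shows one plausible way to combine such distributions into a pass/fail yield estimate; every distribution and constant is an assumption for illustration, not a value from the measured 45nm silicon.

```python
# Monte Carlo sketch of SRAM read-access yield: a read passes when the bitline
# differential developed by the cell's read current within the sensing window
# exceeds the sense-amplifier offset. All distributions and constants are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000                                     # Monte Carlo samples

c_bl = 50e-15                                     # assumed bitline capacitance (F)
i_read = rng.normal(20e-6, 4e-6, n)               # assumed bitcell read-current spread (A)
t_sense = rng.normal(300e-12, 20e-12, n)          # assumed sensing-window spread (s)
v_offset = np.abs(rng.normal(0.0, 15e-3, n))      # assumed sense-amp offset (V)
i_leak = rng.lognormal(np.log(50e-9), 0.5, n)     # assumed aggregate bitline leakage (A)

# Leakage from unselected cells erodes the differential the selected cell develops.
dv_bl = np.clip(i_read - i_leak, 0.0, None) * t_sense / c_bl
read_pass = dv_bl > v_offset
print(f"estimated read-access yield: {read_pass.mean():.4%}")
```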

    Soft-Error Resilience Framework For Reliable and Energy-Efficient CMOS Logic and Spintronic Memory Architectures

    The revolution in chip manufacturing processes spanning five decades has proliferated high-performance and energy-efficient nano-electronic devices across all aspects of daily life. In recent years, CMOS technology scaling has realized billions of transistors within large-scale VLSI chips to elevate performance. However, these advancements have also continually augmented the impact of Single-Event Transient (SET) and Single-Event Upset (SEU) occurrences, which precipitate a range of Soft-Error (SE) dependability issues. Consequently, soft-error mitigation techniques have become essential to improve systems' reliability. Herein, we first propose optimized soft-error resilience designs to improve the robustness of sub-micron computing systems. The proposed approaches were developed to deliver energy efficiency and to tolerate double/multiple errors simultaneously while incurring acceptable speed degradation compared to prior work. Secondly, the impact of Process Variation (PV) in the Near-Threshold Voltage (NTV) region on redundancy-based SE-mitigation approaches for High-Performance Computing (HPC) systems was investigated to highlight the approach that realizes the most favorable attributes, such as reduced critical-datapath delay variation and low speed degradation. Finally, spin-based devices have recently been widely used to design Non-Volatile (NV) elements such as NV latches and flip-flops, which can be leveraged in normally-off computing architectures for Internet-of-Things (IoT) and energy-harvesting-powered applications. Thus, in the last portion of this dissertation, we design and evaluate soft-error-resilient NV latching circuits that achieve attractive features, such as low energy consumption, high computing performance, and superior soft-error tolerance, i.e., the ability to tolerate Multiple Node Upsets (MNUs) concurrently, so that they can potentially become a mainstream solution for aerospace and avionic nanoelectronics. Together, these objectives increase the energy efficiency and soft-error mitigation resiliency of larger-scale emerging NV latching circuits within iso-energy constraints. In summary, addressing these reliability concerns is paramount to the successful deployment of future reliable and energy-efficient CMOS logic and spintronic memory architectures with deeply-scaled devices operating at low voltages.

    Power Management and SRAM for Energy-Autonomous and Low-Power Systems

    We demonstrate the first two known complete, self-powered, millimeter-scale computer systems. These microsystems achieve zero-net-energy operation using solar energy harvesting and ultra-low-power circuits. A medical implant for monitoring intraocular pressure (IOP) is presented as part of a treatment for glaucoma. The 1.5mmÂł IOP monitor is easily implantable because of its small size and measures IOP with 0.5mmHg accuracy. It wirelessly transmits data to an external wand while consuming 4.7nJ/bit. This provides rapid feedback about treatment efficacy to decrease physician response time and potentially prevent unnecessary vision loss. A nearly-perpetual temperature sensor is presented that processes data using a 2.1”W near-threshold ARMÂź Cortex-M3™ ”P, which provides a widely used and trusted programming platform. Energy harvesting and power management techniques for these two microsystems enable energy-autonomous operation. The IOP monitor harvests 80nW of solar power while consuming only 5.3nW, extending lifetime indefinitely. This allows the device to provide medical information for extended periods of time, giving doctors time to converge upon the best glaucoma treatment. The temperature sensor uses on-demand power delivery to improve low-load dc-dc voltage conversion efficiency by 4.75x. It also performs linear regulation to deliver power with low noise, improved load regulation, and tight line regulation. Low-power, high-throughput SRAM techniques help millimeter-scale microsystems meet stringent power budgets. VDD scaling in memory decreases energy per access but also decreases stability margins. These margins can be improved using sizing, VTH selection, and assist circuits, as well as new bitcell designs. Adaptive Crosshairs modulation of the SRAM power supplies fixes 70% of parametric failures. A half-differential SRAM design improves stability, reducing VMIN by 72mV. The circuit techniques for energy autonomy presented in this dissertation enable millimeter-scale microsystems for medical implants, such as blood pressure and glucose sensors, as well as non-medical applications, such as supply chain and infrastructure monitoring. These pervasive sensors represent the continuation of Bell's Law, which accurately traces the evolution of computers as they become smaller, more numerous, and more powerful. The development of millimeter-scale, massively deployed, ubiquitous computers ensures the continued expansion and profitability of the semiconductor industry. NanoWatt circuit techniques will allow us to meet this next frontier in IC design.
    Ph.D., Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/86387/1/grgkchen_1.pd

    The Efficacy of Programming Energy Controlled Switching in Resistive Random Access Memory (RRAM)

    Current state-of-the-art memory technologies such as FLASH, Static Random Access Memory (SRAM), and Dynamic RAM (DRAM) are based on charge storage. The semiconductor industry has relied on cell miniaturization to increase the performance and density of memory technology while simultaneously decreasing the cost per bit. However, this approach is not sustainable because the charge-storage mechanism is reaching a fundamental scaling limit. Although stack engineering and 3D integration solutions can delay this limit, alternative strategies based on non-charge storage mechanisms have been introduced and are being actively pursued. Resistive Random Access Memory (RRAM) has emerged as one of the leading candidates for future high-density non-volatile memory. The superior scalability of RRAM rests on its highly localized active switching region and filamentary conductive path. Coupled with their simple structure and compatibility with complementary metal-oxide-semiconductor (CMOS) processes, RRAM cells have demonstrated switching performance comparable to volatile memory technologies such as DRAM and SRAM. However, there are two serious barriers to RRAM commercialization. The first is the variability of the resistance state, which is associated with the inherent randomness of the resistive switching mechanism. The second is the filamentary nature of the conductive path, which makes it susceptible to noise. In this experimental thesis, a novel program-verify (P-V) technique was developed specifically to address programming errors and to provide solutions to the most challenging issues associated with these intrinsic failures in current RRAM technology. The technique, called Compliance-free Ultra-short Smart Pulse Programming (CUSPP), utilizes sub-nanosecond pulses in a compliance-free setup to minimize the programming energy delivered per pulse. To demonstrate CUSPP, a custom-built picosecond pulse generator and feedback control circuit were designed. We achieved high endurance (10^8 cycles) with state verification for each cycle and established high-speed performance, such as 100ps write/erase speed and a 500kHz cycling rate, for HfO2-based RRAM cells. We also investigated switching failures and the short-term instability of RRAM using CUSPP.
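
    CUSPP is, at its core, a program-verify loop built around ultra-short, compliance-free pulses with a verify read after every pulse. The sketch below outlines such a control loop; `apply_pulse` and `read_resistance` are hypothetical stand-ins for the custom picosecond pulse generator and read-out circuitry, and the amplitude ramp is an assumed policy rather than the published one.

```python
# Sketch of a program-verify (P-V) control loop in the spirit of CUSPP:
# apply one ultra-short, compliance-free pulse at a time, read back the cell
# resistance, and stop as soon as the target state is verified.
def program_verify(apply_pulse, read_resistance,
                   target_ohms, tolerance=0.2,
                   pulse_width_s=100e-12, start_amplitude_v=1.0,
                   step_v=0.05, max_pulses=50):
    """Return (verified, pulses_used, final_resistance)."""
    resistance = read_resistance()                      # verify-before-program read
    amplitude = start_amplitude_v
    for pulses_used in range(1, max_pulses + 1):
        if abs(resistance - target_ohms) / target_ohms <= tolerance:
            return True, pulses_used - 1, resistance    # already in the target window
        apply_pulse(amplitude, pulse_width_s)           # one compliance-free, sub-ns pulse
        resistance = read_resistance()                  # read back the programmed state
        amplitude += step_v                             # gently ramp the energy per pulse
    verified = abs(resistance - target_ohms) / target_ohms <= tolerance
    return verified, max_pulses, resistance

# Example with stand-in instrument stubs (purely illustrative):
# ok, n, r = program_verify(lambda v, t: None, lambda: 5.2e3, target_ohms=5e3)
```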

    Study and development of innovative strategies for energy-efficient cross-layer design of digital VLSI systems based on Approximate Computing

    The increasing demand for high performance and energy efficiency in modern digital systems has led to the search for new design approaches that are able to go beyond the established energy-performance trade-off. In the scientific literature, the Approximate Computing paradigm has been particularly prolific. Many applications in the domains of signal processing, multimedia, computer vision, and machine learning are known to be particularly resilient to errors occurring in their input data and during computation, producing outputs that, although degraded, are still largely acceptable from the point of view of quality. The Approximate Computing design paradigm leverages the characteristics of this group of applications to develop circuits, architectures, and algorithms that, by relaxing design constraints, perform their computations in an approximate or inexact manner, reducing energy consumption. This PhD research aims to explore the design of hardware/software architectures based on Approximate Computing techniques, filling a gap in the literature regarding their effective applicability and deriving a systematic methodology to characterize their benefits and trade-offs. The main contributions of this work are:
    - the introduction of approximate memory management inside the Linux OS, allowing dynamic allocation and de-allocation of approximate memory at user level, as for normal exact memory;
    - the development of an emulation environment for platforms with approximate memory units, where faults are injected during simulation based on models that reproduce the effects on memory cells of circuit-level and architectural techniques for approximate memories (a minimal illustrative sketch follows this list);
    - the implementation and analysis of the impact of approximate memory hardware on real applications: the H.264 video encoder, internally modified to allocate selected data buffers in approximate memory, and signal processing applications (a digital filter) using approximate memory for input/output buffers and tap registers;
    - the development of a fully reconfigurable and combinatorial floating point unit, which can work with reduced-precision formats.
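
    As a hedged illustration of the emulation approach mentioned in the second contribution, the sketch below injects random bit flips into a buffer standing in for data allocated in approximate memory. The bit-error rate and the uniform error model are placeholders, not the fault models developed in the thesis.

```python
# Minimal sketch of the kind of fault injection used to emulate approximate
# memory: data placed in an "approximate" buffer is exposed to random bit flips
# meant to mimic aggressive voltage scaling or relaxed refresh. The bit-error
# rate below is an arbitrary placeholder.
import random

def inject_bit_flips(buffer: bytearray, bit_error_rate: float,
                     rng: random.Random) -> int:
    """Flip each bit of `buffer` independently with probability `bit_error_rate`."""
    flips = 0
    for i in range(len(buffer)):
        for bit in range(8):
            if rng.random() < bit_error_rate:
                buffer[i] ^= 1 << bit
                flips += 1
    return flips

rng = random.Random(42)
frame = bytearray(b"\x80" * 1024)          # stand-in for an approximate data buffer
n_flips = inject_bit_flips(frame, 1e-4, rng)
print(f"{n_flips} bit flips injected into {len(frame)} bytes")
```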