
    SRAM Cells for Embedded Systems


    Ultra-Low Power Circuit Design for Cubic-Millimeter Wireless Sensor Platform.

    Modern daily life is surrounded by smaller and smaller computing devices. As Bell's Law predicts, the research community is now looking at tiny computing platforms, and mm³-scale sensor systems are drawing increasing attention since they can create a whole new computing environment. Designing mm³-scale sensor nodes raises various circuit- and system-level challenges; we address many of these challenges and propose novel solutions, creating the first complete 1.0 mm³ sensor system that includes a commercial microprocessor. We demonstrate a 1.0 mm³ form-factor sensor whose modular die-stacked structure allows maximum volume utilization. Low-power I2C communication enables inter-layer serial communication without losing compatibility with the standard I2C protocol. A dual-microprocessor architecture enables concurrent computation for sensor node control and measurement data processing. A multi-modal power management unit allows energy harvesting from various sources. An optical communication scheme is provided for initial programming, synchronization, and re-programming after recovery from battery discharge. Standby power reduction techniques are investigated: a super cut-off power gating scheme with an ultra-low-power charge pump reduces the standby power of logic circuits by 2-19× and of memory by 30%. Different approaches for designing low-power memory for mm³-scale sensor nodes are also presented. A dual-threshold-voltage gain-cell eDRAM design achieves the lowest eDRAM retention power, and a 7T SRAM design based on hetero-junction tunneling transistors reduces SRAM standby power by 9-19× with only 15% area overhead. Special attention is paid to the timer for mm³-scale sensor systems: a multi-stage gate-leakage-based timer limits the standard deviation of the hourly timing error to 196 ms, and a temperature compensation scheme reduces the temperature dependency to 31 ppm/°C. Together, these ultra-low-power circuit techniques enable implementation of a 1.0 mm³ sensor node that can serve as a skeleton for future micro-sensor systems in a variety of applications. These microsystems represent the continuation of Bell's Law, which also predicts the massive deployment of mm³-scale computing systems and the emergence of even smaller and more powerful computing systems in the near future.
    Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91438/1/sori_1.pd
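
    For a rough sense of the quoted timer figures, the short sketch below converts the 196 ms hourly standard deviation into a relative error and estimates the drift implied by the 31 ppm/°C temperature coefficient. All values come from the abstract except the 10 °C temperature excursion, which is an illustrative assumption.

        # Back-of-the-envelope check of the timer figures quoted above; the 10 degC
        # temperature excursion is an illustrative assumption, not from the thesis.
        HOUR_S = 3600.0
        sigma_hourly_s = 0.196                                   # 196 ms per hour (1-sigma)
        rel_error_ppm = sigma_hourly_s / HOUR_S * 1e6            # ~54 ppm
        tempco_ppm_per_c = 31.0                                  # after compensation
        delta_t_c = 10.0                                         # assumed excursion
        drift_ms = tempco_ppm_per_c * delta_t_c * 1e-6 * HOUR_S * 1e3   # ~1116 ms over 1 h
        print(f"1-sigma hourly error: {rel_error_ppm:.0f} ppm")
        print(f"drift over 1 h at +10 degC: {drift_ms:.0f} ms")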

    Low-Power Soft-Error-Robust Embedded SRAM

    Soft errors are transient errors in integrated circuits caused by radiation-induced ionization events (from energetic particles such as alpha particles, cosmic neutrons, etc.). The circuit can always recover from such errors because the underlying semiconductor material is not damaged; hence they are called soft errors. In nanometer technologies, reduced node capacitance and supply voltage, coupled with high packing density and a lack of masking mechanisms, are primarily responsible for the increased susceptibility of SRAMs to soft errors. Added to these are process variations (in effective length, width, and threshold voltage), which are prominent in scaled-down technologies. SRAM typically constitutes up to 90% of the die in microprocessors and SoCs (systems-on-chip); hence, soft errors in SRAMs pose a potential threat to the reliable operation of the system. In this work, a soft-error-robust eight-transistor (8T) SRAM cell is proposed to strike a balance between low power consumption and soft error robustness. The proposed approach is evaluated using metrics such as access time, leakage power, and sensitivity to single event transients (SETs). For analysis and comparison, the 8T cell is evaluated against a standard 6T SRAM cell and state-of-the-art soft-error-robust SRAM cells. Based on simulation results in a 65-nm commercial CMOS process, the 8T cell demonstrates higher immunity to SETs along with smaller area and comparable leakage power. A 32-kb array of 8T cells was fabricated in silicon. After functional verification of the test chip, a radiation test was conducted to evaluate the soft error robustness. As SRAM cells are scaled aggressively to increase overall packing density, the smaller transistors exhibit higher degrees of process variation and mismatch, leading to larger offset voltages. For SRAM sense amplifiers, higher offset voltages lead to an increased likelihood of an incorrect decision. To address this issue, a sense amplifier capable of cancelling its input offset voltage is presented. Simulated and measured results in a 180-nm technology show that the sense amplifier can detect a 4 mV differential input signal under dc and transient conditions. Compared with a conventional sense amplifier, the proposed design has a similar die area and a greatly reduced offset voltage. Additionally, a dual-input sense amplifier architecture is proposed, with corroborating silicon results showing that it requires a smaller differential input to evaluate correctly.
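
    To illustrate why input offset matters for the 4 mV figure above, here is a minimal Monte Carlo sketch (not the methodology of the thesis): the input-referred offset is modeled as a zero-mean Gaussian, and its 10 mV sigma is an illustrative assumption.

        # Minimal Monte Carlo sketch: probability that a latch-type sense amplifier
        # with Gaussian input-referred offset resolves a small differential input.
        # The 4 mV input is the figure quoted above; sigma_offset_mV is assumed.
        import numpy as np
        rng = np.random.default_rng(0)
        sigma_offset_mV = 10.0            # assumed offset sigma before cancellation
        v_diff_mV = 4.0                   # differential input from the abstract
        offset = rng.normal(0.0, sigma_offset_mV, 1_000_000)
        p_correct = np.mean(offset < v_diff_mV)   # wrong decision when offset exceeds the signal
        print(f"P(correct) at sigma={sigma_offset_mV} mV: {p_correct:.3f}")
        # Offset cancellation shrinks sigma_offset_mV; rerunning with a smaller sigma
        # shows how the cancelling and dual-input amplifiers improve this probability.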

    Yield-Aware Leakage Power Reduction of On-Chip SRAMs

    Leakage power dissipation of on-chip static random access memories (SRAMs) constitutes a significant fraction of the total chip power consumption in state-of-the-art microprocessors and systems-on-chip (SoCs). Scaling the supply voltage of SRAMs during idle periods is a simple yet effective technique to reduce their leakage power consumption. However, supply voltage scaling also degrades the cells' robustness and thus reduces their capability to retain data reliably. In particular, this results in the failure of an increasing number of cells that are already weakened by excessive process parameter variations and/or manufacturing imperfections in nanometer technologies. Thus, with technology scaling, it is becoming increasingly challenging to maintain yield while attempting to reduce the leakage power of SRAMs. This research focuses on characterizing the yield-leakage tradeoffs and developing novel techniques for yield-aware leakage power reduction of SRAMs. We first demonstrate that new fault behaviors emerge with the introduction of a low-leakage standby mode to SRAMs. In particular, it is shown that some types of defects in SRAM cells start to cause failures only when the drowsy mode is activated. These defects are not sensitized in the active operating mode and thus escape traditional March tests. Fault models for these newly observed fault behaviors are developed and described in this thesis. Then, a new low-complexity test algorithm, called March RAD, is proposed that is capable of detecting all the drowsy faults as well as the simple traditional faults. Extreme process parameter variations can also result in SRAM cells with very weak data-retention capability. Such cells may be very rare in small memory arrays; in large arrays, however, their probability of occurrence is magnified by the huge number of bit-cells integrated on a single chip. Hence, it is also critical to account for such extreme events while attempting to scale the supply voltage of SRAMs. To estimate the statistics of such rare events within a reasonable computational time, we have employed concepts from extreme value theory (EVT). This has enabled us to accurately model the tail of the cell failure probability distribution versus the supply voltage. Analytical models are then developed to characterize the yield-leakage tradeoffs in large modern SRAMs. It is shown that even a moderate scaling of the supply voltage of large SRAMs can potentially result in significant yield losses, especially in processes with highly fluctuating parameters. We have therefore investigated the application of fault-tolerance techniques for a more efficient leakage reduction of SRAMs. These techniques allow for more aggressive voltage scaling by providing tolerance to the failures that might occur during the sleep mode. The results show that in a 45-nm technology, assuming 10% variation in transistor threshold voltage, repairing a 64 KB memory using only 8 redundant rows or incorporating single-error-correcting codes (ECC) allows for ~90% leakage reduction while incurring only ~1% yield loss. The combination of redundancy and ECC, however, allows reaching the practical limit of leakage reduction in the analyzed benchmark, i.e., ~95%.
    Applying an identical standby voltage to all dies, regardless of their specific process parameter variations, can result in too many cell failures in some dies with heavily skewed process parameters, so that they may no longer be salvageable by the employed fault-tolerance techniques. To compensate for inter-die variations, we propose tuning the standby voltage of each individual die to its corresponding minimum level after manufacturing. A test algorithm is presented that can be used to identify the minimum applicable standby voltage for each individual memory die, and a possible implementation of the proposed tuning technique is demonstrated. Simulation results in a 45-nm predictive technology show that tuning the standby voltage of SRAMs can enhance data-retention yield by an additional 10%-50%, depending on the severity of the variations.
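
    The yield bookkeeping behind the redundancy results and the per-die tuning idea can be sketched as below. The exponential cell-failure model and the 512×1024 organization of the 64 KB array are illustrative assumptions; the thesis fits the actual failure tail with extreme value theory, and ECC is omitted here for brevity.

        # Toy sketch of the yield-vs-standby-voltage bookkeeping described above.
        # The cell-failure model and array organization are illustrative assumptions.
        from math import comb, exp
        ROWS, COLS = 512, 1024                       # 64 KB = 524288 bit-cells (assumed layout)
        def p_cell_fail(v_standby):
            """Assumed retention-failure probability of one cell at v_standby (V)."""
            return 1e-3 * exp(-25.0 * (v_standby - 0.3))
        def yield_with_redundancy(p_cell, spare_rows=8):
            """Die yield when up to spare_rows failing rows can be repaired."""
            p_row = 1.0 - (1.0 - p_cell) ** COLS
            return sum(comb(ROWS, k) * p_row**k * (1.0 - p_row)**(ROWS - k)
                       for k in range(spare_rows + 1))
        def min_standby_voltage(yield_target=0.99, v_lo=0.3, v_hi=1.0, steps=200):
            """Per-die tuning idea: lowest standby voltage still meeting the yield target."""
            for i in range(steps + 1):
                v = v_lo + (v_hi - v_lo) * i / steps
                if yield_with_redundancy(p_cell_fail(v)) >= yield_target:
                    return v
            return v_hi
        print(f"minimum standby voltage for 99% yield: {min_standby_voltage():.3f} V")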

    Improved Fault Tolerant SRAM Cell Design & Layout in 130nm Technology

    Technology scaling of CMOS devices has made integrated circuits vulnerable to single event radiation effects. Scaling of CMOS static RAM (SRAM) has led to denser packing architectures by reducing the size and spacing of diffusion nodes. However, this trend has increased charge collection and charge sharing between devices during an ion strike, making the circuit even more vulnerable to a specific single event effect called single event multiple-node upset (SEMU). In nanometer technologies, a SEMU can easily disrupt the data stored in the memory and can be more hazardous than a single event single-node upset. During the last decade, most research efforts focused mainly on improving the single event single-node upset tolerance of SRAM cells through novel circuit techniques, but recent studies of angular radiation sensitivity have revealed the importance of SEMU and multi-bit upset (MBU) tolerance for SRAM cells. This research focuses on improving the SEMU tolerance of CMOS SRAM cells using novel circuit- and layout-level techniques. A novel SRAM cell circuit and layout technique is proposed that improves the SEMU tolerance of 6T SRAM cells even as feature sizes shrink, making it an ideal candidate for future technologies. The layout is based on strategically positioning diffusion nodes so as to provide charge cancellation among nodes during SEMU radiation strikes, instead of charge build-up. The new design and layout technique can improve SEMU tolerance by up to 20 times without additional area overhead and is hence suitable for high-density SRAM designs in commercial applications. Finally, laser testing of the SRAM-based configuration memory of a Xilinx Virtex-5 FPGA is performed to analyze the behavior of SRAM-based systems under radiation strikes.
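
    A heavily simplified way to see the benefit of charge cancellation is to compare the net disturbance at a struck node pair when the collected charges reinforce versus when the layout arranges them to counteract each other. The collected-charge and critical-charge values below are illustrative assumptions only, not figures from this work.

        # Toy illustration of charge cancellation vs charge build-up between two
        # adjacent diffusion nodes struck by the same ion. All numbers are assumed.
        Q_A_fC, Q_B_fC = 1.6, 1.2         # charge collected at the two nodes (fC)
        Q_CRIT_fC = 2.0                   # assumed critical charge for an upset (fC)
        build_up = Q_A_fC + Q_B_fC        # conventional layout: collected charges reinforce
        cancel = abs(Q_A_fC - Q_B_fC)     # proposed layout: nodes placed so charges counteract
        for name, q in (("build-up", build_up), ("cancellation", cancel)):
            print(f"{name:12s}: net charge {q:.1f} fC -> upset: {q > Q_CRIT_fC}")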

    Power Management and SRAM for Energy-Autonomous and Low-Power Systems

    We demonstrate the first two known complete, self-powered millimeter-scale computer systems. These microsystems achieve zero-net-energy operation using solar energy harvesting and ultra-low-power circuits. A medical implant for monitoring intraocular pressure (IOP) is presented as part of a treatment for glaucoma. The 1.5 mm³ IOP monitor is easily implantable because of its small size and measures IOP with 0.5 mmHg accuracy. It wirelessly transmits data to an external wand while consuming 4.7 nJ/bit. This provides rapid feedback about treatment efficacy to decrease physician response time and potentially prevent unnecessary vision loss. A nearly perpetual temperature sensor is presented that processes data using a 2.1 μW near-threshold ARM® Cortex-M3™ μP, which provides a widely used and trusted programming platform. Energy harvesting and power management techniques for these two microsystems enable energy-autonomous operation. The IOP monitor harvests 80 nW of solar power while consuming only 5.3 nW, extending lifetime indefinitely. This allows the device to provide medical information for extended periods of time, giving doctors time to converge on the best glaucoma treatment. The temperature sensor uses on-demand power delivery to improve low-load dc-dc voltage conversion efficiency by 4.75×. It also performs linear regulation to deliver power with low noise, improved load regulation, and tight line regulation. Low-power, high-throughput SRAM techniques help millimeter-scale microsystems meet stringent power budgets. VDD scaling in memory decreases energy per access but also decreases stability margins. These margins can be improved using sizing, VTH selection, and assist circuits, as well as new bitcell designs. Adaptive Crosshairs modulation of SRAM power supplies fixes 70% of parametric failures. A half-differential SRAM design improves stability, reducing VMIN by 72 mV. The circuit techniques for energy autonomy presented in this dissertation enable millimeter-scale microsystems for medical implants, such as blood pressure and glucose sensors, as well as non-medical applications, such as supply chain and infrastructure monitoring. These pervasive sensors represent the continuation of Bell's Law, which accurately traces the evolution of computers as they become smaller, more numerous, and more powerful. The development of millimeter-scale, massively deployed ubiquitous computers ensures the continued expansion and profitability of the semiconductor industry. Nanowatt circuit techniques will allow us to meet this next frontier in IC design.
    Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/86387/1/grgkchen_1.pd
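
    A rough energy-neutrality check follows from the figures quoted above (80 nW harvested, 5.3 nW consumed, 4.7 nJ/bit radio). Treating the entire surplus as available for extra telemetry, and assuming continuous illumination, are simplifications for illustration only.

        # Energy-neutrality check with the figures quoted above; treating the whole
        # surplus as telemetry budget and assuming continuous illumination are
        # simplifications for illustration only.
        p_harvest_nW = 80.0
        p_consumed_nW = 5.3
        e_per_bit_nJ = 4.7
        margin_nW = p_harvest_nW - p_consumed_nW      # net surplus while harvesting
        bits_per_s = margin_nW / e_per_bit_nJ         # nW / (nJ/bit) = bit/s
        print(f"surplus: {margin_nW:.1f} nW -> extra uplink budget ~{bits_per_s:.1f} bit/s")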

    SRAM Read-Assist Scheme for Low Power High Performance Applications

    Semiconductor technology scaling has resulted in a considerable reduction in transistor cost and an astonishing enhancement in the performance of VLSI (very-large-scale integration) systems. These nanoscale technologies have facilitated the integration of large SRAMs, which are now very popular in both processor and system-on-chip (SoC) designs. SRAM array density has increased quadratically with each generation of CMOS technology. However, these nanoscale technologies have also introduced several significant challenges for the design of high-performance, low-power embedded memories. First, process variation has become more significant, threatening the reliability of the sensing circuitry. To alleviate this problem, larger signal swings are needed on the bitlines (BLs), which degrades both speed and power dissipation. The second challenge is variation in the cell current, which reduces the worst-case cell current. Since this current is responsible for discharging the BLs, the problem translates into longer activation times for the wordlines (WLs). The longer the WL pulse width, the more likely the cell is to become unstable; a long WL pulse width can also degrade the noise margin. Furthermore, as SRAM sizes continue to grow, the BL capacitance has increased significantly, which further degrades speed and power dissipation. These problems call for additional techniques, such as read assist, to ensure fast, low-power, and reliable read operation in nanoscale SRAMs. In this research, we address these concerns and propose a read-assist sense amplifier (SA) in 65-nm CMOS technology that expedites development of the differential voltage to be sensed while reducing the voltage swing on the BLs, resulting in higher sensing speed, lower power, and shorter WL activation time. A complete comparison among the proposed scheme, a conventional SA, and a state-of-the-art design shows a speed improvement of 56.1% and a power reduction of 25.9% over the conventional scheme, at the expense of negligible area overhead. The proposed scheme also allows the cell VDD to be reduced for the same sensing speed, which results in a considerable reduction in leakage power dissipation.
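
    The timing trade-off described above follows from the first-order relation ΔV = I_cell · t_WL / C_BL: a sense amplifier that resolves a smaller bitline swing permits a shorter wordline pulse. The sketch below evaluates this relation; the cell current and bitline capacitance values are illustrative assumptions, not figures from the thesis.

        # First-order bitline-swing relation: dV = I_cell * t_WL / C_BL. The numeric
        # values of cell current and bitline capacitance are assumptions chosen only
        # to illustrate how a smaller required swing shortens the wordline pulse.
        def wl_pulse_ns(dv_target_mV, i_cell_uA=20.0, c_bl_fF=150.0):
            """Wordline pulse width (ns) needed to develop dv_target_mV on the bitline."""
            # t = C * dV / I ; fF * mV / uA gives picoseconds, hence the 1e-3 factor.
            return c_bl_fF * dv_target_mV / i_cell_uA * 1e-3
        for dv in (100.0, 50.0, 20.0):                # required differential swing (mV)
            print(f"swing {dv:5.1f} mV -> WL pulse ~ {wl_pulse_ns(dv):.2f} ns")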