118 research outputs found

    Towards Energy-Efficient and Reliable Computing: From Highly-Scaled CMOS Devices to Resistive Memories

    The continuous increase in transistor density driven by Moore's Law has led to highly scaled Complementary Metal-Oxide Semiconductor (CMOS) technologies. These process technologies offer improved density as well as a reduction in nominal supply voltage. An analysis comparing the 45nm and 15nm technologies in terms of power consumption and cell area is carried out on an IEEE 754 Single Precision Floating-Point Unit implementation. Based on the results, the 15nm technology offers 4-times lower energy consumption and a 3-fold smaller footprint. New challenges also arise, such as the growing relative proportion of leakage power in standby mode, which can be addressed by post-CMOS technologies. Spin-Transfer Torque Magnetic Random Access Memory (STT-MRAM) has been explored as a post-CMOS technology for embedded and data storage applications seeking non-volatility, near-zero standby energy, and high density. Towards attaining these objectives in practical implementations, various techniques to mitigate the specific reliability challenges of STT-MRAM elements are surveyed, classified, and assessed herein. The cost and suitability metrics assessed include the area of nanomagnetic and CMOS components per bit, access time and complexity, Sense Margin (SM), and energy or power consumption costs versus resiliency benefits. To further improve the Process Variation (PV) immunity of Sense Amplifiers (SAs), a new SA called the Adaptive Sense Amplifier (ASA) is introduced. ASA achieves both a low Bit Error Rate (BER) and a low Energy Delay Product (EDP) by combining the properties of two commonly used SAs, the Pre-Charge Sense Amplifier (PCSA) and the Separated Pre-Charge Sense Amplifier (SPCSA). ASA can operate in either PCSA or SPCSA mode depending on the requirements of the circuit, such as energy efficiency or reliability. ASA is then used in a novel approach that leverages PV in Non-Volatile Memory (NVM) arrays through a Self-Organized Sub-bank (SOS) design. SOS engages the preferred SA alternative based on the intrinsic as-built behavior of the resistive sensing timing margin to reduce latency and power consumption while maintaining acceptable access times.
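    As a rough illustration of the SOS idea described above, the Python sketch below assigns each sub-bank an SA configuration based on its as-built sensing margin. The margin values, the 40 mV threshold, and the PCSA/SPCSA characterizations are illustrative assumptions, not parameters from the work itself.

```python
# Illustrative sketch (not the thesis implementation): per-sub-bank sense-amplifier
# mode selection in the spirit of the Adaptive Sense Amplifier (ASA) / Self-Organized
# Sub-bank (SOS) idea. Margin values and the threshold below are made up.

from dataclasses import dataclass

@dataclass
class SubBank:
    bank_id: int
    sense_margin_mv: float   # as-built sensing margin measured at test time

# Assumed characteristics of the two underlying SA modes (illustrative only):
# PCSA  -> lower energy-delay product, but needs a healthier sense margin
# SPCSA -> more PV-tolerant (lower BER at small margins), at higher EDP
PCSA_MIN_MARGIN_MV = 40.0

def choose_sa_mode(bank: SubBank) -> str:
    """Pick the SA configuration for a sub-bank from its measured margin."""
    if bank.sense_margin_mv >= PCSA_MIN_MARGIN_MV:
        return "PCSA"    # margin is comfortable: favor energy efficiency
    return "SPCSA"       # margin is tight: favor reliability

if __name__ == "__main__":
    banks = [SubBank(0, 55.0), SubBank(1, 32.0), SubBank(2, 47.5)]
    for b in banks:
        print(f"sub-bank {b.bank_id}: margin {b.sense_margin_mv} mV -> {choose_sa_mode(b)}")
```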

    Enhancing SRAM Cell Circuitry through PDLPDC Optimization

    This study focuses on improving static random-access memory (SRAM) cell circuit design by leveraging the Power Dissipation Low Power Dissipation Circuit (PDLPDC). The PDLPDC, a low-power dissipation circuit, has gained widespread use in designing cells for read operations, write operations, and idle modes, contributing to power optimisation in submicron and nano-range Very Large Scale Integration (VLSI) designs. While various SRAM cells, including 6T and 10T configurations, have been developed, they often exhibit higher power consumption; in contrast, our PDLPDC-based approach operates at lower power levels. With the increasing integration of portable devices into everyday life, power optimisation has emerged as a critical challenge in modern VLSI technology. Many contemporary gadgets and systems rely on VLSI technology, in which SRAM blocks occupy substantial chip area and represent a significant source of leakage power. However, scaling the supply voltage of SRAM macros, a common practice, can lead to elevated power dissipation. This research addresses the challenge by efficiently scaling the supply voltage of SRAM macros, resulting in an overall reduction in power dissipation. The study introduces 6T and 10T SRAM circuits that minimise power dissipation during read and write operations while maintaining reasonable performance and stability. With increasing integration scale, the impact of process parameter variations on design metrics such as read and write power, leakage power, leakage current, and latency becomes a critical consideration in SRAM cell design. The proposed circuit, optimised for the minimum power-delay product during read, write, and idle modes, is compared with traditional 6T and 10T SRAM cells and demonstrates superior performance, reliability, and power efficiency. This research contributes to advancing the understanding of SRAM circuit design, especially in the context of power optimisation and process variations.
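    To make the power-delay trade-off behind supply-voltage scaling concrete, the following back-of-the-envelope Python sketch evaluates dynamic power (P = a·C·Vdd²·f) and an alpha-power-law delay model at a few supply voltages. The capacitance, threshold voltage, and exponent are placeholder values, not figures from this study.

```python
# A minimal sketch of why supply-voltage scaling is evaluated with a power-delay
# product (PDP) style metric: lowering Vdd cuts switching power quadratically but
# slows the gates, so the product captures the balance. All constants are invented.

def dynamic_power(c_eff_f, vdd, freq_hz, activity=0.1):
    """Switching power: P = a * C * Vdd^2 * f."""
    return activity * c_eff_f * vdd**2 * freq_hz

def gate_delay(vdd, vth=0.4, k=1e-10, alpha=1.3):
    """Alpha-power-law delay model: d ~ k * Vdd / (Vdd - Vth)^alpha."""
    return k * vdd / (vdd - vth)**alpha

for vdd in (1.0, 0.8, 0.6):
    p = dynamic_power(c_eff_f=1e-12, vdd=vdd, freq_hz=1e9)
    d = gate_delay(vdd)
    print(f"Vdd={vdd:.1f} V  power={p*1e6:.2f} uW  delay={d*1e12:.1f} ps  PDP={p*d*1e15:.3f} fJ")
```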

    Variation-Tolerant Non-Uniform 3D Cache Management in Memory Stacked Multi-Core Processors

    Process variations in integrated circuits have a significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. DRAMs are built from minimum-sized transistors, presumed to have uniform speed, in an organized array structure. Process variation can introduce latency disparity among different memory arrays. With the proliferation of 3D stacking technology, DRAM has become a favorable choice for stacking on top of a multi-core processor as a last-level cache, offering large capacity, high bandwidth, and low power. Hence, variations in bank speed create a unique problem of non-uniform cache accesses in the 3D space. In this thesis, we investigate cache management techniques for tolerating process variation in a 3D DRAM stacked onto a multi-core processor. We model the process variation in a 4-layer DRAM memory to characterize the latency variations among different banks. As a result, the notion of fast and slow banks from a core's standpoint is no longer associated with its physical distance to the banks; it is determined by the differences in bank latency due to process variation. We develop cache migration schemes that utilize fast banks while limiting the cost of migration. Our experiments show that there is a great performance benefit in exploiting fast memory banks through migration. On average, variation-aware management improves the performance of a workload over the baseline (where the speed of the slowest bank is assumed for all banks) by 17.8%, and is only 0.45% away in performance from an ideal memory in which no process variation (PV) is present.
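    The following Python sketch illustrates the spirit of variation-aware placement: hot cache blocks are greedily mapped to the banks with the lowest measured-after-fabrication latency instead of assuming the worst-case latency for every bank. The bank latencies, block names, and access counts are invented for the example and this greedy assignment is not the thesis's migration scheme.

```python
# Illustrative sketch of variation-aware block placement: map the hottest cache
# blocks to the DRAM banks whose post-fabrication latency is lowest. Values invented.

def variation_aware_placement(bank_latency_ns, block_access_count):
    """Map the hottest blocks to the fastest banks (simple greedy assignment)."""
    banks_fast_first = sorted(bank_latency_ns, key=bank_latency_ns.get)
    blocks_hot_first = sorted(block_access_count, key=block_access_count.get, reverse=True)
    placement = {}
    for i, block in enumerate(blocks_hot_first):
        placement[block] = banks_fast_first[i % len(banks_fast_first)]
    return placement

bank_latency_ns = {"B0": 18.0, "B1": 24.5, "B2": 21.0, "B3": 27.0}   # PV-induced spread
block_access_count = {"blkA": 900, "blkB": 120, "blkC": 640, "blkD": 75}

for blk, bank in variation_aware_placement(bank_latency_ns, block_access_count).items():
    print(f"{blk} -> {bank} ({bank_latency_ns[bank]} ns)")
```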

    Power reduction techniques for memory elements

    High performance and computational capability in current-generation processors are made possible by small feature sizes and high device density. To maintain drive strength and control dynamic power in these processors, supply and threshold voltages are scaled down simultaneously. High device density and low threshold voltages result in an increase in leakage current dissipation. Large on-chip caches, which are becoming a major contributor to total leakage power, are integrated into current-generation processors. In this work, a novel methodology is proposed to minimize both leakage power and dynamic power. The proposed static power reduction technique, GALEOR (GAted LEakage transistOR), introduces transistor stacks by placing high-threshold-voltage transistors and relies on inherent control logic. The proposed dynamic power reduction technique, the adaptive phase tag cache, achieves power savings by varying the tag size over a design window. Testing and verification of the proposed techniques are performed on a two-level cache system. The power-delay-squared product is used as the metric to measure the effectiveness of the proposed techniques. The GALEOR technique achieves a 30% reduction when implemented on CMOS benchmark circuits and an overall leakage saving of 9% when implemented on the two-level cache system. The proposed dynamic power reduction technique achieves 10% savings when implemented on individual modules of the two-level cache and an overall saving of 3% when implemented on the entire two-level cache system.
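    As a concrete reading of the power-delay-squared metric mentioned above, the short Python sketch below compares a normalized baseline against a hypothetical candidate design; the power and delay numbers are placeholders, not results from this work.

```python
# A small sketch of the power * delay^2 figure of merit used to compare cache
# designs. The normalized numbers below are placeholders, not measured results.

def pd_squared(power_w, delay_s):
    """Energy-delay-style metric that weights delay more heavily than power."""
    return power_w * delay_s**2

baseline = pd_squared(power_w=1.00, delay_s=1.00)    # normalized reference design
candidate = pd_squared(power_w=0.91, delay_s=1.03)   # e.g. lower power, slightly slower

print(f"normalized PD^2: baseline={baseline:.3f}, candidate={candidate:.3f}")
print(f"improvement: {(1 - candidate / baseline) * 100:.1f}%")
```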

    Spin-Transfer-Torque (STT) Devices for On-chip Memory and Their Applications to Low-standby Power Systems

    With the scaling of CMOS technology, the proportion of leakage power in total power consumption increases; leakage may account for almost half of the total power consumption in high-performance processors. In order to reduce leakage power, there is increasing interest in using nonvolatile storage devices for memory applications. Among various promising nonvolatile memory elements, spin-transfer torque magnetic RAM (STT-MRAM) is identified as one of the most attractive alternatives to conventional SRAM. However, several design challenges of STT-MRAM, such as shared read and write current paths, single-ended sensing, and high dynamic power, must be overcome to make it suitable for on-chip memories. To mitigate such problems, we propose a domain-wall-coupling based spin-transfer torque (DWCSTT) device for on-chip caches. Our proposed DWCSTT bit-cell decouples the read and write current paths through an electrically insulating magnetic coupling layer, so that the read operation can be optimized separately without affecting write-ability. In addition, the complementary polarizer structure in the read path of the DWCSTT device enables self-referenced differential sensing. DWCSTT bit-cells also reduce write power consumption owing to the low electrical resistance of the write current path. Furthermore, we present three bit-cell-level design techniques for Spin-Orbit Torque MRAM (SOT-MRAM) that alleviate some of the inefficiencies of conventional magnetic memories while maintaining the advantages of the spin-orbit torque (SOT) switching mechanism, such as a low write-current requirement and decoupled read and write current paths. First, our proposed SOT-MRAM supporting dual read/write ports (1R/1W) addresses the high write latency of STT-MRAM through simultaneous 1R/1W accesses. Second, we propose a new type of SOT-MRAM that uses only one access transistor along with a Schottky diode in order to mitigate the area overhead caused by the two access transistors in conventional SOT-MRAM. Finally, a new SOT-MRAM design technique is presented that improves integration density by utilizing a shared bit-line structure.
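    To illustrate why decoupled read and write paths with dual 1R/1W ports can hide write latency, the Python sketch below counts the cycles needed to drain a mixed read/write request stream with and without simultaneous 1R/1W service. The request mix is invented and the one-access-per-cycle model is a deliberate simplification, not an RTL model from this work.

```python
# Illustrative cycle-count sketch: with dual 1R/1W ports, one read and one write can
# be issued in the same cycle because the read and write current paths are decoupled.

from collections import deque

def cycles_to_drain(requests, dual_port):
    """Count cycles to serve a mix of 'R'/'W' requests with or without 1R/1W ports."""
    reads = deque(r for r in requests if r == "R")
    writes = deque(w for w in requests if w == "W")
    cycles = 0
    while reads or writes:
        cycles += 1
        if dual_port:
            if reads:
                reads.popleft()          # read port serves one read
            if writes:
                writes.popleft()         # write port serves one write in parallel
        else:
            (reads or writes).popleft()  # single shared path: one access per cycle
    return cycles

mix = ["R", "W", "R", "R", "W", "W", "R", "W"]
print("single port:", cycles_to_drain(mix, dual_port=False), "cycles")
print("1R/1W ports:", cycles_to_drain(mix, dual_port=True), "cycles")
```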