
    Towards Design and Analysis for High-Performance and Reliable SSDs

    NAND Flash-based Solid State Disks (SSDs) have many attractive technical merits, such as low power consumption, light weight, shock resistance, tolerance of hotter operating regimes, and extraordinarily high performance for random read access, which have made SSDs immensely popular and widely employed in environments ranging from portable devices and personal computers to large data centers and distributed data systems. However, current SSDs still suffer from several critical inherent limitations, such as the inability to update in place, asymmetric read and write performance, slow garbage collection, limited endurance, and degraded write performance with the adoption of MLC and TLC techniques. To alleviate these limitations, we propose optimizations at both the external application layer and the SSDs' internal layer. Because SSDs strike a good compromise between performance and price, they are widely deployed as a second-layer cache between DRAM and hard disks to boost system performance. Owing to the special properties of SSDs, such as internal garbage collection and limited lifetime, optimizations designed for traditional cache devices like DRAM and SRAM might not work consistently for SSD-based caches. Therefore, at the application layer, our work focuses on integrating the special properties of SSDs into the optimization of SSD caches. Our work also alleviates the increased flash write latency and ECC complexity caused by the adoption of MLC and TLC technologies by analyzing real-world workloads.
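    As a small illustration of the flash-aware caching idea above, the following Python sketch (all class, callback, and size names are hypothetical, not from the dissertation) stages dirty evictions and flushes them in erase-block-sized batches, so the SSD cache sees large sequential writes that its garbage collector can reclaim cheaply.

        # Sketch of a flash-aware cache eviction buffer: dirty pages evicted
        # from the DRAM tier are staged and flushed to the SSD cache in
        # erase-block-sized batches, turning scattered small writes into
        # sequential ones that are cheaper for the SSD's garbage collector
        # to reclaim. All names and sizes are illustrative assumptions.

        ERASE_BLOCK_PAGES = 64           # pages per flash erase block (assumed)

        class FlashAwareEvictionBuffer:
            def __init__(self, ssd_write):
                self.ssd_write = ssd_write   # callback: ssd_write(list_of_pages)
                self.staged = []             # dirty pages awaiting flush

            def evict(self, page):
                """Stage a dirty page instead of writing it to the SSD at once."""
                self.staged.append(page)
                if len(self.staged) >= ERASE_BLOCK_PAGES:
                    self.flush()

            def flush(self):
                """Write one erase block's worth of pages as a sequential batch."""
                if self.staged:
                    self.ssd_write(self.staged[:ERASE_BLOCK_PAGES])
                    self.staged = self.staged[ERASE_BLOCK_PAGES:]

        # Usage: buffer = FlashAwareEvictionBuffer(my_ssd.write_batch)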

    Flash Memory Devices

    Flash memory devices have represented a breakthrough in storage since their inception in the mid-1980s, and innovation is still ongoing. The peculiarity of this technology is an inherent flexibility in terms of performance and integration density, according to the architecture devised for integration. NOR Flash technology is still the workhorse of many code storage applications in the embedded world, ranging from microcontrollers for the automotive environment to IoT smart devices. Its usage is also forecast to be fundamental in emerging AI edge scenarios. On the contrary, when massive data storage is required, NAND Flash memories are necessary in a system. You can find NAND Flash in USB sticks and cards, but most of all in Solid-State Drives (SSDs). Since SSDs are extremely demanding in terms of storage capacity, they fueled a new wave of innovation, namely the 3D architecture. Today "3D" means that multiple layers of memory cells are manufactured within the same piece of silicon, easily reaching a terabit capacity. So far, Flash architectures have always been based on "floating gate" cells, where the information is stored by injecting electrons into a piece of polysilicon surrounded by oxide. On the contrary, emerging concepts are based on "charge trap" cells. In summary, flash memory devices represent the largest landscape of storage devices, and we expect more advancements in the coming years. This will require a lot of innovation in process technology, materials, circuit design, flash management algorithms, Error Correction Codes and, finally, system co-design for new applications such as AI and security enforcement.

    A Flexible BCH Decoder for Flash Memory Systems Using Cascaded BCH Codes

    NAND flash memories are widely used in consumer electronics, such as tablets, personal computers, smartphones, and gaming systems. However, unlike other standard storage devices, these flash memories suffer from various random errors. In order to address these reliability issues, various error correction codes (ECC) are employed. The Bose-Chaudhuri-Hocquenghem (BCH) code is the most common ECC used to address errors in modern flash memories. Because of the limitations of realizing BCH codes for more extensive error correction, modern flash memory devices use Low-Density Parity-Check (LDPC) codes for error correction. The realization of LDPC decoders has greater complexity than that of BCH decoders, so these ECC decoders are implemented within the flash memory device. This thesis analyzes the limitations imposed by state-of-the-art implementations of BCH decoders and proposes a cascaded BCH code to address these limitations. In order to support a variety of flash memory devices, there are three main challenges to be addressed for BCH decoders. First, the latency of the BCH decoder, in the no-error case, should be less than 100 us. Second, there should be flexibility in supporting different ECC block sizes; more precisely, the solution should be able to support ECC blocks of 256, 512, 1024, and 2048 bytes. Third, there should be flexibility in supporting different numbers of bit errors. Recent developments in Graphics Processing Units (GPUs) have attracted many researchers to use GPUs for non-graphical computation. These GPUs are used in many consumer electronics as part of the system-on-chip (SoC) configuration. In this thesis, we study the limitations imposed by different implementations (VLSI, GPU, and CPU) of BCH decoders, and we propose a cascaded BCH code implemented using a hybrid approach to overcome the limitations of BCH codes. By splitting the implementation across VLSI and GPUs, we show that this method can provide flexibility over the block size and the number of bit errors to be corrected.
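    For illustration, here is a minimal Python sketch of the first stage of BCH decoding, syndrome computation over GF(2^m); an all-zero syndrome vector is the no-error fast path whose latency the thesis bounds at 100 us. The toy parameters (m = 4, t = 2) are far smaller than flash-grade codes, and the code is an illustrative assumption, not the thesis's implementation.

        PRIM_POLY = 0b10011          # x^4 + x + 1, primitive over GF(2)
        M = 4
        N = (1 << M) - 1             # codeword length n = 15

        # Antilog table: EXP[i] = alpha^i in GF(2^4).
        EXP = []
        x = 1
        for _ in range(N):
            EXP.append(x)
            x <<= 1
            if x & (1 << M):
                x ^= PRIM_POLY

        def syndromes(received_bits, t=2):
            """S_j = r(alpha^j) for j = 1..2t; all zeros means no error found."""
            synd = []
            for j in range(1, 2 * t + 1):
                s = 0
                for i, bit in enumerate(received_bits):   # r(x) = sum_i bit_i x^i
                    if bit:
                        s ^= EXP[(i * j) % N]
                synd.append(s)
            return synd

        # A clean all-zero codeword gives zero syndromes; one flipped bit does not.
        assert syndromes([0] * 15) == [0, 0, 0, 0]
        print(syndromes([0] * 5 + [1] + [0] * 9))   # nonzero -> locate the error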

    Shingled Magnetic Recording Disks for Mass Storage Systems

    Disk drives have seen a dramatic increase in storage density over the last five decades, but continuing that growth seems difficult if not impossible because of physical limitations. One way to increase storage density is to use a shingled magnetic recording (SMR) disk. Shingled writing is a promising technique that trades away the ability to update in place for narrower tracks and thus a much higher data density. It is particularly appealing because it can be adopted while utilizing essentially the same physical recording mechanisms currently in use. Because of its manner of writing, an SMR disk cannot update a written track without overwriting neighboring tracks, potentially requiring the rewrite of all the tracks to the end of a band, where the end of a band is an area left unwritten to allow for a non-overlapped final track. Random reads are still possible on such devices, but the handling of writes becomes particularly critical. In this manuscript, we first look at a variety of potential workloads, drawn from real-world traces, and evaluate their impact on SMR disk models. Later, we evaluate the behavior of SMR disks when used in an array configuration or when faced with heavily interleaved workloads. Specifically, we demonstrate the dramatically different effects that different workloads can have upon the opposing approaches of remapping and restoring blocks, and how write-heavy workloads can (under the right conditions, and contrary to intuition) result in a performance advantage for an SMR disk.
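    The tradeoff between the two write-handling strategies can be made concrete with a back-of-the-envelope Python model (band size, cost units, and function names are illustrative assumptions): restoring rewrites every track from the update point to the end of the band, while remapping appends the new version and defers a cleaning cost proportional to the band's live data.

        TRACKS_PER_BAND = 20   # assumed band size; real bands vary by drive

        def restore_cost(updated_track):
            """In-place update: tracks read and rewritten from the updated
            track (0-indexed) through the end of its band."""
            return TRACKS_PER_BAND - updated_track

        def remap_cost(live_fraction):
            """Remapping: the update itself is one appended write; the
            deferred cost is moving the band's remaining live tracks when
            the band is eventually cleaned."""
            return TRACKS_PER_BAND * live_fraction

        print(restore_cost(0))    # updating a band's first track rewrites all 20
        print(remap_cost(0.25))   # cleaning a mostly-dead band moves only 5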

    Investigating Power Outage Effects on Reliability of Solid-State Drives

    Solid-State Drives (SSDs) have recently been employed in enterprise servers and high-end storage systems in order to enhance the performance of the storage subsystem. Although employing high-speed SSDs in storage subsystems can significantly improve system performance, it comes with a significant reliability threat for write operations upon power failure. In this paper, we present a comprehensive analysis investigating the impact of workload-dependent parameters on the reliability of SSDs under power failure for a variety of SSDs (from top manufacturers). To this end, we first develop a platform providing two important features required for this study: a) realistic fault injection into the SSDs of a computing system and b) a data-loss detection mechanism for the SSD upon power failure. In the proposed physical fault injection platform, SSDs experience the real discharge phase of a Power Supply Unit (PSU) that occurs during power failures in data centers, which was neglected in previous studies. The impact of workload-dependent parameters such as Working Set Size (WSS), request size, request type, access pattern, and sequence of accesses on the failure of SSDs is carefully studied in the presence of realistic power failures. Experimental results over thousands of fault injections show that data loss can occur even after completion of a request (up to 700 ms later), and that the failure rate is influenced by the type, size, access pattern, and sequence of I/O accesses, while other parameters such as workload WSS have no impact on the failure of SSDs. Comment: Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, 2018.
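    A hedged sketch of the data-loss detection side of such a platform (the block layout, record format, and function names are assumptions for illustration, not the paper's implementation): each block written before the injected power failure carries a sequence number and a checksum, and after power-up any block that the host saw acknowledged but that now fails verification counts as lost.

        import struct, zlib

        BLOCK_SIZE = 4096
        HEADER = struct.Struct("<QI")        # sequence number, CRC32 of payload

        def make_block(seq, payload):
            """Build one self-verifying block to write before fault injection."""
            payload = payload.ljust(BLOCK_SIZE - HEADER.size, b"\0")
            return HEADER.pack(seq, zlib.crc32(payload)) + payload

        def scan_for_loss(path, acked_seqs):
            """Return acknowledged sequence numbers missing or corrupt after
            power-up: writes the host believed durable but that were lost."""
            found = set()
            with open(path, "rb") as f:
                while True:
                    block = f.read(BLOCK_SIZE)
                    if len(block) < BLOCK_SIZE:
                        break
                    seq, crc = HEADER.unpack(block[:HEADER.size])
                    if zlib.crc32(block[HEADER.size:]) == crc:
                        found.add(seq)
            return acked_seqs - found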

    Cross-Layer Optimization Techniques for Improving the Lifetime of NAND Flash-Based Storage Devices

    Thesis (Ph.D.) -- Graduate School of Seoul National University, Department of Electrical and Computer Engineering, February 2016. Advisor: Jihong Kim. Replacing HDDs with NAND flash-based storage devices (SSDs) has been one of the major challenges in modern computing systems, especially in regards to better performance and higher mobility. Although uninterrupted semiconductor process scaling and multi-leveling techniques lower the price of SSDs to a level comparable with HDDs, the decreasing lifetime of NAND flash memory, a side effect of recent advanced device technologies, is emerging as one of the major barriers to the wide adoption of SSDs in high-performance computing systems. In this dissertation, we propose new cross-layer optimization techniques to extend the lifetime (in particular, endurance) of NAND flash memory. Our techniques are motivated by our key observation that erasing a NAND block with a lower voltage or at a slower speed can significantly improve NAND endurance. However, using a lower voltage in erase operations causes adverse side effects on other NAND characteristics such as write performance and retention capability. The main goal of the proposed techniques is to improve NAND endurance without affecting the other NAND requirements. We first present Dynamic Erase Voltage and Time Scaling (DeVTS), a unified framework that enables system software to exploit the tradeoff between the endurance and the erase voltages/times of NAND flash memory. DeVTS includes erase voltage/time scaling and write capability tuning, each of which has a different impact on the endurance, performance, and retention capabilities of NAND flash memory. Second, we propose a lifetime improvement technique that takes advantage of idle times between write requests when erasing a NAND block at a slower speed or when writing data to a NAND block erased with a lower voltage. We have implemented a DeVTS-enabled FTL, called dvsFTL, which automatically and optimally adjusts the erase voltage/time and write performance of NAND devices. Our experimental results show that dvsFTL can improve NAND endurance by 62%, on average, over a DeVTS-unaware FTL with a negligible decrease in overall write performance. Third, we suggest a comprehensive lifetime improvement technique that exploits variations in the retention requirements as well as the performance requirement of SSDs when writing data to a NAND block erased with a lower voltage. We have implemented dvsFTL+, an extended version of dvsFTL, which fully utilizes DeVTS by accurately predicting the write performance and retention requirements at run time. Our experimental results show that dvsFTL+ can further improve NAND endurance by more than 50% over dvsFTL while preserving all the NAND requirements. Lastly, we present a reliability management technique that prevents retention failures when aggressive retention-capability tuning techniques are employed in real environments. Our measurement results show that the proposed technique can recover corrupted data from retention failures up to 23 times faster than existing data recovery techniques. Furthermore, it can successfully recover severely retention-failed data, such as data that experienced retention times 8 times longer than the retention-time specification, which were not recoverable with the existing technique.
    Based on the evaluation studies of the developed lifetime improvement techniques, we verified that the cross-layer optimization approach has a significant impact on extending the lifetime of NAND flash-based storage devices. We expect that our proposed techniques can positively contribute not only to the wide adoption of NAND flash memory in datacenter environments but also to the gradual acceleration of using flash as main memory.
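    The mode-selection idea at the heart of dvsFTL can be sketched as follows (the thresholds, mode names, and function are illustrative assumptions, not the dissertation's code): when the predicted idle time admits a slow erase, or the predicted retention requirement is short, the FTL picks a lower erase voltage and banks the endurance gain; otherwise it falls back to the fast, high-voltage erase that meets the performance and retention specifications.

        def select_erase_mode(predicted_idle_ms, predicted_retention_days):
            """Pick an (erase voltage, erase speed) pair for the next block.
            Thresholds are assumed placeholders, not measured values."""
            slow_erase_ok = predicted_idle_ms >= 10    # slow erase fits the gap
            short_retention = predicted_retention_days <= 30
            if slow_erase_ok and short_retention:
                return ("low_voltage", "slow")     # maximum endurance benefit
            if slow_erase_ok or short_retention:
                return ("mid_voltage", "slow" if slow_erase_ok else "fast")
            return ("high_voltage", "fast")        # meet all NAND requirements

        print(select_erase_mode(50, 7))    # ('low_voltage', 'slow')
        print(select_erase_mode(1, 365))   # ('high_voltage', 'fast')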

    Study on Endurance of Flash Memory SSDs

    Flash memory promises to revolutionize storage systems because of its massive performance gains, ruggedness, and large decrease in power usage and physical space requirements, but it is not a direct replacement for magnetic hard disks. Flash memory possesses fundamentally different characteristics, and in order to fully utilize its positive aspects, we must engineer around its unique limitations. The primary limitations are the lack of in-place updates, the asymmetry between the sizes of the write and erase operations, and the limited endurance of flash memory cells. These lead to the need for efficient methods for block cleaning, combating write amplification, and performing wear leveling. They are fundamental attributes of flash memory and will always need to be understood and efficiently managed to produce an efficient and high-performance storage system. Our goal in this work is to provide analysis and algorithms for efficiently managing data storage for endurance in flash memory. We present update codes, a class of floating codes that encodes data updates as flash memory cell increments, reducing block erases and extending the lifespan of flash memory, and we give a new algorithm for constructing optimal floating codes. We also analyze the theoretically possible limits of write amplification reduction and minimization using offline workloads. We estimate the minimal write amplification with a workload decomposition algorithm and find that write amplification can be pushed to zero with relatively low over-provisioning. Additionally, we give simple, efficient, and practical algorithms that are effective in reducing write amplification and performing wear leveling. Finally, we present a quantitative model of wear levels in flash memory by constructing a difference equation that gives the erase count of a block with the workload, wear leveling strategy, and SSD configuration as parameters.
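    For a concrete sense of the write amplification metric discussed above, the following Python snippet evaluates a simple, pessimistic steady-state model (a common textbook approximation, not the thesis's decomposition analysis): if garbage collection reclaims blocks whose pages are still a fraction p live, each host write costs 1/(1 - p) flash writes.

        def write_amplification(victim_live_fraction):
            """If a reclaimed block is still a fraction p live, freeing each
            page costs p/(1-p) copy-writes on top of the host write, so
            WA = total flash writes / host writes = 1 / (1 - p)."""
            return 1.0 / (1.0 - victim_live_fraction)

        # Under uniform random writes, the victim's live fraction roughly
        # tracks space utilization u = 1 / (1 + OP), with over-provisioning
        # ratio OP; more spare area means emptier victims and lower WA.
        for op in (0.07, 0.28, 1.00):
            u = 1.0 / (1.0 + op)
            print(f"OP={op:.2f}  victim live ~{u:.2f}  WA ~{write_amplification(u):.1f}")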

    Exploiting solid state drive parallelism for real-time flash storage

    The increased volume of sensor data generated by emerging applications in areas such as autonomous vehicles requires new technologies for storage and retrieval. NAND flash memory has desirable characteristics for real-time information storage and retrieval, such as non-volatility, shock resistance, low power consumption and fast access time. However, NAND flash memory management suffers high tail latency during storage space reclamation. This is unacceptable in a real-time system, where missed deadlines can have potentially catastrophic consequences. Current methods to ensure timing guarantees in flash storage do not explicitly exploit the internal parallelism in Solid State Drives (SSDs). Modern SSDs are able to support massive amounts of parallelism, as evidenced by the shift from the Advanced Host Controller Interface (AHCI) to the Non-Volatile Memory Host Controller Interface (NVMe), a multi-queue interface. This thesis focuses on providing predictable, low-latency guarantees for read and write requests in NAND flash memory by exploiting the internal parallelism in SSDs. The first part of the thesis presents a partitioned flash design that dynamically assigns each parallel flash unit to perform either reads or writes. To access data from a flash unit that is busy servicing a write request or performing garbage collection, the device rebuilds the data using encoding. Consequently, reads are never blocked by writes or storage space reclamation. In this design, however, low read latency is achieved at the expense of write throughput. The second part of the thesis explores how to predictably improve performance by minimizing the garbage collection cost in flash storage. The root cause of this extra cost is due to the SSD's inability to accurately determine data lifetime and group together data that expires before space needs to be reclaimed. This is exacerbated by the narrow block I/O interface, which prevents optimizations from either the device or the application above. By sharing application-specific knowledge of data lifetime with the device, the SSD is able to efficiently lay out data such that garbage collection cost is minimized.
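    A minimal sketch of the "rebuild via encoding" idea in the first design (the unit count, page size, and class names are assumptions): with an XOR parity page per stripe across parallel flash units, a read that lands on a unit busy writing or collecting garbage is served by XORing the other units' pages instead of waiting.

        from functools import reduce

        def xor_pages(pages):
            """Bitwise XOR of equal-length byte strings."""
            return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pages)

        class PartitionedStripe:
            """One stripe: data pages on the first units, XOR parity on the last."""
            def __init__(self, data_pages):
                self.pages = data_pages + [xor_pages(data_pages)]

            def read(self, unit, busy_units):
                if unit not in busy_units:
                    return self.pages[unit]            # direct read, fast path
                # Rebuild from the remaining units; this works as long as at
                # most one unit per stripe is busy writing or reclaiming space.
                others = [p for i, p in enumerate(self.pages) if i != unit]
                return xor_pages(others)

        stripe = PartitionedStripe([b"\x01\x02", b"\x03\x04", b"\x05\x06"])
        # Unit 1 is busy with garbage collection; serve the read by rebuilding.
        assert stripe.read(1, busy_units={1}) == b"\x03\x04"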