
    Reliability Enhancement and Performance Optimization Techniques for Ultra-Large Solid-State Drives

    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, August 2021. Advisor: Jihong Kim.
    The development of ultra-large NAND flash storage devices (SSDs) has recently been made possible by NAND flash memory semiconductor process scaling and multi-leveling techniques, together with NAND package technology that enables continuous increases in storage capacity by mounting many NAND flash memory dies in an SSD. As the capacity of an SSD increases, the total cost of ownership of the storage system can be reduced very effectively; however, reliability and performance limitations remain obstacles to the wide adoption of ultra-large SSDs. In order to take advantage of an ultra-large SSD, it is necessary to develop new techniques that address these reliability and performance issues. In this dissertation, we propose various optimization techniques to solve the reliability and performance issues of ultra-large SSDs. In order to overcome the optimization limits of existing approaches, our techniques were designed based on extensive characterization of real NAND flash devices and analyses of field failure characteristics of real SSDs. First, we propose a low-stress erase technique for reducing the characteristic deviation between wordlines (WLs) in a NAND flash block. By reducing the erase stress on weak WLs, it effectively slows down NAND degradation and improves NAND endurance. From the NAND evaluation results, the conditions that most effectively guard the weak WLs are defined as the GErase mode. In addition, considering user workload characteristics, we propose a technique that dynamically selects the optimal GErase mode to maximize the lifetime of the SSD. Second, we propose an integrated approach that maximizes the efficiency of copyback operations to improve performance without compromising data reliability. Based on characterization using real 3D TLC flash chips, we propose a novel per-block error propagation model under consecutive copyback operations. Our model significantly increases the number of successive copybacks by exploiting the aging characteristics of NAND blocks. Furthermore, we devise a resource-efficient error management scheme that can handle successive copybacks in which pages move around multiple blocks with different reliability. By utilizing the proposed copyback operation for internal data movement, SSD performance can be effectively improved without reliability issues. Finally, we propose a new recovery scheme, called Reparo, for a RAID storage system with ultra-large SSDs. Unlike existing RAID recovery schemes, Reparo repairs a failed SSD at the NAND die granularity without replacing it with a new SSD, thus avoiding most of the inter-SSD data copies during a RAID recovery step. When a NAND die of an SSD fails, Reparo exploits the multi-core processor of the SSD controller to identify failed LBAs from the failed NAND die and to recover data from those LBAs. Furthermore, Reparo ensures no negative post-recovery impact on the performance and lifetime of the repaired SSD. In order to evaluate the effectiveness of the proposed techniques, we implemented them in a storage device prototype, an open NAND flash storage device development environment, and a real SSD environment, and verified their usefulness using various benchmarks and I/O traces collected from real-world applications. The experimental results show that the reliability and performance of ultra-large SSDs can be effectively improved through the proposed techniques.
    Contents: I Introduction (Motivation; Dissertation Goals; Contributions; Dissertation Structure). II Background (Overview of 3D NAND Flash Memory; Reliability Management in NAND Flash Memory; UL SSD Architecture; Related Work). III GuardedErase: Extending SSD Lifetimes by Protecting Weak Wordlines (Reliability Characterization of a 3D NAND Flash Block; GuardedErase Design Overview and its Endurance Model; Design and Implementation of LongFTL; Experimental Results). IV Improving SSD Performance Using Adaptive Restricted-Copyback Operations (Motivations; RCPB: Copyback with a Limit; Design and Implementation of rcFTL; Experimental Results). V Reparo: A Fast RAID Recovery Scheme for Ultra-Large SSDs (SSD Failures: Causes and Characteristics; Impact of UL SSDs on RAID Reliability; RAID Recovery Using Reparo; Cooperative Die Recovery; Identifier Acceleration Using P2L Mapping Information; Experimental Results). VI Conclusions (Summary; Future Work).
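    The restricted-copyback policy described above lends itself to a compact illustration. The sketch below is a minimal model under our own assumptions (the thresholds and names such as `max_copybacks` are hypothetical; the dissertation derives its actual error-propagation model from measurements of real 3D TLC chips): each page tracks how many consecutive on-chip copybacks it has undergone since its data last passed through the controller's ECC path, and block age determines how many more are safe before a full read-correct-program migration is forced.

```python
from dataclasses import dataclass

@dataclass
class Block:
    pe_cycles: int            # program/erase cycles (block age)

@dataclass
class Page:
    data: bytes
    copyback_count: int = 0   # consecutive on-chip copybacks since last ECC scrub

def max_copybacks(pe_cycles: int) -> int:
    """Hypothetical error-propagation budget: older blocks tolerate fewer
    consecutive copybacks before accumulated bit errors risk exceeding ECC."""
    if pe_cycles < 1_000:
        return 4
    if pe_cycles < 3_000:
        return 2
    return 1

def migrate(page: Page, src: Block, dst: Block) -> str:
    """Move a page during garbage collection, preferring the fast on-chip
    copyback path while the per-block error budget allows it."""
    budget = min(max_copybacks(src.pe_cycles), max_copybacks(dst.pe_cycles))
    if page.copyback_count < budget:
        page.copyback_count += 1      # on-chip copy: errors may accumulate
        return "copyback"
    page.copyback_count = 0           # off-chip copy through controller ECC
    return "read-correct-program"

# Example: a page bounced between a young and a worn block tightens the
# budget to the weaker block's limit.
p = Page(b"user data")
young, worn = Block(pe_cycles=500), Block(pe_cycles=5_000)
print([migrate(p, young, worn) for _ in range(3)])
# -> ['copyback', 'read-correct-program', 'copyback']
```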

    High-Density Solid-State Memory Devices and Technologies

    This Special Issue examines high-density solid-state memory devices and technologies from various standpoints in an attempt to foster their continued success in the future. Considering that a broadening range of applications will likely offer different types of solid-state memories their chance in the spotlight, the Special Issue is not focused on a specific storage solution but rather embraces all of the most relevant solid-state memory devices and technologies currently on stage. The subjects dealt with are correspondingly wide-ranging, from process and design issues and innovations, to experimental and theoretical analysis of device operation, to the performance and reliability of memory devices and arrays, to the exploitation of solid-state memories in pursuit of new computing paradigms.

    Design and Code Optimization for Systems with Next-generation Racetrack Memories

    With the rise of computationally expensive application domains such as machine learning, genomics, and fluid simulation, the quest for high-performance and energy-efficient computing has gained unprecedented momentum. The significant increase in computing and memory devices in modern systems has resulted in an unsustainable surge in energy consumption, a substantial portion of which is attributed to the memory system. The scaling of conventional memory technologies, and their suitability for next-generation systems, is also questionable. This has led to the emergence and rise of non-volatile memory (NVM) technologies. Today, several NVM technologies in different development stages are competing for rapid access to the market. Racetrack memory (RTM) is one such non-volatile memory technology that promises SRAM-comparable latency, reduced energy consumption, and unprecedented density compared to other technologies. However, RTM is sequential in nature: data in an RTM cell needs to be shifted to an access port before it can be accessed. These shift operations incur performance and energy penalties. An ideal RTM, requiring at most one shift per access, can easily outperform SRAM; in the worst-case shifting scenario, however, RTM can be an order of magnitude slower than SRAM. This thesis presents an overview of RTM device physics, its evolution, strengths and challenges, and its application in the memory subsystem. We develop tools that allow the programmability and modeling of RTM-based systems. For shift minimization, we propose a set of techniques including optimal, near-optimal, and evolutionary algorithms for efficient scalar and instruction placement in RTMs. For array accesses, we explore schedule and layout transformations that eliminate the longer overhead shifts in RTMs. We present an automatic compilation framework that analyzes static control-flow programs and transforms the loop traversal order and memory layout to maximize accesses to consecutive RTM locations and minimize shifts. We develop a simulation framework called RTSim that models various RTM parameters and enables accurate architectural-level simulation. Finally, to demonstrate the RTM potential in non-von-Neumann in-memory computing paradigms, we exploit its device attributes to implement logic and arithmetic operations. As a concrete use case, we implement an entire hyperdimensional computing framework in RTM to accelerate the language recognition problem. Our evaluation shows considerable performance and energy improvements compared to conventional von Neumann models and state-of-the-art accelerators.
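    Since RTM performance hinges on the shift count, the optimization target sketched above is easy to state in code. The following is a simplified cost model under our own assumptions (a single access port per track and unit-cost shifts; function and variable names are illustrative): it computes the shifts incurred by an access sequence under a given data placement, which is exactly the objective the scalar-placement and layout transformations aim to minimize.

```python
# Simplified RTM shift-cost model: one track, one access port.
# Data item i sits at track offset placement[i]; accessing it requires
# shifting the track until that offset aligns with the port.

def shift_cost(placement: dict[str, int], accesses: list[str]) -> int:
    """Total shifts for an access sequence; the port starts at offset 0."""
    port = 0
    total = 0
    for item in accesses:
        offset = placement[item]
        total += abs(offset - port)   # each unit shift moves the track by one cell
        port = offset
    return total

accesses = ["a", "b", "a", "c", "b"]
naive  = {"a": 0, "b": 5, "c": 9}    # arbitrary placement
better = {"a": 1, "b": 0, "c": 2}    # frequently co-accessed items kept adjacent
print(shift_cost(naive, accesses), shift_cost(better, accesses))  # 23 vs 6
```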

    Evaluation of the colossal electroresistance (CER) effect and its application in the non-volatile Resistive Random Access Memory (RRAM)

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 79-81).
    Flash memory, the current leading technology for non-volatile memory (NVM), is projected by many to become obsolete in the face of the future miniaturization trend in semiconductor devices, due to some of its technical limitations. Several different technologies have been developed in an attempt to replace Flash memory as the dominant NVM technology, none of which seems to indicate significant success at the moment. Among these technologies is RRAM (Resistive Random Access Memory), a novel type of memory technology which has only recently emerged to join the race. The underlying principle of an RRAM device is the colossal electroresistance (CER) effect, i.e. the resistance switching behavior upon application of voltage of varying polarity and/or magnitude. This thesis aims to investigate the CER effect and how it can be engineered into a non-volatile memory as well as other novel applications, e.g. the memristor. The various technical aspects pertaining to this phenomenon, including the materials and the physical basis, are explored and analyzed. As a complement, the market potential of the RRAM technology is also assessed. This includes a market study of the memory industry, the current intellectual property (IP) landscape, and some of the relevant business strategies. The production strategy (i.e. the production cost, initial investment, and pricing strategy) is then derived from the preceding technical and market analysis, using some reasonable assumptions.
    by Aulia Tegar Wicaksono. M.Eng.
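    To make the resistance-switching principle behind CER concrete, here is a minimal two-state sketch of a bipolar RRAM cell (the threshold voltages and resistance values are illustrative assumptions, not parameters from the thesis): a pulse above the SET threshold puts the cell in the low-resistance state, a pulse below the negative RESET threshold returns it to the high-resistance state, and smaller voltages leave the state untouched, which is what makes the cell non-volatile.

```python
# Minimal bipolar resistive-switching model (illustrative parameters).
HRS, LRS = 1e6, 1e4          # high/low resistance states, in ohms
V_SET, V_RESET = 1.5, -1.2   # assumed switching thresholds, in volts

class RRAMCell:
    def __init__(self):
        self.resistance = HRS            # assume the cell starts in HRS

    def apply_pulse(self, voltage: float) -> None:
        if voltage >= V_SET:
            self.resistance = LRS        # SET: switch to low resistance
        elif voltage <= V_RESET:
            self.resistance = HRS        # RESET: back to high resistance
        # voltages between the thresholds: state retained (non-volatile)

    def read(self) -> int:
        """Read with a small, non-disturbing voltage; 1 = LRS, 0 = HRS."""
        return 1 if self.resistance == LRS else 0

cell = RRAMCell()
cell.apply_pulse(2.0);  assert cell.read() == 1   # SET
cell.apply_pulse(0.3);  assert cell.read() == 1   # small read pulse: unchanged
cell.apply_pulse(-1.5); assert cell.read() == 0   # RESET
```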

    Bit-Flip Aware Data Structures for Phase Change Memory

    Big, byte-addressable, low-cost, and fast non-volatile memories like Phase Change Memory are appearing in the marketplace. They have the capability to unify both memory and storage and allow us to rethink the present memory hierarchy. An important drawback of Phase Change Memory is limited write endurance. In addition, Phase Change Memory shares with other Non-Volatile Random Access Memories an asymmetry in the energy costs of writes and reads. Making the best use of Non-Volatile Random Access Memories means limiting the number of times a cell changes contents, called a bit-flip. While the future of main memory is still unknown, we should already start to create data structures for it in order to shape the coming era. This thesis investigates the creation of bit-flip-aware data structures. It first considers general ways in which a data structure can save bit-flips through smart overwrites and by using the exclusive-or of pointers. It then shows how a simple content-dependent encoding can reduce bit-flips for web corpora, and how to build hash-based dictionary structures for Linear Hashing and Spiral Storage. Finally, the thesis presents Gray counters, close-to-bit-flip-optimal counters that even enable age-based wear leveling with counters managed by the Non-Volatile Random Access Memories themselves instead of by the operating system.
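    The Gray-counter idea is easy to demonstrate: incrementing a binary counter can flip many bits at once (0b0111 -> 0b1000 flips four cells), while stepping through the reflected-binary Gray sequence flips exactly one bit per increment, which is what makes such counters close to bit-flip optimal. A minimal sketch (standard Gray coding; the comparison harness is our own):

```python
# Bit flips per increment: plain binary counter vs. Gray-code counter.

def bit_flips(a: int, b: int) -> int:
    """Number of bit positions in which a and b differ (Hamming distance)."""
    return bin(a ^ b).count("1")

def to_gray(n: int) -> int:
    """Standard reflected-binary Gray encoding."""
    return n ^ (n >> 1)

N = 256
binary_flips = sum(bit_flips(i, i + 1) for i in range(N))
gray_flips = sum(bit_flips(to_gray(i), to_gray(i + 1)) for i in range(N))
print(binary_flips, gray_flips)   # 511 vs 256: Gray flips exactly 1 bit per step
```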

    Achieving Reliable and Sustainable Next-Generation Memories

    Conventional memory technology scaling has introduced reliability challenges due to dysfunctional, improperly formed cells and crosstalk from increased cell proximity. Furthermore, as the manufacturing effort becomes increasingly complex for these deeply scaled technologies, holistic sustainability is negatively impacted. The development of new memory technologies can help overcome the capacitor scaling limitations of DRAM. However, these technologies have their own reliability concerns, such as limited write endurance in the case of Phase Change Memories (PCM). Moreover, emerging system requirements, such as in-memory encryption to protect sensitive or private data and operation in harsh environments, create additional challenges that must be addressed in the context of reliability and sustainability. This dissertation provides new multifactor and ultimately unified solutions that address many of these concerns in the same system. In particular, my contributions toward mitigating these issues are as follows. I present GreenChip and GreenAsic, which together provide the first tools to holistically evaluate new computer architecture, chip, and memory design concepts for sustainability. These tools provide detailed estimates of manufacturing- and operational-phase metrics for different computing workloads and deployment scenarios. Using GreenChip, I examined existing DRAM reliability techniques in the context of their holistic sustainability impact, including my own technique to mitigate bitline crosstalk. For PCM, I provided a new reliability technique with no additional storage overhead that substantially increases the lifetime of an encrypted memory system. To provide bit-level error correction, I developed compact linked-list and Bloom-filter-based bit-level fault-map structures that provide unprecedented levels of error tabulation, combined with my own novel error correction and lifetime extension approaches based on these maps, for less area than traditional ECC. In particular, FaME can correct N faults using N bits when utilizing a bit-level fault map. For operation in harsh environments, I created a triple modular redundancy (TMR) pointer-based fault map, HOTH, which specifically protects cells shown to be weak to radiation. Finally, to combine the analyses of holistic sustainability and memory lifetime, I created the LARS technique, which adjusts the GreenChip indifference analysis to account for the additional sustainability benefit provided by increased reliability and lifetime.
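    The "N faults with N bits" claim has a simple reading once a bit-level fault map is available, and the sketch below is our own hedged illustration of it (class and field names are assumptions, not the dissertation's implementation): if the map records exactly which bit positions of a word are stuck, then one stored correction bit per fault (the value that position should hold) suffices to patch the word on every read.

```python
# Sketch of bit-level fault-map correction in the spirit of "N faults,
# N correction bits" (our illustration; names and layout are assumptions).

class FaultMappedWord:
    """A memory word with known stuck-at bit positions.

    For each faulty position we keep a single correction bit holding the
    value that position *should* contain."""
    def __init__(self, width: int, stuck: dict[int, int]):
        self.width = width
        self.stuck = stuck                 # bit index -> stuck-at value
        self.fault_map = sorted(stuck)     # known faulty positions
        self.correction = [0] * len(self.fault_map)  # N bits for N faults
        self.raw = 0

    def write(self, value: int) -> None:
        self.raw = value
        for pos, stuck_val in self.stuck.items():     # faulty cells ignore writes
            self.raw = (self.raw & ~(1 << pos)) | (stuck_val << pos)
        for k, pos in enumerate(self.fault_map):      # remember intended values
            self.correction[k] = (value >> pos) & 1

    def read(self) -> int:
        value = self.raw
        for k, pos in enumerate(self.fault_map):      # patch known-bad bits
            value = (value & ~(1 << pos)) | (self.correction[k] << pos)
        return value

w = FaultMappedWord(width=8, stuck={1: 0, 6: 1})      # bits 1 and 6 are stuck
w.write(0b0101_0110)
assert w.read() == 0b0101_0110                        # 2 faults fixed with 2 bits
```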

    Characterization and Design of Architectures Based on Phase Change Memories

    Semiconductor memory has always been an indispensable component of modern electronic systems. The increasing demand for highly scaled memory devices has led to the development of reliable non-volatile memories that are used in computing systems for permanent data storage and are capable of achieving high data rates, with the same or lower power dissipation levels as those of current advanced memory solutions. Among the emerging non-volatile memory technologies, Phase Change Memory (PCM) is the most promising candidate to replace conventional Flash memory technology. PCM offers a wide variety of features, such as fast read and write access, excellent scalability potential, baseline CMOS compatibility, and exceptional high-temperature data retention and endurance performance, and can therefore pave the way for applications not only in memory devices but also in energy-demanding, high-performance computer systems. However, some reliability issues still need to be addressed for PCM to establish itself as a competitive Flash memory replacement. This work focuses on the study of embedded Phase Change Memory in order to optimize device performance and propose solutions to overcome the key bottlenecks of the technology, targeting high-temperature applications. In order to enhance the reliability of the technology, the stoichiometry of the phase change material was appropriately engineered and dopants were added, resulting in optimized thermal stability of the device. A decrease in the programming speed of the memory technology was also reported, along with a residual resistivity drift of the low-resistance state towards higher resistance values over time. A novel programming technique was introduced, thanks to which the programming speed of the devices was improved and, at the same time, the resistance drift phenomenon could be successfully addressed. Moreover, an algorithm for programming PCM devices to multiple bits per cell using a single-pulse procedure was also presented. A pulse generator dedicated to providing the desired voltage pulses at its output was designed and experimentally tested, fitting the programming demands of a wide variety of materials under study and enabling accurate programming targeting the performance optimization of the technology.
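    The residual resistance drift mentioned above is commonly described in the PCM literature by an empirical power law; the abstract does not state the model it uses, so the following is the standard form, given for context:

```latex
% Standard empirical model of PCM resistance drift: the resistance of a
% state measured at time t, relative to a reference time t_0, rises with
% a drift exponent \nu (larger for the amorphous, high-resistance state).
R(t) = R(t_0)\left(\frac{t}{t_0}\right)^{\nu}
```

    Drift matters most for multi-bit storage, since intermediate resistance levels can migrate across read thresholds over time, which is why a programming technique that targets both speed and drift is valuable.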

    Predicting power scalability in a reconfigurable platform

    This thesis focuses on the evolution of digital hardware systems. A reconfigurable platform is proposed and analysed based on thin-body, fully depleted silicon-on-insulator Schottky-barrier transistors with metal gates and silicide source/drain (TBFDSBSOI). These offer the potential for simplified processing that will allow them to reach ultimate nanoscale gate dimensions. Technology CAD was used to show that the threshold voltage in TBFDSBSOI devices will be controllable by gate potentials that scale down with the channel dimensions while remaining within appropriate gate reliability limits. SPICE simulations determined that the magnitude of the threshold shift predicted by the TCAD software would be sufficient to control the logic configuration of a simple, regular array of these TBFDSBSOI transistors as well as to constrain its overall subthreshold power growth. Using these devices, a reconfigurable platform is proposed based on a regular 6-input, 6-output NOR LUT block in which the logic and configuration functions of the array are mapped onto separate gates of the double-gate device. A new analytic model of the relationship between power (P), area (A) and performance (T) has been developed based on a simple VLSI complexity metric of the form AT^σ = constant. As σ defines the performance "return" gained as a result of an increase in area, it also represents a bound on the architectural options available in power-scalable digital systems. This analytic model was used to determine that simple computing functions mapped to the reconfigurable platform will exhibit continuous power-area-performance scaling behavior. A number of simple arithmetic circuits were mapped to the array and their delay and subthreshold leakage analysed over a representative range of supply and threshold voltages, thus determining a worst-case range for the device/circuit-level parameters of the model. Finally, an architectural simulation was built in VHDL-AMS. The frequency scaling described by σ, combined with the device/circuit-level parameters, predicts the overall power and performance scaling of parallel architectures mapped to the array.
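    The complexity metric at the core of the model can be restated in standard notation (following the abstract's definitions; the constant k is introduced here for illustration):

```latex
% VLSI complexity metric relating area A, delay T, and the exponent
% \sigma (the performance "return" gained from added area):
A T^{\sigma} = k
\quad\Rightarrow\quad
T = \left(\frac{k}{A}\right)^{1/\sigma},
\qquad
f = \frac{1}{T} \propto A^{1/\sigma}
```

    Read this way, σ bounds how much performance an architecture can buy with area: doubling the array area yields at most a 2^(1/σ) frequency gain, which is the sense in which σ constrains the architectural options of power-scalable systems.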