115 research outputs found

    Reliability Improvement and Performance Optimization Techniques for Ultra-Large Solid State Drives

    Ph.D. dissertation -- Department of Computer Science and Engineering, College of Engineering, Seoul National University, August 2021. Advisor: ๊น€์ง€ํ™.

    The development of ultra-large NAND flash storage devices (SSDs) has recently been made possible by NAND flash memory semiconductor process scaling and multi-leveling techniques, together with NAND package technology that keeps increasing storage capacity by mounting many NAND flash memory dies in a single SSD. As the capacity of an SSD increases, the total cost of ownership of the storage system can be reduced very effectively; however, ultra-large SSDs have limitations in reliability and performance that remain obstacles to their wide adoption. To take full advantage of an ultra-large SSD, new techniques are needed that address these reliability and performance issues. In this dissertation, we propose various optimization techniques to solve the reliability and performance issues of ultra-large SSDs. To overcome the optimization limits of existing approaches, our techniques were designed based on extensive characterization of NAND flash devices and analysis of the field-failure characteristics of real SSDs.

    We first propose a low-stress erase technique that reduces the characteristic deviation between wordlines (WLs) in a NAND flash block. By reducing the erase stress on weak WLs, it effectively slows down NAND degradation and improves NAND endurance. From the NAND evaluation results, the conditions that most effectively guard the weak WLs are defined as the gerase mode. In addition, considering the user workload characteristics, we propose a technique that dynamically selects the optimal gerase mode to maximize the lifetime of the SSD.

    Second, we propose an integrated approach that maximizes the efficiency of copyback operations to improve performance without compromising data reliability. Based on characterization using real 3D TLC flash chips, we propose a novel per-block error-propagation model under consecutive copyback operations. Our model significantly increases the number of successive copybacks by exploiting the aging characteristics of NAND blocks. Furthermore, we devise a resource-efficient error management scheme that can handle successive copybacks where pages move across multiple blocks with different reliability. By utilizing the proposed copyback operation for internal data movement, SSD performance can be effectively improved without any reliability issues.

    Finally, we propose a new recovery scheme, called reparo, for a RAID storage system with ultra-large SSDs. Unlike existing RAID recovery schemes, reparo repairs a failed SSD at the NAND die granularity without replacing it with a new SSD, thus avoiding most of the inter-SSD data copies during a RAID recovery step. When a NAND die of an SSD fails, reparo exploits the multi-core processor of the SSD controller to identify the failed LBAs on the failed NAND die and to recover the data from those LBAs. Furthermore, reparo ensures no negative post-recovery impact on the performance and lifetime of the repaired SSD.

    To evaluate the effectiveness of the proposed techniques, we implemented them in a storage device prototype, an open NAND flash storage device development environment, and a real SSD environment, and verified their usefulness using various benchmarks and I/O traces collected from real-world applications. The experimental results show that the reliability and performance of ultra-large SSDs can be effectively improved by the proposed techniques.
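    The dynamic gerase-mode selection can be illustrated with a toy policy. The Python sketch below is an illustration only: the mode table, its endurance-gain and erase-overhead numbers, the fixed performance budget, and the WAF-scaled penalty are hypothetical placeholders, not the dissertation's measured endurance model.

        from dataclasses import dataclass

        @dataclass
        class GeraseMode:
            name: str
            endurance_gain: float  # relative P/E endurance gain from a gentler erase
            erase_overhead: float  # relative erase-latency overhead it costs

        # Hypothetical mode table; real modes would come from NAND characterization.
        MODES = [
            GeraseMode("normal",  endurance_gain=1.00, erase_overhead=1.00),
            GeraseMode("gerase1", endurance_gain=1.15, erase_overhead=1.05),
            GeraseMode("gerase2", endurance_gain=1.30, erase_overhead=1.15),
        ]

        def select_gerase_mode(waf: float, perf_budget: float = 0.10) -> GeraseMode:
            """Pick the highest-endurance mode whose estimated performance
            penalty fits the budget. A high write-amplification factor (WAF)
            means frequent erases, so a slower erase mode costs more there."""
            affordable = [m for m in MODES
                          if (m.erase_overhead - 1.0) * waf <= perf_budget]
            return max(affordable, key=lambda m: m.endurance_gain)

    The intent matches the abstract's point: a gentler erase is only worth its latency cost when the monitored workload, through its WAF, leaves room for it.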
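    The restricted copyback of Chapter IV admits a similarly small sketch. The thresholds in copyback_limit and the min-of-both-blocks rule below are assumptions standing in for the per-block error-propagation model characterized on real 3D TLC chips; a page carries a successive-copyback counter that an off-chip ECC read resets.

        from dataclasses import dataclass

        @dataclass
        class Block:
            pe_cycles: int  # wear of the block

        @dataclass
        class Page:
            copyback_count: int = 0  # copybacks since the last off-chip ECC scrub

        def copyback_limit(pe_cycles: int) -> int:
            """Allowed successive copybacks before data must pass through
            off-chip ECC again; young blocks tolerate more (values illustrative)."""
            if pe_cycles < 1000:
                return 4
            if pe_cycles < 3000:
                return 2
            return 1

        def choose_migration(page: Page, src: Block, dst: Block) -> str:
            """Return 'copyback' for a fast on-chip move, or 'read-program'
            when accumulated bit errors must be corrected off-chip first."""
            limit = min(copyback_limit(src.pe_cycles), copyback_limit(dst.pe_cycles))
            if page.copyback_count < limit:
                page.copyback_count += 1
                return "copyback"
            page.copyback_count = 0  # the ECC-corrected rewrite resets accumulation
            return "read-program"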
    Table of Contents

    I Introduction
      1.1 Motivation
      1.2 Dissertation Goals
      1.3 Contributions
      1.4 Dissertation Structure
    II Background
      2.1 Overview of 3D NAND Flash Memory
      2.2 Reliability Management in NAND Flash Memory
      2.3 UL SSD architecture
      2.4 Related Work
        2.4.1 NAND endurance optimization by utilizing page characteristics difference
        2.4.2 Performance optimizations using copyback operation
        2.4.3 Optimizations for RAID Rebuild
        2.4.4 Reliability improvement using internal RAID
    III GuardedErase: Extending SSD Lifetimes by Protecting Weak Wordlines
      3.1 Reliability Characterization of a 3D NAND Flash Block
        3.1.1 Large Reliability Variations Among WLs
        3.1.2 Erase Stress on Flash Reliability
      3.2 GuardedErase: Design Overview and its Endurance Model
        3.2.1 Basic Idea
        3.2.2 Per-WL Low-Stress Erase Mode
        3.2.3 Per-Block Erase Modes
      3.3 Design and Implementation of LongFTL
        3.3.1 Overview
        3.3.2 Weak WL Detector
        3.3.3 WAF Monitor
        3.3.4 GErase Mode Selector
      3.4 Experimental Results
        3.4.1 Experimental Settings
        3.4.2 Lifetime Improvement
        3.4.3 Performance Overhead
        3.4.4 Effectiveness of Lowest Erase Relief Ratio
    IV Improving SSD Performance Using Adaptive Restricted-Copyback Operations
      4.1 Motivations
        4.1.1 Data Migration in Modern SSD
        4.1.2 Need for Block Aging-Aware Copyback
      4.2 RCPB: Copyback with a Limit
        4.2.1 Error-Propagation Characteristics
        4.2.2 RCPB Operation Model
      4.3 Design and Implementation of rcFTL
        4.3.1 EPM module
        4.3.2 Data Migration Mode Selection
      4.4 Experimental Results
        4.4.1 Experimental Setup
        4.4.2 Evaluation Results
    V Reparo: A Fast RAID Recovery Scheme for Ultra-Large SSDs
      5.1 SSD Failures: Causes and Characteristics
        5.1.1 SSD Failure Types
        5.1.2 SSD Failure Characteristics
      5.2 Impact of UL SSDs on RAID Reliability
      5.3 RAID Recovery using Reparo
        5.3.1 Overview of Reparo
      5.4 Cooperative Die Recovery
        5.4.1 Identifier: Parallel Search of Failed LBAs
        5.4.2 Handler: Per-Core Space Utilization Adjustment
      5.5 Identifier Acceleration Using P2L Mapping Information
        5.5.1 Page-level P2L Entrustment to Neighboring Die
        5.5.2 Block-level P2L Entrustment to Neighboring Die
        5.5.3 Additional Considerations for P2L Entrustment
      5.6 Experimental Results
        5.6.1 Experimental Settings
        5.6.2 Experimental Results
    VI Conclusions
      6.1 Summary
      6.2 Future Work
        6.2.1 Optimization with Accurate WAF Prediction
        6.2.2 Maximizing Copyback Threshold
        6.2.3 Pre-failure Detection
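    The identifier step of reparo (Section 5.4.1) parallelizes the search for failed LBAs across cores. A minimal sketch, assuming a flat L2P table in which entry i holds the (die, block, page) tuple of LBA i; the real identifier runs on the SSD controller's multi-core processor and is further accelerated with the P2L entrustment of Section 5.5.

        from concurrent.futures import ThreadPoolExecutor

        def find_failed_lbas(l2p, failed_die, num_workers=4):
            """Scan the L2P table in parallel shards and return every LBA whose
            physical page resides on the failed die; only those LBAs are then
            rebuilt from RAID parity, instead of copying the entire SSD."""
            if not l2p:
                return []
            shard = (len(l2p) + num_workers - 1) // num_workers

            def scan(start):
                end = min(start + shard, len(l2p))
                return [lba for lba in range(start, end)
                        if l2p[lba] is not None and l2p[lba][0] == failed_die]

            with ThreadPoolExecutor(max_workers=num_workers) as pool:
                parts = pool.map(scan, range(0, len(l2p), shard))
            return [lba for part in parts for lba in part]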

    Extending Memory Capacity in Consumer Devices with Emerging Non-Volatile Memory: An Experimental Study

    The number and diversity of consumer devices are growing rapidly, alongside their target applications' memory consumption. Unfortunately, DRAM scalability is becoming a limiting factor to the available memory capacity in consumer devices. As a potential solution, manufacturers have introduced emerging non-volatile memories (NVMs) into the market, which can be used to increase the memory capacity of consumer devices by augmenting or replacing DRAM. Since entirely replacing DRAM with NVM in consumer devices imposes large system integration and design challenges, recent works propose extending the total main memory space available to applications by using NVM as swap space for DRAM. However, no prior work analyzes the implications of enabling a real NVM-based swap space in real consumer devices. In this work, we provide the first analysis of the impact of extending the main memory space of consumer devices using off-the-shelf NVMs. We extensively examine system performance and energy consumption when the NVM device is used as swap space for DRAM main memory to effectively extend the main memory capacity. For our analyses, we equip real web-based Chromebook computers with the Intel Optane SSD, a state-of-the-art low-latency NVM-based SSD device. We compare the performance and energy consumption of interactive workloads running on our Chromebook with NVM-based swap space, where the Intel Optane SSD capacity is used as swap space to extend main memory capacity, against two state-of-the-art systems: (i) a baseline system with double the amount of DRAM of the system with the NVM-based swap space; and (ii) a system where the Intel Optane SSD is naively replaced with a state-of-the-art (yet slower) off-the-shelf NAND-flash-based SSD, used as a swap space of the same size as the NVM-based swap space.
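    Mechanically, turning an NVM device into swap space on Linux is simple; the sketch below shows one standard way to enable it (requires root; the device path is a hypothetical stand-in for the Optane SSD partition, and the abstract does not state the study's exact configuration).

        import subprocess

        NVM_SWAP_DEVICE = "/dev/nvme1n1p1"  # hypothetical Optane SSD partition

        def enable_nvm_swap(device: str = NVM_SWAP_DEVICE) -> None:
            """Format the NVM partition as swap and activate it with higher
            priority than any existing disk-based swap (must run as root)."""
            subprocess.run(["mkswap", device], check=True)
            subprocess.run(["swapon", "--priority", "10", device], check=True)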

    Bridging the Gap between Application and Solid-State-Drives

    Data storage is one of the important and often critical parts of the computing system in terms of performance, cost, reliability, and energy. Numerous new memory technologies, such as NAND flash, phase change memory (PCM), magnetic RAM (STT-RAM), and Memristor, have emerged recently, and many of them have already entered production systems. Traditional storage optimization and caching algorithms are far from optimal because storage I/Os do not show simple locality. To provide optimal storage we need accurate predictions of I/O behavior; however, workloads are increasingly dynamic and diverse, making both long- and short-term I/O prediction challenging. Because of the evolution of storage technologies and the increasing diversity of workloads, storage software is becoming more and more complex. For example, a Flash Translation Layer (FTL) is added for NAND-flash-based Solid State Disks (NAND-SSDs), but it introduces overheads such as address translation delay and garbage collection costs. Many recent studies aim to address these overheads; unfortunately, there is no one-size-fits-all solution due to the variety of workloads. Despite rapid evolution in storage technologies, the increasing heterogeneity and diversity in machines and workloads, coupled with the continued data explosion, exacerbate the gap between computing and storage speeds. In this dissertation, we improve data storage performance from both top-down and bottom-up approaches. First, we investigate exposing storage-level parallelism so that applications can avoid I/O contention and workload skew when scheduling jobs. Second, we study how architecture-aware task scheduling can improve application performance when PCM-based NVRAM is equipped. Third, we develop an I/O-correlation-aware flash translation layer for NAND-flash-based Solid State Disks. Fourth, we build a DRAM-based correlation-aware FTL emulator and study its performance on various filesystems.
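    The third direction, an I/O-correlation-aware FTL, hinges on co-locating correlated writes. A minimal page-mapping sketch under assumptions (the correlation ID is supplied by the caller; garbage collection and wear leveling are omitted), not the dissertation's actual design: pages written with the same ID share an open block, so they tend to be invalidated together and garbage collection copies fewer live pages.

        PAGES_PER_BLOCK = 64

        class CorrelationAwareFTL:
            """Toy page-level FTL keeping one open block per correlation group."""

            def __init__(self, num_blocks: int):
                self.l2p = {}                      # LBA -> (block, page)
                self.free_blocks = list(range(num_blocks))
                self.open_blocks = {}              # group id -> (block, next_page)

            def write(self, lba: int, group: int) -> tuple:
                block, page = self.open_blocks.get(group, (None, PAGES_PER_BLOCK))
                if page >= PAGES_PER_BLOCK:        # group needs a fresh block
                    block, page = self.free_blocks.pop(), 0
                self.open_blocks[group] = (block, page + 1)
                self.l2p[lba] = (block, page)      # any prior mapping is now stale
                return self.l2p[lba]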

    Letter from the Special Issue Editor

    Editorial work for DEBULL on a special issue on data management on Storage Class Memory (SCM) technologies.
    • โ€ฆ
    corecore