19 research outputs found

    Program Context-based Optimization Techniques for Improving the Performance and Lifetime of NAND Flash Storage Devices

    Ph.D. dissertation, Seoul National University, Department of Computer Science and Engineering, February 2019 (advisor: Jihong Kim).

Replacing HDDs with NAND flash-based storage devices (SSDs) has been one of the major challenges in modern computing systems, especially with regard to better performance and higher mobility. Although continuous semiconductor process scaling and multi-leveling techniques have lowered the price of SSDs to a level comparable with HDDs, the decreasing lifetime of NAND flash memory, a side effect of recent advanced device technologies, is emerging as one of the major barriers to the wide adoption of SSDs in high-performance computing systems. In this dissertation, system-level lifetime improvement techniques for recent high-density NAND flash memory are proposed. Unlike existing techniques, the proposed techniques resolve the problems of decreasing performance and lifetime of NAND flash memory by exploiting the I/O context of an application to analyze data lifetime patterns and duplicate data patterns. We first show that the I/O activities of an application have distinct data lifetime and duplicate data patterns. To utilize this context information effectively, we implemented a program context extraction method. With the program context, we can overcome the limitations of existing techniques for reducing garbage collection overhead and coping with the limited lifetime of NAND flash memory. Second, we propose a system-level approach that exploits the I/O context of an application to improve data lifetime prediction and thereby reduce the write amplification factor (WAF) of multi-streamed SSDs. The key motivation behind the proposed technique is that data lifetimes should be estimated at a higher abstraction level than logical block addresses (LBAs), so we employ the write program context as the stream management unit. This effectively separates data with short lifetimes from data with long lifetimes and improves the efficiency of garbage collection. Lastly, we propose selective deduplication, which avoids unnecessary deduplication work based on an analysis of the duplicate data patterns of write program contexts. On top of selective deduplication, we also propose fine-grained deduplication, which improves the likelihood of eliminating redundant data by introducing sub-page chunks, and which resolves the technical difficulties caused by the finer granularity, i.e., increased memory requirements and read response time. To evaluate the effectiveness of the proposed techniques, we performed a series of evaluations using both a trace-driven simulator and an emulator, with I/O traces collected from various real-world systems. To understand the feasibility of the proposed techniques, we also implemented them in the Linux kernel on top of our in-house flash storage prototype and then evaluated their effects on lifetime while running real-world applications. Our experimental results show that the proposed system-level optimization techniques are more effective than existing optimization techniques.

Table of contents:
    I. Introduction
       1.1 Motivation
           1.1.1 Garbage Collection Problem
           1.1.2 Limited Endurance Problem
       1.2 Dissertation Goals
       1.3 Contributions
       1.4 Dissertation Structure
    II. Background
       2.1 NAND Flash Memory System Software
       2.2 NAND Flash-Based Storage Devices
       2.3 Multi-stream Interface
       2.4 Inline Data Deduplication Technique
       2.5 Related Work
           2.5.1 Data Separation Techniques for Multi-streamed SSDs
           2.5.2 Write Traffic Reduction Techniques
           2.5.3 Program Context based Optimization Techniques for Operating Systems
    III. Program Context-based Analysis
       3.1 Definition and Extraction of Program Context
       3.2 Data Lifetime Patterns of I/O Activities
       3.3 Duplicate Data Patterns of I/O Activities
    IV. Fully Automatic Stream Management for Multi-Streamed SSDs Using Program Contexts
       4.1 Overview
       4.2 Motivation
           4.2.1 No Automatic Stream Management for General I/O Workloads
           4.2.2 Limited Number of Supported Streams
       4.3 Automatic I/O Activity Management
           4.3.1 PC as a Unit of Lifetime Classification for General I/O Workloads
       4.4 Support for Large Number of Streams
           4.4.1 PCs with Large Lifetime Variances
           4.4.2 Implementation of Internal Streams
       4.5 Design and Implementation of PCStream
           4.5.1 PC Lifetime Management
           4.5.2 Mapping PCs to SSD Streams
           4.5.3 Internal Stream Management
           4.5.4 PC Extraction for Indirect Writes
       4.6 Experimental Results
           4.6.1 Experimental Settings
           4.6.2 Performance Evaluation
           4.6.3 WAF Comparison
           4.6.4 Per-stream Lifetime Distribution Analysis
           4.6.5 Impact of Internal Streams
           4.6.6 Impact of the PC Attribute Table
    V. Deduplication Technique using Program Contexts
       5.1 Overview
       5.2 Selective Deduplication using Program Contexts
           5.2.1 PCDedup: Improving SSD Deduplication Efficiency using Selective Hash Cache Management
           5.2.2 2-level LRU Eviction Policy
       5.3 Exploiting Small Chunk Size
           5.3.1 Fine-Grained Deduplication
           5.3.2 Read Overhead Management
           5.3.3 Memory Overhead Management
           5.3.4 Experimental Results
    VI. Conclusions
       6.1 Summary and Conclusions
       6.2 Future Work
           6.2.1 Supporting applications that have unusual program contexts
           6.2.2 Optimizing read requests based on the I/O context
           6.2.3 Exploiting context information to improve fingerprint lookups
    Bibliography
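Chapter IV's PCStream classifies each write by the program context (PC) that issued it and maps PCs to SSD streams. Below is a minimal sketch of that idea, assuming the PC is summarized from the call stack at write time (a kernel implementation would sum or hash the return addresses on the stack); the class, helper names, and averaging heuristic are our illustrative assumptions, not the dissertation's implementation.

    import traceback
    from collections import defaultdict

    def current_program_context():
        # Summarize the call path that issued this write; a kernel-level
        # implementation would combine return addresses on the stack.
        frames = traceback.extract_stack()[:-1]
        return hash(tuple((f.filename, f.lineno) for f in frames))

    class PCLifetimeTracker:
        """Per-PC data lifetime statistics (illustrative only)."""
        def __init__(self):
            self.birth = {}                      # LBA -> (pc, write_time)
            self.lifetimes = defaultdict(list)   # pc  -> observed lifetimes

        def on_write(self, lba, now):
            pc = current_program_context()
            if lba in self.birth:                # overwrite: old data dies now
                old_pc, born = self.birth[lba]
                self.lifetimes[old_pc].append(now - born)
            self.birth[lba] = (pc, now)
            return pc                            # caller picks a stream by PC

        def predicted_lifetime(self, pc):
            hist = self.lifetimes.get(pc)
            return sum(hist) / len(hist) if hist else float("inf")

Writes whose PCs show similar predicted lifetimes would share a stream, physically separating short-lived from long-lived data; PCs with large lifetime variances are the case that the internal streams of Section 4.4 are meant to absorb.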

    Study on Endurance of Flash Memory SSDs

    Flash memory promises to revolutionize storage systems because of its massive performance gains, ruggedness, and large reductions in power usage and physical space requirements, but it is not a direct replacement for magnetic hard disks. Flash memory has fundamentally different characteristics, and in order to fully exploit its positive aspects we must engineer around its unique limitations. The primary limitations are the lack of in-place updates, the asymmetry between the sizes of the write and erase operations, and the limited endurance of flash memory cells. These lead to the need for efficient methods for block cleaning, combating write amplification and performing wear leveling. They are fundamental attributes of flash memory and will always need to be understood and efficiently managed to produce an efficient, high-performance storage system. Our goal in this work is to provide analysis and algorithms for efficiently managing data storage for endurance in flash memory. We present update codes, a class of floating codes that encode data updates as flash memory cell increments, resulting in fewer block erases and a longer flash memory lifespan, and we provide a new algorithm for constructing optimal floating codes. We also analyze the theoretically possible limits of write amplification reduction and minimization using offline workloads. We estimate the minimal write amplification with a workload decomposition algorithm and find that write amplification can be pushed to zero with relatively low over-provisioning. Additionally, we give simple, efficient and practical algorithms that are effective in reducing write amplification and performing wear leveling. Finally, we present a quantitative model of wear levels in flash memory by constructing a difference equation that gives the erase count of a block with the workload, wear leveling strategy and SSD configuration as parameters.
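For orientation, the write amplification this work analyzes is conventionally the ratio of media writes to host writes, and under a greedy cleaning policy a textbook approximation ties it to the fraction u of pages still valid in the victim block (a standard simplification, not the thesis's offline decomposition model):

    \[
      \mathrm{WA} \;=\; \frac{\text{bytes written to flash}}{\text{bytes written by host}},
      \qquad
      \mathrm{WA}_{\text{greedy}} \;\approx\; \frac{1}{1-u},
    \]

since cleaning a block whose fraction u of pages is still valid rewrites uN pages to reclaim (1-u)N free pages. More over-provisioning lowers u at cleaning time and hence WA, in line with the finding above that write amplification can be minimized with relatively low over-provisioning.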

    Indexing Record Bulks on Flash Memory

    In database applications, bulk operations, which affect multiple records at once, are common; they are performed when operating on single records at a time is not efficient enough. They arise in several ways: some applications naturally have bulk operations (such as a sales database that is updated daily), and others perform them routinely as part of some other operation. While bulk operations have been studied for decades, their use with flash memory has been studied less. Flash memory, an increasingly popular alternative or complement to magnetic hard disks, has far better seek times, low power consumption and other characteristics desirable for database applications. However, erasing data is a costly operation on flash, which means that designing index structures specifically for flash disks is worthwhile. This thesis investigates flash memory data structures in general, identifying some common design traits, and incorporates those traits into a novel index structure, the bulk index. The bulk index is an index structure for bulk operations on flash memory, and it was experimentally compared to the Lazy Adaptive Tree (LA-tree for short), a flash-based index structure that has shown impressive results. The bulk insertion experiments were made with varying-sized elementary bulks, i.e. maximal sets of inserted keys that fall between two consecutive keys in the existing data. The bulk index consistently performed better than the LA-tree, and especially well on bulk insertions with many very small or a few very large elementary bulks, or with large inserted bulks; at best it was more than four times as fast. On range searches it performed up to 50% faster than the LA-tree, doing better on large ranges. Range deletions were also shown to be constant-time in the bulk index.
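The experiments above hinge on "elementary bulks", the maximal sets of inserted keys that fall between two consecutive existing keys. A small sketch of that partitioning (the function name and the bisect-based approach are ours, for illustration only):

    from bisect import bisect_right
    from itertools import groupby

    def elementary_bulks(existing_keys, inserted_keys):
        """Split sorted inserted_keys into maximal runs that fall between
        two consecutive keys of sorted existing_keys."""
        # Two inserted keys are in the same elementary bulk exactly when
        # the same number of existing keys precedes both of them.
        slot = lambda k: bisect_right(existing_keys, k)
        return [list(g) for _, g in groupby(inserted_keys, key=slot)]

    existing = [10, 20, 30, 40]
    inserted = [1, 2, 11, 12, 13, 35, 50, 60]
    print(elementary_bulks(existing, inserted))
    # -> [[1, 2], [11, 12, 13], [35], [50, 60]]

With many very small elementary bulks the batch scatters across the whole key space, while a few very large ones keep it clustered; the bulk index reportedly wins in both regimes.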

    Flash memory management with cooperation, adaptation and assistance

    Ph.D. dissertation (Doctor of Philosophy).

    Reliability of SSD Storage Systems

    Solid-state drives (SSDs) are attractive storage components due to their many desirable properties; however, concerns about their reliability remain, and this delays their wider deployment. Many protection schemes have been proposed to improve the reliability of SSDs. For example, techniques such as error correction codes (ECC), log-like writing in the flash translation layer (FTL), garbage collection and wear leveling improve the reliability of an SSD at the device level. Composing an array of SSDs and employing system-level parity protection is one of the popular protection schemes at the system level. Enterprise-class (high-end) SSDs are faster and more resilient than client-class (low-end) SSDs, but they are too expensive to deploy in large-scale storage systems. An attractive and practical alternative is to use high-end SSDs as a cache and low-end SSDs as main storage. A high-end SSD cache placed in front of a low-end SSD array both improves latency and reduces the write count of the SSD storage system. This work analyzes the effectiveness of protection schemes originally designed for HDDs but applied to SSD storage systems. We find that the differing characteristics of HDDs and SSDs make integrating those solutions into SSD storage systems not so straightforward. This work first analyzes the effectiveness of device-level protection schemes such as ECC and scrubbing. A Markov model based analysis of the protection schemes is presented. Our model considers the time-varying nature of flash memory reliability as well as the write amplification of various device-level protection schemes. Our study shows that write amplification from these various sources can significantly reduce the benefits of protection schemes in improving lifetime. Based on the results of our analysis, we propose that bit errors within an SSD page be left uncorrected until a threshold number of errors accumulates. We show that such an approach can significantly improve lifetimes, by up to 40%. This work also analyzes the effectiveness of parity protection over SSD arrays, a widely used system-level protection scheme. Parity protection is typically employed to compose reliable storage systems. However, careful consideration is required when SSD-based systems employ parity protection: additional writes are required for parity updates, and parity consumes space on the device, which results in write amplification from less efficient garbage collection at higher space utilization. We present a Markov model to estimate the lifetime of SSD-based RAID systems in different environments. In a small array, our results show that parity protection provides benefit only with considerably low space utilization and low data access rates. In a large system, however, RAID improves data lifetime even when write amplification is taken into account. This work then explores how to optimize a mixed SSD array in terms of performance and lifetime. We show that simply integrating different classes of SSDs into traditional caching policies results in poor reliability, and we reveal that caching policies with static workload classifiers are not always efficient. We propose a sampling-based adaptive approach that achieves a fair workload distribution across the cache and the storage. The proposed algorithm enables fine-grained control of the workload distribution, minimizing latency over the lifetime of mixed SSD arrays. We show that our adaptive algorithm is very effective in improving the latency-over-lifetime metric, by up to 2.36 times over LRU on average, across a number of workloads.
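The device-level analysis above is Markov-model based. A toy version of such a model, assuming a page gains one bit error with a fixed probability per time step and fails once errors exceed the ECC correction capability; the rates and the absorbing-chain solve are illustrative assumptions, not this work's model:

    import numpy as np

    def expected_steps_to_failure(ecc_t, p_err):
        """Expected time steps until a page exceeds ecc_t bit errors,
        gaining one error with probability p_err per step."""
        n = ecc_t + 1                       # transient states: 0..ecc_t errors
        Q = np.zeros((n, n))
        for i in range(n):
            Q[i, i] = 1.0 - p_err           # no new error this step
            if i + 1 < n:
                Q[i, i + 1] = p_err         # one more (still correctable) error
        # Fundamental matrix N = (I - Q)^-1; its row sums give the expected
        # steps to absorption (uncorrectable failure) from each state.
        N = np.linalg.inv(np.eye(n) - Q)
        return N.sum(axis=1)[0]             # starting from zero errors

    # With 3-bit ECC and a 1% per-step error rate, the expected lifetime
    # is (ecc_t + 1) / p_err = 400 steps.
    print(expected_steps_to_failure(ecc_t=3, p_err=0.01))

In such a model, correcting (scrubbing) a page early resets its error state but costs a rewrite, which is exactly the write amplification trade-off that motivates deferring correction until an error threshold is reached.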

    TACKLING PERFORMANCE AND SECURITY ISSUES FOR CLOUD STORAGE SYSTEMS

    Building data-intensive applications and emerging computing paradigms (e.g., Machine Learning (ML), Artificial Intelligence (AI), and the Internet of Things (IoT)) in cloud computing environments is becoming the norm, given the many advantages in scalability, reliability, security and performance. However, under rapid changes in applications, system middleware and the underlying storage devices, service providers face new challenges in delivering performance and security isolation in the context of resources shared among multiple tenants. The gap between the decades-old storage abstraction and modern storage devices keeps widening, calling for software/hardware co-designs to arrive at more effective performance and security protocols. This dissertation rethinks the storage subsystem from the device level to the system level and proposes new designs at different levels to tackle performance and security issues for cloud storage systems. In the first part, we present an event-based SSD (Solid State Drive) simulator that models modern protocols, firmware and storage backends in detail. The proposed simulator can capture the nuances of SSD internal states under various I/O workloads, helping researchers understand the impact of various SSD designs and workload characteristics on end-to-end performance. In the second part, we study the security challenges of shared in-storage computing infrastructures. Many cloud providers offer isolation at multiple levels to secure data and instances; however, security measures in emerging in-storage computing infrastructures have not been studied. We first investigate the attacks that could be conducted by offloaded in-storage programs in a multi-tenant cloud environment. To defend against these attacks, we build IceClave, a lightweight Trusted Execution Environment that enables security isolation between in-storage programs and internal flash management functions. We show that while enforcing security isolation in the SSD controller with minimal hardware cost, IceClave still keeps the performance benefit of in-storage computing, delivering up to 2.4x better performance than the conventional host-based trusted computing approach. In the third part, we investigate the performance interference caused by other tenants' I/O flows. We demonstrate that I/O resource sharing can often lead to performance degradation and instability. The block device abstraction fails to expose SSD parallelism or to pass application requirements to the device. To this end, we propose a software/hardware co-design that enforces performance isolation by bridging this semantic gap. Our design significantly improves QoS (Quality of Service) by reducing throughput penalties and tail latency spikes. Lastly, we explore more effective I/O control to address contention in the storage software stack. We illustrate that the state-of-the-art resource control mechanism, Linux cgroups, is insufficient for controlling I/O resources: inappropriate cgroup configurations may even hurt the performance of co-located workloads under memory-intensive scenarios. We add kernel support for limiting page cache usage per cgroup and achieving I/O proportionality.
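A bare-bones illustration of the event-based simulation style described in the first part: requests are replayed in time order and each occupies the earliest-free flash channel, which already exposes how internal parallelism shapes latency. The channel count and latencies are invented placeholders, not the proposed simulator's parameters.

    # Toy time-ordered SSD model (illustrative parameters only).
    N_CHANNELS, READ_US, WRITE_US = 4, 60, 900

    def simulate(requests):
        """requests: iterable of (arrival_us, is_write) pairs.
        Returns the average request latency in microseconds."""
        channel_free = [0.0] * N_CHANNELS        # time each channel frees up
        reqs = sorted(requests)                  # process events in time order
        total = 0.0
        for arrival, is_write in reqs:
            ch = min(range(N_CHANNELS), key=lambda c: channel_free[c])
            start = max(arrival, channel_free[ch])
            finish = start + (WRITE_US if is_write else READ_US)
            channel_free[ch] = finish            # channel busy until finish
            total += finish - arrival
        return total / len(reqs)

    reqs = [(i * 100.0, i % 4 == 0) for i in range(1000)]
    print(f"average latency: {simulate(reqs):.1f} us")

A full simulator of this kind additionally models the host protocol, firmware mapping, and garbage collection state, which is what lets it capture the nuances of SSD internal states mentioned above.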

    Data-intensive Systems on Modern Hardware: Leveraging Near-Data Processing to Counter the Growth of Data

    Get PDF
    Over the last decades, a tremendous shift toward using information technology in almost every daily routine of our lives can be perceived in our society, entailing an incredible growth of data collected day by day in Web, IoT, and AI applications. At the same time, magneto-mechanical HDDs are being replaced by semiconductor storage such as SSDs, equipped with modern non-volatile memories like flash, which yield significantly faster access latencies and higher levels of parallelism. Likewise, the execution speed of processing units has increased considerably, as server architectures nowadays comprise up to several hundred independently working CPU cores along with a variety of specialized computing co-processors such as GPUs or FPGAs. However, the burden of moving the continuously growing data to the best-fitting processing unit is inherent in today's computer architecture, which is based on the data-to-code paradigm. In the light of Amdahl's Law, this leads to the conclusion that even with today's powerful processing units the speedup of systems is limited, since the fraction of parallel work is largely I/O-bound. Throughout this cumulative dissertation, we therefore investigate the paradigm shift toward code-to-data, formally known as Near-Data Processing (NDP), which relieves contention on the I/O bus by offloading processing to intelligent computational storage devices, where the data is originally located. Firstly, we identify Native Storage Management as the essential foundation for NDP, due to its direct control of physical storage management within the database. Upon this, the interface is extended to propagate address-mapping information and to invoke NDP functionality on the storage device. As the former can become very large, we introduce Physical Page Pointers as one novel NDP abstraction for self-contained immutable database objects. Secondly, the on-device navigation and interpretation of data are elaborated. We introduce cross-layer Parsers and Accessors as another NDP abstraction that can be executed on the heterogeneous processing capabilities of modern computational storage devices, and we identify the compute placement and resource configuration per NDP request as a major performance criterion. Our experimental evaluation shows an improvement in execution times of 1.4x to 2.7x compared to traditional systems. Moreover, we propose a framework for the automatic generation of Parsers and Accessors on FPGAs to ease their application in NDP. Thirdly, we investigate the interplay of NDP and modern workload characteristics like HTAP. We present different offloading models and focus on an intervention-free execution. By propagating the Shared State with the latest modifications of the database to the computational storage device, it is able to process data with transactional guarantees. We thus extend the design space of HTAP with NDP by providing a solution that optimizes for performance isolation, data freshness, and the reduction of data transfers. In contrast to traditional systems, we observe no significant drop in performance when an OLAP query is invoked, but instead steady throughput that is 30% higher. Lastly, in-situ result-set management and consumption as well as NDP pipelines are proposed to achieve flexibility in processing data on heterogeneous hardware. As these produce final and intermediate results, we further investigate their management and find that on-device materialization comes at low cost while enabling novel consumption modes and reuse semantics. We thereby achieve significant performance improvements of up to 400x by reusing once-materialized results multiple times.
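The Amdahl's Law argument invoked above can be made concrete: if a fraction f of the total work is I/O-bound and unaffected by faster processing, then accelerating the remaining computation by a factor s yields a bounded speedup (a standard statement of the law, given here for reference):

    \[
      S(s) \;=\; \frac{1}{\,f + \dfrac{1-f}{s}\,},
      \qquad
      \lim_{s \to \infty} S(s) \;=\; \frac{1}{f}.
    \]

With, say, 60% of a query's time spent moving data (f = 0.6), no amount of additional compute can deliver more than a 1/0.6 ≈ 1.67x speedup; NDP attacks f itself by moving computation to where the data resides.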

    Shufflecake: Plausible Deniability for Multiple Hidden Filesystems on Linux

    We present Shufflecake, a new plausible deniability design that hides the existence of encrypted data on a storage medium, making it very difficult for an adversary to prove that such data exists. Shufflecake can be considered a "spiritual successor" to tools such as TrueCrypt and VeraCrypt, but vastly improved: it works natively on Linux, it supports any filesystem of choice, and it can manage multiple volumes per device, so as to make deniability of the existence of hidden partitions truly plausible. Compared to ORAM-based solutions, Shufflecake is extremely fast and simpler, but it does not offer native protection against multi-snapshot adversaries. However, we discuss security extensions made possible by its architecture, and we show evidence why these extensions might be enough to thwart more powerful adversaries. We implemented Shufflecake as an in-kernel tool for Linux, adding useful features, and we benchmarked its performance, showing only a minor slowdown compared to a baseline encrypted system. We believe Shufflecake represents a useful tool for people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations, in particular whistleblowers, investigative journalists, and human rights activists in oppressive regimes. (A 15-page abstract of this work appears, with the same title, in the proceedings of the ACM Conference on Computer and Communications Security (CCS) 2023; this is the authors' full version, revised 2023-12-07, which supersedes any previous version.)
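The core deniability principle admits a compact sketch: if the device is first filled with uniformly random bytes and hidden volumes are written under a keyed, random-looking cipher, occupied and free slices become indistinguishable without the password. The sketch below is ours; the slice geometry, the hash-based keystream standing in for real disk encryption, and the flat layout are illustrative assumptions, not Shufflecake's actual on-disk format.

    import os
    from hashlib import sha256

    SLICE, N_SLICES = 4096, 16      # hypothetical geometry

    def toy_encrypt(data, key, idx):
        # Stand-in for a real cipher: XOR with a per-slice keystream.
        # Real designs use proper disk encryption; this is a toy.
        stream, ctr = b"", 0
        while len(stream) < len(data):
            stream += sha256(key + idx.to_bytes(8, "big") +
                             ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    # Fill the whole device with random bytes first: unused slices are
    # then statistically indistinguishable from encrypted hidden data.
    device = [bytearray(os.urandom(SLICE)) for _ in range(N_SLICES)]

    key = os.urandom(32)
    payload = b"hidden volume block".ljust(SLICE, b"\0")
    device[7][:] = toy_encrypt(payload, key, 7)

    # Without `key`, all 16 slices look uniformly random; an adversary
    # cannot prove which, if any, carry hidden data.

A multi-snapshot adversary defeats this basic picture by diffing the device over time and watching "free" slices change, which is precisely the limitation the abstract acknowledges and its proposed extensions aim to mitigate.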