1,194 research outputs found

    Decoding billions of integers per second through vectorization

    In many important applications -- such as search engines and relational database systems -- data is stored in the form of arrays of integers. Encoding and, most importantly, decoding of these arrays consumes considerable CPU time. Therefore, substantial effort has been made to reduce costs associated with compression and decompression. In particular, researchers have exploited the superscalar nature of modern processors and SIMD instructions. Nevertheless, we introduce a novel vectorized scheme called SIMD-BP128 that improves over previously proposed vectorized approaches. It is nearly twice as fast as the previously fastest schemes on desktop processors (varint-G8IU and PFOR). At the same time, SIMD-BP128 saves up to 2 bits per integer. For even better compression, we propose another new vectorized scheme (SIMD-FastPFOR) that has a compression ratio within 10% of a state-of-the-art scheme (Simple-8b) while being two times faster during decoding.
    Comment: For software, see https://github.com/lemire/FastPFor; for data, see http://boytsov.info/datasets/clueweb09gap
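
    To make the binary-packing layout concrete, below is a minimal scalar sketch (not taken from the paper; the linked FastPFor repository contains the real code) of storing a block of small integers, such as the gaps of a posting list, with a fixed number of bits per value. SIMD-BP128 applies this idea to blocks of 128 integers and unpacks them with SSE instructions; the `pack`/`unpack` helpers and example values here are illustrative only.

```cpp
#include <cassert>
#include <cstdint>
#include <iostream>
#include <vector>

// Pack each value of 'in' (assumed to fit in 'bits' bits) into a contiguous
// 32-bit word stream. Scalar illustration of binary packing; the vectorized
// schemes do the same per 128-integer block with SIMD loads, shifts, and masks.
std::vector<uint32_t> pack(const std::vector<uint32_t>& in, unsigned bits) {
    std::vector<uint32_t> out((in.size() * bits + 31) / 32, 0);
    size_t bitpos = 0;
    for (uint32_t v : in) {
        size_t word = bitpos / 32, offset = bitpos % 32;
        out[word] |= v << offset;
        if (offset + bits > 32)                  // value straddles a word boundary
            out[word + 1] |= v >> (32 - offset);
        bitpos += bits;
    }
    return out;
}

std::vector<uint32_t> unpack(const std::vector<uint32_t>& packed, size_t n, unsigned bits) {
    std::vector<uint32_t> out(n);
    uint32_t mask = (bits == 32) ? 0xFFFFFFFFu : ((1u << bits) - 1);
    size_t bitpos = 0;
    for (size_t i = 0; i < n; ++i) {
        size_t word = bitpos / 32, offset = bitpos % 32;
        uint64_t v = packed[word] >> offset;
        if (offset + bits > 32)
            v |= static_cast<uint64_t>(packed[word + 1]) << (32 - offset);
        out[i] = static_cast<uint32_t>(v) & mask;
        bitpos += bits;
    }
    return out;
}

int main() {
    std::vector<uint32_t> gaps = {3, 1, 4, 1, 5, 9, 2, 6};  // e.g. gaps between sorted doc ids
    unsigned bits = 4;                                       // enough for the largest gap (9)
    auto packed = pack(gaps, bits);
    auto restored = unpack(packed, gaps.size(), bits);
    assert(restored == gaps);
    std::cout << gaps.size() * 32 << " bits raw vs " << packed.size() * 32 << " bits packed\n";
}
```

    In practice such bit packing is combined with delta coding, so the values being packed are the small gaps between sorted document identifiers rather than the identifiers themselves; the vectorized variants lay the bits out so that whole SIMD registers can be filled and emptied without per-integer branching.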

    Application of Decision Diagrams for Information Storage and Retrieval

    Technology is improving at an amazing pace, and one reason for this advancement is the unprecedented growth in the field of Information Technology and in digital integrated circuit technology over the past few decades. The size of a typical modern database is on the order of hundreds of gigabytes or even terabytes. Researchers have been successful in designing complex databases, but there is still much activity on making effective use of this stored information. There have been significant advancements in the fields of logic optimization and of information storage and retrieval, but there has been very little transfer of methods between the two. The purpose of this study is to investigate the use of powerful Computer Aided Design (CAD) techniques for efficient information storage and retrieval. In the work presented in this thesis, it is shown that decision diagrams can be used for efficient data storage and information retrieval. An efficient technique is proposed for each of two key areas of database research: query optimization and data mining. Encouraging results are obtained, indicating that using hardware techniques for information processing can be a new approach for solving these problems. An SQL query is represented using a hardware data structure known as an AND/OR graph, and an SQL parser is interfaced with an AND/OR package to achieve query optimization. Optimization using AND/OR graphs works only in the Boolean domain, and to make the process of query optimization more complete it has to be investigated in the multi-valued domain. The possibility of using an MDD (multi-valued decision diagram) as a data structure to represent the query in the multi-valued domain is discussed, and a synthesis technique is developed to synthesize multi-valued logic networks using MDDs. Another useful data structure, the BDD (binary decision diagram), can be used to store the large transaction files used in data mining applications very effectively.
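
    As a rough illustration of the AND/OR-graph representation mentioned above, the sketch below models a selection predicate as a graph of AND/OR nodes over per-attribute tests and evaluates it against rows. The `Row` and `Node` types, the `leaf`/`combine` helpers, and the example query are hypothetical and only show the shape of the data structure; the thesis interfaces an actual SQL parser with an AND/OR package and applies logic optimization (and, in the multi-valued case, MDD-based synthesis) to such graphs, which this toy evaluation does not attempt.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

// A row is a set of named attribute values (illustrative schema).
using Row = std::map<std::string, int>;

// An AND/OR-graph node: either a leaf test on one attribute,
// or an AND/OR combination of child nodes.
struct Node {
    enum Kind { LEAF, AND, OR } kind;
    std::function<bool(const Row&)> test;      // used when kind == LEAF
    std::vector<std::shared_ptr<Node>> kids;   // used when kind == AND or OR

    bool eval(const Row& r) const {
        if (kind == LEAF) return test(r);
        bool acc = (kind == AND);
        for (const auto& k : kids)
            acc = (kind == AND) ? (acc && k->eval(r)) : (acc || k->eval(r));
        return acc;
    }
};

std::shared_ptr<Node> leaf(std::function<bool(const Row&)> f) {
    return std::make_shared<Node>(Node{Node::LEAF, std::move(f), {}});
}
std::shared_ptr<Node> combine(Node::Kind k, std::vector<std::shared_ptr<Node>> kids) {
    return std::make_shared<Node>(Node{k, nullptr, std::move(kids)});
}

int main() {
    // WHERE (age > 30 AND dept = 2) OR salary > 100, written as an AND/OR graph.
    auto q = combine(Node::OR, {
        combine(Node::AND, {
            leaf([](const Row& r) { return r.at("age") > 30; }),
            leaf([](const Row& r) { return r.at("dept") == 2; })}),
        leaf([](const Row& r) { return r.at("salary") > 100; })});

    std::vector<Row> table = {{{"age", 35}, {"dept", 2}, {"salary", 90}},
                              {{"age", 25}, {"dept", 1}, {"salary", 120}},
                              {{"age", 40}, {"dept", 1}, {"salary", 80}}};
    for (const auto& r : table)
        std::cout << (q->eval(r) ? "match" : "no match") << "\n";  // match, match, no match
}
```

    The benefit claimed in the thesis comes from restructuring such graphs with CAD-style logic optimization, not from the straightforward evaluation shown here.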

    Reliability Improvement and Performance Optimization Techniques for Ultra-Large Solid State Drives

    Thesis (Ph.D.) -- Graduate School of Seoul National University: Department of Computer Science and Engineering, College of Engineering, August 2021. Jihong Kim.
    The development of ultra-large NAND flash storage devices (SSDs) has recently been made possible by NAND flash memory process scaling and multi-leveling techniques, together with NAND packaging technology that keeps increasing storage capacity by mounting many NAND flash dies in a single SSD. As the capacity of an SSD increases, the total cost of ownership of the storage system can be reduced very effectively; however, the reliability and performance limitations of ultra-large SSDs remain obstacles to their wide adoption. To take advantage of ultra-large SSDs, new techniques are needed to address these reliability and performance issues. In this dissertation, we propose several optimization techniques to solve the reliability and performance issues of ultra-large SSDs. To overcome the limitations of existing approaches, our techniques were designed based on extensive characterization of NAND flash devices and analysis of field failures of real SSDs. We first propose a low-stress erase technique that reduces the characteristic deviation between wordlines (WLs) in a NAND flash block. By reducing the erase stress on weak WLs, it effectively slows down NAND degradation and improves NAND endurance. From the NAND evaluation results, the erase conditions that most effectively guard the weak WLs are defined as gerase modes. In addition, considering user workload characteristics, we propose a technique that dynamically selects the optimal gerase mode to maximize the lifetime of the SSD. Second, we propose an integrated approach that maximizes the efficiency of copyback operations to improve performance without compromising data reliability. Based on characterization of real 3D TLC flash chips, we propose a novel per-block error propagation model under consecutive copyback operations. Our model significantly increases the number of successive copybacks by exploiting the aging characteristics of NAND blocks. Furthermore, we devise a resource-efficient error management scheme that can handle successive copybacks in which pages move across multiple blocks with different reliability. By utilizing the proposed copyback operation for internal data movement, SSD performance can be effectively improved without any reliability issues. Finally, we propose a new recovery scheme, called reparo, for RAID storage systems built with ultra-large SSDs. Unlike existing RAID recovery schemes, reparo repairs a failed SSD at NAND die granularity without replacing it with a new SSD, thus avoiding most of the inter-SSD data copies during a RAID recovery step. When a NAND die of an SSD fails, reparo exploits the multi-core processor of the SSD controller to identify the failed LBAs on the failed die and to recover their data. Furthermore, reparo ensures no negative post-recovery impact on the performance and lifetime of the repaired SSD. To evaluate the effectiveness of the proposed techniques, we implemented them in a storage device prototype, an open NAND flash storage development environment, and a real SSD environment, and verified their usefulness using various benchmarks and I/O traces collected from real-world applications. The experimental results show that the reliability and performance of ultra-large SSDs can be effectively improved through the proposed techniques.
๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์ดˆ๊ณ ์šฉ๋Ÿ‰ SSD์—์„œ ๋‚ธ๋“œ ํ”Œ๋ž˜์‰ฌ์˜ ๋‹ค์ด(die) ๋ถˆ๋Ÿ‰์œผ๋กœ ์ธํ•œ ๋ ˆ์ด๋“œ(redundant array of independent disks, RAID) ๋ฆฌ๋นŒ๋“œ ์˜ค๋ฒ„ํ—ค๋“œ๋ฅผ ์ตœ์†Œํ™” ํ•˜๊ธฐ์œ„ํ•œ ์ƒˆ๋กœ์šด RAID ๋ณต๊ตฌ ๊ธฐ๋ฒ•์ธ reparo๋ฅผ ์ œ์•ˆํ•œ๋‹ค. Reparo๋Š” SSD์— ๋Œ€ํ•œ ๊ต์ฒด์—†์ด SSD์˜ ๋ถˆ๋Ÿ‰ die์— ๋Œ€ํ•ด์„œ๋งŒ ๋ณต๊ตฌ๋ฅผ ์ˆ˜ํ–‰ํ•จ์œผ๋กœ์จ ๋ณต๊ตฌ ์˜ค๋ฒ„ํ—ค๋“œ๋ฅผ ์ตœ์†Œํ™”ํ•œ๋‹ค. ๋ถˆ๋Ÿ‰์ด ๋ฐœ์ƒํ•œ die์˜ ๋ฐ์ดํ„ฐ๋งŒ ์„ ๋ณ„์ ์œผ๋กœ ๋ณต๊ตฌํ•จ์œผ๋กœ์จ ๋ณต๊ตฌ ๊ณผ์ •์˜ ๋ฆฌ๋นŒ๋“œ ํŠธ๋ž˜ํ”ฝ์„ ์ตœ์†Œํ™”ํ•˜๋ฉฐ, SSD ๋‚ด๋ถ€์˜ ๋ณ‘๋ ฌ๊ตฌ์กฐ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๋ถˆ๋Ÿ‰ die ๋ณต๊ตฌ ์‹œ๊ฐ„์„ ํšจ๊ณผ์ ์œผ๋กœ ๋‹จ์ถ•ํ•œ๋‹ค. ๋˜ํ•œ die ๋ถˆ๋Ÿ‰์œผ๋กœ ์ธํ•œ ๋ฌผ๋ฆฌ์  ๊ณต๊ฐ„๊ฐ์†Œ์˜ ๋ถ€์ž‘์šฉ์„ ์ตœ์†Œํ™” ํ•จ์œผ๋กœ์จ ๋ณต๊ตฌ ์ดํ›„์˜ ์„ฑ๋Šฅ ์ €ํ•˜ ๋ฐ ์ˆ˜๋ช…์˜ ๊ฐ์†Œ ๋ฌธ์ œ๊ฐ€ ์—†๋„๋ก ํ•œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆํ•œ ๊ธฐ๋ฒ•๋“ค์€ ์ €์žฅ์žฅ์น˜ ํ”„๋กœํ† ํƒ€์ž… ๋ฐ ๊ณต๊ฐœ ๋‚ธ๋“œ ํ”Œ๋ž˜์‰ฌ ์ €์žฅ์žฅ์น˜ ๊ฐœ๋ฐœํ™˜๊ฒฝ, ๊ทธ๋ฆฌ๊ณ  ์‹ค์žฅ SSDํ™˜๊ฒฝ์— ๊ตฌํ˜„๋˜์—ˆ์œผ๋ฉฐ, ์‹ค์ œ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์„ ๋ชจ์‚ฌํ•œ ๋‹ค์–‘ํ•œ ๋ฒคํŠธ๋งˆํฌ ๋ฐ ์‹ค์ œ I/O ํŠธ๋ ˆ์ด์Šค๋“ค์„ ์ด์šฉํ•˜์—ฌ ๊ทธ ์œ ์šฉ์„ฑ์„ ๊ฒ€์ฆํ•˜์˜€๋‹ค. ์‹คํ—˜ ๊ฒฐ๊ณผ, ์ œ์•ˆ๋œ ๊ธฐ๋ฒ•๋“ค์„ ํ†ตํ•ด์„œ ์ดˆ๊ณ ์šฉ๋Ÿ‰ SSD์˜ ์‹ ๋ขฐ์„ฑ ๋ฐ ์„ฑ๋Šฅ์„ ํšจ๊ณผ์ ์œผ๋กœ ๊ฐœ์„ ํ•  ์ˆ˜ ์žˆ์Œ์„ ํ™•์ธํ•˜์˜€๋‹ค.I Introduction 1 1.1 Motivation 1 1.2 Dissertation Goals 3 1.3 Contributions 5 1.4 Dissertation Structure 8 II Background 11 2.1 Overview of 3D NAND Flash Memory 11 2.2 Reliability Management in NAND Flash Memory 14 2.3 UL SSD architecture 15 2.4 Related Work 17 2.4.1 NAND endurance optimization by utilizing page characteristics difference 17 2.4.2 Performance optimizations using copyback operation 18 2.4.3 Optimizations for RAID Rebuild 19 2.4.4 Reliability improvement using internal RAID 20 III GuardedErase: Extending SSD Lifetimes by Protecting Weak Wordlines 22 3.1 Reliability Characterization of a 3D NAND Flash Block 22 3.1.1 Large Reliability Variations Among WLs 22 3.1.2 Erase Stress on Flash Reliability 26 3.2 GuardedErase: Design Overview and its Endurance Model 28 3.2.1 Basic Idea 28 3.2.2 Per-WL Low-Stress Erase Mode 31 3.2.3 Per-Block Erase Modes 35 3.3 Design and Implementation of LongFTL 39 3.3.1 Overview 39 3.3.2 Weak WL Detector 40 3.3.3 WAF Monitor 42 3.3.4 GErase Mode Selector 43 3.4 Experimental Results 46 3.4.1 Experimental Settings 46 3.4.2 Lifetime Improvement 47 3.4.3 Performance Overhead 49 3.4.4 Effectiveness of Lowest Erase Relief Ratio 50 IV Improving SSD Performance Using Adaptive Restricted- Copyback Operations 52 4.1 Motivations 52 4.1.1 Data Migration in Modern SSD 52 4.1.2 Need for Block Aging-Aware Copyback 53 4.2 RCPB: Copyback with a Limit 55 4.2.1 Error-Propagation Characteristics 55 4.2.2 RCPB Operation Model 58 4.3 Design and Implementation of rcFTL 59 4.3.1 EPM module 60 4.3.2 Data Migration Mode Selection 64 4.4 Experimental Results 65 4.4.1 Experimental Setup 65 4.4.2 Evaluation Results 66 V Reparo: A Fast RAID Recovery Scheme for Ultra- Large SSDs 70 5.1 SSD Failures: Causes and Characteristics 70 5.1.1 SSD Failure Types 70 5.1.2 SSD Failure Characteristics 72 5.2 Impact of UL SSDs on RAID Reliability 74 5.3 RAID Recovery using Reparo 77 5.3.1 Overview of Reparo 77 5.4 Cooperative Die Recovery 82 5.4.1 Identifier: Parallel Search of Failed LBAs 82 5.4.2 Handler: Per-Core Space Utilization Adjustment 83 5.5 Identifier Acceleration Using P2L Mapping Information 89 5.5.1 Page-level P2L Entrustment to Neighboring Die 90 5.5.2 Block-level P2L Entrustment to Neighboring Die 92 5.5.3 Additional 
Considerations for P2L Entrustment 94 5.6 Experimental Results 95 5.6.1 Experimental Settings 95 5.6.2 Experimental Results 97 VI Conclusions 109 6.1 Summary 109 6.2 Future Work 111 6.2.1 Optimization with Accurate WAF Prediction 111 6.2.2 Maximizing Copyback Threshold 111 6.2.3 Pre-failure Detection 112๋ฐ•
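
    As a loose sketch of the aging-aware copyback idea summarized above: on-chip copyback moves a page without routing the data through the controller's ECC, so errors can accumulate over consecutive copybacks, and a worn block tolerates a shorter copyback chain than a fresh one. The FTL-side rule below caps the chain per page with a wear-dependent limit and otherwise falls back to an off-chip read-correct-rewrite move. The structs, function names, and threshold values are invented placeholders, not the dissertation's model, which is fitted to measurements on real 3D TLC chips.

```cpp
#include <cstdint>
#include <iostream>

// Per-page and per-block metadata an FTL might keep for this policy (illustrative).
struct PageMeta  { uint8_t  copyback_chain; };  // consecutive copybacks since last ECC-corrected write
struct BlockMeta { uint32_t erase_count;    };  // proxy for block wear

// Younger destination blocks tolerate a longer copyback chain than worn ones.
// Thresholds are made-up placeholders, not measured NAND parameters.
static unsigned copyback_limit(const BlockMeta& dst) {
    if (dst.erase_count < 1000) return 4;
    if (dst.erase_count < 3000) return 2;
    return 1;
}

enum class Migration { COPYBACK, OFF_CHIP_REWRITE };

// Decide how to migrate one valid page during garbage collection or wear leveling.
Migration choose_migration(const PageMeta& src, const BlockMeta& dst) {
    return (src.copyback_chain + 1u <= copyback_limit(dst))
               ? Migration::COPYBACK
               : Migration::OFF_CHIP_REWRITE;
}

// Metadata update for the page's new physical location after the move.
PageMeta after_migration(const PageMeta& src, Migration m) {
    PageMeta out = src;
    out.copyback_chain = (m == Migration::COPYBACK)
                             ? static_cast<uint8_t>(src.copyback_chain + 1)
                             : 0;  // the off-chip path re-encodes the data with ECC
    return out;
}

int main() {
    PageMeta page{3};                       // already moved three times by copyback
    BlockMeta young{500}, worn{5000};

    Migration m1 = choose_migration(page, young);   // 3 + 1 <= 4  -> COPYBACK
    page = after_migration(page, m1);
    Migration m2 = choose_migration(page, worn);    // 4 + 1 >  1  -> OFF_CHIP_REWRITE
    std::cout << (m1 == Migration::COPYBACK ? "copyback" : "off-chip") << ", "
              << (m2 == Migration::COPYBACK ? "copyback" : "off-chip") << "\n";  // copyback, off-chip
}
```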

    Abmash: Mashing Up Legacy Web Applications by Automated Imitation of Human Actions

    Many business web-based applications do not offer application programming interfaces (APIs) to enable other applications to access their data and functions in a programmatic manner. This makes their composition difficult (for instance, to synchronize data between two applications). To address this challenge, this paper presents Abmash, an approach to facilitate the integration of such legacy web applications by automatically imitating human interactions with them. By automatically interacting with the graphical user interface (GUI) of web applications, the system supports all forms of integration, including bi-directional interactions, and is able to interact with AJAX-based applications. Furthermore, the integration programs are easy to write since they deal with end-user, visual user-interface elements. The integration code is simple enough to be called a "mashup".
    Comment: Software: Practice and Experience (2013)

    From a Comprehensive Experimental Survey to a Cost-based Selection Strategy for Lightweight Integer Compression Algorithms

    Lightweight integer compression algorithms are frequently applied in in-memory database systems to tackle the growing gap between processor speed and main memory bandwidth. In recent years, the vectorization of basic techniques such as delta coding and null suppression has considerably enlarged the corpus of available algorithms. As a result, today there is a large number of algorithms to choose from, while different algorithms are tailored to different data characteristics. However, a comparative evaluation of these algorithms under different data and hardware characteristics has never been sufficiently conducted in the literature. To close this gap, we conducted an exhaustive experimental survey by evaluating several state-of-the-art lightweight integer compression algorithms as well as cascades of basic techniques. We systematically investigated the influence of data as well as hardware properties on performance and compression rates. The evaluated algorithms are based on publicly available implementations as well as our own vectorized reimplementations. We summarize our experimental findings, leading to several new insights and to the conclusion that there is no single best algorithm. Moreover, in this article, we also introduce and evaluate a novel cost model for the selection of a suitable lightweight integer compression algorithm for a given dataset.
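
    As a minimal sketch of what such a cost-based selection can look like: derive a cheap statistic from a data sample (here only the bit width of the largest value), produce per-algorithm estimates of compression rate and decompression cost, and pick the candidate minimizing a weighted objective. The `Estimate` struct, the candidate list, and every constant below are invented placeholders; the article's model is fitted to measured data and hardware characteristics rather than hard-coded.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Per-algorithm estimate produced by the (placeholder) cost model.
struct Estimate {
    std::string name;
    double bits_per_int;       // estimated compression rate
    double ns_per_int_decode;  // estimated decompression cost
};

// Number of bits needed to represent v (at least 1).
static unsigned bit_width(uint32_t v) {
    unsigned b = 0;
    while (v) { ++b; v >>= 1; }
    return b ? b : 1;
}

// Derive estimates from a sample; a real model would be fitted to measurements.
std::vector<Estimate> estimate_all(const std::vector<uint32_t>& sample) {
    double b = bit_width(*std::max_element(sample.begin(), sample.end()));
    return {
        {"bit-packing",  b,                        0.4},  // size tracks the largest value
        {"varint",       std::ceil(b / 8.0) * 8.0, 1.2},  // byte-aligned, branchier decoding
        {"uncompressed", 32.0,                     0.2},  // baseline
    };
}

// Pick the algorithm minimizing a weighted sum of size and decode time.
Estimate choose(const std::vector<uint32_t>& sample, double size_w, double time_w) {
    auto cands = estimate_all(sample);
    return *std::min_element(cands.begin(), cands.end(),
        [=](const Estimate& a, const Estimate& b) {
            return size_w * a.bits_per_int + time_w * a.ns_per_int_decode
                 < size_w * b.bits_per_int + time_w * b.ns_per_int_decode;
        });
}

int main() {
    std::vector<uint32_t> sample = {3, 7, 2, 11, 5, 9, 4, 6};
    std::cout << "chosen: " << choose(sample, 1.0, 8.0).name << "\n";  // bit-packing for this sample
}
```

    A practical selector, as the article argues, also has to account for cascades of basic techniques and for hardware effects such as SIMD width and cache sizes, which single hard-coded constants cannot capture.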