When Do WOM Codes Improve the Erasure Factor in Flash Memories?
Flash memory is a write-once medium in which reprogramming cells requires first erasing the block that contains them. The lifetime of the flash is a function of the number of block erasures and can be as small as several thousand. To reduce the number of block erasures, pages, which are the smallest write unit, are rewritten out-of-place in the memory. A write-once memory (WOM) code is a coding scheme that enables writing to a block multiple times before it is erased. However, these codes come with a significant rate loss. For example, when writing twice with the same rate on each write, the rate per write is at most 0.77.
In this paper, we study WOM codes and the tradeoff they offer between rate loss and the reduction in the number of block erasures when pages are written uniformly at random. First, we introduce a new measure, called the erasure factor, that reflects both the number of block erasures and the amount of data that can be written on each block. A key point in our analysis is that this tradeoff depends upon the specific implementation of WOM codes in the memory. We consider two systems that use WOM codes: a conventional scheme that has been commonly used, and a recent design that preserves the overall storage capacity. While the first system can improve the erasure factor only when the storage rate is at most 0.6442, we show that the second scheme always improves this figure of merit.
Comment: to be presented at ISIT 201
CAWL: A Cache-aware Write Performance Model of Linux Systems
The performance of data-intensive applications is often dominated by their input/output (I/O) operations, but the I/O stack of a system is complex and depends heavily on system-specific settings and hardware components. This makes generic performance optimisation challenging and costly for developers, as they would have to run their application on a large variety of systems to evaluate their improvements. Simulation frameworks can help reduce this experimental overhead, but they typically treat I/O at a rather coarse granularity, which leads to significant inaccuracies in performance predictions. Here, we propose a more accurate model of the write performance of Linux-based systems that takes into account the different I/O methods and levels (system calls, library calls, direct or indirect I/O, etc.), the page cache, background writing, and the I/O throttling capabilities of the Linux kernel. With our model, we reduce the relative prediction error against real measurements of the simulated workload from 67% with a standard I/O model included in SimGrid to 10% for a random I/O scenario. In other scenarios the differences are even more pronounced.
Comment: 22 pages, 9 figures, 1 table
RAID Organizations for Improved Reliability and Performance: A Not Entirely Unbiased Tutorial (1st revision)
The original RAID proposal advocated replacing large disks with arrays of PC disks, but as the capacity of small disks increased 100-fold in the 1990s, the production of large disks was discontinued. Storage dependability is increased via replication or erasure coding. Cloud storage providers store multiple copies of data, obviating the need for further redundancy. Variations of RAID based on local recovery codes and partial MDS codes reduce recovery cost. NAND flash solid-state disks (SSDs) have lower latency and higher bandwidth than hard disk drives (HDDs), are more reliable, consume less power, and have a lower TCO, making them more viable for hyperscalers.
Comment: Submitted to ACM Computing Surveys. arXiv admin note: substantial text overlap with arXiv:2306.0876
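For readers unfamiliar with the erasure-coding side of this survey, the single-parity idea underlying RAID 5 fits in a few lines. This toy Python sketch (our illustration, not code from the tutorial) shows how one XOR parity block lets any single lost data block be rebuilt from the survivors:

```python
# RAID 5 in miniature: parity = XOR of all data blocks, so any one
# missing block equals the XOR of the surviving blocks plus parity.
from functools import reduce

def xor_blocks(blocks):
    """Column-wise XOR of equal-sized byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"disk-0..", b"disk-1..", b"disk-2.."]    # equal-sized data blocks
parity = xor_blocks(data)

lost = 1                                          # simulate losing disk 1
survivors = [d for i, d in enumerate(data) if i != lost] + [parity]
assert xor_blocks(survivors) == data[lost]        # rebuilt exactly
```

Local recovery codes and partial MDS codes, mentioned above, refine this idea so that a single failure can be repaired by reading only a subset of the surviving disks rather than all of them.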
A Study of Semiconductor Storage Systems Composed of Heterogeneous Non-Volatile Memories (異種の不揮発性メモリで構成される半導体ストレージシステムに関する研究)
[Degree requirements] Article 4, Paragraph 1 of the Chuo University Degree Regulations. [Chief examiner] Ken Takeuchi (Professor, Faculty of Science and Engineering, Chuo University). [Associate examiners] Kiyotaka Yamamura (Professor, Faculty of Science and Engineering, Chuo University), Shuji Tsukiyama (Professor, Faculty of Science and Engineering, Chuo University), Kazuyuki Shudo (Associate Professor, Graduate School of Information Science and Engineering, Tokyo Institute of Technology). Doctor of Engineering, Chuo University.