Rewriting Flash Memories by Message Passing
This paper constructs WOM codes that combine rewriting and error correction to mitigate the reliability and endurance problems of flash memory. We consider a rewriting model of practical interest to flash applications, where only the second write uses WOM codes. Our WOM code construction is based on binary erasure quantization with LDGM codes; the rewriting uses message passing and can potentially share efficient hardware implementations with LDPC codes in practice. We show that the coding scheme achieves the capacity of the rewriting model. Extensive simulations show that the rewriting performance of our scheme compares favorably with that of polar WOM codes in the rate region where a high rewriting success probability is desired. We further augment our coding schemes with error-correction capability. By drawing a connection to the conjugate code pairs studied in the context of quantum error correction, we develop a general framework for constructing error-correcting WOM codes. Under this framework, we give an explicit construction of WOM codes whose codewords are contained in BCH codes.
Comment: Submitted to ISIT 201
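As a concrete illustration of the write-once constraint these codes operate under, the Python sketch below implements the classic Rivest-Shamir two-write WOM code (2 bits written twice into 3 binary cells) rather than the paper's LDGM/message-passing construction: between erasures, cells may only move from 0 to 1, and the second write must respect that.

```python
# Hedged sketch of the write-once constraint, using the classic Rivest-Shamir
# two-write WOM code (2 bits stored twice in 3 cells), not the paper's
# LDGM / message-passing construction: cells may only flip 0 -> 1 between erasures.

FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
# Second-write codewords are the bitwise complements of the first-write ones.
SECOND = {m: tuple(1 - b for b in c) for m, c in FIRST.items()}

def rewrite(cells, message):
    """Second write: reach the codeword for `message` by only raising cells."""
    if cells == FIRST[message]:              # unchanged message: write nothing
        return cells
    target = SECOND[message]
    assert all(c <= t for c, t in zip(cells, target)), "cells may only go 0 -> 1"
    return target

def decode(cells):
    """Weight <= 1 means a first-write codeword, otherwise a second-write one."""
    table = FIRST if sum(cells) <= 1 else SECOND
    return {c: m for m, c in table.items()}[cells]

cells = FIRST[(1, 0)]            # first write: store bits (1, 0) as (0, 1, 0)
cells = rewrite(cells, (0, 1))   # second write: raise cells to (0, 1, 1)
assert decode(cells) == (0, 1)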
Rank-Modulation Rewrite Coding for Flash Memories
Current flash memory technology focuses on minimizing the cost of its static storage capacity. However, the resulting approach supports a relatively small number of program-erase cycles. This technology is effective for consumer devices (e.g., smartphones and cameras), where the number of program-erase cycles is small, but it is not economical for enterprise storage systems that require a large number of lifetime writes. The approach proposed in this paper for alleviating this problem consists of the efficient integration of two key ideas: 1) improving reliability and endurance by representing the information using relative values via the rank modulation scheme, and 2) increasing the overall (lifetime) capacity of the flash device via rewriting codes, namely, performing multiple writes per cell before erasure. This paper presents a new coding scheme that combines rank modulation with rewriting. The key benefits of the new scheme include: 1) the ability to store close to 2 bits per cell on each write with minimal impact on the lifetime of the memory, and 2) efficient encoding and decoding algorithms that make use of recently proposed capacity-achieving write-once-memory codes.
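A minimal sketch of the rank modulation idea the scheme builds on, assuming the basic "push-to-top" rewrite primitive from the rank modulation literature (the paper's actual rewrite codes are more elaborate): information is read from the relative order of the cell charge levels, and writes only ever increase charge.

```python
# Hedged sketch of rank modulation: data lives in the relative order of
# analog cell charge levels, so a rewrite only ever *increases* charge and
# no block erase is needed. "Push-to-top" is the basic rewrite primitive
# from the rank modulation literature, not this paper's full construction.

def read_permutation(levels):
    """Read the stored permutation: cell indices from highest to lowest charge."""
    return sorted(range(len(levels)), key=lambda i: -levels[i])

def push_to_top(levels, i, gap=1.0):
    """Rewrite by charging cell i strictly above the current maximum."""
    levels[i] = max(levels) + gap
    return levels

levels = [0.3, 0.9, 0.5]                      # illustrative analog charge levels
assert read_permutation(levels) == [1, 2, 0]
push_to_top(levels, 2)                        # cell 2 now ranks first
assert read_permutation(levels) == [2, 1, 0]
```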
Customized Nios II multi-cycle instructions to accelerate block-matching techniques
This study focuses on accelerating the optimization of motion estimation algorithms, which are widely used in video coding standards, by using both the Altera Custom Instruction paradigm and an efficient combination of the SDRAM and on-chip memory of the Nios II processor. First, complete code profiling is carried out before optimization in order to detect the time leaks affecting the motion compensation algorithms. Then, a multi-cycle Custom Instruction, which will be added to the specific embedded design, is implemented. The approach deployed is based on optimizing SOC performance through an efficient combination of on-chip memory and SDRAM with regard to the reset vector, exception vector, stack, heap, read/write data (.rwdata), read-only data (.rodata), and program text (.text) in the design. Furthermore, this approach aims to enhance the said algorithms by incorporating Custom Instructions into the Nios II ISA. Finally, the efficient combination of both methods is developed to build the final embedded system. The present contribution thus facilitates motion coding for low-cost soft-core microprocessors, particularly the RISC architecture of the Nios II implemented in an FPGA. It enables us to construct an SOC that processes 50×50 frames at 180 fps.
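The inner loop such a custom instruction typically accelerates is the sum of absolute differences (SAD) between a candidate block and a reference block. The Python sketch below shows full-search block matching; the function names, search range, and 8×8 block size are illustrative assumptions, and in the actual design the SAD loop would be replaced by the multi-cycle Nios II Custom Instruction.

```python
# Hedged sketch: sum of absolute differences (SAD) over an n x n block,
# the kernel a multi-cycle block-matching custom instruction would replace.
# Frames are plain lists of rows; the caller keeps the search window in bounds.

def sad(cur, ref, bx, by, dx, dy, n=8):
    """SAD between the n x n block of `cur` at (bx, by) and the block of
    `ref` displaced by the candidate motion vector (dx, dy)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(n) for x in range(n))

def full_search(cur, ref, bx, by, window=4, n=8):
    """Exhaustive motion estimation: the (dx, dy) with minimum SAD
    within a +/- `window`-pixel search range."""
    candidates = ((dx, dy)
                  for dy in range(-window, window + 1)
                  for dx in range(-window, window + 1))
    return min(candidates, key=lambda v: sad(cur, ref, bx, by, v[0], v[1], n))
```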
New Algorithms and Lower Bounds for Sequential-Access Data Compression
This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.
Comment: draft of PhD thesis
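To make the adaptive prefix coding model concrete, here is a naive Python sketch that rebuilds a Huffman code from the running symbol counts before each character, so encoder and decoder stay in sync; the thesis achieves constant worst-case time per character, which this rebuild-everything sketch makes no attempt at. The alphabet and the uniform starting counts are illustrative assumptions.

```python
import heapq

def prefix_code(counts):
    """Naive Huffman rebuild from symbol counts (assumes >= 2 symbols);
    returns {symbol: bitstring}. Sorting makes ties deterministic, so the
    encoder and decoder always build the same code."""
    heap = [(c, i, {s: ""}) for i, (s, c) in enumerate(sorted(counts.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, left = heapq.heappop(heap)
        c2, _, right = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in left.items()}
        merged.update({s: "1" + w for s, w in right.items()})
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    return heap[0][2]

def adaptive_encode(text, alphabet):
    counts = {s: 1 for s in alphabet}   # uniform start keeps the code complete
    bits = []
    for s in text:
        bits.append(prefix_code(counts)[s])   # self-delimiting codeword
        counts[s] += 1                        # decoder mirrors this update
    return "".join(bits)

def adaptive_decode(bits, alphabet):
    counts = {s: 1 for s in alphabet}
    out, i = [], 0
    while i < len(bits):
        inv = {w: s for s, w in prefix_code(counts).items()}
        j = i + 1
        while bits[i:j] not in inv:     # prefix-free, so the match is unambiguous
            j += 1
        s = inv[bits[i:j]]
        out.append(s)
        counts[s] += 1
        i = j
    return "".join(out)

msg = "abracadabra"
assert adaptive_decode(adaptive_encode(msg, "abcdr"), "abcdr") == msg
```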