Fast decoding techniques for extended single- and double-error-correcting Reed Solomon codes
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. This paper presents special high-speed decoding techniques for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to form the error locator polynomial and solve for its roots.
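For the single-error case, decoding directly from the syndromes is straightforward: an error of value e at position i produces syndromes S0 = e and S1 = e·a^i, so the location and value fall out without any locator polynomial. The following sketch illustrates this over GF(2^8); all names and the choice of primitive polynomial are illustrative, not taken from the paper.

```python
# Minimal sketch of single-error RS correction directly from syndromes
# over GF(2^8), assuming a narrow-sense code with check roots a^0 and a^1.
# Names (gf_exp, gf_log, PRIM, ...) are illustrative assumptions.

PRIM = 0x11D  # a common primitive polynomial for GF(2^8)

# Build exp/log tables for fast field multiplication.
gf_exp = [0] * 512
gf_log = [0] * 256
x = 1
for i in range(255):
    gf_exp[i] = x
    gf_log[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    gf_exp[i] = gf_exp[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return gf_exp[gf_log[a] + gf_log[b]]

def syndromes(received):
    # S_j = sum_i r_i * a^(i*j) for j = 0, 1
    out = []
    for j in range(2):
        acc = 0
        for i, r in enumerate(received):
            acc ^= gf_mul(r, gf_exp[(i * j) % 255])
        out.append(acc)
    return out

def correct_single_error(received):
    s0, s1 = syndromes(received)
    if s0 == 0 and s1 == 0:
        return list(received)  # no error detected
    # Single error e at position i gives S0 = e and S1 = e * a^i,
    # so i = log(S1) - log(S0) and e = S0 -- no locator polynomial.
    pos = (gf_log[s1] - gf_log[s0]) % 255
    fixed = list(received)
    fixed[pos] ^= s0
    return fixed
```

The double-error case in the paper is handled analogously with closed-form expressions in the syndromes; this sketch omits detection of uncorrectable patterns.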
On Coding Efficiency for Flash Memories
Recently, flash memories have become a competitive solution for mass storage.
Flash memories have rather different properties from rotary hard drives: writes
are constrained, and each cell can endure only a limited number of erases.
Therefore, the design goals for flash memory systems are quite different from
those for other memory
systems. In this paper, we consider the problem of coding efficiency. We define
the "coding-efficiency" as the amount of information that one flash memory cell
can be used to record per cost. Because each flash memory cell can endure a
roughly fixed number of erases, the cost of data recording can be well-defined.
We define "payload" as the amount of information that one flash memory cell can
represent at a particular moment. By using information-theoretic arguments, we
prove a coding theorem for achievable coding rates, and we prove upper and
lower bounds on coding efficiency. We show that there exists a
fundamental trade-off between "payload" and "coding efficiency". The results in
this paper may provide useful insights on the design of future flash memory
systems.
Comment: accepted for publication in the Proceedings of the 35th IEEE Sarnoff Symposium, Newark, New Jersey, May 21-22, 201
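The payload versus coding-efficiency trade-off the abstract describes can be made concrete with a classic write-once-memory (WOM) code, which is not the paper's construction but a standard illustration: the Rivest-Shamir code stores 2 bits in 3 binary cells twice per erase, raising coding efficiency (bits per cell per erase) at the cost of payload (bits per cell per write).

```python
# Hedged illustration (not the paper's scheme): the Rivest-Shamir 2-write
# WOM code on 3 binary cells. Between erases, cells can only go 0 -> 1.

FIRST  = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
SECOND = {0: (1, 1, 1), 1: (0, 1, 1), 2: (1, 0, 1), 3: (1, 1, 0)}

def decode(cells):
    # Low-weight states come from the first write, high-weight from the second.
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(m for m, cw in table.items() if cw == cells)

def write(cells, msg):
    # First write uses FIRST; a rewrite uses SECOND, which only raises cells.
    if sum(cells) == 0:
        return FIRST[msg]
    if decode(cells) == msg:
        return cells
    new = SECOND[msg]
    assert all(n >= c for n, c in zip(new, cells))  # 0 -> 1 transitions only
    return new

# Payload: 2 bits over 3 cells = 2/3 bit per cell per write.
# Coding efficiency: 2 writes x 2 bits / 3 cells = 4/3 bits per cell per
# erase, versus 1 bit per cell per erase for uncoded storage.
```

So the coded scheme records more total information per erase than uncoded storage, but each individual write carries less information per cell, mirroring the trade-off proved in the paper.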
On the Network-Wide Gain of Memory-Assisted Source Coding
Several studies have identified a significant amount of redundancy in the
network traffic. For example, it is demonstrated that there is a great amount
of redundancy within the content of a server over time. This redundancy can be
leveraged to reduce the network flow by the deployment of memory units in the
network. The question that arises is whether or not the deployment of memory
can result in a fundamental improvement in the performance of the network. In
this paper, we answer this question affirmatively by first establishing the
fundamental gains of memory-assisted source compression and then applying the
technique to a network. Specifically, we investigate the gain of
memory-assisted compression in random network graphs consisting of a single
source and several randomly selected memory units. We find a threshold value
for the number of memories deployed in a random graph and show that if the
number of memories exceeds the threshold we observe network-wide reduction in
the traffic.
Comment: To appear in 2011 IEEE Information Theory Workshop (ITW 2011)
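The basic mechanism behind memory-assisted compression can be sketched with standard tools: a memory unit that has observed earlier traffic from a source can prime its compressor with that history, so later messages from the same source compress much better than they would in isolation. This toy example uses zlib preset dictionaries and is an illustration, not the paper's scheme; the traffic strings are made up.

```python
import zlib

# Hedged sketch of memory-assisted compression: the "memory" is earlier
# traffic from the same source, used as a preset dictionary (zdict).
history = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 4
message = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n"

# Memoryless compression: the message is compressed on its own.
plain = zlib.compress(message, 9)

# Memory-assisted compression: the compressor is primed with the history,
# so the shared prefix becomes a cheap back-reference.
c = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=history)
assisted = c.compress(message) + c.flush()
```

The receiver (or downstream memory unit) must hold the same history to decompress, which is exactly the deployment question the paper studies: the gain appears only where memory units are placed.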
Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Replicating or caching popular content in memories distributed across the
network is a technique to reduce peak network loads. Conventionally, the main
performance gain of this caching was thought to result from making part of the
requested data available closer to end users. Instead, we recently showed that
a much more significant gain can be achieved by using caches to create
coded-multicasting opportunities, even for users with different demands,
through coding across data streams. These coded-multicasting opportunities are
enabled by careful content overlap at the various caches in the network,
created by a central coordinating server.
In many scenarios, such a central coordinating server may not be available,
raising the question of whether this multicasting gain can still be achieved in a more
decentralized setting. In this paper, we propose an efficient caching scheme,
in which the content placement is performed in a decentralized manner. In other
words, no coordination is required for the content placement. Despite this lack
of coordination, the proposed scheme is nevertheless able to create
coded-multicasting opportunities and achieves a rate close to the optimal
centralized scheme.
Comment: To appear in IEEE/ACM Transactions on Networking
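The coded-multicasting idea is easiest to see in the smallest case, which is a standard textbook illustration rather than the paper's general decentralized scheme: with 2 users and 2 files, where each user caches half of every file, a single XOR broadcast simultaneously serves two different demands.

```python
# Hedged toy example of coded multicasting (2 users, 2 files), not the
# paper's general scheme. File names and sizes are made up.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

A = b"AAAAAAAA"
B = b"BBBBBBBB"
A1, A2 = A[:4], A[4:]   # each file split into two halves
B1, B2 = B[:4], B[4:]

cache1 = {"A1": A1, "B1": B1}   # user 1 caches first halves, wants file A
cache2 = {"A2": A2, "B2": B2}   # user 2 caches second halves, wants file B

# One coded transmission serves both users: each can cancel the part it
# already caches and recover the part it is missing.
broadcast = xor(A2, B1)

user1_A = cache1["A1"] + xor(broadcast, cache1["B1"])  # recovers A2
user2_B = xor(broadcast, cache2["A2"]) + cache2["B2"]  # recovers B1
```

Uncoded delivery would need two half-file transmissions (A2 and B1); the coded broadcast needs only one. The paper's contribution is showing that such overlap, and hence this gain, survives even when cache placement is random and uncoordinated.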