Rewriting Flash Memories by Message Passing
This paper constructs WOM codes that combine rewriting and error correction
for mitigating the reliability and the endurance problems in flash memory. We
consider a rewriting model that is of practical interest to flash applications
where only the second write uses WOM codes. Our WOM code construction is based
on binary erasure quantization with LDGM codes, where the rewriting uses
message passing and has the potential to share efficient hardware
implementations with LDPC codes in practice. We show that the coding scheme
achieves the capacity of the rewriting model. Extensive simulations show that
the rewriting performance of our scheme compares favorably with that of the
polar WOM code in the rate region where a high rewriting success probability is
desired. We further augment our coding schemes with error correction
capability. By drawing a connection to the conjugate code pairs studied in the
context of quantum error correction, we develop a general framework for
constructing error-correcting WOM codes. Under this framework, we give an
explicit construction of WOM codes whose codewords are contained in BCH codes.
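The construction above is LDGM-based, but the two-write rewriting model itself is easy to see on the classic Rivest-Shamir WOM code, which stores 2 bits in 3 binary cells on each of two writes without erasing in between. The sketch below illustrates only that classic code, not the LDGM/message-passing construction of the paper; all names are ours.

```python
# Classic Rivest-Shamir two-write WOM code: 2 data bits per write, 3 binary
# cells, and cells may only change 0 -> 1 between the two writes (no erase).
# This only illustrates the rewriting model; it is not the LDGM construction.

FIRST_GEN = {0b00: (0, 0, 0), 0b01: (1, 0, 0),
             0b10: (0, 1, 0), 0b11: (0, 0, 1)}

def decode(cells):
    # Weight <= 1: first-generation codeword; weight >= 2: complement of one.
    word = tuple(cells) if sum(cells) <= 1 else tuple(1 - c for c in cells)
    return next(d for d, w in FIRST_GEN.items() if w == word)

def write_first(data):
    return FIRST_GEN[data]

def write_second(cells, data):
    if decode(cells) == data:                    # same value: change nothing
        return tuple(cells)
    target = tuple(1 - c for c in FIRST_GEN[data])
    assert all(t >= c for c, t in zip(cells, target)), "only 0 -> 1 allowed"
    return target

cells = write_first(0b10)             # (0, 1, 0)
cells = write_second(cells, 0b11)     # (1, 1, 0): updated without an erase
assert decode(cells) == 0b11
```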
Low-Power Cooling Codes with Efficient Encoding and Decoding
A class of low-power cooling (LPC) codes, to control simultaneously both the
peak temperature and the average power consumption of interconnects, was
introduced recently. An LPC code with given parameters is a coding scheme over
a fixed number of wires that (A) avoids state transitions on a prescribed
number of the currently hottest wires (cooling), and (B) limits the number of
transitions in each transmission to a prescribed bound (low-power).
A few constructions of large LPC codes that have efficient encoding and
decoding schemes are given. In particular, when one of the code parameters is
fixed, we construct large LPC codes and show that these codes can be modified
to correct errors efficiently. We further present a construction for large LPC
codes based on a mapping from cooling codes to LPC codes. The efficiency of
encoding/decoding for the constructed LPC codes depends on the efficiency of
decoding/encoding for the related cooling codes and on that of the mapping.
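As a concrete reading of constraints (A) and (B), the hypothetical checker below tests whether a candidate next bus state is admissible given the previous state, the set of currently hottest wires, and a transition budget. The interface and names are placeholders, not the paper's constructions.

```python
# Hypothetical illustration of the two LPC constraints on one transmission:
# (A) no transition on the wires currently designated as hottest (cooling),
# (B) at most `max_transitions` transitions overall (low-power).
# Names and interface are placeholders, not the paper's constructions.

def lpc_admissible(prev_state, next_state, hot_wires, max_transitions):
    assert len(prev_state) == len(next_state)
    flips = [i for i, (a, b) in enumerate(zip(prev_state, next_state)) if a != b]
    cools = all(i not in hot_wires for i in flips)      # constraint (A)
    low_power = len(flips) <= max_transitions           # constraint (B)
    return cools and low_power

# Example: 8 wires, wires {2, 5} reported hottest, at most 3 transitions.
prev = [0, 1, 1, 0, 0, 1, 0, 1]
nxt  = [1, 1, 1, 0, 1, 1, 0, 1]   # flips wires 0 and 4 only
assert lpc_admissible(prev, nxt, hot_wires={2, 5}, max_transitions=3)
```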
Algebraic and Combinatorial Methods in Computational Complexity
At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup (which had been the focus of significant research effort up to that point). We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.
Rank-Modulation Rewrite Coding for Flash Memories
The current flash memory technology focuses on the cost minimization of its static storage capacity. However, the resulting approach supports a relatively small number of program-erase cycles. This technology is effective for consumer devices (e.g., smartphones and cameras), where the number of program-erase cycles is small. However, it is not economical for enterprise storage systems that require a large number of lifetime writes. The approach proposed in this paper for alleviating this problem consists of the efficient integration of two key ideas: 1) improving reliability and endurance by representing the information using relative values via the rank modulation scheme and 2) increasing the overall (lifetime) capacity of the flash device via rewriting codes, namely, performing multiple writes per cell before erasure. This paper presents a new coding scheme that combines rank modulation with rewriting. The key benefits of the new scheme include: 1) the ability to store close to 2 bits per cell on each write with minimal impact on the lifetime of the memory and 2) efficient encoding and decoding algorithms that make use of recently proposed capacity-achieving write-once-memory codes.
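To make the first idea concrete, rank modulation stores data in the relative order of the cell charge levels rather than in their absolute values; the small sketch below (names ours, not the paper's rewrite-coding scheme) reads a group of cells as a permutation, which is robust to any charge drift that preserves the relative order.

```python
# Minimal illustration of the rank-modulation idea: information lives in the
# relative order (permutation) of cell charge levels, not in absolute values.
# This is not the rewrite-coding scheme of the paper; names are placeholders.

def read_permutation(charges):
    """Return cell indices sorted from highest to lowest charge."""
    return tuple(sorted(range(len(charges)), key=lambda i: -charges[i]))

# Two reads of the same 4-cell group: absolute levels drift (e.g., retention
# loss), but the relative order -- and hence the stored symbol -- is unchanged.
fresh   = [3.1, 1.2, 2.4, 0.7]
drifted = [2.6, 0.9, 2.0, 0.4]
assert read_permutation(fresh) == read_permutation(drifted) == (0, 2, 1, 3)
```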
Capacity-Achieving Coding Mechanisms: Spatial Coupling and Group Symmetries
The broad theme of this work is the construction of optimal transmission mechanisms for a wide variety of communication systems. In particular, this dissertation provides a proof of threshold saturation for spatially-coupled codes, low-complexity capacity-achieving coding schemes for side-information problems, a proof that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels, and a mathematical framework to design delay-sensitive communication systems.
Spatially-coupled codes are a class of codes on graphs that have been shown to achieve capacity universally over binary memoryless symmetric (BMS) channels under belief-propagation decoding. The underlying phenomenon behind spatial coupling, known as "threshold saturation via spatial coupling", turns out to be general, and this technique has been applied to a wide variety of systems. In this work, a proof of the threshold saturation phenomenon is provided for irregular low-density parity-check (LDPC) and low-density generator-matrix (LDGM) ensembles on BMS channels. This proof is far simpler than published alternative proofs, and it remains the only technique that handles irregular and LDGM codes. Also, low-complexity capacity-achieving codes are constructed for three coding problems via spatial coupling: 1) rate distortion with side information, 2) channel coding with side information, and 3) write-once memory systems. All these schemes are based on spatially coupled compound LDGM/LDPC ensembles.
Reed-Muller and Bose-Chaudhuri-Hocquenghem (BCH) codes are well-known algebraic codes introduced more than 50 years ago. While these codes have been studied extensively in the literature, it was not known whether they achieve capacity. This work introduces a technique to show that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels under maximum a posteriori (MAP) decoding. Instead of relying on the weight enumerators or other precise details of these codes, this technique requires only that the codes have highly symmetric permutation groups. In fact, any sequence of linear codes with increasing blocklengths, whose rates converge to a number between 0 and 1 and whose permutation groups are doubly transitive, achieves capacity on erasure channels under bit-MAP decoding. This provides a rare example in information theory where symmetry alone is sufficient to achieve capacity.
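Stated a little more formally, the symmetry result described above can be paraphrased as follows (our notation for the binary erasure channel; P_b denotes the bit-MAP erasure probability):

```latex
% Paraphrase of the symmetry result above (notation ours).  Let {C_n} be a
% sequence of linear codes with blocklengths tending to infinity, rates
% R_n -> R in (0,1), and doubly transitive permutation groups.  Then, under
% bit-MAP decoding on the binary erasure channel BEC(\epsilon),
\[
  \lim_{n\to\infty} P_b(C_n,\epsilon) \;=\; 0
  \qquad \text{for every } \epsilon < 1 - R,
\]
% i.e. the sequence is reliable up to the capacity limit of the BEC.
```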
While the channel capacity provides a useful benchmark for practical design, present-day communication systems also demand small latency and good performance on other link-layer metrics. Such delay-sensitive communication systems are studied in this work, where a mathematical framework is developed to provide insights into their optimal design.
Particle Merging Algorithm for PIC Codes
Particle-in-cell merging algorithms aim to resample dynamically the
six-dimensional phase space occupied by particles without distorting
substantially the physical description of the system. While various approaches
have been proposed in previous works, none of them seems able to fully
conserve charge, momentum, energy and their associated distributions.
We describe here an alternative algorithm based on the coalescence of N massive
or massless particles, considered to be close enough in phase space, into two
new macro-particles. The local conservation of charge, momentum and energy is
ensured by solving a system of scalar equations. Various simulation
comparisons have been carried out with and without the merging algorithm, from
classical plasma physics problems to extreme scenarios where quantum
electrodynamics is taken into account, showing, in addition to the conservation
of local quantities, good reproducibility of the particle distributions. In
cases where the number of particles would otherwise grow exponentially in the
simulation box, dynamical merging permits a considerable speedup and
significant memory savings, without which the simulations would be impossible
to perform.
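As a schematic of the kind of scalar system involved (our notation, not necessarily that of the paper), merging N particles with weights w_i, momenta p_i and energies ε_i into two macro-particles a and b must locally satisfy:

```latex
% Schematic conservation constraints for merging N particles into two
% macro-particles a and b (notation ours; the paper solves an equivalent
% system of scalar equations).
\[
  w_a + w_b \;=\; \sum_{i=1}^{N} w_i, \qquad
  w_a\,\mathbf{p}_a + w_b\,\mathbf{p}_b \;=\; \sum_{i=1}^{N} w_i\,\mathbf{p}_i, \qquad
  w_a\,\varepsilon_a + w_b\,\varepsilon_b \;=\; \sum_{i=1}^{N} w_i\,\varepsilon_i,
\]
% with \varepsilon_i = \sqrt{m^2 c^4 + |\mathbf{p}_i|^2 c^2} for massive
% particles and \varepsilon_i = |\mathbf{p}_i| c for massless ones.  If all
% particles merged together belong to the same species, conserving the total
% weight also conserves the total charge.
```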
Analysis of the Efficiency of Concatenated Coding for Improving the Endurance of Multilevel NAND Flash Memory
The increasing storage density of modern NAND flash memory chips, achieved both by scaling down the cell size and by increasing the number of cell states used, leads to a decrease in data storage reliability, namely in error probability, endurance (number of P/E cycles) and retention time. Error correction codes are often used to improve the reliability of data storage in multilevel flash memory. The effectiveness of error correction coding is largely determined by the accuracy of the model that captures the basic processes associated with writing and reading data. The paper describes the main sources of disturbance for a flash cell that affect the threshold voltage of the cell in NAND flash memory, and presents an explicit form of the threshold voltage distribution. As an approximation of the obtained threshold voltage distribution, a Normal-Laplace mixture model is shown to be a good fit for multilevel flash memories after a large number of rewriting cycles. For this model, a performance analysis of a concatenated coding scheme with an outer Reed-Solomon code and an inner multilevel code consisting of binary component codes is carried out. The analysis yields tradeoffs between the error probability, storage density, and number of P/E cycles. The resulting tradeoffs show that the considered concatenated coding schemes allow the number of P/E cycles to be increased to 2-2.5 times the nominal endurance specification, at the cost of only a very slight decrease in storage density, while maintaining the required bit error probability.
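For orientation only: one common way to write such a threshold-voltage model adds a Gaussian core and a Laplace-tailed disturbance to the nominal programmed level. The abstract alone does not pin down the exact form, so the sketch below is an assumption, with all symbols ours.

```latex
% Sketch only (symbols ours): one plausible form of a Normal-Laplace
% threshold-voltage model.  The read voltage of a cell programmed to the
% nominal level \mu_s is that level plus a Gaussian core and a Laplace-tailed
% disturbance:
\[
  V \;=\; \mu_s + X + Y, \qquad
  X \sim \mathcal{N}(0,\sigma^2), \qquad
  Y \sim \mathrm{Laplace}(0,b),
\]
% where \sigma and b would be fitted per level and per P/E-cycle count.
```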
- …