Time-Space Constrained Codes for Phase-Change Memories
Phase-change memory (PCM) is a promising non-volatile solid-state memory
technology. A PCM cell stores data by using its amorphous and crystalline
states. The cell changes between these two states using high temperature.
However, since the cells are sensitive to high temperature, it is important,
when programming cells, to balance the heat both in time and space.
In this paper, we study the time-space constraint for PCM, which was
originally proposed by Jiang et al. A code is called an
\emph{(α, β, p)-constrained code} if, for any α consecutive
rewrites and for any segment of β contiguous cells, the total rewrite
cost of the cells over those rewrites is at most p. Here,
the cells are binary and the rewrite cost is defined to be the Hamming distance
between the current and next memory states. First, we show a general upper
bound on the achievable rate of these codes, which extends the results of Jiang
et al. Then, we generalize their construction to one family of these constrained codes and present another construction for a second family. Finally, we show that these two constructions can be combined to obtain codes for all values of α, β, and p.
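As a concrete illustration of the definition above (a brute-force check, not the authors' coding construction), the constraint can be verified directly over a sequence of binary memory states; the names `alpha`, `beta`, and `p` stand for the window of consecutive rewrites, the segment of contiguous cells, and the cost budget:

```python
def satisfies_constraint(states, alpha, beta, p):
    """Brute-force check of the time-space constraint: for every window
    of `alpha` consecutive rewrites and every segment of `beta`
    contiguous cells, the summed rewrite cost (Hamming distance between
    successive memory states, per cell) must be at most `p`."""
    n = len(states[0])
    # Per-cell cost of each rewrite step.
    costs = [[int(a != b) for a, b in zip(s1, s2)]
             for s1, s2 in zip(states, states[1:])]
    for t in range(len(costs) - alpha + 1):      # time window start
        for c in range(n - beta + 1):            # space segment start
            total = sum(costs[t + dt][c + dc]
                        for dt in range(alpha) for dc in range(beta))
            if total > p:
                return False
    return True

states = [[0, 0, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0]]
print(satisfies_constraint(states, alpha=2, beta=2, p=2))  # True
print(satisfies_constraint(states, alpha=2, beta=2, p=1))  # False
```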
An Iteratively Decodable Tensor Product Code with Application to Data Storage
The error pattern correcting code (EPCC) can be constructed to provide a
syndrome decoding table targeting the dominant error events of an inter-symbol
interference channel at the output of the Viterbi detector. For the size of the
syndrome table to be manageable and the list of possible error events to be
reasonable in size, the codeword length of EPCC needs to be short enough.
However, the rate of such a short length code will be too low for hard drive
applications. To accommodate the required large redundancy, it is possible to
record only a highly compressed function of the parity bits of EPCC's tensor
product with a symbol correcting code. In this paper, we show that the proposed
tensor error-pattern correcting code (T-EPCC) is linear time encodable and also
devise a low-complexity soft iterative decoding algorithm for EPCC's tensor
product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that
T-EPCC-qLDPC achieves nearly the same performance as single-level qLDPC with a
1/2 KB sector at 50% reduction in decoding complexity. Moreover, 1 KB
T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the same
decoder complexity.
Comment: Hakim Alhussien, Jaekyun Moon, "An Iteratively Decodable Tensor Product Code with Application to Data Storage"
CHANNEL CODING TECHNIQUES FOR A MULTIPLE TRACK DIGITAL MAGNETIC RECORDING SYSTEM
In magnetic recording, greater areal bit packing densities are achieved by increasing
track density, through reducing the space between and the width of the recording tracks, and/or
by reducing the wavelength of the recorded information. This leads to a requirement for
higher-precision tape transport mechanisms and dedicated coding circuitry.
A TMS32010 digital signal processor is applied to a standard low-cost, low-precision,
multiple-track, compact cassette tape recording system. Advanced signal processing and
coding techniques are employed to maximise recording density and to compensate for
the mechanical deficiencies of this system. Parallel software encoding/decoding
algorithms have been developed for several Run-Length Limited modulation codes. The
results for a peak-detection system show that the Bi-Phase L code can be reliably employed
up to a data rate of 5 kbit/s per track. Development of a second system employing a
TMS32025 and sampling detection permitted the utilisation of adaptive equalisation to
slim the readback pulse. Application of conventional read equalisation techniques, which
oppose inter-symbol interference, resulted in a 30% increase in performance.
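As a small aside on the Bi-Phase L code mentioned above: it is the Manchester-style biphase-level code, which guarantees a transition in every bit cell and thus keeps clocking information available on a low-precision transport. A minimal sketch, assuming the common 1 → high-low, 0 → low-high level convention (the opposite polarity is equally valid):

```python
def biphase_l_encode(bits):
    """Bi-Phase L (Manchester) encoding: every data bit maps to two
    half-cell levels, guaranteeing a mid-cell transition for clock
    recovery. Assumed convention: 1 -> (1, 0), 0 -> (0, 1)."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

print(biphase_l_encode([1, 0, 1, 1]))  # [1, 0, 0, 1, 1, 0, 1, 0]
```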
Further investigation shows that greater linear recording densities can be achieved by
employing Partial Response signalling and Maximum Likelihood Detection. Partial
response signalling schemes use controlled inter-symbol interference to increase
recording density at the expense of a multi-level readback waveform, which results in an
increased noise penalty. Maximum Likelihood Sequence detection employs soft
decisions on the readback waveform to recover this loss. The associated modulation
coding techniques required for optimised operation of such a system are discussed.
Two-dimensional run-length-limited (d, ky) modulation codes provide a further means of
increasing storage capacity in multi-track recording systems. For example, the code rate
of a single-track run-length-limited code with constraints (1, 3), such as the Miller code, can
be increased by over 25% when using a 4-track two-dimensional code with the same d
constraint and with the k constraint satisfied across a number of parallel channels. The k
constraint along an individual track, kx, can be increased without loss of clock
synchronisation since the clocking information derived by frequent signal transitions
can be sub-divided across a number, y, of parallel tracks in terms of a ky constraint. This
permits more code words to be generated for a given (d, k) constraint in two dimensions
than is possible in one dimension. This coding technique is furthered by development of
a reverse enumeration scheme based on the trellis description of the (d, ky) constraints.
The application of a two-dimensional code to a high linear density system employing
extended class IV partial response signalling and maximum likelihood detection is
proposed. Finally, additional coding constraints to improve spectral response and error
performance are discussed.
Hewlett Packard, Computer Peripherals Division (Bristol)
An RLL code design that maximises channel utilisation
A comprehensive study of (d,k) sequences is presented, complemented with the design of a new, efficient Run-Length Limited (RLL) code. The new code belongs to the group of constrained coding schemes with a coding rate of R = 2/5 and a minimum run length between two successive transitions equal to 4. The presented RLL(4, ∞) code uses the channel capacity highly efficiently, at 98.7%, and consequently achieves a high density ratio of DR = 2.0, implying that two bits can be recorded, or transmitted, with one transition. Coding techniques based on the presented constraints and the selected coding rate have better efficiency than many other codes currently used for high-density optical recording and transmission.
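The quoted efficiency can be sanity-checked from first principles: the capacity of a (d, ∞) run-length constraint is log2 of the largest eigenvalue of the constraint graph's adjacency matrix, and R/C for R = 2/5 and d = 4 comes out near the stated figure. A short sketch of the standard computation (not taken from the paper):

```python
import numpy as np

# State graph of the (d, k) = (4, inf) constraint: state i counts zeros
# since the last one; a one is permitted only once at least 4 zeros
# have been written.
d = 4
A = np.zeros((d + 1, d + 1))
for i in range(d):
    A[i, i + 1] = 1      # emit 0: extend the zero run
A[d, d] = 1              # emit 0: stay saturated
A[d, 0] = 1              # emit 1: restart the zero run

# Shannon capacity of the constraint = log2 of the Perron eigenvalue.
cap = float(np.log2(np.max(np.abs(np.linalg.eigvals(A)))))
print(round(cap, 4))           # ~0.4057 bits per channel bit
print(round(0.4 / cap, 3))     # efficiency of the R = 2/5 code, ~0.986
```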
Asymmetric LOCO Codes: Constrained Codes for Flash Memories
In data storage and data transmission, certain patterns are more likely to be
subject to error when written (transmitted) onto the media. In magnetic
recording systems with binary data and bipolar non-return-to-zero signaling,
patterns that have insufficient separation between consecutive transitions
exacerbate inter-symbol interference. Constrained codes are used to eliminate
such error-prone patterns. A recent example is a new family of
capacity-achieving constrained codes, named lexicographically-ordered
constrained codes (LOCO codes). LOCO codes are symmetric, that is, the set of
forbidden patterns is closed under taking pattern complements. LOCO codes are
suboptimal in terms of rate when used in Flash devices where block erasure is
employed since the complement of an error-prone pattern is not detrimental in
these devices. This paper introduces asymmetric LOCO codes (A-LOCO codes),
which are lexicographically-ordered constrained codes that forbid only those
patterns that are detrimental for Flash performance. A-LOCO codes are also
capacity-achieving, and at finite-lengths, they offer higher rates than the
available state-of-the-art constrained codes designed for the same goal. The
mapping-demapping between the index and the codeword in A-LOCO codes allows
low-complexity encoding and decoding algorithms that are simpler than their
LOCO counterparts.
Comment: 9 pages (double column), 0 figures, accepted at the Annual Allerton Conference on Communication, Control, and Computing
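The index-to-codeword mapping mentioned above is, in LOCO style, a lexicographic rank/unrank computed from counts of valid suffixes. A toy sketch with a hypothetical symmetric forbidden pattern '11' (A-LOCO's actual asymmetric pattern set and rate differ):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n, last):
    """Number of length-n binary suffixes avoiding '11', given the
    previously written bit `last`."""
    if n == 0:
        return 1
    total = count(n - 1, 0)        # writing a 0 is always allowed
    if last == 0:
        total += count(n - 1, 1)   # a 1 is allowed only after a 0
    return total

def unrank(index, n):
    """Return the index-th (0-based, lexicographic) valid codeword."""
    word, last = [], 0
    for i in range(n, 0, -1):
        if last == 1:
            word.append(0)         # the constraint forces a 0 here
            last = 0
            continue
        zeros = count(i - 1, 0)    # valid words placing a 0 here
        if index < zeros:
            word.append(0)
        else:
            index -= zeros         # skip past all words starting 0
            word.append(1)
            last = 1
    return word

print([unrank(i, 3) for i in range(count(3, 0))])
# [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 0, 1]]
```

The rank (codeword-to-index) direction simply reverses the same bookkeeping, which is what makes LOCO-style encoding and decoding low-complexity.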
On Coding and Detection Techniques for Two-Dimensional Magnetic Recording
Edited version embargoed until 15.04.2020
Full version: Access restricted permanently due to 3rd party copyright restrictions. Restriction set on 15/04/2019 by AS, Doctoral College
The areal density growth of magnetic recording systems is fast approaching the superparamagnetic limit for conventional magnetic disks, driven by the increasing demand for data storage capacity. Two-Dimensional Magnetic Recording (TDMR) is a new technology aimed at increasing the areal density of magnetic recording systems beyond the limit of current disk technology, using conventional disk media. However, it relies on advanced coding and signal processing techniques to achieve areal density gains. Current state-of-the-art signal processing for the TDMR channel employs iterative decoding with Low Density Parity Check (LDPC) codes, coupled with 2D equalisers and full 2D Maximum Likelihood (ML) detectors. The shortcoming of these algorithms is their computational complexity, especially with regard to the ML detectors, where it is exponential in the number of bits involved. Therefore, robust low-complexity coding, equalisation and detection algorithms are crucial for successful future deployment of the TDMR scheme.
This present work is aimed at finding efficient, low-complexity coding, equalisation, detection and decoding techniques for improving the performance of the TDMR channel and magnetic recording channels in general. A forward error correction (FEC) scheme of two concatenated single-parity-bit systems along track, separated by an interleaver, has been presented for a channel with perpendicular magnetic recording (PMR) media. A joint detection-decoding algorithm using a constrained MAP detector for simultaneous detection and decoding of data with a single-parity-bit system has been proposed. It is shown that using the proposed FEC scheme with the constrained MAP detector/decoder can achieve a gain of up to 3 dB over the un-coded MAP decoder for a 1D interference channel. A further gain of 1.5 dB was achieved by concatenating two interleavers with an extra parity bit when the data density along track is high. The use of the single-parity-bit code as both a run-length-limited code and an error correction code is demonstrated to simplify detection complexity and improve system performance.
A low-complexity 2D detection technique for a TDMR system with Shingled Magnetic Recording (SMR) media was also proposed. The technique uses the concatenation of a 2D MAP detector along track with a regular MAP detector across tracks to reduce the complexity order of full 2D detection from exponential to linear. It is shown that this technique can improve track density with limited complexity. Two methods of FEC for the TDMR channel using two single-parity-bit systems have been discussed: one uses two concatenated single parity bits along track only, separated by a Dithered Relative Prime (DRP) interleaver, and the other uses the single parity bits in both directions without the DRP interleaver. Building on this FEC coding of the channel, a 2D multi-track MAP joint detector-decoder has been proposed for simultaneous detection and decoding of the coded single-parity-bit data. A gain of up to 5 dB was achieved using the FEC scheme with the 2D multi-track MAP joint detector-decoder over the un-coded 2D multi-track MAP detector in the TDMR channel. In a situation with high density in both directions, it is shown that FEC coding using two concatenated single parity bits along track separated by the DRP interleaver performed better than when the single parity bits are used in both directions without the DRP interleaver.
9mobile Nigeria
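The single-parity-bit idea underlying the FEC scheme above can be sketched in a few lines (even parity per along-track block; the thesis's constrained MAP detector/decoder and interleavers are not reproduced here):

```python
def add_parity(block):
    """Append an even-parity bit to one along-track data block."""
    return block + [sum(block) % 2]

def parity_ok(coded):
    """A coded block is consistent iff its bits sum to even parity."""
    return sum(coded) % 2 == 0

coded = add_parity([1, 0, 1, 1])
print(coded)             # [1, 0, 1, 1, 1]
print(parity_ok(coded))  # True
coded[2] ^= 1            # a single-bit error along the track
print(parity_ok(coded))  # False: the parity check flags it
```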
Efficient Constrained Codes That Enable Page Separation in Modern Flash Memories
The pivotal storage density win achieved recently by solid-state devices over
magnetic devices is a result of multiple innovations in physics, architecture,
and signal processing. Constrained coding is used in Flash devices to increase
reliability via mitigating inter-cell interference. Recently,
capacity-achieving constrained codes were introduced to serve that purpose.
While these codes incur minimal redundancy, they introduce a non-negligible
complexity increase and limit access speed, since pages cannot be read
separately. In this paper, we suggest new constrained coding schemes that have
low complexity and preserve the desirable high access speed of modern Flash
devices. The idea is to eliminate error-prone patterns by coding data either
only on the left-most page (binary coding) or only on the two left-most pages
(4-ary coding) while leaving data on all the remaining pages uncoded. Our
coding schemes are systematic and capacity-approaching. We refer to the
proposed schemes as read-and-run (RR) constrained coding schemes. The 4-ary
RR coding scheme is introduced to limit the rate loss. We analyze the new RR
coding schemes and discuss their impact on the probability of occurrence of
different charge levels. We also demonstrate the performance improvement
achieved via RR coding on a practical triple-level cell Flash device.
Comment: 30 pages (single column), 5 figures, submitted to the IEEE Transactions on Communications (TCOM). arXiv admin note: substantial text overlap with arXiv:2111.0741
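The read-and-run idea of coding only the left-most page can be sketched with a toy bit-stuffing encoder for a hypothetical forbidden pattern '101' (the paper's actual pattern set and capacity-approaching encoder differ): after every '10' a 0 is stuffed unconditionally, so the pattern can never occur and the decoder can strip the stuffed bits deterministically, while all other pages stay uncoded and independently readable.

```python
def encode_left_page(bits):
    """Stuff a 0 after every '10' so the (hypothetical) forbidden
    pattern '101' can never appear on the coded page."""
    out = []
    for b in bits:
        out.append(b)
        if out[-2:] == [1, 0]:
            out.append(0)          # unconditional stuffing bit
    return out

def decode_left_page(coded):
    """Invert the encoder: the bit after every '10' is always a
    stuffed 0, so it is simply skipped."""
    out, i = [], 0
    while i < len(coded):
        out.append(coded[i])
        if out[-2:] == [1, 0]:
            i += 1                 # skip the stuffed 0
        i += 1
    return out

data = [1, 0, 1, 0, 1]
coded = encode_left_page(data)
print(coded)                            # [1, 0, 0, 1, 0, 0, 1]
print(decode_left_page(coded) == data)  # True
```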
Eliminating Media Noise While Preserving Storage Capacity: Reconfigurable Constrained Codes for Two-Dimensional Magnetic Recording
Magnetic recording devices are still competitive in the storage density race
with solid-state devices thanks to new technologies such as two-dimensional
magnetic recording (TDMR). Advanced data processing schemes are needed to
guarantee reliability in TDMR. Data patterns where a bit is surrounded by
complementary bits at the four positions with Manhattan distance 1 on the
TDMR grid are called plus isolation (PIS) patterns, and they are error-prone.
Recently, we introduced lexicographically-ordered constrained (LOCO) codes,
namely optimal plus LOCO (OP-LOCO) codes, that prevent these patterns from
being written in a TDMR device. However, in the high-density regime or the
low-energy regime, additional error-prone patterns emerge, specifically data
patterns where a bit is surrounded by complementary bits at only three
positions with Manhattan distance 1, and we call them incomplete plus
isolation (IPIS) patterns. In this paper, we present capacity-achieving codes
that forbid both PIS and IPIS patterns in TDMR systems with wide read heads. We
collectively call the PIS and IPIS patterns rotated T isolation (RTIS)
patterns, and we call the new codes optimal T LOCO (OT-LOCO) codes. We analyze
OT-LOCO codes and present their simple encoding-decoding rule that allows
reconfigurability. We also present a novel bridging idea for these codes to
further increase the rate. Our simulation results demonstrate that OT-LOCO
codes are capable of eliminating media noise effects entirely at practical TD
densities with high rates. To further preserve the storage capacity, we suggest
using OP-LOCO codes early in the device lifetime, then employing the
reconfiguration property to switch to OT-LOCO codes later. While the point of
reconfiguration on the density/energy axis is decided manually at the moment,
the next step is to use machine learning to make that decision based on the
TDMR device status.
Comment: 15 pages (double column), 11 figures, submitted to the IEEE Transactions on Magnetics (TMAG). arXiv admin note: text overlap with arXiv:2010.1068
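The PIS and IPIS definitions above translate directly into a detector: a bit is a PIS centre if all four Manhattan-distance-1 neighbours are its complement, and an IPIS centre if exactly three of the four are. A minimal sketch (interior cells only; boundary handling is an assumption):

```python
def count_isolation_patterns(grid):
    """Count PIS and IPIS centres on a binary TDMR grid: PIS if all
    four Manhattan-distance-1 neighbours are complementary, IPIS if
    exactly three of the four are."""
    rows, cols = len(grid), len(grid[0])
    pis = ipis = 0
    for r in range(1, rows - 1):           # interior cells only
        for c in range(1, cols - 1):
            b = grid[r][c]
            flipped = sum(grid[r + dr][c + dc] != b
                          for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)))
            if flipped == 4:
                pis += 1
            elif flipped == 3:
                ipis += 1
    return pis, ipis

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(count_isolation_patterns(grid))  # (1, 0): the centre 1 is a PIS
```

OT-LOCO codes forbid both counts from ever being nonzero in the written data; a checker like this only illustrates what the constraint rules out.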
CROSSTALK-RESILIENT CODING FOR HIGH DENSITY DIGITAL RECORDING
Increasing the track density in magnetic systems is very difficult due to inter-track interference
(ITI) caused by the magnetic field of adjacent tracks. This work presents a
two-track partial response class 4 magnetic channel with linear and symmetrical ITI, and
explores modulation codes, signal processing methods and error correction codes in order
to mitigate the effects of ITI.
Recording codes were investigated, and a new class of two-dimensional run-length
limited recording codes is described. The new class of codes controls the type of ITI
and has been found to be about 10% more resilient to ITI compared to conventional
run-length limited codes. A new adaptive trellis is also described that
compensates for the effect of ITI; this has been found to give gains of up to 5 dB in signal-to-noise
ratio (SNR) at 40% ITI. It was also found that the new class of codes was about 10%
more resilient to ITI compared to conventional recording codes when decoded with the
new trellis.
Error correction coding methods were applied, and the use of Low Density Parity
Check (LDPC) codes was investigated. It was found that at high SNR, conventional
codes could perform as well as the new modulation codes in a combined modulation and
error correction coding scheme. Results suggest that high-rate LDPC codes can mitigate
the effect of ITI; however, the decoders have convergence problems beyond 30% ITI.
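The linear, symmetrical ITI model of the two-track channel can be sketched as each read sample mixing the target track with a fraction alpha of its neighbour (alpha = 0.4 corresponds to the 40% ITI operating point quoted above; the PR4 ISI part of the channel is omitted in this sketch):

```python
def readback_with_iti(track_a, track_b, alpha=0.4):
    """Noiseless two-track readback with linear, symmetrical ITI: each
    sample is the target track plus a fraction `alpha` of the adjacent
    track (inter-symbol interference from PR4 is not modelled here)."""
    read_a = [a + alpha * b for a, b in zip(track_a, track_b)]
    read_b = [b + alpha * a for a, b in zip(track_a, track_b)]
    return read_a, read_b

# Bipolar (+1/-1) data at the 40% ITI operating point quoted above.
ra, rb = readback_with_iti([1, -1, 1], [-1, -1, 1], alpha=0.4)
print(ra)  # approximately [0.6, -1.4, 1.4]
```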