Adding RLL Properties to Four CCSDS LDPC Codes Without Increasing Their Redundancy
This paper presents the construction of Run Length Limited (RLL) Error Control Codes (ECCs) from four Low Density Parity Check (LDPC) codes specified by the Consultative Committee for Space Data Systems (CCSDS). The obtained RLL-ECCs are a practical alternative to the CCSDS codes with pseudo-randomizers. Their advantage is that the maximal run lengths of equal symbols in their codeword sequences are guaranteed, which is not the case with the common pseudo-randomizer approach. Further advantages are that no additional redundancy is introduced into the encoded codewords and that the encoding and decoding procedures of the original CCSDS error control codes need not be modified in two cases: first, if hard decoding is used and the transmission channel can be modeled as a Binary Symmetric Channel (BSC); second, if soft decoding with coherent Binary Phase Shift Keying (BPSK) modulation is used and the appropriate channel model is an Additive White Gaussian Noise (AWGN) channel.
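The guaranteed-runlength property described above can be contrasted with pseudo-randomization in a small sketch. The helper below is illustrative only (our own naming, not from the paper): it measures the longest run of equal symbols, which an RLL code bounds by construction while a pseudo-randomizer only makes long runs unlikely.

```python
def max_run_length(bits):
    """Return the longest run of equal symbols in a nonempty sequence."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

# An RLL-ECC guarantees max_run_length(codeword) <= k for some fixed k;
# a pseudo-randomized codeword has no such worst-case guarantee.
codeword = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1]
print(max_run_length(codeword))  # -> 3
```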
Coding against synchronisation and related errors
In this thesis, we study aspects of coding against synchronisation errors, such as deletions and replications, and related errors. Synchronisation errors are a source of fundamental open problems in information theory, because they introduce correlations between output symbols even when input symbols are independently distributed. We focus on random errors, and consider two complementary problems:
We study the optimal rate of reliable information transmission through channels with synchronisation and related errors (the channel capacity). Unlike simpler error models, the capacity of such channels is unknown. We first consider the geometric sticky channel, which replicates input bits according to a geometric distribution. Previously, bounds on its capacity were known only via numerical methods, which do not aid our conceptual understanding of this quantity. We derive sharp analytical capacity upper bounds which approach, and sometimes surpass, numerical bounds. This opens the door to a mathematical treatment of its capacity. We consider also the geometric deletion channel, combining deletions and geometric replications. We derive analytical capacity upper bounds, and notably prove that the capacity is bounded away from the maximum when the deletion probability is small, meaning that this channel behaves differently than related well-studied channels in this regime. Finally, we adapt techniques developed to handle synchronisation errors to derive improved upper bounds and structural results on the capacity of the discrete-time Poisson channel, a model of optical communication.
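As a rough illustration of the channel model (the model only, not the capacity bounds), a geometric sticky channel with replication parameter q can be simulated as below. The function name and parameterization are ours: each input bit is emitted once and then repeated with probability q per extra copy, so the replication count is geometric on {1, 2, ...} with mean 1/(1-q).

```python
import random

def geometric_sticky_channel(bits, q, rng=random):
    """Replicate each input bit K times, with P(K = k) = (1-q) * q**(k-1).

    Sticky channels never delete bits, but the receiver cannot tell
    which copies within a run came from which input bit.
    """
    out = []
    for b in bits:
        out.append(b)          # the bit is always transmitted at least once
        while rng.random() < q:
            out.append(b)      # each extra replication occurs with prob. q
    return out
```

With q = 0 the channel is noiseless; as q grows, runs lengthen and the output symbols become correlated even for i.i.d. inputs, which is the source of the difficulty mentioned above.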
Motivated by portable DNA-based storage and trace reconstruction, we introduce and study the coded trace reconstruction problem, where the goal is to design efficiently encodable high-rate codes whose codewords can be efficiently reconstructed from few reads corrupted by deletions. Remarkably, we design such n-bit codes with rate 1 - O(1/log n) that require exponentially fewer reads than average-case trace reconstruction algorithms.
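The read model in the coded trace reconstruction problem can be sketched as independent passes of one codeword through a deletion channel. This is a toy illustration of the setting only (names are ours; no reconstruction algorithm is shown):

```python
import random

def deletion_channel(bits, d, rng=random):
    """Delete each bit independently with probability d."""
    return [b for b in bits if rng.random() >= d]

def traces(codeword, d, num_reads, rng=random):
    """Independent corrupted reads of the same codeword: the input a
    trace reconstruction algorithm would work from."""
    return [deletion_channel(codeword, d, rng) for _ in range(num_reads)]
```

The result quoted above says that with suitable coding, num_reads can be exponentially smaller than what average-case (uncoded) trace reconstruction requires.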
Protecting the Future of Information: LOCO Coding With Error Detection for DNA Data Storage
DNA strands serve as a storage medium for 4-ary data over the alphabet
{A, T, G, C}. DNA data storage promises formidable information density,
long-term durability, and ease of replicability. However, information in this
intriguing storage technology might be corrupted. Experiments have revealed
that DNA sequences with long homopolymers and/or with low GC-content are
notably more subject to errors upon storage.
This paper investigates the utilization of the recently-introduced method for
designing lexicographically-ordered constrained (LOCO) codes in DNA data
storage. This paper introduces DNA LOCO (D-LOCO) codes, over the alphabet
{A, T, G, C}, with limited runs of identical symbols. These codes come with an
encoding-decoding rule we derive, which provides affordable encoding-decoding
algorithms. In terms of storage overhead, the proposed encoding-decoding
algorithms outperform those in the existing literature. Our algorithms are
readily reconfigurable. D-LOCO codes are intrinsically balanced, which allows
us to achieve balancing over the entire DNA strand with minimal rate penalty.
Moreover, we propose four schemes to bridge consecutive codewords, three of
which guarantee single substitution error detection per codeword. We examine
the probability of undetected error. We also show that D-LOCO codes are
capacity-achieving and that they offer remarkably high rates at moderate
lengths.
Comment: 14 pages (double column), 3 figures, submitted to the IEEE
Transactions on Molecular, Biological and Multi-scale Communications (TMBMC).
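The two constraints targeted by such codes, limited homopolymer runs and balanced GC-content, can be checked with a short sketch. This is illustrative only and is not the D-LOCO encoder; the function names are ours.

```python
def max_homopolymer(strand):
    """Longest run of identical nucleotides in a nonempty DNA string."""
    longest = run = 1
    for prev, cur in zip(strand, strand[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def gc_content(strand):
    """Fraction of G and C symbols; balanced strands stay near 0.5."""
    return sum(c in "GC" for c in strand) / len(strand)

strand = "ACGTGCATCG"
print(max_homopolymer(strand))  # -> 1 (no repeated nucleotide here)
print(gc_content(strand))       # -> 0.6
```

A run-length-limited, balanced code guarantees both quantities stay within chosen bounds for every codeword, rather than merely with high probability.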
CROSSTALK-RESILIENT CODING FOR HIGH DENSITY DIGITAL RECORDING
Increasing the track density in magnetic recording systems is very difficult
due to inter-track interference (ITI) caused by the magnetic field of adjacent
tracks. This work presents a two-track partial response class 4 magnetic
channel with linear and symmetrical ITI, and explores modulation codes, signal
processing methods, and error correction codes that mitigate the effects of
ITI.
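A minimal model of the channel just described can be sketched as follows, assuming the standard partial response class 4 target 1 - D^2 and an illustrative ITI fraction alpha (our notation; the thesis's exact model may differ):

```python
def pr4(x):
    """Partial-response class-4 target (1 - D^2) applied to one track."""
    return [x[i] - (x[i - 2] if i >= 2 else 0) for i in range(len(x))]

def two_track_readback(track_a, track_b, alpha):
    """Noiseless two-track PR4 readback with linear, symmetric ITI:
    each head also picks up a fraction alpha of the adjacent track."""
    ya = pr4(track_a)
    yb = pr4(track_b)
    ra = [a + alpha * b for a, b in zip(ya, yb)]
    rb = [b + alpha * a for a, b in zip(ya, yb)]
    return ra, rb
```

Setting alpha = 0 recovers two independent PR4 channels; at 40% ITI (alpha = 0.4), the adjacent-track contribution is large enough that the detector must account for it, which motivates the two-dimensional codes and the adaptive trellis described below.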
Recording codes were investigated, and a new class of two-dimensional
run-length limited recording codes is described. The new class of codes
controls the type of ITI and has been found to be about 10% more resilient to
ITI than conventional run-length limited codes. A new adaptive trellis is also
described that compensates for the effect of ITI; it gives gains of up to 5 dB
in signal-to-noise ratio (SNR) at 40% ITI. The new class of codes was also
found to be about 10% more resilient to ITI than conventional recording codes
when decoded with the new trellis.
Error correction coding methods were applied, and the use of Low Density
Parity Check (LDPC) codes was investigated. It was found that at high SNR,
conventional codes could perform as well as the new modulation codes in a
combined modulation and error correction coding scheme. Results suggest that
high-rate LDPC codes can mitigate the effect of ITI; however, the decoders
have convergence problems beyond 30% ITI.