
    Efficient Systematic Encoding of Non-binary VT Codes

    Varshamov-Tenengolts (VT) codes are a class of codes which can correct a single deletion or insertion with a linear-time decoder. This paper addresses the problem of efficient encoding of non-binary VT codes, defined over an alphabet of size q > 2. We propose a simple linear-time encoding method to systematically map binary message sequences onto VT codewords. The method provides a new lower bound on the size of q-ary VT codes of length n.
    Comment: This paper will appear in the proceedings of ISIT 201
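For context, here is a minimal Python sketch of the classical binary VT construction that this paper generalizes: membership in VT_a(n) is a single weighted checksum, and a deleted bit can be recovered because distinct codewords never share a length-(n-1) subsequence. This only illustrates the basic VT property; the paper's systematic non-binary encoder is not reproduced here, the brute-force re-insertion below stands in for the actual linear-time decoder, and all names are placeholders.

```python
from itertools import product

def vt_syndrome(x):
    """Weighted VT checksum: sum of i * x_i (1-indexed) modulo n + 1."""
    n = len(x)
    return sum(i * b for i, b in enumerate(x, start=1)) % (n + 1)

def vt_codewords(n, a=0):
    """Brute-force enumeration of the binary VT code VT_a(n) (small n only)."""
    return [x for x in product((0, 1), repeat=n) if vt_syndrome(x) == a]

def correct_single_deletion(y, n, a=0):
    """Re-insert one bit at every position and keep the unique candidate whose
    syndrome matches; single-deletion correction guarantees uniqueness."""
    candidates = {y[:i] + (b,) + y[i:] for i in range(n) for b in (0, 1)}
    matches = {x for x in candidates if vt_syndrome(x) == a}
    assert len(matches) == 1
    return matches.pop()

if __name__ == "__main__":
    code = vt_codewords(6, a=0)   # VT_0(6) has 10 codewords
    x = code[5]
    y = x[:2] + x[3:]             # channel deletes one bit
    assert correct_single_deletion(y, 6, a=0) == x
```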

    Deletion codes in the high-noise and high-rate regimes

    The noise model of deletions poses significant challenges in coding theory, with basic questions like the capacity of the binary deletion channel still being open. In this paper, we study the harder model of worst-case deletions, with a focus on constructing efficiently decodable codes for the two extreme regimes of high noise and high rate. Specifically, we construct polynomial-time decodable codes with the following trade-offs (for any eps > 0): (1) codes that can correct a fraction 1-eps of deletions with rate poly(eps) over an alphabet of size poly(1/eps); (2) binary codes of rate 1-O~(sqrt(eps)) that can correct a fraction eps of deletions; and (3) binary codes that can be list decoded from a fraction (1/2-eps) of deletions with rate poly(eps). Our work is the first to achieve the qualitative goals of correcting a deletion fraction approaching 1 over bounded alphabets, and correcting a constant fraction of bit deletions with rate approaching 1. The above results bring our understanding of deletion code constructions in these regimes to a similar level as worst-case errors.
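The constructions above are algebraic and efficiently decodable; purely as an illustration of what "correcting worst-case deletions" means operationally, the hypothetical sketch below decodes by checking which codewords contain the received string as a subsequence. This check is exponential in the code size, which is exactly what the paper's polynomial-time decoders avoid.

```python
def is_subsequence(y, x):
    """True if y can be obtained from x by deletions only."""
    it = iter(x)
    return all(symbol in it for symbol in y)

def decode_worst_case_deletions(y, code):
    """Brute-force decoding from adversarial deletions: if exactly one
    codeword contains y as a subsequence, return it; otherwise the deletion
    pattern exceeded what the code can correct."""
    matches = [x for x in code if is_subsequence(y, x)]
    return matches[0] if len(matches) == 1 else None
```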

    Error correction for asynchronous communication and probabilistic burst deletion channels

    Short-range wireless communication with low-power, small-size sensors has been broadly applied in many areas such as environmental observation and biomedical and health care monitoring. However, such applications require a wireless sensor operating in always-on mode, which significantly increases the power consumption of the sensors. Asynchronous communication is an emerging low-power approach for these applications because it offers the potential of significant power savings when recording sparse continuous-time signals, a smaller hardware footprint, and lower circuit complexity compared to Nyquist-based synchronous signal processing. In this dissertation, classical Nyquist-based synchronous signal sampling is replaced by asynchronous sampling strategies, namely level-crossing (LC) sampling and time encoding. Novel forward error correction schemes for sensor communication based on these sampling strategies are proposed, where the dominant errors consist of pulse deletions and insertions, and where encoding is required to take place in an instantaneous fashion. For LC sampling, the presented scheme consists of a combination of an outer systematic convolutional code, an embedded inner marker code, and power-efficient frequency-shift keying modulation at the sensor node. Decoding first applies a maximum a posteriori (MAP) decoder for the inner marker code, which achieves synchronization over the insertion and deletion channel, followed by MAP decoding of the outer convolutional code. By iteratively decoding marker and convolutional codes along with interleaving, a significant reduction in the expected end-to-end distortion between the original and reconstructed signals can be obtained compared to non-iterative processing. Besides investigating the rate trade-off between marker and convolutional codes, it is shown that residual redundancy in the asynchronously sampled source signal can be successfully exploited in combination with redundancy from a marker code alone. This provides a new low-complexity alternative for deletion and insertion error correction compared to using explicit redundancy. For time encoding, only the pulse timing is of relevance at the receiver, and the outer channel code is replaced by a quantizer representing the relative position of the pulse timing. Numerical simulations show that LC sampling outperforms time encoding in the low to moderate signal-to-noise ratio regime by a large margin.
    In the second part of this dissertation, a new burst deletion correction scheme tailored to low-latency applications such as high-read/write-speed non-volatile memory is proposed. An exemplary application is racetrack memory, where each element of information is stored in a cell and data reading is performed by many read ports or heads. In order to read the information, multiple cells shift towards their closest heads in the same direction and at the same speed, so that a block of bits (i.e., a non-binary symbol) is read by multiple heads in parallel during a shift of the cells. If the cells shift by more than one cell location, consecutive (burst) non-binary symbol deletions occur. In practical systems, the maximal length of consecutive non-binary deletions is limited. Existing schemes for this scenario leverage non-binary de Bruijn sequences to perfectly locate deletions. In contrast, this work proposes binary marker patterns in combination with a new soft-decision decoding scheme. In this scheme, deletions are soft-located by assigning a posteriori probabilities to the location of every burst deletion event and are replaced by erasures; the resulting errors are then corrected by an outer channel code. Such a scheme has the advantage over non-binary de Bruijn sequences that it generally increases the communication rate.
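A minimal hard-decision sketch of the marker idea: periodically inserted markers reveal where a burst deletion has shifted the stream. The actual scheme described above makes soft/MAP decisions and hands erasures to an outer code; the marker pattern, block length, and function names below are illustrative assumptions only.

```python
import random

MARKER = (0, 1, 1)   # hypothetical marker pattern appended after every data block
BLOCK = 8            # hypothetical number of data bits per block

def insert_markers(bits):
    """Append the marker pattern after every BLOCK data bits."""
    out = []
    for i in range(0, len(bits), BLOCK):
        out += bits[i:i + BLOCK] + list(MARKER)
    return out

def burst_delete(stream, start, length):
    """Channel model: delete `length` consecutive symbols starting at `start`."""
    return stream[:start] + stream[start + length:]

def locate_damaged_block(received, n_blocks):
    """Hard-decision stand-in for the soft marker decoder: report the first
    block whose trailing marker is no longer where it should be. The real
    scheme instead assigns a posteriori probabilities to deletion locations
    and erases the affected symbols for the outer code."""
    period = BLOCK + len(MARKER)
    for b in range(n_blocks):
        start = b * period
        if tuple(received[start + BLOCK:start + period]) != MARKER:
            return b
    return None  # every marker lines up; assume nothing was deleted

if __name__ == "__main__":
    data = [random.randint(0, 1) for _ in range(4 * BLOCK)]
    tx = insert_markers(data)
    rx = burst_delete(tx, start=2 * (BLOCK + len(MARKER)) + 3, length=2)
    print(locate_damaged_block(rx, n_blocks=4))  # prints 2: the burst in block 2
                                                 # shifts its marker out of place;
                                                 # in general shifted data can mimic
                                                 # a marker, hence soft decisions
```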

    Capacity Bounds and Concatenated Codes Over Segmented Deletion Channels

    We develop an information theoretic characterization and a practical coding approach for segmented deletion channels. Compared to channels with independent and identically distributed (i.i.d.) deletions, where each bit is independently deleted with an equal probability, the segmentation assumption imposes certain constraints, i.e., in a block of bits of a certain length, only a limited number of deletions are allowed to occur. This channel model has recently been proposed and is motivated by the fact that for practical systems, when a deletion error occurs, it is more likely that the next one will not appear very soon. We first argue that such channels are information stable, hence their channel capacity exists. Then, we introduce several upper and lower bounds obtained with two different methods in an attempt to understand the channel capacity behavior. The first scheme utilizes certain information provided to the transmitter and/or receiver, while the second one explores the asymptotic behavior of the bounds when the average bit deletion rate is small. In the second part of the paper, we consider a practical channel coding approach over a segmented deletion channel. Specifically, we utilize outer LDPC codes concatenated with inner marker codes, and develop suitable channel detection algorithms for this scenario. Different maximum a posteriori (MAP) based channel synchronization algorithms operating at the bit and symbol levels are introduced, and specific LDPC code designs are explored. Simulation results clearly indicate the advantages of the proposed approach. In particular, for the entire range of deletion probabilities less than unity, our scheme offers a significantly larger transmission rate compared to the other existing solutions in the literature.
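As a concrete reading of the segmentation constraint, the sketch below simulates one common instantiation of the model: at most one deletion per consecutive segment of bits. The segment length, deletion probability, and the "at most one per segment" choice are illustrative assumptions, not the paper's exact formal model.

```python
import random

def segmented_deletion_channel(bits, segment_len, p_del, rng=random):
    """Split the input into consecutive segments of `segment_len` bits; within
    each full segment, delete at most one bit (with probability `p_del`, at a
    uniformly chosen position)."""
    out = []
    for i in range(0, len(bits), segment_len):
        segment = list(bits[i:i + segment_len])
        if len(segment) == segment_len and rng.random() < p_del:
            del segment[rng.randrange(segment_len)]
        out.extend(segment)
    return out

# Example: received = segmented_deletion_channel([0, 1] * 32, segment_len=8, p_del=0.1)
```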

    Spectrum of Sizes for Perfect Deletion-Correcting Codes

    One peculiarity with deletion-correcting codes is that perfect t-deletion-correcting codes of the same length over the same alphabet can have different numbers of codewords, because the balls of radius t with respect to the Levenshtein distance may be of different sizes. There is interest, therefore, in determining all possible sizes of a perfect t-deletion-correcting code, given the length n and the alphabet size q. In this paper, we determine completely the spectrum of possible sizes for perfect q-ary 1-deletion-correcting codes of length three for all q, and perfect q-ary 2-deletion-correcting codes of length four for almost all q, leaving only a small finite number of cases in doubt.
    Comment: 23 pages
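The size variation mentioned above is easy to see by computing deletion balls directly. This is a small illustrative sketch; it counts only deletions, which is the ball relevant for deletion-correcting codes.

```python
from itertools import combinations

def deletion_ball(word, t):
    """All distinct words obtained from `word` by deleting exactly t symbols
    (the radius-t deletion ball)."""
    n = len(word)
    return {tuple(word[i] for i in range(n) if i not in drop)
            for drop in combinations(range(n), t)}

# Balls of the same radius can differ in size, which is why perfect
# t-deletion-correcting codes of the same length can have different sizes:
print(len(deletion_ball((0, 0, 0), 1)))  # 1: only (0, 0) remains
print(len(deletion_ball((0, 1, 0), 1)))  # 3: (1, 0), (0, 0), (0, 1)
```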