
    Non-asymptotic Upper Bounds for Deletion Correcting Codes

    Explicit non-asymptotic upper bounds on the sizes of multiple-deletion correcting codes are presented. In particular, the largest single-deletion correcting code for a $q$-ary alphabet and string length $n$ is shown to have size at most $\frac{q^n-q}{(q-1)(n-1)}$. An improved bound on the asymptotic rate function is obtained as a corollary. Upper bounds are also derived on the sizes of codes for a constrained source that does not necessarily comprise all strings of a particular length, and this idea is demonstrated by application to sets of run-length limited strings. The problem of finding the largest deletion correcting code is modeled as a matching problem on a hypergraph and formulated as an integer linear program. The upper bound is obtained by constructing a feasible point for the dual of the linear programming relaxation of this integer linear program. The non-asymptotic bounds derived imply the known asymptotic bounds of Levenshtein and Tenengolts and improve on known non-asymptotic bounds. Numerical results support the conjecture that in the binary case, the Varshamov-Tenengolts codes are the largest single-deletion correcting codes. Comment: 18 pages, 4 figures.
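
    The binary case ($q=2$) is easy to check directly for small $n$. The Python sketch below is not from the paper; it brute-forces the single-deletion-correcting property of the Varshamov-Tenengolts code $VT_0(n) = \{x \in \{0,1\}^n : \sum_i i\,x_i \equiv 0 \pmod{n+1}\}$ and compares its size with the bound $\frac{q^n-q}{(q-1)(n-1)}$ quoted above.

        from itertools import product
        from math import floor

        def deletion_ball(x):
            # All strings obtainable from x by deleting exactly one symbol.
            return {x[:i] + x[i + 1:] for i in range(len(x))}

        def is_single_deletion_correcting(code):
            # A code corrects one deletion iff the deletion balls of distinct
            # codewords are pairwise disjoint.
            seen = {}
            for c in code:
                for s in deletion_ball(c):
                    if s in seen and seen[s] != c:
                        return False
                    seen[s] = c
            return True

        def vt_code(n, a=0):
            # Binary Varshamov-Tenengolts code VT_a(n).
            return [x for x in product((0, 1), repeat=n)
                    if sum((i + 1) * b for i, b in enumerate(x)) % (n + 1) == a]

        q = 2
        for n in range(3, 9):
            code = vt_code(n)
            bound = (q ** n - q) / ((q - 1) * (n - 1))   # upper bound from the abstract
            assert is_single_deletion_correcting(code)
            print(f"n={n}: |VT_0(n)| = {len(code):4d}, bound = {floor(bound)}")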

    Codes for Asymmetric Limited-Magnitude Errors With Application to Multilevel Flash Memories

    Several physical effects that limit the reliability and performance of multilevel flash memories induce errors that have low magnitudes and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over $q$-ary channels. We propose code constructions and bounds for such channels when the number of errors is bounded by $t$ and the error magnitudes are bounded by $\ell$. The constructions utilize known codes for symmetric errors, over small alphabets, to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet, whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. Moreover, the size of the codes is shown to exceed the sizes of known codes (for related error models), and asymptotic rate-optimality results are proved. Extensions of the construction are proposed to accommodate variations on the error model and to include systematic codes as a benefit to practical implementation.
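
    A toy instance of the recipe described above, assuming the natural reduction to residues modulo $\ell+1$: codewords are the words of $\mathbb{Z}_Q^n$ whose residues modulo $\ell+1$ form a codeword of a small-alphabet symmetric-error-correcting code (here a length-3 repetition code decoded by majority vote). All parameters and helper names below are illustrative choices, not the paper's.

        import random
        from collections import Counter
        from itertools import product

        # Illustrative parameters (not from the paper).
        Q   = 8        # large cell alphabet Z_Q
        ELL = 2        # maximum error magnitude
        N   = 3        # block length
        M   = ELL + 1  # small alphabet for the inner symmetric-error code

        def inner_decode(residues):
            # Length-3 repetition code over Z_M: majority vote corrects one symbol
            # error; it stands in for any t-error-correcting code over Z_M.
            value, _ = Counter(residues).most_common(1)[0]
            return (value,) * N

        def is_codeword(x):
            # x is a codeword iff its residues mod M lie in the inner code
            # (here: all residues equal).
            return all(xi % M == x[0] % M for xi in x)

        def decode(y):
            # Correct one asymmetric error of magnitude at most ELL.
            residues = tuple(yi % M for yi in y)
            chi = inner_decode(residues)                 # residues of the sent word
            err = tuple((ri - ci) % M for ri, ci in zip(residues, chi))
            return tuple(yi - ei for yi, ei in zip(y, err))  # magnitudes <= ELL < M, so err is exact

        random.seed(1)
        codebook = [x for x in product(range(Q), repeat=N) if is_codeword(x)]
        for _ in range(1000):
            x = random.choice(codebook)
            e = [0] * N
            pos = random.randrange(N)
            e[pos] = random.randint(0, min(ELL, Q - 1 - x[pos]))  # asymmetric, limited magnitude, no overflow
            assert decode(tuple(xi + ei for xi, ei in zip(x, e))) == x
        print(f"{len(codebook)} codewords; all single limited-magnitude errors corrected")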

    Systematic Error-Correcting Codes for Rank Modulation

    The rank-modulation scheme has recently been proposed for efficiently storing data in nonvolatile memories. Error-correcting codes are essential for rank modulation; however, existing results have been limited. In this work we explore a new approach: systematic error-correcting codes for rank modulation. Systematic codes have the benefits of enabling efficient information retrieval and potentially supporting more efficient encoding and decoding procedures. We study systematic codes for rank modulation under Kendall's $\tau$-metric as well as under the $\ell_\infty$-metric. In Kendall's $\tau$-metric we present $[k+2,k,3]$-systematic codes for correcting one error, which have optimal rates unless systematic perfect codes exist. We also study the design of multi-error-correcting codes and provide two explicit constructions, one of which yields $[n+1,k+1,2t+2]$ systematic codes with redundancy at most $2t+1$. We use non-constructive arguments to show the existence of $[n,k,n-k]$-systematic codes for general parameters. Furthermore, we prove that for rank modulation, systematic codes achieve the same capacity as general error-correcting codes. Finally, in the $\ell_\infty$-metric we construct two $[n,k,d]$ systematic multi-error-correcting codes, the first for the case of $d=O(1)$ and the second for $d=\Theta(n)$. In the latter case, the codes have the same asymptotic rate as the best codes currently known in this metric.
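
    For reference, the two permutation metrics named above are easy to compute. The sketch below (mine, not the paper's) implements Kendall's $\tau$-distance as the number of pairwise order disagreements (equivalently, the minimum number of adjacent transpositions) and the $\ell_\infty$-distance as the Chebyshev distance between permutations viewed as vectors, with a brute-force sanity check on $S_4$.

        from itertools import combinations, permutations

        def kendall_tau(p, q):
            # Number of pairs appearing in opposite relative order in p and q;
            # this equals the minimum number of adjacent transpositions turning p into q.
            pos_p = {v: i for i, v in enumerate(p)}
            pos_q = {v: i for i, v in enumerate(q)}
            return sum((pos_p[a] - pos_p[b]) * (pos_q[a] - pos_q[b]) < 0
                       for a, b in combinations(p, 2))

        def l_inf(p, q):
            # Chebyshev distance between permutations viewed as integer vectors.
            return max(abs(a - b) for a, b in zip(p, q))

        perms = list(permutations(range(4)))
        assert kendall_tau((0, 1, 2, 3), (0, 2, 1, 3)) == 1               # one adjacent transposition
        assert max(kendall_tau(p, q) for p in perms for q in perms) == 6  # n(n-1)/2 for n = 4
        assert max(l_inf(p, q) for p in perms for q in perms) == 3        # n-1 for n = 4
        print("distance checks on S_4 passed")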

    Synchronization Strings: Explicit Constructions, Local Decoding, and Applications

    This paper gives new results for synchronization strings, a powerful combinatorial object that allows one to deal efficiently with insertions and deletions in various communication settings: • We give a deterministic, linear-time synchronization string construction, improving over an $O(n^5)$-time randomized construction. Independently of this work, a deterministic $O(n\log^2\log n)$-time construction was just put on arXiv by Cheng, Li, and Wu. We also give a deterministic linear-time construction of an infinite synchronization string, which was not known to be computable before. Both constructions are highly explicit, i.e., the $i^{th}$ symbol can be computed in $O(\log i)$ time. • This paper also introduces a generalized notion we call long-distance synchronization strings that allow for local and very fast decoding. In particular, only $O(\log^3 n)$ time and access to logarithmically many symbols is required to decode any index. We give several applications for these results: • For any $\delta > 0$ we provide an insdel correcting code with rate $1-\delta-\epsilon$ which can correct any $O(\delta)$ fraction of insdel errors in $O(n\log^3 n)$ time. This near-linear computational efficiency is surprising given that we do not even know how to compute the (edit) distance between the decoding input and output in sub-quadratic time. We show that such codes can not only efficiently recover from a $\delta$ fraction of insdel errors but, similar to [Schulman, Zuckerman; TransInf'99], also from any $O(\delta/\log n)$ fraction of block transpositions and replications. • We show that high explicitness and local decoding allow for infinite channel simulations with exponentially smaller memory and decoding time requirements. These simulations can be used to give the first near-linear-time interactive coding scheme for insdel errors.
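
    My reading of the underlying definition (introduced in the companion paper below) is that a string $S$ is an $\epsilon$-synchronization string if $ED(S[i,j), S[j,k)) > (1-\epsilon)(k-i)$ for all $i < j < k$, where $ED$ counts insertions and deletions. The brute-force checker below only illustrates that property, not the paper's constructions, and is usable only for short strings.

        def ed(a, b):
            # Insertion/deletion edit distance: |a| + |b| - 2 * LCS(a, b).
            m, n = len(a), len(b)
            lcs = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    lcs[i][j] = (lcs[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                                 else max(lcs[i - 1][j], lcs[i][j - 1]))
            return m + n - 2 * lcs[m][n]

        def is_eps_sync_string(s, eps):
            # Check ED(s[i:j], s[j:k]) > (1 - eps) * (k - i) for all i < j < k.
            n = len(s)
            return all(ed(s[i:j], s[j:k]) > (1 - eps) * (k - i)
                       for i in range(n)
                       for j in range(i + 1, n)
                       for k in range(j + 1, n + 1))

        # A string containing an immediate repeat can never qualify: for s = "abab",
        # taking i, j, k = 0, 2, 4 gives ED("ab", "ab") = 0.
        assert not is_eps_sync_string("abab", 0.5)
        print(is_eps_sync_string("abcacbacbcab", 0.75))  # report only; no claim about the outcome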

    Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound

    We introduce synchronization strings as a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to deal with than the commonly considered half-errors, i.e., symbol corruptions and erasures. For every $\epsilon > 0$, synchronization strings allow one to index a sequence with an alphabet of size $\epsilon^{-O(1)}$ such that one can efficiently transform $k$ synchronization errors into $(1+\epsilon)k$ half-errors. This powerful new technique has many applications. In this paper, we focus on designing insdel codes, i.e., error-correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both half-errors and synchronization errors have been intensely studied, the latter has largely resisted progress. Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed by Schulman and Zuckerman. Insdel codes for asymptotically large or small noise rates were given in 2016 by Guruswami et al., but these codes are still polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A direct application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with a slightly larger alphabet size. This instantly transfers much of the highly developed understanding for regular ECCs over large constant alphabets into the realm of insdel codes. Most notably, we obtain efficient insdel codes which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound for the complete noise spectrum.
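
    The indexing step itself is mechanical and can be sketched in a few lines: each payload symbol is paired with one symbol of an index sequence, and the receiver uses the index halves to guess original positions, turning unresolved positions into erasures for the outer ECC. In the sketch below the index sequence and the greedy repositioning rule are placeholders of my own; the paper requires the index sequence to be an actual $\epsilon$-synchronization string and uses a considerably more careful repositioning algorithm to guarantee the $(1+\epsilon)k$ half-error bound.

        def index_with(codeword, index_seq):
            # Attach one index symbol to each payload symbol (the "indexing" step).
            assert len(codeword) == len(index_seq)
            return list(zip(codeword, index_seq))

        def naive_reposition(received, index_seq):
            # Placeholder decoder: greedily match each received index symbol to the
            # next unused position of the index sequence; unmatched positions become
            # erasures ('?') for the outer ECC. This greedy rule is NOT the paper's
            # algorithm and carries no (1 + eps)k guarantee.
            guess = ['?'] * len(index_seq)
            j = 0
            for payload, idx in received:
                while j < len(index_seq) and index_seq[j] != idx:
                    j += 1
                if j < len(index_seq):
                    guess[j] = payload
                    j += 1
            return guess

        index_seq = "0120120120"        # stand-in index sequence, NOT a real synchronization string
        sent = index_with("HELLOWORLD", index_seq)
        received = sent[:3] + sent[4:]  # channel deletes the 4th indexed symbol
        print("".join(naive_reposition(received, index_seq)))  # -> HEL?OWORLD: one deletion became one erasure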

    Correcting Charge-Constrained Errors in the Rank-Modulation Scheme

    We investigate error-correcting codes for the rank-modulation scheme with an application to flash memory devices. In this scheme, a set of $n$ cells stores information in the permutation induced by the different charge levels of the individual cells. The resulting scheme eliminates the need for discrete cell levels, overcomes overshoot errors when programming cells (a serious problem that reduces the writing speed), and mitigates the problem of asymmetric errors. In this paper, we study the properties of error-correcting codes for charge-constrained errors in the rank-modulation scheme. In this error model the number of errors corresponds to the minimal number of adjacent transpositions required to change a given stored permutation into another, erroneous one, a distance measure known as Kendall's $\tau$-distance. We show bounds on the size of such codes and use metric-embedding techniques to give constructions which translate a wealth of knowledge of codes in the Lee metric to codes over permutations in Kendall's $\tau$-metric. Specifically, the one-error-correcting codes we construct are at least half the size given by the ball-packing upper bound.
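
    The metric-embedding idea can be illustrated with inversion vectors, assuming the standard definition $v_k(\pi) = |\{j < k : j \text{ appears after } k \text{ in } \pi\}|$: a single adjacent transposition changes exactly one coordinate of $v$ by one, so the $\ell_1$ (Lee-type) distance between inversion vectors lower-bounds the Kendall $\tau$-distance, which is what lets $\ell_1$/Lee-metric code constructions be pulled back to permutations. The exhaustive check over $S_4$ below is my own sanity test, not the paper's construction.

        from itertools import combinations, permutations

        def kendall_tau(p, q):
            # Minimum number of adjacent transpositions turning p into q.
            pos_p = {v: i for i, v in enumerate(p)}
            pos_q = {v: i for i, v in enumerate(q)}
            return sum((pos_p[a] - pos_p[b]) * (pos_q[a] - pos_q[b]) < 0
                       for a, b in combinations(p, 2))

        def inversion_vector(p):
            # v[k] = number of elements smaller than k that appear to the right of k;
            # coordinate k ranges over 0..k, so v lives in a mixed-radix integer box.
            pos = {v: i for i, v in enumerate(p)}
            return tuple(sum(pos[j] > pos[k] for j in range(k)) for k in range(len(p)))

        perms = list(permutations(range(4)))
        for p in perms:
            for q in perms:
                l1 = sum(abs(a - b) for a, b in zip(inversion_vector(p), inversion_vector(q)))
                assert l1 <= kendall_tau(p, q)   # the embedding never over-estimates the distance
        print("L1 distance of inversion vectors lower-bounds Kendall tau on S_4")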

    Constructions of Rank Modulation Codes

    Rank modulation is a way of encoding information to correct errors in flash memory devices as well as impulse noise in transmission lines. Modeling rank modulation involves the construction of packings of the space of permutations equipped with the Kendall tau distance. We present several general constructions of codes in permutations that cover a broad range of code parameters. In particular, we show a number of ways in which conventional error-correcting codes can be modified to correct errors in the Kendall space. The codes that we construct afford simple encoding and decoding algorithms of essentially the same complexity as required to correct errors in the Hamming metric. For instance, from binary BCH codes we obtain codes correcting $t$ Kendall errors in $n$ memory cells that support on the order of $n!/(\log_2 n!)^t$ messages, for any constant $t=1,2,\ldots$ We also construct families of codes that correct a number of errors that grows with $n$ at varying rates, from $\Theta(n)$ to $\Theta(n^2)$. One of our constructions gives rise to a family of rank modulation codes for which the trade-off between the number of messages and the number of correctable Kendall errors approaches the optimal scaling rate. Finally, we list a number of possibilities for constructing codes of finite length and give examples of rank modulation codes with specific parameters. Comment: Submitted to IEEE Transactions on Information Theory.
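
    For context on how large such codes can possibly be, the ball-packing bound in the Kendall metric is easy to compute for small parameters: the ball size does not depend on the center (the Cayley graph of adjacent transpositions is vertex-transitive), so a code correcting $t$ Kendall errors has at most $\lfloor n!/|B_t| \rfloor$ codewords. The sketch below computes $|B_t|$ by breadth-first search; it is a generic sanity check, not a construction from the paper.

        from math import factorial

        def kendall_ball_size(n, r):
            # |B_r|: number of permutations within r adjacent transpositions of a
            # fixed permutation, found by breadth-first search over unit steps.
            start = tuple(range(n))
            frontier, seen = {start}, {start}
            for _ in range(r):
                nxt = set()
                for p in frontier:
                    for i in range(n - 1):
                        q = list(p)
                        q[i], q[i + 1] = q[i + 1], q[i]
                        q = tuple(q)
                        if q not in seen:
                            seen.add(q)
                            nxt.add(q)
                frontier = nxt
            return len(seen)

        for n in range(3, 7):
            for t in (1, 2):
                bound = factorial(n) // kendall_ball_size(n, t)   # ball-packing upper bound
                print(f"n={n}, t={t}: at most {bound} codewords")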
