
    Codes for Asymmetric Limited-Magnitude Errors With Application to Multilevel Flash Memories

    Several physical effects that limit the reliability and performance of multilevel flash memories induce errors that have low magnitude and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions and bounds for such channels when the number of errors is bounded by t and the error magnitudes are bounded by ℓ. The constructions utilize known codes for symmetric errors over small alphabets to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet, whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. Moreover, the size of the codes is shown to exceed the sizes of known codes for related error models, and asymptotic rate-optimality results are proved. Extensions of the construction are proposed to accommodate variations on the error model and to include systematic codes, a benefit for practical implementation.
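
    The construction can be made concrete with a minimal sketch, assuming a toy instance of my own choosing: cells over Z_8, a single asymmetric error (t = 1) of magnitude at most ℓ = 1, and a binary [7,4] Hamming code in the role of the small-alphabet symmetric-error code. Decoding touches only the residues modulo ℓ + 1, so its complexity is independent of the cell alphabet size q, which is the point of the reduction.

    ```python
    import numpy as np

    # Toy instance of the reduction: a word x in Z_Q^7 is a codeword iff
    # (x mod 2) lies in the binary [7,4] Hamming code, which then corrects
    # a single magnitude-1 asymmetric error after projection.
    Q, L = 8, 1
    # Parity-check matrix of the [7,4] Hamming code: column i is binary i+1.
    H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in range(3)])

    def decode(y):
        """Correct one asymmetric error of magnitude <= L in y over Z_Q."""
        psi = y % (L + 1)                 # project onto the small alphabet
        syn = H @ psi % 2                 # syndrome of the projected error
        e = np.zeros(7, dtype=int)
        pos = int(syn[0] + 2 * syn[1] + 4 * syn[2])
        if pos:                           # for L = 1, error mod 2 = error
            e[pos - 1] = 1
        return (y - e) % Q                # undo the upward magnitude-1 error

    x = np.array([2, 4, 6, 0, 2, 4, 6])   # x mod 2 = 0000000, a codeword
    y = x.copy(); y[3] += 1               # one asymmetric magnitude-1 error
    assert np.array_equal(decode(y), x)
    ```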

    Tables of subspace codes

    One of the main problems of subspace coding asks for the maximum possible cardinality of a subspace code with minimum distance at least $d$ over $\mathbb{F}_q^n$, where the dimensions of the codewords, which are vector spaces, are contained in $K\subseteq\{0,1,\dots,n\}$. In the special case $K=\{k\}$ one speaks of constant dimension codes. Since this (still) emerging field is flourishing on the one hand, and has many connections to classical objects from Galois geometry on the other, it is difficult to maintain or obtain an overview of the current state of knowledge. To this end we have implemented an online database of the results known (at least to us) at \url{subspacecodes.uni-bayreuth.de}. The aim of this recurrently updated technical report is to provide a user guide for this tool in research projects and to describe the theoretic and algorithmic knowledge implemented so far.
    Comment: 44 pages, 6 tables, 7 screenshots
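
    The distance meant here is the standard subspace distance $d_S(U,W)=\dim(U)+\dim(W)-2\dim(U\cap W)$. A minimal sketch (function names mine, $q=2$ for simplicity) computes it from ranks of generator matrices, using $\dim(U\cap W)=\dim(U)+\dim(W)-\dim(U+W)$:

    ```python
    def rank_gf2(rows):
        """Rank over F_2 of a list of 0/1 vectors, by Gaussian elimination
        on rows packed into integer bitmasks."""
        m = [int("".join(map(str, r)), 2) for r in rows]
        rank = 0
        for bit in reversed(range(len(rows[0]))):
            idx = next((i for i, r in enumerate(m) if (r >> bit) & 1), None)
            if idx is None:
                continue
            pivot = m.pop(idx)
            m = [r ^ pivot if (r >> bit) & 1 else r for r in m]
            rank += 1
        return rank

    def subspace_distance(U, W):
        """d_S(U, W) = 2*dim(U + W) - dim(U) - dim(W), over F_2."""
        # U + W concatenates generator lists, spanning the sum of subspaces.
        return 2 * rank_gf2(U + W) - rank_gf2(U) - rank_gf2(W)

    # Two planes in F_2^4 meeting only in 0 are at the maximal distance 4.
    U = [[1, 0, 0, 0], [0, 1, 0, 0]]
    W = [[0, 0, 1, 0], [0, 0, 0, 1]]
    print(subspace_distance(U, W))  # -> 4
    ```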

    Non-asymptotic Upper Bounds for Deletion Correcting Codes

    Explicit non-asymptotic upper bounds on the sizes of multiple-deletion correcting codes are presented. In particular, the largest single-deletion correcting code for a $q$-ary alphabet and string length $n$ is shown to have size at most $\frac{q^n-q}{(q-1)(n-1)}$. An improved bound on the asymptotic rate function is obtained as a corollary. Upper bounds are also derived on the sizes of codes for a constrained source that does not necessarily comprise all strings of a particular length, and this idea is demonstrated by application to sets of run-length-limited strings. The problem of finding the largest deletion correcting code is modeled as a matching problem on a hypergraph and formulated as an integer linear program. The upper bound is obtained by constructing a feasible point for the dual of the linear programming relaxation of this integer linear program. The non-asymptotic bounds derived imply the known asymptotic bounds of Levenshtein and Tenengolts and improve on known non-asymptotic bounds. Numerical results support the conjecture that in the binary case, the Varshamov-Tenengolts codes are the largest single-deletion correcting codes.
    Comment: 18 pages, 4 figures
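
    For the binary case behind that conjecture, a minimal sketch (parameters mine): enumerate the Varshamov-Tenengolts code $VT_0(n)=\{x\in\{0,1\}^n:\sum_i i\,x_i\equiv 0 \pmod{n+1}\}$, verify by brute force that its single-deletion balls are pairwise disjoint, and compare its size against the bound above, which for $q=2$ reads $(2^n-2)/(n-1)$:

    ```python
    from itertools import product

    def vt_code(n):
        """Binary Varshamov-Tenengolts code VT_0(n): words whose
        1-indexed weighted sum is divisible by n + 1."""
        return [x for x in product((0, 1), repeat=n)
                if sum(i * b for i, b in enumerate(x, 1)) % (n + 1) == 0]

    def deletion_ball(x):
        """All strings obtainable from x by deleting one symbol."""
        return {x[:i] + x[i + 1:] for i in range(len(x))}

    n = 8
    code = vt_code(n)
    balls = [deletion_ball(x) for x in code]
    # Single-deletion correction <=> deletion balls pairwise disjoint.
    assert all(balls[i].isdisjoint(balls[j])
               for i in range(len(code)) for j in range(i + 1, len(code)))
    print(len(code), (2 ** n - 2) // (n - 1))  # |VT_0(8)| vs. bound (floored)
    ```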

    The Error-Pattern-Correcting Turbo Equalizer

    The error-pattern-correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low-Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low-Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern-correcting code (TE-EPCC) exhibits a much lower bit-error-rate (BER) floor than the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.
    Comment: This work has been submitted to the special issue of the IEEE Transactions on Information Theory titled "Facets of Coding Theory: from Algorithms to Networks". This work was supported in part by NSF Theoretical Foundation Grant 0728676.
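
    The TE-EPCC itself is too involved for a snippet, but the trellis it operates on is easy to show. Below is a sketch, entirely my own toy with assumed parameters, of maximum-likelihood Viterbi detection over a generalized two-tap ISI channel h(D) = 1 + aD, the setting in which the BER-floor bound is developed; the dominant error events targeted by the EPCC are the low-Euclidean-weight error paths of exactly this trellis.

    ```python
    import numpy as np

    def viterbi_two_tap(y, a):
        """ML detection of +/-1 symbols through h(D) = 1 + a*D,
        assuming the channel starts in state -1."""
        INF = float("inf")
        cost = {-1: 0.0, +1: INF}        # path metrics per trellis state
        back = []
        for r in y:
            new, choice = {}, {}
            for s in (-1, +1):           # s: current symbol (= next state)
                # Noiseless channel output given previous symbol p: s + a*p.
                metric, prev = min((cost[p] + (r - (s + a * p)) ** 2, p)
                                   for p in (-1, +1))
                new[s], choice[s] = metric, prev
            cost = new
            back.append(choice)
        s = min(cost, key=cost.get)      # end state of the survivor path
        path = [s]
        for choice in reversed(back[1:]):
            s = choice[s]                # walk the survivor path backwards
            path.append(s)
        return path[::-1]

    rng = np.random.default_rng(0)
    x = rng.choice([-1, 1], size=200)    # random bipolar data
    a = 0.8                              # assumed second channel tap
    clean = x + a * np.concatenate(([-1], x[:-1]))
    y = clean + 0.3 * rng.standard_normal(200)
    print(np.mean(viterbi_two_tap(y, a) != x))  # detected symbol error rate
    ```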

    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low-photon-flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result: a 120 x 120 photodiode sensor on a 30 µm x 30 µm pitch with 40 ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings.
    Comment: 24 pages, 3 figures, 5 tables
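
    The decoding guarantee can be illustrated at toy scale; all parameters below are mine, not the paper's optimized instances. Each pixel is wired to a few TDCs via a column of a binary interconnection matrix; a set of simultaneously firing pixels produces the OR of its columns, and the design is decodable for firing sets up to size d exactly when all such ORs are distinct, making recovery a table lookup:

    ```python
    from itertools import combinations
    from random import Random

    n, m, d = 16, 16, 2          # pixels, TDCs, max simultaneous arrivals
    rng = Random(0)

    def signatures(cols):
        """Map the OR-signature of every firing set of size <= d to the
        set itself; return None if two distinct sets are confusable."""
        table = {}
        for k in range(1, d + 1):
            for S in combinations(range(n), k):
                sig = 0
                for i in S:
                    sig |= cols[i]       # OR of the wiring columns
                if sig in table:
                    return None
                table[sig] = S
        return table

    # Draw random wire-each-pixel-to-4-TDCs designs until one is decodable.
    while True:
        cols = [sum(1 << t for t in rng.sample(range(m), 4)) for _ in range(n)]
        table = signatures(cols)
        if table is not None:
            break

    # Decoding a readout is then a lookup on the OR-ed TDC activity pattern.
    print(table[cols[3] | cols[12]])     # -> (3, 12)
    ```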