    Codes for Asymmetric Limited-Magnitude Errors With Application to Multilevel Flash Memories

    Several physical effects that limit the reliability and performance of multilevel flash memories induce errors that have low magnitudes and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions and bounds for such channels when the number of errors is bounded by t and the error magnitudes are bounded by ℓ. The constructions utilize known codes for symmetric errors over small alphabets to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet, whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. Moreover, the size of the codes is shown to exceed the sizes of known codes for related error models, and asymptotic rate-optimality results are proved. Extensions of the construction are proposed to accommodate variations on the error model and to include systematic codes as a benefit to practical implementation.
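
    A minimal sketch of the reduction (toy parameters assumed for illustration: q = 8, maximum magnitude ℓ = 1, t = 1, with a length-3 binary repetition code as the small-alphabet inner code; this is not the paper's code):

        L = 1                                  # maximum error magnitude; outer alphabet here is q = 8
        INNER = [(0, 0, 0), (1, 1, 1)]         # length-3 binary repetition code, corrects t = 1 error

        def is_codeword(x):
            """x is an outer codeword iff its residues mod (L+1) form an inner codeword."""
            return tuple(s % (L + 1) for s in x) in INNER

        def decode(y):
            """Correct up to t asymmetric errors of magnitude at most L."""
            psi = tuple(s % (L + 1) for s in y)
            # Nearest-codeword (Hamming distance) decoding over the small alphabet.
            psi_hat = min(INNER, key=lambda c: sum(a != b for a, b in zip(c, psi)))
            eps = tuple((a - b) % (L + 1) for a, b in zip(psi, psi_hat))
            return tuple(s - e for s, e in zip(y, eps))   # subtract the recovered error

        x = (2, 4, 6)                 # a codeword: residues (0, 0, 0)
        y = (2, 5, 6)                 # one asymmetric error of magnitude 1
        assert is_codeword(x) and decode(y) == x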

    The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision

    We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of the two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. In analogy to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the search over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval.
    Comment: ICLR 2019 (Oral). Project page: http://nscl.csail.mit.edu
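
    As a rough illustration of the execution model (a hypothetical mini-DSL over a toy scene; the attribute names and operations are illustrative, not NS-CL's actual program syntax):

        # Hypothetical executor: run a symbolic program on an object-based scene.
        scene = [
            {"color": "red", "shape": "cube"},
            {"color": "red", "shape": "sphere"},
            {"color": "blue", "shape": "cube"},
        ]

        def execute(program, objects):
            result = objects
            for op, *args in program:
                if op == "filter":        # keep objects whose attribute matches a value
                    attr, value = args
                    result = [o for o in result if o[attr] == value]
                elif op == "count":       # terminal operation for "how many" questions
                    result = len(result)
            return result

        # "How many red cubes are there?"
        program = [("filter", "color", "red"), ("filter", "shape", "cube"), ("count",)]
        print(execute(program, scene))    # 1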

    On the decoder error probability for Reed-Solomon codes

    Upper bounds on the decoder error probability for Reed-Solomon codes are derived. By definition, decoder error occurs when the decoder finds a codeword other than the transmitted codeword; this is in contrast to decoder failure, which occurs when the decoder fails to find any codeword at all. The results imply, for example, that for a t-error-correcting Reed-Solomon code of length q - 1 over GF(q), if more than t errors occur, the probability of decoder error is less than 1/t!. In particular, for the Voyager Reed-Solomon code, the probability of decoder error given a word error is smaller than 3 × 10^-14. Thus, in a typical operating region with a word-error probability of 10^-5, the probability of undetected word error is about 3 × 10^-19.
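
    The quoted figures are easy to sanity-check (assuming, as is standard for Voyager, the (255, 223) Reed-Solomon code over GF(256) with t = 16):

        from math import factorial

        t = 16
        print(f"1/t! = {1 / factorial(t):.2e}")                # ~4.8e-14, same order as the 3e-14 bound

        p_word_error = 1e-5                                    # typical operating point cited above
        print(f"P(undetected) ~ {p_word_error * 3e-14:.1e}")   # ~3e-19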

    Coding for Errors and Erasures in Random Network Coding

    The problem of error control in random linear network coding is considered. A "noncoherent" or "channel oblivious" model is assumed, where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space V ∩ U is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style "list-1" minimum distance decoding algorithm is provided.
    Comment: This revised paper contains some minor changes and clarifications
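
    For intuition, the subspace metric d(U, V) = dim(U + V) - dim(U ∩ V) can be computed directly from basis matrices; the sketch below (illustrative only, over GF(2) with basis vectors stored as integer bitmasks) uses dim(U ∩ V) = dim U + dim V - dim(U + V):

        def gf2_rank(rows):
            """Rank over GF(2) of a list of basis vectors given as bitmasks."""
            rank, rows = 0, list(rows)
            while rows:
                row = rows.pop()
                if row == 0:
                    continue
                rank += 1
                lead = row.bit_length() - 1
                rows = [r ^ row if (r >> lead) & 1 else r for r in rows]
            return rank

        def subspace_distance(U, V):
            d_sum = gf2_rank(U + V)                     # dim(U + V): rank of the stacked bases
            d_int = gf2_rank(U) + gf2_rank(V) - d_sum   # dim(U ∩ V)
            return d_sum - d_int

        # Two 2-dimensional subspaces of GF(2)^4 meeting in a line:
        U = [0b1000, 0b0100]    # span{e1, e2}
        V = [0b1000, 0b0010]    # span{e1, e3}
        print(subspace_distance(U, V))                  # 2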

    Constructions and Noise Threshold of Hyperbolic Surface Codes

    We show how to obtain concrete constructions of homological quantum codes based on tilings of 2D surfaces with constant negative curvature (hyperbolic surfaces). This construction results in two-dimensional quantum codes whose tradeoff of encoding rate versus protection is more favorable than for the surface code. These surface codes would require variable-length connections between qubits, as determined by the hyperbolic geometry. We provide numerical estimates of the value of the noise threshold and logical error probability of these codes against independent X or Z noise, assuming noise-free error correction.
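
    The favorable rate follows from an Euler-characteristic count: an {r, s} tiling with n edges (qubits) has 2n/r faces and 2n/s vertices, so the code encodes k = 2g = 2 + n(1 - 2/r - 2/s) logical qubits. A small sketch of this count (the {5,4} and {7,3} tilings below are assumed examples, not the only choices):

        def rate(r, s, n=None):
            """Encoding rate k/n of an {r, s} hyperbolic surface code."""
            k_over_n = 1 - 2 / r - 2 / s              # asymptotic rate as n grows
            return k_over_n if n is None else k_over_n + 2 / n

        print(rate(5, 4))   # ~0.1   -> roughly n/10 logical qubits
        print(rate(7, 3))   # ~0.048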