
    Easily decoded error correcting codes

    This thesis is concerned with the decoding aspect of linear block error-correcting codes. When, as in most practical situations, the decoder cost is limited, an optimum code may be inferior in performance to a longer sub-optimum code of the same rate. This consideration is a central theme of the thesis. The best methods available for decoding short optimum codes and long B.C.H. codes are discussed; in some cases new decoding algorithms for these codes are introduced. Hashim's "Nested" codes are then analysed. The method of nesting codes given by Hashim is shown to be optimum, but the codes are seen to be less easily decoded than was previously thought. "Conjoined" codes are introduced. It is shown how two codes with identical numbers of information bits may be "conjoined" to give a code whose length and minimum distance equal the sums of the respective parameters of the constituent codes, but with the same number of information bits. A very simple decoding algorithm is given for these codes, whereby each constituent codeword is decoded and a decision is then made as to the correct decoding. A technique is given for adding more codewords to conjoined codes without unduly increasing the decoder complexity. Lastly, "Array" codes are described. They are formed by making parity checks over carefully chosen patterns of information bits arranged in a two-dimensional array. Various methods are given for choosing suitable patterns. Some of the resulting codes are self-orthogonal, and certain of these have parameters close to the optimum for such codes. A method is given for adding more codewords to array codes, derived from a process of augmentation known for product codes.
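The "conjoined" construction described in the abstract can be sketched in a few lines. The following is an illustrative toy example, not code from the thesis: two codes with the same number of information bits k are conjoined by concatenating the codewords that encode the same message, and the minimum distance is checked by brute force. The names `conjoin`-style helpers and the two toy codes are my own choices for illustration.

```python
def min_distance(code):
    """Minimum Hamming distance of a code given as a list of tuples."""
    return min(sum(a != b for a, b in zip(u, v))
               for u in code for v in code if u != v)

# Two toy codes, each with k = 1 information bit:
# A: length-3 repetition code (n=3, d=3)
# B: length-2 repetition code (n=2, d=2)
A = {0: (0, 0, 0), 1: (1, 1, 1)}
B = {0: (0, 0), 1: (1, 1)}

# Conjoined code: same message encoded by both, codewords concatenated.
# Length 3 + 2 = 5 and distance 3 + 2 = 5, but still k = 1.
C = [A[m] + B[m] for m in (0, 1)]

print(min_distance(list(A.values())))  # 3
print(min_distance(list(B.values())))  # 2
print(min_distance(C))                 # 5
```

The simple decoder the abstract mentions would decode each constituent half separately and then decide between the two candidate messages.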

    Scalable and Transparent Proofs over All Large Fields, via Elliptic Curves (ECFFT part II)

    Concretely efficient interactive oracle proofs (IOPs) are of interest due to their applications to scaling blockchains, their minimal security assumptions, and their potential future-proof resistance to quantum attacks. Scalable IOPs, in which prover time scales quasilinearly with the computation size and verifier time scales poly-logarithmically with it, have thus far been known to exist only over a set of finite fields of negligible density, namely, FFT-friendly fields that contain a multiplicative subgroup of size 2^k. Our main result is to show that scalable IOPs can be constructed over any sufficiently large finite field, of size at least quadratic in the length of the computation whose integrity is proved by the IOP. This result also has practical applications, because it reduces the proving and verification complexity of cryptographic statements that are naturally stated over pre-defined finite fields which are not FFT-friendly. Prior state-of-the-art scalable IOPs relied heavily on arithmetization via univariate polynomials and Reed-Solomon codes over FFT-friendly fields. To prove our main result and extend scalability to all large finite fields, we generalize the prior techniques and use new algebraic geometry codes evaluated on subgroups of elliptic curves (elliptic curve codes). We also show a new arithmetization scheme that uses the rich and well-understood group structure of elliptic curves to reduce statements of computational integrity to statements about the proximity of functions evaluated on the elliptic curve to the new family of elliptic curve codes. This paper continues our recent work that used elliptic curves and their subgroups to create FFT-based algorithms for polynomial manipulation over generic finite fields. However, our new IOP constructions force us to use new codes (ones that are not based on polynomials), and this poses a new set of challenges involving the more restricted automorphism group of these codes and the constraints of Riemann-Roch spaces of strictly positive genus.
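The reason elliptic curves help here can be illustrated with a minimal, hedged sketch (not the paper's construction): even when p - 1 offers no large power-of-two factor, so F_p itself is FFT-unfriendly, the group orders of elliptic curves over F_p vary across the Hasse interval p + 1 ± 2·sqrt(p), so one can search for a curve whose group order has a usefully smooth subgroup. The function and parameter names below are my own, and the point counting is naive brute force.

```python
def curve_order(p, a, b):
    """Number of points on y^2 = x^3 + a*x + b over F_p, including
    the point at infinity, counted by brute force."""
    # For each residue r, how many y in F_p satisfy y^2 = r?
    square_counts = {}
    for y in range(p):
        r = y * y % p
        square_counts[r] = square_counts.get(r, 0) + 1
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += square_counts.get(rhs, 0)
    return count

p = 43  # p - 1 = 42 = 2 * 3 * 7: only a single factor of two, FFT-unfriendly
orders = {(a, b): curve_order(p, a, b) for a in range(3) for b in range(1, 4)}
print(orders)  # distinct curves give distinct group orders near p + 1 = 44
```

Varying (a, b) sweeps out different group orders, which is the freedom the paper exploits to obtain FFT-like evaluation domains over arbitrary large fields.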

    Códigos Goppa (Goppa Codes)

    Whatever the channel through which information is transmitted, it will be noisy: the information received will not be identical to the information sent. Even in person-to-person communication there are environmental factors that prevent us from hearing every syllable of a conversation. Error-correcting codes address this problem by recovering the transmitted word from the received word, correcting as many errors as possible. Another major problem is the security of information transmission. Many encryption techniques have been considered secure until now, but with the development of quantum computers some of the most commonly used encryption algorithms become vulnerable, so it is necessary to turn to more robust alternatives, such as algorithms based on error-correcting codes. One code-based encryption algorithm is the one proposed by McEliece, which uses Goppa codes because they closely resemble random codes and admit an efficient decoding algorithm; this work is therefore the link between error-correcting codes and encryption methods. The title of this work encompasses a large part of coding theory, so we will make a journey from linear codes to Goppa codes, passing through cyclic codes, BCH codes, Reed-Solomon codes and evaluation codes, among others. In the first chapter we will treat linear codes, describing their properties and giving preliminary concepts of coding theory that will serve throughout the work. We will also give techniques for finding the generator matrix and the parity-check matrix of a linear code, which allow us to encode and decode words easily with any linear code. At the end of this chapter we will introduce the concept of spheres, which helps determine the number of errors a code can correct, and we will give bounds on the size of a code, such as the Singleton bound.
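The chapter-1 machinery just described (generator matrix, parity-check matrix, syndrome decoding) can be sketched concretely. The example below uses the standard [7,4] Hamming code, chosen here for illustration rather than taken from the thesis; arithmetic is over F_2.

```python
# Generator matrix G (systematic form [I | P]) and parity-check
# matrix H = [P^T | I] for the [7,4] Hamming code.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def encode(msg):
    """Codeword = msg * G over F_2."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def syndrome(word):
    """Syndrome = H * word^T over F_2; zero iff word is a codeword."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    """Single-error correction: a nonzero syndrome equals the column
    of H at the error position, which identifies the flipped bit."""
    s = syndrome(word)
    if any(s):
        pos = [list(col) for col in zip(*H)].index(s)
        word = word[:]
        word[pos] ^= 1
    return word

c = encode([1, 0, 1, 1])
r = c[:]
r[2] ^= 1                      # one channel error
assert syndrome(r) != [0, 0, 0]
assert correct(r) == c
print("single error corrected")
```

Because the seven columns of H are exactly the seven nonzero vectors of F_2^3, every single-bit error produces a distinct syndrome, which is what makes the lookup in `correct` work.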
    In the second chapter we will develop the theory of cyclic codes, which will serve us in the following chapters. Unlike general linear codes, cyclic codes have extra structure: via an isomorphism they can be viewed as ideals of F_q[x]/(x^n - 1), which allows a more exhaustive study. We will finish the chapter by giving techniques to obtain the parity-check matrix from the generator polynomial and the generator matrix of a cyclic code. In the third chapter we will remain within cyclic codes, beginning with BCH codes, which unlike general cyclic codes have a designed minimum distance; this characteristic makes them very important, since determining the exact minimum distance of a code is an NP-hard problem and, unless P = NP, it cannot be done in polynomial time. We will continue with Reed-Solomon codes, which can be seen as a special case of BCH codes, and finally move on to evaluation codes and GRS codes (a generalization of Reed-Solomon codes), which include most linear codes. Finally, in the last chapter we will discuss Goppa codes in depth. We will start by defining them and introducing their generator matrix; once we have an efficient encoding scheme, we will move on to decoding. Using Euclid's extended algorithm we will develop a decoding algorithm, the Sugiyama algorithm, which yields the error-locator and error-evaluator polynomials and hence decodes the received word. Finally we will discuss bounds on the minimum distance (asymptotic and non-asymptotic) and show that Goppa codes attain the Gilbert-Varshamov bound.
    Universidad de Granada. Facultad de Ciencias. Grado en Matemáticas. Academic year 2021-202
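The evaluation-code viewpoint from chapter 3 admits a compact illustration. The sketch below (my own toy parameters, not from the thesis) builds a small Reed-Solomon code over F_7 by evaluating all polynomials of degree < k at n distinct field elements, then confirms by brute force the minimum distance n - k + 1 promised by the Singleton bound.

```python
from itertools import product

p, n, k = 7, 6, 3
points = list(range(1, n + 1))  # 6 distinct evaluation points in F_7

def evaluate(coeffs, x):
    """Horner evaluation of a polynomial over F_p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# One codeword per message: evaluate the message polynomial at all points.
codewords = [tuple(evaluate(msg, x) for x in points)
             for msg in product(range(p), repeat=k)]

# A nonzero polynomial of degree < k has at most k - 1 roots, so every
# nonzero codeword has weight >= n - (k - 1); by linearity this weight
# bound is the minimum distance, and it is attained.
d = min(sum(c != 0 for c in w) for w in codewords if any(w))
print(d, n - k + 1)  # 4 4
```

Meeting the Singleton bound with equality is the MDS property of Reed-Solomon codes; Goppa codes give up this optimality in exchange for the random-like behaviour that the McEliece cryptosystem relies on.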