
    Some new results on majority-logic codes for correction of random errors

    The main advantages of random error-correcting majority-logic codes and majority-logic decoding in general are well known and twofold. Firstly, they offer a partial solution to a classical coding theory problem, that of decoder complexity. Secondly, a majority-logic decoder inherently corrects many more random error patterns than the minimum distance of the code implies is possible. The solution to the decoder-complexity problem is only a partial one because there are circumstances under which a majority-logic decoder is too complex and expensive to implement. [Continues.]
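
    To make the idea concrete, the following sketch (ours, not taken from the thesis; all identifiers are illustrative) shows one-step Reed/majority-logic decoding of the first-order Reed-Muller code RM(1,3): each information bit is recovered by a majority vote over disjoint parity-check sums, so a single channel error is simply outvoted.

        # Majority-logic (Reed) decoding of RM(1,3), an (8,4) code with minimum distance 4.
        from itertools import product

        def encode(a):
            """a = (a0, a1, a2, a3) -> 8-bit RM(1,3) codeword, one bit per point v in F_2^3."""
            return [(a[0] ^ (a[1] & v[0]) ^ (a[2] & v[1]) ^ (a[3] & v[2]))
                    for v in product((0, 1), repeat=3)]

        def majority(bits):
            return int(sum(bits) * 2 >= len(bits))  # ties resolved towards 1 (arbitrary)

        def decode(r):
            """Majority-logic decode a received 8-bit word r (at most one bit flipped)."""
            pts = list(product((0, 1), repeat=3))
            idx = {v: i for i, v in enumerate(pts)}
            a = [0, 0, 0, 0]
            # Each a_i (i = 1,2,3) has four disjoint check sums r(v) + r(v + e_i);
            # a single channel error corrupts at most one of them, so the majority is correct.
            for i in (1, 2, 3):
                checks = []
                for v in pts:
                    if v[i - 1] == 0:
                        w = list(v); w[i - 1] = 1
                        checks.append(r[idx[v]] ^ r[idx[tuple(w)]])
                a[i] = majority(checks)
            # Strip the estimated first-order part; what remains is a0 repeated 8 times.
            residual = [r[idx[v]] ^ (a[1] & v[0]) ^ (a[2] & v[1]) ^ (a[3] & v[2]) for v in pts]
            a[0] = majority(residual)
            return a

        msg = [1, 0, 1, 1]
        c = encode(msg)
        c[5] ^= 1                      # flip one bit
        assert decode(c) == msg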

    A digital system design language


    Design and application of variable-to-variable length codes

    This work addresses the design of minimum-redundancy variable-to-variable length (V2V) codes and studies their suitability for use in the probability interval partitioning entropy (PIPE) coding concept as an alternative to binary arithmetic coding. Several properties and new concepts for V2V codes are discussed and a polynomial-based principle for designing V2V codes is proposed. Various minimum-redundancy V2V codes are derived and combined with the PIPE coding concept. Their redundancy is compared to the binary arithmetic coder of the video compression standard H.265/HEVC.
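
    As a toy illustration of the V2V principle (the table below is our own placeholder, not one of the minimum-redundancy codes designed in this work), a complete prefix-free set of source words is mapped one-to-one onto a prefix-free set of codewords:

        V2V_TABLE = {            # source word -> codeword (illustrative only)
            "000": "0",
            "001": "100",
            "01":  "101",
            "1":   "11",
        }
        INV_TABLE = {cw: sw for sw, cw in V2V_TABLE.items()}

        def encode(bits):
            """Greedily parse the source with the (complete, prefix-free) source dictionary
            and concatenate the corresponding codewords."""
            out, buf = [], ""
            for b in bits:
                buf += b
                if buf in V2V_TABLE:
                    out.append(V2V_TABLE[buf])
                    buf = ""
            assert buf == "", "input did not end on a source-word boundary"
            return "".join(out)

        def decode(code):
            """Parse the code stream with the prefix-free codeword dictionary."""
            out, buf = [], ""
            for b in code:
                buf += b
                if buf in INV_TABLE:
                    out.append(INV_TABLE[buf])
                    buf = ""
            return "".join(out)

        src = "0000110001"
        assert decode(encode(src)) == src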

    Irregular Variable Length Coding

    In this thesis, we introduce Irregular Variable Length Coding (IrVLC) and investigate its applications, characteristics and performance in the context of digital multimedia broadcast telecommunications. During IrVLC encoding, the multimedia signal is represented using a sequence of concatenated binary codewords. These are selected from a codebook, comprising a number of codewords, which, in turn, comprise various numbers of bits. However, during IrVLC encoding, the multimedia signal is decomposed into particular fractions, each of which is represented using a different codebook. This is in contrast to regular Variable Length Coding (VLC), in which the entire multimedia signal is encoded using the same codebook. The application of IrVLCs to joint source and channel coding is investigated in the context of a video transmission scheme. Our novel video codec represents the video signal using tessellations of Variable-Dimension Vector Quantisation (VDVQ) tiles. These are selected from a codebook, comprising a number of tiles having various dimensions. The selected tessellation of VDVQ tiles is signalled using a corresponding sequence of concatenated codewords from a Variable Length Error Correction (VLEC) codebook. This VLEC codebook represents a specific joint source and channel coding case of VLCs, which facilitates both compression and error correction. However, during video encoding, only particular combinations of the VDVQ tiles will perfectly tessellate, owing to their various dimensions. As a result, only particular sub-sets of the VDVQ codebook and, hence, of the VLEC codebook may be employed to convey particular fractions of the video signal. Therefore, our novel video codec can be said to employ IrVLCs. The employment of IrVLCs to facilitate Unequal Error Protection (UEP) is also demonstrated. This may be applied when various fractions of the source signal have different error sensitivities, as is typical in audio, speech, image and video signals, for example. Here, different VLEC codebooks having appropriately selected error correction capabilities may be employed to encode the particular fractions of the source signal. This approach may be expected to yield a higher reconstruction quality than equal protection in cases where the various fractions of the source signal have different error sensitivities. Finally, this thesis investigates the application of IrVLCs to near-capacity operation using EXtrinsic Information Transfer (EXIT) chart analysis. Here, a number of component VLEC codebooks having different inverted EXIT functions are employed to encode particular fractions of the source symbol frame. We show that the composite inverted IrVLC EXIT function may be obtained as a weighted average of the inverted component VLC EXIT functions. Additionally, EXIT chart matching is employed to shape the inverted IrVLC EXIT function to match the EXIT function of a serially concatenated inner channel code, creating a narrow but still open EXIT chart tunnel. In this way, iterative decoding convergence to an infinitesimally low probability of error is facilitated at near-capacity channel SNRs.
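
    The core encoding step can be sketched in a few lines (the two codebooks below are our own placeholders standing in for VLEC codebooks of different error-correction strength; they are not the codebooks designed in the thesis):

        CODEBOOK_STRONG = {"a": "0000", "b": "0111", "c": "1011", "d": "1101"}  # longer codewords
        CODEBOOK_WEAK   = {"a": "0",    "b": "10",   "c": "110",  "d": "111"}   # plain VLC

        def irvlc_encode(symbols, split):
            """Encode the first `split` symbols (the error-sensitive fraction of the frame)
            with the stronger codebook and the remainder with the weaker one, UEP-style."""
            head = "".join(CODEBOOK_STRONG[s] for s in symbols[:split])
            tail = "".join(CODEBOOK_WEAK[s] for s in symbols[split:])
            return head + tail

        frame = "abacdbda"
        bits = irvlc_encode(frame, split=3)   # first three symbols receive the stronger protection
        print(len(bits), bits)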

    Hard Mathematical Problems in Cryptography and Coding Theory

    In this thesis, we are concerned with certain interesting computationally hard problems and the complexities of their associated algorithms. All of these problems share a common feature in that they all arise from, or have applications to, cryptography or the theory of error correcting codes. Each chapter in the thesis is based on a stand-alone paper which attacks a particular hard problem. The problems and the techniques employed in attacking them are described in detail. The first problem concerns integer factorization: given a positive integer $N$, the problem is to find the unique prime factors of $N$. This problem, which was historically of only academic interest to number theorists, has in recent decades assumed a central importance in public-key cryptography. We propose a method for factorizing a given integer using a graph-theoretic algorithm employing Binary Decision Diagrams (BDD). The second problem that we consider is related to the classification of certain naturally arising classes of error correcting codes, called self-dual additive codes over the finite field of four elements, $GF(4)$. We address the problem of classifying self-dual additive codes, determining their weight enumerators, and computing their minimum distance. There is a natural relation between self-dual additive codes over $GF(4)$ and graphs via isotropic systems. Utilizing the properties of the corresponding graphs, and again employing Binary Decision Diagrams (BDD) to compute the weight enumerators, we obtain a theoretical speed-up of the previously developed algorithm for the classification of these codes. The third problem that we investigate deals with one of the central issues in cryptography, which has historical origins in the geometry of numbers, namely the shortest vector problem in lattices. One method which is used both in theory and practice to solve the shortest vector problem is by enumeration algorithms. Lattice enumeration is an exhaustive search whose goal is to find the shortest vector given a lattice basis as input. In our work, we focus on speeding up the lattice enumeration algorithm, and we propose two new ideas to this end. The shortest vector in a lattice can be written as ${\bf s} = v_1{\bf b}_1 + v_2{\bf b}_2 + \ldots + v_n{\bf b}_n$, where $v_i \in \mathbb{Z}$ are integer coefficients and ${\bf b}_i$ are the lattice basis vectors. First, we propose an enumeration algorithm, called hybrid enumeration, which is a greedy approach for computing a short interval of possible integer values for the coefficients $v_i$ of a shortest lattice vector. Second, we provide an algorithm for estimating the signs $+$ or $-$ of the coefficients $v_1, v_2, \ldots, v_n$ of a shortest vector ${\bf s} = \sum_{i=1}^{n} v_i{\bf b}_i$. Both of these algorithms result in a reduction in the number of nodes in the search tree. Finally, the fourth problem that we deal with arises in the arithmetic of the class groups of imaginary quadratic fields. We follow the results of Soleng and Gillibert pertaining to the class numbers of some sequences of imaginary quadratic fields arising in the arithmetic of elliptic and hyperelliptic curves, and compute a bound on the effective estimates for the orders of class groups of a family of imaginary quadratic number fields. That is, suppose $f(n)$ is a sequence of positive numbers tending to infinity. Given any positive real number $L$, an effective estimate is to find the smallest positive integer $N = N(L)$ depending on $L$ such that $f(n) > L$ for all $n > N$. In other words, given a constant $M > 0$, we find a value $N$ such that the order of the ideal class $I_n$ in the ring $R_n$ (provided by the homomorphism in Soleng's paper) is greater than $M$ for any $n > N$. In summary, in this thesis we attack some hard problems in computer science arising from arithmetic, geometry of numbers, and coding theory, which have applications in the mathematical foundations of cryptography and error correcting codes.
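
    For intuition, a naive stand-in for lattice enumeration can be written as an exhaustive search over integer coefficient vectors in a small box; the basis and search radius below are arbitrary illustrative choices, and real enumeration (and the hybrid enumeration and sign-estimation ideas above) prunes this search tree using a reduced basis and norm bounds:

        from itertools import product

        def shortest_vector_bruteforce(basis, bound=3):
            """Try every v = (v_1, ..., v_n) with |v_i| <= bound and keep the shortest
            nonzero lattice vector s = sum_i v_i * b_i (a toy, unpruned search tree)."""
            n, dim = len(basis), len(basis[0])
            best, best_norm2 = None, None
            for v in product(range(-bound, bound + 1), repeat=n):
                if all(c == 0 for c in v):
                    continue                                   # skip the zero vector
                s = [sum(v[i] * basis[i][j] for i in range(n)) for j in range(dim)]
                norm2 = sum(x * x for x in s)
                if best_norm2 is None or norm2 < best_norm2:
                    best, best_norm2 = s, norm2
            return best, best_norm2

        B = [[7, 2, 0], [3, 8, 1], [1, 1, 9]]                  # toy 3-dimensional basis
        print(shortest_vector_bruteforce(B))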

    The Federal Writers' Project's Mirror to America: The Florida Reflection.

    In the 1930s, the federal writers of the Works Progress Administration (WPA) plowed fertile new ground in American cultural scholarship. The writers were for the most part not professionals but white-collar workers who had been dramatically rescued from unemployment by this government relief project. Like other New Deal relief agencies, the Writers' Project had been created more to make work than to do work. But by the end of the program, it had charted a nation and documented American life more comprehensively than any earlier effort had done. In Florida the federal writers produced a state guide, pioneered African-American studies, broke new ground in oral history, and revolutionized the study of folklore. Their records unearthed the variety and texture of cultural life, leaving an incomparable record of localized America. The Florida Federal Writers' Project demonstrates how the program operated on the state level. Its research beyond the state guide demonstrates the contributions these state programs made to American cultural studies. Florida was one of three Southern states which had an active African-American writers' unit. Zora Neale Hurston, the only trained African-American folklorist in the South, worked on the Florida project for a year and a half. This study explores her contribution to the project and documents her contributions to the state's folklore program. The work by the federal writers in Florida was but one part of the massive national inventory that had created in words a giant mirror of the American scene. Yet only a small portion of their work was published and held up for the nation to see.

    Towards a deeper understanding of APN functions and related longstanding problems

    This dissertation is dedicated to the properties, construction and analysis of APN and AB functions. Being cryptographically optimal, these functions lack any general structure or patterns, which makes their study very challenging. Despite intense work since at least the early 1990s, many important questions and conjectures in the area remain open. We present several new results, many of which are directly related to important longstanding open problems; we resolve some of these problems, and make significant progress towards the resolution of others. More concretely, our research concerns the following open problems: i) the maximum algebraic degree of an APN function, and the Hamming distance between APN functions (open since 1998); ii) the classification of APN and AB functions up to CCZ-equivalence (an ongoing problem since the introduction of APN functions, and one of the main directions of research in the area); iii) the extension of the APN binomial $x^3 + \beta x^{36}$ over $F_{2^{10}}$ into an infinite family (open since 2006); iv) the Walsh spectrum of the Dobbertin function (open since 2001); v) the existence of monomial APN functions CCZ-inequivalent to ones from the known families (open since 2001); vi) the problem of efficiently and reliably testing EA- and CCZ-equivalence (ongoing, and open since the introduction of APN functions). In the course of investigating these problems, we obtain, among others, the following results: 1) a new infinite family of APN quadrinomials (which includes the binomial $x^3 + \beta x^{36}$ over $F_{2^{10}}$); 2) two new invariants, one under EA-equivalence, and one under CCZ-equivalence; 3) an efficient and easily parallelizable algorithm for computationally testing EA-equivalence; 4) an efficiently computable lower bound on the Hamming distance between a given APN function and any other APN function; 5) a classification of all quadratic APN polynomials with binary coefficients over $F_{2^n}$ for $n \le 9$; 6) a construction allowing the CCZ-equivalence class of one monomial APN function to be obtained from that of another; 7) a conjecture giving the exact form of the Walsh spectrum of the Dobbertin power functions; 8) a generalization of an infinite family of APN functions to a family of functions with a two-valued differential spectrum, and an example showing that this Gold-like behavior does not occur for infinite families of quadratic APN functions in general; 9) a new class of functions (the so-called partially APN functions) defined by relaxing the definition of the APN property, and several constructions and non-existence results related to them.
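
    As a small self-contained illustration of the APN property itself (not of the new constructions above), the following sketch checks that the Gold function x^3 has differential uniformity 2 over GF(2^5); the field is realised with the irreducible polynomial x^5 + x^2 + 1, an arbitrary but valid choice, and all identifiers are ours:

        N = 5
        MOD = 0b100101            # x^5 + x^2 + 1, irreducible over GF(2)

        def gf_mul(a, b):
            """Carry-less multiplication modulo the chosen irreducible polynomial."""
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                if (a >> N) & 1:
                    a ^= MOD
                b >>= 1
            return r

        def F(x):                 # the Gold power function x^3 = x * x * x
            return gf_mul(x, gf_mul(x, x))

        def differential_uniformity(f, n):
            """max over a != 0 and b of #{x : f(x + a) + f(x) = b}; addition in GF(2^n) is XOR.
            A function is APN exactly when this maximum equals 2."""
            worst = 0
            for a in range(1, 1 << n):
                counts = {}
                for x in range(1 << n):
                    b = f(x ^ a) ^ f(x)
                    counts[b] = counts.get(b, 0) + 1
                worst = max(worst, max(counts.values()))
            return worst

        assert differential_uniformity(F, N) == 2   # x^3 is APN over GF(2^5)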

    The Discrete Logarithm Problem in Finite Fields of Small Characteristic

    Computing discrete logarithms is a long-standing algorithmic problem whose hardness forms the basis for numerous current public-key cryptosystems. In the case of finite fields of small characteristic, however, there has been tremendous progress recently, by which the complexity of the discrete logarithm problem (DLP) is considerably reduced. This habilitation thesis on the DLP in such fields deals with two principal aspects. On the one hand, we develop and investigate novel efficient algorithms for computing discrete logarithms, where the complexity analysis relies on heuristic assumptions. In particular, we show that logarithms of factor base elements can be computed in polynomial time, and we discuss practical impacts of the new methods on the security of pairing-based cryptosystems. While a heuristic running time analysis of algorithms is common practice for concrete security estimations, this approach is insufficient from a mathematical perspective. Therefore, on the other hand, we focus on provable complexity results, for which we modify the algorithms so that any heuristics are avoided and a rigorous analysis becomes possible. We prove that for any prime field there exist infinitely many extension fields in which the DLP can be solved in quasi-polynomial time. Despite the two aspects looking rather independent from each other, it turns out, as illustrated in this thesis, that progress regarding practical algorithms and record computations can lead to advances in the theoretical running time analysis -- and the other way around.
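
    For context only, the problem itself can be illustrated with a generic baby-step giant-step search in a small prime field; this is not one of the small-characteristic algorithms developed in the thesis, merely the textbook method that such algorithms dramatically outperform at cryptographic sizes (the prime, generator and exponent below are arbitrary toy values):

        from math import isqrt

        def discrete_log(g, h, p):
            """Baby-step giant-step: find x with g^x = h (mod p), if one exists."""
            m = isqrt(p - 1) + 1
            baby = {pow(g, j, p): j for j in range(m)}         # baby steps g^j
            giant = pow(g, -m, p)                              # g^(-m) mod p (Python >= 3.8)
            y = h
            for i in range(m):
                if y in baby:
                    return i * m + baby[y]
                y = y * giant % p
            raise ValueError("h is not in the subgroup generated by g")

        p, g, x = 1000003, 2, 123456                           # 1000003 is prime; g, x arbitrary
        h = pow(g, x, p)
        assert pow(g, discrete_log(g, h, p), p) == h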