
    Minimum distance of error correcting codes versus encoding complexity, symmetry, and pseudorandomness

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaves 207-214). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
    We study the minimum distance of binary error-correcting codes from the following perspectives:
    * The problem of deriving bounds on the minimum distance of a code given constraints on the computational complexity of its encoder.
    * The minimum distance of linear codes that are symmetric in the sense of being invariant under the action of a group on the bits of the codewords.
    * The derandomization capabilities of probability measures on the Hamming cube based on binary linear codes with good distance properties, and their variations.
    Highlights of our results include:
    * A general theorem asserting that if the encoder uses linear time and sub-linear memory in the general binary branching program model, then the minimum distance of the code cannot grow linearly with the block length when the rate is nonvanishing.
    * New upper bounds on the minimum distance of various types of Turbo-like codes.
    * The first ensemble of asymptotically good Turbo-like codes: we prove that depth-three serially concatenated Turbo codes can be asymptotically good.
    * The first ensemble of asymptotically good codes that are ideals in the group algebra of a group: we argue that, for infinitely many block lengths, a random ideal in the group algebra of the dihedral group is an asymptotically good rate-half code with high probability.
    * An explicit rate-half code whose codewords are in one-to-one correspondence with special hyperelliptic curves over a finite field of prime order, where the number of zeros of a codeword corresponds to the number of rational points.
    * A sharp O(k^(-1/2)) upper bound on the probability that a random binary string generated according to a k-wise independent probability measure has any given weight.
    * An assertion that any sufficiently log-wise independent probability measure looks random to all polynomially small read-once DNF formulas.
    * An elaborate study of the problem of derandomizability of AC⁰ by any sufficiently polylog-wise independent probability measure.
    * An elaborate study of the problem of approximability of high-degree parity functions on binary linear codes by low-degree polynomials with coefficients in fields of odd characteristic.
    by Louay M.J. Bazzi. Ph.D.
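    The central quantity throughout is the minimum distance of a binary linear code, i.e. the smallest Hamming weight of a nonzero codeword. As a purely illustrative aside (not taken from the thesis), the following Python sketch brute-forces this quantity from a generator matrix, using the [7,4] Hamming code as a familiar example:

        from itertools import product

        def min_distance(G):
            """Minimum Hamming weight over all nonzero codewords of the binary code generated by G."""
            k, n = len(G), len(G[0])
            best = n
            for msg in product([0, 1], repeat=k):
                if not any(msg):
                    continue  # skip the zero message; only nonzero codewords count
                codeword = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
                best = min(best, sum(codeword))
            return best

        # Familiar example: the [7,4] Hamming code has minimum distance 3.
        G_hamming = [
            [1, 0, 0, 0, 1, 1, 0],
            [0, 1, 0, 0, 1, 0, 1],
            [0, 0, 1, 0, 0, 1, 1],
            [0, 0, 0, 1, 1, 1, 1],
        ]
        print(min_distance(G_hamming))  # -> 3

    The enumeration takes 2^k - 1 codeword evaluations, so it is only practical for toy codes; the thesis instead asks what distance is achievable in principle under encoder-complexity, symmetry, or pseudorandomness constraints.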

    Coding for storage and testing

    The problem of reconstructing strings from substring information has found many applications due to its importance in genomic data sequencing and DNA- and polymer-based data storage. Motivated by platforms that use chains of binary synthetic polymers as the recording media and read the content via tandem mass spectrometers, we propose a new family of codes that allows for both unique string reconstruction and correction of multiple mass errors. We first consider the paradigm where the masses of substrings of the input string form the evidence set, and take two approaches. The first approach pertains to asymmetric errors, and error correction is achieved by introducing redundancy that scales linearly with the number of errors and logarithmically with the length of the string. The proposed construction allows the string to be uniquely reconstructed based only on its erroneous substring composition multiset. The asymptotic code rate of the scheme is one, and decoding is accomplished via a simplified version of the backtracking algorithm used for the Turnpike problem. For symmetric errors, we use a polynomial characterization of the mass information and adapt polynomial evaluation code constructions to this setting. In the process, we develop new efficient decoding algorithms for a constant number of composition errors.
    The second part of this dissertation addresses a practical paradigm that requires reconstructing mixtures of strings based on the union of compositions of their prefixes and suffixes, generated by mass spectrometry devices. We describe new coding methods that allow for unique joint reconstruction of subsets of strings selected from a code and provide upper and lower bounds on the asymptotic rate of the underlying codebooks. Our code constructions combine properties of binary B_h and Dyck strings and can be extended to accommodate missing substrings in the pool.
    In the final chapter of this dissertation, we focus on group testing. We begin with a review of the gold-standard testing protocol for Covid-19, real-time reverse transcription PCR, and its properties and associated measurement data, such as amplification curves, that can guide the development of appropriate and accurate adaptive group testing protocols. We then examine various off-the-shelf group testing methods for Covid-19 and identify their strengths and weaknesses for the application at hand. Finally, we present a collection of new analytical results for adaptive semiquantitative group testing with combinatorial priors, including performance bounds, algorithmic solutions, and noisy testing protocols. The worst-case paradigm extends and improves upon prior work on semiquantitative group testing with and without specialized PCR noise models.
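    For intuition about the evidence set in the first paradigm, the following Python sketch (an assumed toy model, not code from the dissertation) computes the substring composition multiset of a binary string: each substring is observed only through its composition, i.e. its number of 0s and 1s, with no ordering information.

        from collections import Counter

        def substring_composition_multiset(s):
            """Multiset of (#zeros, #ones) pairs over all contiguous substrings of the binary string s."""
            multiset = Counter()
            for i in range(len(s)):
                zeros = ones = 0
                for j in range(i, len(s)):
                    if s[j] == '0':
                        zeros += 1
                    else:
                        ones += 1
                    multiset[(zeros, ones)] += 1
            return multiset

        # Distinct strings can share the same composition multiset (e.g. a string and its
        # reversal), which is why coded, redundant strings are needed for unique reconstruction.
        print(substring_composition_multiset("01101"))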

    Design of LDPC codes and reliable practical decoders for standard and non-standard channels

    Ph.D. (Doctor of Philosophy)

    Modern Cryptography Volume 1

    This open access book systematically explores the statistical characteristics of cryptographic systems, the computational complexity theory of cryptographic algorithms, and the mathematical principles behind various encryption and decryption algorithms. The theory stems from technology. Based on Shannon's information theory, the book systematically introduces the information theory, statistical characteristics, and computational complexity theory of public-key cryptography, focusing on the three main public-key cryptosystems: RSA, discrete logarithm, and elliptic curve cryptosystems. It aims to explain both what these systems are and why they work. Its most distinctive feature is a systematic simplification and organization of the theory and techniques of lattice-based cryptography. Readers need a good background in algebra, number theory, and probability and statistics. Senior undergraduates majoring in mathematics, as well as science and engineering postgraduates studying cryptography, will find this book helpful. It can also serve as a main reference for researchers in cryptography and cryptographic engineering.
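    As a toy illustration of the first of the three public-key systems the book covers, here is a minimal textbook-RSA example in Python (not taken from the book, with deliberately insecure parameters):

        # Textbook RSA with tiny primes, for intuition only; real deployments use
        # ~2048-bit moduli and randomized padding such as OAEP.
        p, q = 61, 53                 # secret primes
        n = p * q                     # public modulus, n = 3233
        phi = (p - 1) * (q - 1)       # Euler's totient, phi(n) = 3120
        e = 17                        # public exponent, coprime to phi
        d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi), here d = 2753

        m = 65                        # plaintext encoded as an integer < n
        c = pow(m, e, n)              # encryption: c = m^e mod n
        assert pow(c, d, n) == m      # decryption: m = c^d mod n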

    Cryptanalysis of the Fuzzy Vault for Fingerprints: Vulnerabilities and Countermeasures

    The fuzzy fingerprint vault is a popular approach to protecting a fingerprint's minutiae as a building block of a security application. In this thesis, simulations of several attack scenarios are conducted against implementations of the fuzzy fingerprint vault from the literature. Our investigations clearly confirm that the weakest link in the fuzzy fingerprint vault is its high vulnerability to false-accept attacks. Therefore, multi-finger or even multi-biometric cryptosystems should be conceived. But there remains a risk that cannot be resolved by using more biometric information of an individual if features are protected using a traditional fuzzy vault construction: the correlation attack remains a weakness of such constructions. It is known that quantizing minutiae to a rigid system while filling the whole space with chaff makes correlation obsolete. Based on this approach, we propose an implementation. If parameters were adopted from a traditional fuzzy fingerprint vault implementation, we would experience a significant loss in authentication performance; we therefore perform training to determine reasonable parameters for our implementation. Furthermore, to make authentication practical, we propose a randomized decoding procedure. By running a performance evaluation on a commonly used dataset, we find that achieving resistance against the correlation attack does not have to come at the cost of authentication performance. Finally, we conclude that the fuzzy vault remains a viable construction for the challenging task of implementing a cryptographically secure multi-biometric cryptosystem in the future.
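    To make the construction under discussion concrete, here is a heavily simplified Python sketch of fuzzy vault locking in the spirit of Juels and Sudan; the field size, feature encoding, and parameters are illustrative assumptions and do not reflect the implementations analyzed in this thesis.

        import random

        P = 2**16 + 1  # prime field size (illustrative assumption)

        def lock(secret_coeffs, genuine_features, num_chaff):
            """Encode the secret as polynomial f; store genuine features as (x, f(x)) hidden among chaff."""
            f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(secret_coeffs)) % P
            vault = [(x, f(x)) for x in genuine_features]
            used_x = set(genuine_features)
            while len(vault) < len(genuine_features) + num_chaff:
                x, y = random.randrange(P), random.randrange(P)
                if x not in used_x and y != f(x):  # chaff must not lie on f
                    vault.append((x, y))
                    used_x.add(x)
            random.shuffle(vault)                  # hide which points are genuine
            return vault

        vault = lock(secret_coeffs=[123, 456, 789],          # secret = polynomial coefficients
                     genuine_features=[11, 22, 33, 44, 55],  # quantized minutia features (toy values)
                     num_chaff=200)
        print(len(vault))                                    # 205 points, genuine + chaff

    Unlocking requires a query feature set that overlaps the genuine points sufficiently, so that f (and hence the secret) can be recovered by Reed-Solomon-style decoding; the false-accept and correlation attacks discussed above target exactly this structure.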