23 research outputs found
Constructions of codes and low-discrepancy sequences using global function fields
Ph.D. (Doctor of Philosophy)
Algebraic Codes For Error Correction In Digital Communication Systems
C. Shannon presented theoretical conditions under which error-free communication is possible in the presence of noise. Subsequently, the notion of using error-correcting codes to mitigate the effects of noise in digital transmission was introduced by R. Hamming. Algebraic codes, codes described using powerful tools from algebra, came to the fore early in the search for good error-correcting codes. Many classes of algebraic codes now exist and are known to have the best parameters of any known classes of codes. An error-correcting code can be described by three of its most important parameters: length, dimension and minimum distance. Given codes with the same length and dimension, the one with the largest minimum distance will provide better error correction. As a result, this research focuses on finding codes with better minimum distances than any previously known codes.
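To make these parameters concrete, here is a minimal sketch (illustrative only, not taken from the thesis) that computes the minimum distance of a small binary linear code by enumerating its nonzero codewords; the [7,4] Hamming code used here is a standard example with minimum distance 3.

```python
import itertools

import numpy as np

# Generator matrix of the [7,4] binary Hamming code (a standard textbook
# example, not a code from the thesis). Rows form a basis of the code.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def minimum_distance(G):
    """Brute-force minimum Hamming weight over all nonzero codewords."""
    k, n = G.shape
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero message (zero codeword)
        codeword = np.mod(np.array(msg) @ G, 2)
        best = min(best, int(codeword.sum()))
    return best

d = minimum_distance(G)
print(f"[n={G.shape[1]}, k={G.shape[0]}, d={d}] code")  # expect d = 3
# A code with minimum distance d corrects up to (d - 1) // 2 errors.
print("corrects up to", (d - 1) // 2, "error(s) per codeword")
```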
Algebraic geometry codes are obtained from curves. They are the culmination of years of research into algebraic codes and generalise most known algebraic codes. Additionally, they have exceptional distance properties as their lengths become arbitrarily large. Algebraic geometry codes are studied in great detail, with special attention given to their construction and decoding. The practical performance of these codes is evaluated and compared with previously known codes in different communication channels. Furthermore, many new codes with better minimum distances than the best known codes of the same length and dimension are presented, obtained from a generalised construction of algebraic geometry codes. Goppa codes are also an
important class of algebraic codes. A construction of binary extended Goppa codes
is generalised to codes with nonbinary alphabets and as a result many new codes
are found. This construction is shown to be an efficient way to extend another well-known class of algebraic codes, BCH codes. A generic method of shortening codes
whilst increasing the minimum distance is generalised. An analysis of this method
reveals a close relationship with methods of extending codes. Some new codes from
Goppa codes are found by exploiting this relationship. Finally, an extension method for BCH codes is presented and shown to be as good as a well-known method of extension in certain cases.
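As a toy illustration of the extension methods mentioned above (a generic fact about binary codes, not the thesis's construction), appending an overall parity-check bit to a binary code with odd minimum distance d yields a code of length n+1, the same dimension, and minimum distance d+1:

```python
import itertools

import numpy as np

def min_distance(G):
    """Brute-force minimum distance of the binary code generated by G."""
    k, n = G.shape
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if any(msg):
            best = min(best, int(np.mod(np.array(msg) @ G, 2).sum()))
    return best

def extend_with_parity(G):
    """Append an overall parity-check coordinate to every codeword (via G)."""
    parity = np.mod(G.sum(axis=1, keepdims=True), 2)
    return np.hstack([G, parity])

# Toy example: the [3,1,3] binary repetition code becomes a [4,1,4] code.
G = np.array([[1, 1, 1]])
G_ext = extend_with_parity(G)
print("before:", G.shape[1], min_distance(G))          # n=3, d=3
print("after: ", G_ext.shape[1], min_distance(G_ext))  # n=4, d=4
```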
Correlated Pseudorandomness from the Hardness of Quasi-Abelian Decoding
Secure computation often benefits from the use of correlated randomness to
achieve fast, non-cryptographic online protocols. A recent paradigm put forth
by Boyle et al. (CCS 2018, Crypto 2019) showed how pseudorandom
correlation generators (PCGs) can be used to generate large amounts of useful
forms of correlated (pseudo)randomness, using minimal interactions followed
solely by local computations, yielding silent secure two-party computation
protocols (protocols where the preprocessing phase requires almost no
communication). An additional property called programmability allows one to extend this to build N-party protocols. However, known constructions of programmable PCGs can only produce OLEs over large fields, and rely on the rather new splittable Ring-LPN assumption.
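For context, an OLE (oblivious linear evaluation) correlation over a field gives one party a pair (a, b) and the other a pair (x, y) with y = ax + b; a PCG's goal is to let both parties expand short correlated seeds into many such pseudorandom tuples using only local computation. The sketch below (illustrative only; it samples the correlation in the clear, involves no cryptography, and the modulus is a hypothetical choice) shows the target correlation itself.

```python
import secrets

P = 2**61 - 1  # a prime modulus chosen for illustration (not from the paper)

def sample_ole_correlation():
    """Sample one random OLE correlation over F_p.

    Party 0 receives (a, b); party 1 receives (x, y) with y = a*x + b (mod p).
    """
    a = secrets.randbelow(P)
    b = secrets.randbelow(P)
    x = secrets.randbelow(P)
    y = (a * x + b) % P
    return (a, b), (x, y)

share0, share1 = sample_ole_correlation()
(a, b), (x, y) = share0, share1
assert y == (a * x + b) % P  # the defining relation of the correlation
print("party 0:", share0)
print("party 1:", share1)
```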
In this work, we overcome both limitations. To this end, we introduce the
quasi-abelian syndrome decoding problem (QA-SD), a family of assumptions which
generalises the well-established quasi-cyclic syndrome decoding assumption.
Building upon QA-SD, we construct new programmable PCGs for OLEs over any field $\mathbb{F}_q$ with $q > 2$. Our analysis also sheds light on the security of the ring-LPN assumption used in Boyle et al. (Crypto 2020). Using our new PCGs, we obtain the first efficient N-party silent secure computation protocols for computing general arithmetic circuits over $\mathbb{F}_q$ for any $q > 2$.
Comment: This is a long version of a paper accepted at CRYPTO'23.
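For intuition about the assumption family, the following toy sketch (with illustrative, insecure parameters that are not from the paper) produces one sample of the quasi-cyclic syndrome decoding problem, the special case of QA-SD over the ring F_q[X]/(X^N - 1): given a random ring element a, the syndrome s = a*e1 + e0 with sparse e0, e1 should be hard to distinguish from uniform.

```python
import secrets

# Toy parameters for illustration only (far too small for security).
Q = 3          # field size F_q with q > 2, as in the abstract
N = 32         # ring dimension: R = F_q[X] / (X^N - 1)
WEIGHT = 4     # number of nonzero coefficients in each sparse error

def ring_mul(a, b):
    """Multiply two elements of F_q[X]/(X^N - 1), coefficients as lists."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            c[(i + j) % N] = (c[(i + j) % N] + ai * bj) % Q
    return c

def sparse_element():
    """A random ring element with WEIGHT nonzero coefficients."""
    e = [0] * N
    positions = set()
    while len(positions) < WEIGHT:
        positions.add(secrets.randbelow(N))
    for pos in positions:
        e[pos] = 1 + secrets.randbelow(Q - 1)  # nonzero coefficient
    return e

# One quasi-cyclic syndrome decoding sample: (a, s) with s = a*e1 + e0.
# The decisional assumption is that s looks uniform given a.
a = [secrets.randbelow(Q) for _ in range(N)]
e0, e1 = sparse_element(), sparse_element()
s = [(x + y) % Q for x, y in zip(ring_mul(a, e1), e0)]
print("a =", a)
print("s =", s)
```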
Codes et courbes modulaires (Codes and Modular Curves)
Lecture notes for a course given at the Algebraic Coding Theory (ACT) summer school 2022. These lecture notes have been written for a course at the Algebraic Coding Theory (ACT) summer school 2022, which took place at the University of Zurich. The objective of the course is to propose an in-depth presentation of the proof of one of the most striking results of coding theory: the Tsfasman-Vlăduţ-Zink theorem, which asserts that for some prime powers $q$, there exist sequences of codes over $\mathbb{F}_q$ whose asymptotic parameters beat those of random codes.
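To quantify what "beating random codes" means, the short sketch below (illustrative, not part of the lecture notes) compares the Gilbert-Varshamov rate 1 - H_q(delta), which random codes attain, with the Tsfasman-Vladut-Zink rate 1 - delta - 1/(sqrt(q) - 1) at q = 49, the smallest square prime power for which the latter wins on part of the range.

```python
import math

def h_q(delta, q):
    """q-ary entropy function H_q(delta), the exponent in the GV bound."""
    if delta == 0:
        return 0.0
    return (delta * math.log(q - 1, q)
            - delta * math.log(delta, q)
            - (1 - delta) * math.log(1 - delta, q))

def gv_rate(delta, q):
    """Asymptotic rate guaranteed by random codes (Gilbert-Varshamov bound)."""
    return 1 - h_q(delta, q)

def tvz_rate(delta, q):
    """Rate from the Tsfasman-Vladut-Zink bound (q a square prime power >= 49)."""
    return 1 - delta - 1 / (math.sqrt(q) - 1)

q = 49
for delta in (0.2, 0.3, 0.4, 0.5, 0.6):
    print(f"delta={delta:.1f}  GV={gv_rate(delta, q):.4f}  TVZ={tvz_rate(delta, q):.4f}")
# For q = 49 the TVZ line exceeds the GV curve on a middle range of delta,
# which is the sense in which these codes beat random codes.
```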
Secure Arithmetic Computation with Constant Computational Overhead
We study the complexity of securely evaluating an arithmetic circuit over a finite field $\mathbb{F}$ in the setting of secure two-party computation with semi-honest adversaries. In all existing protocols, the number of arithmetic operations per multiplication gate grows either linearly with $\log|\mathbb{F}|$ or polylogarithmically with the security parameter. We present the first protocol that only makes a *constant* (amortized) number of field operations per gate. The protocol uses the underlying field as a black box, and its security is based on arithmetic analogues of well-studied cryptographic assumptions.
Our protocol is particularly appealing in the special case of securely evaluating a "vector-OLE" function of the form $f_{a,b}(x) = ax + b$, where $x \in \mathbb{F}$ is the input of one party and $a, b \in \mathbb{F}^w$ are the inputs of the other party. In this case, which is motivated by natural applications, our protocol can achieve a constant asymptotic rate (i.e., the communication is dominated by sending $O(w)$ elements of $\mathbb{F}$). Our implementation of this protocol suggests that it outperforms competing approaches even for relatively small fields and over fast networks.
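As a point of reference, the sketch below (illustrative only; the modulus and vector length are hypothetical, and the function is evaluated in the clear with no security) spells out the vector-OLE function f_{a,b}(x) = ax + b that the protocol evaluates securely.

```python
import secrets

P = 2**31 - 1  # illustrative prime modulus, not a parameter from the paper
W = 8          # tiny vector length, chosen for readability

def vector_ole(a, b, x):
    """The vector-OLE function f_{a,b}(x) = a*x + b, computed coordinate-wise.

    One party holds the scalar x; the other holds the vectors a and b.
    A secure protocol would let the first party learn only f_{a,b}(x).
    """
    return [(ai * x + bi) % P for ai, bi in zip(a, b)]

a = [secrets.randbelow(P) for _ in range(W)]
b = [secrets.randbelow(P) for _ in range(W)]
x = secrets.randbelow(P)
print(vector_ole(a, b, x))
```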
Our technical approach employs two new ingredients that may be of independent interest. First, we present a general way to combine any linear code that has a fast encoder and a cryptographic ("LPN-style") pseudorandomness property with another linear code that supports fast encoding and *erasure-decoding*, obtaining a code that inherits both the pseudorandomness feature of the former code and the efficiency features of the latter code. Second, we employ local *arithmetic* pseudo-random generators, proposing arithmetic generalizations of boolean candidates that resist all known attacks.
Spectral methods and computational trade-offs in high-dimensional statistical inference
Spectral methods have become increasingly popular in designing fast algorithms for modern high-dimensional datasets. This thesis looks at several problems in which spectral methods play a central role. In some cases, we also show that such procedures have essentially the best performance among all randomised polynomial time algorithms by exhibiting statistical and computational trade-offs in those problems.
In the first chapter, we prove a useful variant of the well-known Davis-Kahan theorem, which is a spectral perturbation result that allows us to bound the distance between population eigenspaces and their sample versions. We then propose a semi-definite programming algorithm for the sparse principal component analysis (PCA) problem, and analyse its theoretical performance using the perturbation bounds we derived earlier. It turns out that the parameter regime in which our estimator is consistent is strictly smaller than the consistency regime of a minimax optimal (yet computationally intractable) estimator. We show through reduction from a well-known hard problem in computational complexity theory that the difference in consistency regimes is unavoidable for any randomised polynomial time estimator, hence revealing subtle statistical and computational trade-offs in this problem.
Such computational trade-offs also exist in the problem of restricted isometry certification. Certifiers for restricted isometry properties can be used to construct design matrices for sparse linear regression problems. Similar to the sparse PCA problem, we show that there is also an intrinsic gap between the class of matrices certifiable using unrestricted algorithms and using polynomial time algorithms.
Finally, we consider the problem of high-dimensional changepoint estimation, where we estimate the time of change in the mean of a high-dimensional time series with piecewise constant mean structure. Motivated by real-world applications, we assume that changes only occur in a sparse subset of all coordinates. We apply a variant of the semi-definite programming algorithm in sparse PCA to aggregate the signals across different coordinates in a near optimal way so as to estimate the changepoint location as accurately as possible. Our statistical procedure shows superior performance compared to existing methods in this problem.
St John's College and Cambridge Overseas Trust.
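As a small numerical illustration of the kind of eigenspace perturbation bound discussed above (a toy check using a classical Davis-Kahan style inequality with the population eigengap, not the thesis's exact variant), the snippet below perturbs a diagonal covariance matrix and verifies that the sine of the angle between the leading eigenvectors stays below 2*||E||_op / (lambda_1 - lambda_2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Population covariance with a clear eigengap between the top two eigenvalues.
Sigma = np.diag([5.0, 2.0, 1.0, 0.5])

# A small symmetric perturbation standing in for sampling error.
E = 0.1 * rng.standard_normal(Sigma.shape)
E = (E + E.T) / 2
Sigma_hat = Sigma + E

def leading_eigvec(M):
    """Eigenvector associated with the largest eigenvalue of a symmetric matrix."""
    _, vecs = np.linalg.eigh(M)
    return vecs[:, -1]

v = leading_eigvec(Sigma)
v_hat = leading_eigvec(Sigma_hat)

sin_theta = np.sqrt(max(0.0, 1.0 - float(np.dot(v, v_hat)) ** 2))
gap = 5.0 - 2.0                          # lambda_1 - lambda_2 of the population matrix
bound = 2 * np.linalg.norm(E, 2) / gap   # Davis-Kahan style bound with factor 2

print(f"sin(theta) = {sin_theta:.4f}  <=  bound = {bound:.4f}")
assert sin_theta <= bound + 1e-12
```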