
    Error-correction on non-standard communication channels

    Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels. First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1 dB. Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels. Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes, formed from low-density parity-check codewords with regular markers inserted within them, are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes. Finally, coding for a system with latency constraints is studied. For example, a telemetry system with a slow channel often needs some error correction quickly, whilst the code should be able to correct the remaining errors later. A new code is formed from the intersection of a convolutional code with a high-rate low-density parity-check code. The convolutional code has good early decoding performance and the high-rate low-density parity-check code efficiently cleans up remaining errors after receiving the entire block. Simulations of the block code show a gain of 1.5 dB over a standard NASA code.
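    To make the marker-code construction concrete, here is a minimal Python sketch of the encoder side and the resynchronization it enables: known marker bits are interleaved into an LDPC codeword at a fixed period, so a decoder can use their expected positions to detect drift caused by insertions and deletions. The marker pattern, period, and function names here are illustrative assumptions, not the thesis's parameters, and the real decoder runs an iterative forward-backward alignment rather than the trivial inverse shown.

        import random

        MARKER = [0, 1]   # assumed fixed marker pattern
        PERIOD = 10       # assumed spacing: markers after every 10 code bits

        def insert_markers(codeword):
            """Interleave known marker bits into a codeword at a fixed period."""
            out = []
            for i, bit in enumerate(codeword):
                out.append(bit)
                if (i + 1) % PERIOD == 0:
                    out.extend(MARKER)
            return out

        def strip_markers(received):
            """Inverse of insert_markers when no insertions/deletions occurred.
            A real marker-code decoder instead scores each possible drift by how
            well the bits at expected marker positions match MARKER."""
            out, i = [], 0
            while i < len(received):
                out.extend(received[i:i + PERIOD])
                i += PERIOD + len(MARKER)
            return out

        codeword = [random.randint(0, 1) for _ in range(40)]
        assert strip_markers(insert_markers(codeword)) == codeword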

    Exploiting an HMAC-SHA-1 optimization to speed up PBKDF2

    PBKDF2 is a well-known password-based key derivation function. In order to slow attackers down, PBKDF2 introduces CPU-intensive operations based on an iterated pseudorandom function (in our case HMAC-SHA-1). If we are able to speed up a SHA-1 or an HMAC implementation, we are able to speed up PBKDF2-HMAC-SHA-1. This means that a performance improvement might be exploited by regular users and attackers alike. Interestingly, FIPS 198-1 notes that it is possible to precompute the first message block of a keyed hash function only once, store that value, and reuse it each time it is needed. The computation of the first message block therefore does not contribute to slowing attackers down, making the computation of the second message block the crucial cost. In this paper we focus on the latter, investigating the possibility of avoiding part of the HMAC-SHA-1 operations. We show that some CPU-intensive operations may be replaced with a set of equivalent, but less onerous, instructions. We identify useless XOR operations by exploiting and extending Intel optimizations and by applying the Boyar-Peralta heuristic. In addition, we provide an alternative method to compute the SHA-1 message scheduling function and explain why attackers might exploit these findings to speed up a brute-force attack against PBKDF2.
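    The FIPS 198-1 precomputation is easy to see in code. The sketch below (plain Python, using only hashlib's documented sha1/copy API) hashes the ipad and opad key blocks once and then clones those midstates for every PBKDF2 iteration, so each iteration costs two SHA-1 compressions instead of four. It is a minimal illustration of the optimization the abstract builds on, not the paper's implementation.

        import hashlib
        import struct

        def pbkdf2_hmac_sha1(password: bytes, salt: bytes,
                             iterations: int, dklen: int) -> bytes:
            # HMAC key preparation: hash long passwords, pad to one 64-byte block.
            if len(password) > 64:
                password = hashlib.sha1(password).digest()
            password = password.ljust(64, b"\x00")

            # FIPS 198-1 optimization: compress the ipad/opad blocks once,
            # then reuse the saved midstates via .copy() in every iteration.
            inner = hashlib.sha1(bytes(b ^ 0x36 for b in password))
            outer = hashlib.sha1(bytes(b ^ 0x5C for b in password))

            def prf(msg: bytes) -> bytes:
                i, o = inner.copy(), outer.copy()
                i.update(msg)
                o.update(i.digest())
                return o.digest()

            dk, block = b"", 1
            while len(dk) < dklen:
                u = prf(salt + struct.pack(">I", block))  # U_1
                t = int.from_bytes(u, "big")
                for _ in range(iterations - 1):          # U_2 .. U_c
                    u = prf(u)
                    t ^= int.from_bytes(u, "big")
                dk += t.to_bytes(20, "big")
                block += 1
            return dk[:dklen]

        # Sanity check against the standard library implementation.
        assert pbkdf2_hmac_sha1(b"password", b"salt", 4096, 20) == \
               hashlib.pbkdf2_hmac("sha1", b"password", b"salt", 4096, 20)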

    The history of degenerate (bipartite) extremal graph problems

    This paper is a survey on Extremal Graph Theory, primarily focusing on the case when one of the excluded graphs is bipartite. We give an introduction to this field and describe many important results, methods, problems, and constructions. Comment: 97 pages, 11 figures, many problems. This is the preliminary version of our survey presented at Erdős 100; in version 2 only a citation was completed.

    Rijndael Circuit Level Cryptanalysis

    The Rijndael cipher was selected as the Advanced Encryption Standard (AES) in October 2000. Its internal structure exhibits unusual properties, such as a clean and simple algebraic description of the S-box. In this research, we construct a scalable family of ciphers which behave very much like the original Rijndael. This approach gives us the opportunity to use computational complexity theory. In the main result, we generate a candidate one-way function family from the scalable Rijndael family. We note that, although reduction to one-way functions is a common theme in the theory of public-key cryptography, it is rare to have such a defense of security in the private-key theatre. In this thesis a plan of attack is introduced at the circuit level, whose aim is not to break the cryptosystem in any practical way, but simply to break the very bold Rijndael security claim. To achieve this goal, we are led to a formal understanding of the Rijndael security claim, juxtaposing it with rigorous security treatments. Several of the questions that arise in this regard are as follows: "Do invertible functions represented by circuits with very small numbers of gates have better than worst case implementations for their inverses?" "How many plaintext/ciphertext pairs are needed to uniquely determine the Rijndael key?"
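    The "clean and simple algebraic description" of the S-box mentioned above is standard (FIPS 197): each byte is inverted in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (with 0 mapped to 0) and then passed through a fixed GF(2)-affine map with constant 0x63. A short Python sketch reconstructing the S-box from that description (the brute-force inverse is for clarity, not speed):

        def gf_mul(a, b):
            """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                if a & 0x100:
                    a ^= 0x11B
                b >>= 1
            return r

        def gf_inv(a):
            """Multiplicative inverse by exhaustive search; inv(0) := 0."""
            if a == 0:
                return 0
            for x in range(1, 256):
                if gf_mul(a, x) == 1:
                    return x

        def affine(s):
            """FIPS 197 affine map: b_i = s_i ^ s_(i+4) ^ s_(i+5) ^ s_(i+6) ^ s_(i+7) ^ c_i."""
            b = 0
            for i in range(8):
                bit = ((s >> i) ^ (s >> ((i + 4) % 8)) ^ (s >> ((i + 5) % 8))
                       ^ (s >> ((i + 6) % 8)) ^ (s >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
                b |= bit << i
            return b

        SBOX = [affine(gf_inv(x)) for x in range(256)]
        assert SBOX[0x00] == 0x63 and SBOX[0x53] == 0xED  # FIPS 197 values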

    Precision analysis for hardware acceleration of numerical algorithms

    The precision used in an algorithm affects the error and performance of individual computations, the memory usage, and the potential parallelism for a fixed hardware budget. However, when migrating an algorithm onto hardware, the potential improvements that can be obtained by tuning the precision throughout an algorithm to meet a range or error specification are often overlooked; the major reason is that it is hard to choose a number system which can guarantee that any such specification is met. Instead, the problem is mitigated by opting to use IEEE standard double precision arithmetic so as to be ‘no worse’ than a software implementation. However, flexibility in the number representation is one of the key factors that can be exploited on reconfigurable hardware such as FPGAs, and hence ignoring this potential significantly limits the achievable performance. In order to optimise the performance of hardware reliably, we require a method that can tractably calculate tight bounds for the error or range of any variable within an algorithm. Currently only a handful of methods to calculate such bounds exist; these sacrifice either tightness or tractability, whilst simulation-based methods cannot guarantee that a given error estimate holds. This thesis presents a new method to calculate these bounds, taking into account both input ranges and finite precision effects, which we show to be, in general, tighter than existing methods; this in turn can be used to tune the hardware to the algorithm specifications. We demonstrate the use of this software to optimise hardware for various algorithms that accelerate the solution of a system of linear equations, which forms the basis of many problems in engineering and science, and show that significant performance gains can be obtained by using this new approach in conjunction with more traditional hardware optimisations.
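    For context, the simplest of the existing bounding methods alluded to above is interval arithmetic: propagate a [lo, hi] range through every operation and widen it by a rounding term that models the chosen precision. The Python sketch below is that tractable-but-loose baseline, with an assumed p-bit relative rounding model; the thesis's contribution is a tighter analysis, which is not what is shown here.

        from dataclasses import dataclass

        P = 24             # assumed significand width of the number format
        EPS = 2.0 ** (-P)  # relative rounding error per operation

        @dataclass
        class Interval:
            lo: float
            hi: float

            def _round(self):
                # Widen by a rounding term: |fl(x) - x| <= EPS * |x| for x in range.
                m = max(abs(self.lo), abs(self.hi)) * EPS
                return Interval(self.lo - m, self.hi + m)

            def __add__(self, o):
                return Interval(self.lo + o.lo, self.hi + o.hi)._round()

            def __mul__(self, o):
                p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
                return Interval(min(p), max(p))._round()

        # Guaranteed (if pessimistic) range for y = a*x + b over the input ranges:
        a, x, b = Interval(0.5, 2.0), Interval(-1.0, 1.0), Interval(0.0, 0.1)
        y = a * x + b
        print(y)  # encloses every outcome representable at P-bit precision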

    Algorithms for Solving Linear and Polynomial Systems of Equations over Finite Fields with Applications to Cryptanalysis

    This dissertation contains algorithms for solving linear and polynomial systems of equations over GF(2). The objective is to provide fast and exact tools for algebraic cryptanalysis and other applications. Accordingly, it is divided into two parts. The first part deals with polynomial systems. Chapter 2 contains a successful cryptanalysis of Keeloq, the block cipher used in nearly all luxury automobiles. The attack is more than 16,000 times faster than brute force, but queries 0.62 × 2^32 plaintexts. The polynomial systems of equations arising from that cryptanalysis were solved via SAT-solvers. Therefore, Chapter 3 introduces a new method of solving polynomial systems of equations by converting them into CNF-SAT problems and using a SAT-solver. Finally, Chapter 4 contains a discussion of how SAT-solvers work internally. The second part deals with linear systems over GF(2) and other small fields (and rings). These occur in cryptanalysis when using the XL algorithm, which converts polynomial systems into larger linear systems. We introduce a new complexity model and data structures for GF(2)-matrix operations. This is discussed in Appendix B but applies to all of Part II. Chapter 5 contains an analysis of "the Method of Four Russians" for multiplication and a variant for matrix inversion, which is faster than Gaussian elimination by a factor of log n and can be combined with Strassen-like algorithms. Chapter 6 contains an algorithm for accelerating matrix multiplication over small finite fields. It is feasible, but the memory cost is so high that it is mostly of theoretical interest. Appendix A contains some discussion of GF(2)-linear algebra and how it differs from linear algebra in R and C. Appendix C discusses algorithms faster than Strassen's algorithm, and contains proofs that matrix multiplication, matrix squaring, triangular matrix inversion, LUP-factorization, general matrix inversion and the taking of determinants are equicomplex. These proofs are already known, but are here gathered into one place in the same notation.
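    To illustrate where the log n factor comes from, here is a minimal Python sketch of the Method of Four Russians for GF(2) matrix multiplication, with rows stored as integer bitmasks (the function and parameter names are ours). For each slice of k columns of A it tabulates all 2^k linear combinations of the matching rows of B once, so every row of A then costs a single table lookup per slice; choosing k near log2(n) is what buys the log n speedup over schoolbook elimination-style multiplication.

        import random

        def m4rm_multiply(A, B, n, k=8):
            """C = A*B over GF(2); A, B are lists of n ints (bit j = column j)."""
            C = [0] * n
            for start in range(0, n, k):
                width = min(k, n - start)
                # T[s] = XOR of the rows of B selected by the bits of s,
                # built incrementally so each entry costs one XOR.
                T = [0] * (1 << width)
                for s in range(1, 1 << width):
                    low = s & -s  # lowest set bit of s
                    T[s] = T[s ^ low] ^ B[start + low.bit_length() - 1]
                for i in range(n):
                    bits = (A[i] >> start) & ((1 << width) - 1)
                    C[i] ^= T[bits]  # one lookup replaces up to k row XORs
            return C

        def naive_multiply(A, B, n):
            """Schoolbook GF(2) product: row i of C = XOR of rows B[j] with A[i][j] = 1."""
            C = [0] * n
            for i in range(n):
                for j in range(n):
                    if (A[i] >> j) & 1:
                        C[i] ^= B[j]
            return C

        n = 16
        A = [random.getrandbits(n) for _ in range(n)]
        B = [random.getrandbits(n) for _ in range(n)]
        assert m4rm_multiply(A, B, n, k=4) == naive_multiply(A, B, n)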

    Polynomial multiplication over binary finite fields : new upper bounds

    When implementing a cryptographic algorithm, efficient operations have high relevance both in hardware and in software. Since a number of operations can be performed via polynomial multiplication, the arithmetic of polynomials over finite fields plays a key role in real-life implementations, e.g., in accelerating cryptographic and cryptanalytic software, both pre- and post-quantum (Chou, Accelerating pre- and post-quantum cryptography, Ph.D. thesis, Technische Universiteit Eindhoven, 2016). One of the most interesting papers to address the problem was published in 2009: in it, Bernstein (in Halevi (ed), Advances in Cryptology – CRYPTO 2009, Santa Barbara, CA, USA, August 16–20, 2009, Proceedings, pp 317–336, Springer, Berlin, Heidelberg, 2009) suggests splitting polynomials into parts and presents a new recursive multiplication technique which is faster than those commonly used. In order to further reduce the number of bit operations (Bernstein, High-speed cryptography in characteristic 2: minimum number of bit operations for multiplication, 2009, http://binary.cr.yp.to/m.html) required to multiply n-bit polynomials, researchers have adopted different approaches. In the Circuit Minimization Work (CMT, http://www.cs.yale.edu/homes/peralta/CircuitStuff/CMT.html) a greedy heuristic has been applied to the linear straight-line sequences listed at http://binary.cr.yp.to/m.html. In 2013, D'Angella et al. (Applied Computing Conference, ACC'13, WSEAS, pp 31–37, 2013) skip some redundant operations of the multiplication algorithms described in Bernstein's CRYPTO 2009 paper. In 2015, Cenk et al. (J Cryptogr Eng 5(4):289–303, 2015) suggest new multiplication algorithms. In this paper, (a) we present a "k-1"-level recursion algorithm that can be used to reduce the effective number of bit operations required to multiply n-bit polynomials, and (b) we use algebraic extensions of F_2 combined with Lagrange interpolation to improve the asymptotic complexity.
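    For orientation, the splitting idea behind these recursive multipliers can be seen in the classic two-way (Karatsuba-style) split over GF(2)[x], where addition and subtraction are both XOR, so three half-size products replace four. The Python sketch below (polynomials as integer bitmasks) is this generic baseline, not Bernstein's refined recursion or the paper's "k-1"-level algorithm.

        import random

        def clmul(a, b):
            """Schoolbook carry-less product of two GF(2)[x] polynomials."""
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                b >>= 1
            return r

        def karatsuba(a, b, n):
            """Multiply two n-bit GF(2)[x] polynomials by two-way splitting."""
            if n <= 8:
                return clmul(a, b)
            h = n // 2
            mask = (1 << h) - 1
            a0, a1 = a & mask, a >> h
            b0, b1 = b & mask, b >> h
            lo = karatsuba(a0, b0, h)
            hi = karatsuba(a1, b1, n - h)
            # Over GF(2): (a0+a1)(b0+b1) - a0*b0 - a1*b1 with XOR as +/-.
            mid = karatsuba(a0 ^ a1, b0 ^ b1, n - h) ^ lo ^ hi
            return lo ^ (mid << h) ^ (hi << (2 * h))

        a, b = random.getrandbits(64), random.getrandbits(64)
        assert karatsuba(a, b, 64) == clmul(a, b)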

    On the Pseudo-Deterministic Query Complexity of NP Search Problems

    We study pseudo-deterministic query complexity - randomized query algorithms that are required to output the same answer with high probability on all inputs. We prove Ω(√n) lower bounds on the pseudo-deterministic complexity of a large family of search problems based on unsatisfiable random CNF instances, and also for the promise problem (FIND1) of finding a 1 in a vector populated with at least half ones. This gives an exponential separation between randomized query complexity and pseudo-deterministic complexity, which is tight in the quantum setting. As applications, we partially solve a related combinatorial coloring problem, and we separate random tree-like Resolution from its pseudo-deterministic version. In contrast to our lower bound, we show, surprisingly, that in the zero-error, average-case setting, the three notions (deterministic, randomized, pseudo-deterministic) collapse.
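    To unpack the FIND1 definitions: an ordinary randomized algorithm can find a 1 with O(1) expected queries but may return a different index on each run, whereas a pseudo-deterministic algorithm must return one canonical index with high probability. The Python sketch below contrasts the two behaviours; it illustrates only the definitions, while the abstract's Ω(√n) bound says that any pseudo-deterministic algorithm must make far more queries than the randomized one.

        import random

        def randomized_find1(x):
            """O(1) expected queries on the promise (at least half ones),
            but the returned index varies from run to run."""
            while True:
                i = random.randrange(len(x))
                if x[i] == 1:
                    return i

        def pseudo_deterministic_find1(x):
            """Canonical answer (the first 1), so every run agrees;
            this naive scan pays up to n queries for that consistency."""
            for i, bit in enumerate(x):
                if bit == 1:
                    return i

        x = [0, 1] * 8  # promise holds: half the entries are ones
        assert pseudo_deterministic_find1(x) == 1
        print(randomized_find1(x))  # some odd index; may differ across runs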