9 research outputs found

    Reducing the Number of Homogeneous Linear Equations in Finding Annihilators

    Given a Boolean function $f$ on $n$ variables, we find a reduced set of homogeneous linear equations by solving which one can decide whether there exist annihilators at degree $d$ or not. Using our method the size of the associated matrix becomes $\nu_f \times \left(\sum_{i=0}^{d} \binom{n}{i} - \mu_f\right)$, where $\nu_f = |\{x \mid wt(x) > d, f(x) = 1\}|$, $\mu_f = |\{x \mid wt(x) \leq d, f(x) = 1\}|$, and the time required to construct the matrix is of the same order as its size. This is a preprocessing step before the exact solution strategy (deciding on the existence of annihilators), which requires solving the set of homogeneous linear equations (essentially computing the rank), and it becomes more effective when the number of variables and the number of equations are minimized. Since a linear transformation on the input variables of the Boolean function keeps the degree of the annihilators invariant, our preprocessing step can be applied more efficiently if one can find an affine transformation of $f(x)$ yielding $h(x) = f(Bx + b)$ such that $\mu_h = |\{x \mid h(x) = 1, wt(x) \leq d\}|$ is maximized (and in turn $\nu_h$ is minimized). We present an efficient heuristic towards this. Our study also shows for which kinds of Boolean functions an asymptotic reduction in the size of the matrix is possible and when the reduction is only by a constant factor.
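
    As a concrete illustration of the quantities involved, the following minimal Python sketch (an illustration of the formulas above, not code from the paper; the function name and truth-table interface are ours) computes $\mu_f$, $\nu_f$ and the dimensions of the reduced matrix for a given degree bound $d$.

```python
from math import comb

def reduced_matrix_size(truth_table, n, d):
    """Compute mu_f, nu_f and the dimensions of the reduced matrix
    nu_f x (sum_{i=0}^{d} C(n, i) - mu_f) described in the abstract.

    truth_table: list of 2**n values in {0, 1}, indexed by the integer
    whose binary digits give the input point x.
    """
    mu_f = 0  # inputs of weight <= d on which f evaluates to 1
    nu_f = 0  # inputs of weight >  d on which f evaluates to 1
    for x in range(2 ** n):
        if truth_table[x] == 1:
            if bin(x).count("1") <= d:
                mu_f += 1
            else:
                nu_f += 1
    cols = sum(comb(n, i) for i in range(d + 1)) - mu_f
    return mu_f, nu_f, (nu_f, cols)

# Toy example: the majority function on 3 variables, degree bound d = 1.
n, d = 3, 1
f = [1 if bin(x).count("1") >= 2 else 0 for x in range(2 ** n)]
print(reduced_matrix_size(f, n, d))  # -> (0, 4, (4, 4))
```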

    Efficient probabilistic algorithm for estimating the algebraic properties of Boolean functions for large nn

    Although several methods for estimating the resistance of a random Boolean function against (fast) algebraic attacks have been proposed, these methods are usually infeasible in practice for relatively large numbers of input variables $n$ (for instance $n \geq 30$) due to their computational complexity. Efficiently estimating the resistance of a Boolean function (with a relatively large number of input variables $n$) against (fast) algebraic attacks appears to be a rather difficult task. In this paper, the concept of partial linear relations decomposition is introduced, which decomposes any given nonlinear Boolean function into many linear (affine) subfunctions by using disjoint sets of input variables. Based on this result, a general probabilistic decomposition algorithm for nonlinear Boolean functions is presented, which gives a new framework for estimating the resistance of a Boolean function against (fast) algebraic attacks. It is shown that our new probabilistic method gives very tight estimates (lower and upper bounds) and requires only about $O(n^2 2^n)$ operations for a random Boolean function with $n$ variables, thus having much lower time complexity than previously known algorithms.
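
    The sketch below is only meant to illustrate the underlying idea of restricting a Boolean function on disjoint sets of input variables and checking which restrictions are affine; it is not the paper's estimation algorithm, and the function names and the toy example are ours.

```python
def anf(tt):
    """Moebius transform: truth table -> ANF coefficient vector over GF(2)."""
    tt = list(tt)
    m = len(tt).bit_length() - 1
    for i in range(m):
        for x in range(len(tt)):
            if x & (1 << i):
                tt[x] ^= tt[x ^ (1 << i)]
    return tt

def is_affine(tt):
    """A Boolean function is affine iff its ANF has no monomial of degree >= 2."""
    return all(c == 0 for x, c in enumerate(anf(tt)) if bin(x).count("1") >= 2)

def affine_restrictions(f_tt, n, fixed_vars):
    """Fix the variables in fixed_vars in every possible way and report how many
    of the resulting subfunctions (on the remaining variables) are affine."""
    free_vars = [i for i in range(n) if i not in fixed_vars]
    affine = 0
    for a in range(2 ** len(fixed_vars)):
        sub = []
        for y in range(2 ** len(free_vars)):
            x = 0
            for bit, var in enumerate(fixed_vars):
                x |= ((a >> bit) & 1) << var
            for bit, var in enumerate(free_vars):
                x |= ((y >> bit) & 1) << var
            sub.append(f_tt[x])
        affine += is_affine(sub)
    return affine, 2 ** len(fixed_vars)

# Toy example: f(x0, x1, x2, x3) = x0*x1 ^ x2 ^ x3.
# Fixing {x0, x1} makes every restriction affine in the remaining variables.
n = 4
f = [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) ^ ((x >> 3) & 1) for x in range(2 ** n)]
print(affine_restrictions(f, n, [0, 1]))  # -> (4, 4)
```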

    D.STVL.7 - Algebraic cryptanalysis of symmetric primitives

    The recent development of algebraic attacks can be considered an important breakthrough in the analysis of symmetric primitives; these are powerful techniques that apply to both block and stream ciphers (and potentially hash functions). The basic principle of these techniques goes back to Shannon's work: they consist in expressing the whole cryptographic algorithm as a large system of multivariate algebraic equations (typically over $\mathbb{F}_2$), which can be solved to recover the secret key. Efficient algorithms for solving such algebraic systems are therefore the essential ingredients of algebraic attacks. Algebraic cryptanalysis against symmetric primitives has recently received much attention from the cryptographic community, particularly after it was proposed against some LFSR-based stream ciphers and against the AES and Serpent block ciphers. This is currently a very active area of research. In this report we discuss the basic principles of algebraic cryptanalysis of stream ciphers and block ciphers, and review the latest developments in the field. We give an overview of the construction of such attacks against both types of primitives, and recall the main algorithms for solving algebraic systems. Finally, we discuss future research directions.
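
    To make the basic principle concrete, here is a toy Python sketch (a deliberately small illustration, not an attack from the report): a few quadratic equations over $\mathbb{F}_2$ in three secret key bits are linearized by treating every monomial as a fresh unknown and then solved by Gaussian elimination, the simplest instance of the system-solving step that algebraic attacks rely on.

```python
import random

# Monomials used for linearization: k0, k1, k2, k0*k1, k0*k2, k1*k2.
MONOMIALS = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]

def eval_monomial(mono, key):
    v = 1
    for i in mono:
        v &= key[i]
    return v

def random_equation(key):
    """A random quadratic equation: coefficients over the monomials plus a constant,
    with the 'observed' output bit obtained by evaluating it at the secret key."""
    coeffs = [random.randint(0, 1) for _ in MONOMIALS]
    const = random.randint(0, 1)
    rhs = const
    for c, mono in zip(coeffs, MONOMIALS):
        rhs ^= c & eval_monomial(mono, key)
    return coeffs + [rhs ^ const]  # move the constant to the right-hand side

def solve_gf2(rows, ncols):
    """Gaussian elimination over GF(2) on an augmented matrix (last column = RHS)."""
    rank = 0
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rows

secret = [1, 0, 1]                                  # the key to be recovered
system = [random_equation(secret) for _ in range(12)]
reduced = solve_gf2(system, len(MONOMIALS))
solution = {}
for row in reduced:
    lead = next((c for c in range(len(MONOMIALS)) if row[c]), None)
    if lead is not None:
        solution[MONOMIALS[lead]] = row[-1]
# Recovers [1, 0, 1] whenever the linearized system has full rank (very likely here).
print([solution.get((i,)) for i in range(3)])
```

    Real attacks replace this naive linearization with more sophisticated solvers, which correspond to the solving algorithms recalled in the report.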

    On the Algebraic Immunity - Resiliency trade-off, implications for Goldreich's Pseudorandom Generator

    Goldreich's pseudorandom generator is a well-known building block for many theoretical cryptographic constructions, from multi-party computation to indistinguishability obfuscation. Its unique efficiency comes from the use of random local functions: each bit of the output is computed by applying some fixed public $n$-variable Boolean function $f$ to a random public size-$n$ tuple of distinct input bits. The characteristics that a Boolean function $f$ must have to ensure pseudorandomness are a puzzling issue. It has been studied in several works, particularly by Applebaum and Lovett (STOC 2016), who showed that resiliency and algebraic immunity are key parameters for this purpose. In this paper, we propose the first study of Boolean functions that reach both maximal algebraic immunity and high resiliency. 1) We assess the possible consequences of the asymptotic existence of such optimal functions. We show how they allow one to build functions reaching all possible algebraic immunity-resiliency trade-offs (respecting the algebraic immunity and Siegenthaler bounds). We provide a new bound on the minimal number of variables $n$, and thus on the minimal locality, necessary to ensure a secure Goldreich's pseudorandom generator. Our results come with a granularity level depending on the strength of our assumptions, from none to the conjectured asymptotic existence of optimal functions. 2) We extensively analyze the possible existence and the properties of such optimal functions. Our results show two different trends. On the one hand, we were able to show some impossibility results concerning existing families of Boolean functions that are known to be optimal with respect to their algebraic immunity, starting with the promising XOR-MAJ functions. We show that they do not reach optimality and could be beaten by optimal functions if our conjecture is verified. On the other hand, we prove the existence of optimal functions in a low number of variables by experimentally exhibiting some of them up to 12 variables. This directly provides better candidates for Goldreich's pseudorandom generator than the existing XOR-MAJ candidates for polynomial stretches from 2 to 6.
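
    For reference, a minimal Python sketch of the construction under study (the XOR-MAJ split, subset size and seed length below are illustrative choices, not the paper's candidates): every output bit is obtained by applying one fixed local predicate to a random public tuple of seed bits.

```python
import random

def xor_maj(bits, k):
    """XOR of the first k bits, XORed with the majority of the remaining bits."""
    xor_part = 0
    for b in bits[:k]:
        xor_part ^= b
    maj_part = 1 if 2 * sum(bits[k:]) > len(bits) - k else 0
    return xor_part ^ maj_part

def goldreich_prg(seed, subsets, k):
    """Goldreich-style PRG: output bit i = predicate(seed restricted to subsets[i])."""
    return [xor_maj([seed[j] for j in subset], k) for subset in subsets]

# Illustrative parameters: seed length 32, locality n = 5 (XOR on 2 bits, MAJ on 3),
# stretched to m = 64 output bits; the subsets are public, the seed is secret.
random.seed(0)
N, n, k, m = 32, 5, 2, 64
subsets = [random.sample(range(N), n) for _ in range(m)]
seed = [random.randint(0, 1) for _ in range(N)]
print(goldreich_prg(seed, subsets, k))
```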

    D.STVL.9 - Ongoing Research Areas in Symmetric Cryptography

    This report gives a brief summary of some of the research trends in symmetric cryptography at the time of writing (2008). The following aspects of symmetric cryptography are investigated in this report:
    • the status of work with regard to different types of symmetric algorithms, including block ciphers, stream ciphers, hash functions and MAC algorithms (Section 1);
    • the algebraic attacks on symmetric primitives (Section 2);
    • the design criteria for symmetric ciphers (Section 3);
    • the provable properties of symmetric primitives (Section 4);
    • the major industrial needs in the area of symmetric cryptography (Section 5).

    Capacity-Achieving Coding Mechanisms: Spatial Coupling and Group Symmetries

    The broad theme of this work is in constructing optimal transmission mechanisms for a wide variety of communication systems. In particular, this dissertation provides a proof of threshold saturation for spatially-coupled codes, low-complexity capacity-achieving coding schemes for side-information problems, a proof that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels, and a mathematical framework to design delay-sensitive communication systems. Spatially-coupled codes are a class of codes on graphs that are shown to achieve capacity universally over binary memoryless symmetric (BMS) channels under belief-propagation decoding. The underlying phenomenon behind spatial coupling, known as “threshold saturation via spatial coupling”, turns out to be general, and this technique has been applied to a wide variety of systems. In this work, a proof of the threshold saturation phenomenon is provided for irregular low-density parity-check (LDPC) and low-density generator-matrix (LDGM) ensembles on BMS channels. This proof is far simpler than published alternative proofs and it remains the only technique that handles irregular and LDGM codes. Also, low-complexity capacity-achieving codes are constructed for three coding problems via spatial coupling: 1) rate distortion with side-information, 2) channel coding with side-information, and 3) the write-once memory system. All these schemes are based on spatially coupling compound LDGM/LDPC ensembles. Reed-Muller and Bose-Chaudhuri-Hocquenghem (BCH) codes are well-known algebraic codes introduced more than 50 years ago. While these codes have been studied extensively in the literature, it was not known whether they achieve capacity. This work introduces a technique to show that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels under maximum a posteriori (MAP) decoding. Instead of relying on the weight enumerators or other precise details of these codes, this technique requires that these codes have highly symmetric permutation groups. In fact, any sequence of linear codes with increasing blocklengths, whose rates converge to a number between 0 and 1 and whose permutation groups are doubly transitive, achieves capacity on erasure channels under bit-MAP decoding. This provides a rare example in information theory where symmetry alone is sufficient to achieve capacity. While the channel capacity provides a useful benchmark for practical design, communication systems of the day also demand low latency and good performance on other link-layer metrics. Such delay-sensitive communication systems are studied in this work, where a mathematical framework is developed to provide insights into the optimal design of these systems.
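
    As a small illustration of the erasure-channel setting discussed above (a toy experiment, not part of the dissertation): for any linear code used on a binary erasure channel, an erased position is recoverable under MAP decoding exactly when its generator-matrix column lies in the span of the columns at the unerased positions. The Python sketch below uses this criterion to estimate the bit-erasure rate of the small Reed-Muller code RM(1, 4) by Monte Carlo.

```python
import random

def rm1_generator_columns(m):
    """Columns of the RM(1, m) generator matrix, packed into integers:
    bit 0 corresponds to the all-ones row, bits 1..m to the coordinate rows."""
    return [1 | (point << 1) for point in range(2 ** m)]

def gf2_rank(vectors):
    """Rank over GF(2) of a list of integers viewed as bit-vectors."""
    basis = {}
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def bit_erasure_rate(m, eps, trials=2000):
    """Monte Carlo estimate of the probability that an erased code bit of
    RM(1, m) cannot be recovered by MAP decoding on a BEC(eps)."""
    cols = rm1_generator_columns(m)
    unrecovered = total = 0
    for _ in range(trials):
        erased = {i for i in range(len(cols)) if random.random() < eps}
        known = [cols[i] for i in range(len(cols)) if i not in erased]
        base = gf2_rank(known)
        for j in erased:
            total += 1
            # Recoverable iff column j already lies in the span of the known columns.
            if gf2_rank(known + [cols[j]]) > base:
                unrecovered += 1
    return unrecovered / max(total, 1)

# RM(1, 4) has rate 5/16, so the relevant BEC limit is erasure probability
# 1 - 5/16 = 11/16: well below it most erased bits are recoverable,
# well above it most are not.
random.seed(1)
print(bit_erasure_rate(4, 0.3), bit_erasure_rate(4, 0.9))
```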

    Computing the Algebraic Immunity Efficiently

    No full text