
    Cryptanalytic Applications of the Polynomial Method for Solving Multivariate Equation Systems over GF(2)

    At SODA 2017 Lokshtanov et al. presented the first worst-case algorithms with exponential speedup over exhaustive search for solving polynomial equation systems of degree $d$ in $n$ variables over finite fields. These algorithms were based on the polynomial method in circuit complexity, which is a technique for proving circuit lower bounds that has recently been applied in algorithm design. Subsequent works further improved the asymptotic complexity of polynomial method-based algorithms for solving equations over the field $\mathbb{F}_2$. However, the asymptotic complexity formulas of these algorithms hide significant low-order terms, and hence they outperform exhaustive search only for very large values of $n$. In this paper, we devise a concretely efficient polynomial method-based algorithm for solving multivariate equation systems over $\mathbb{F}_2$. We analyze our algorithm's performance for solving random equation systems, and bound its complexity by about $n^2 \cdot 2^{0.815n}$ bit operations for $d = 2$ and $n^2 \cdot 2^{(1 - 1/(2.7d))n}$ for any $d \geq 2$. We apply our algorithm in cryptanalysis of recently proposed instances of the Picnic signature scheme (an alternate third-round candidate in NIST's post-quantum standardization project) that are based on the security of the LowMC block cipher. Consequently, we show that 2 out of 3 new instances do not achieve their claimed security level. As a secondary application, we also improve the best-known preimage attacks on several round-reduced variants of the Keccak hash function. Our algorithm combines various techniques used in previous polynomial method-based algorithms with new optimizations, some of which exploit randomness assumptions about the system of equations. In its cryptanalytic application to Picnic, we demonstrate how to further optimize the algorithm for solving structured equation systems that are constructed from specific cryptosystems.
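
    The paper's algorithm itself has several stages; as a point of reference only, the sketch below (Python, purely illustrative and not from the paper) shows the exhaustive-search baseline for a random quadratic system over GF(2) that the polynomial method is designed to outperform. All function names and the toy parameters are assumptions for illustration.

    import itertools, random

    def random_quadratic_system(n, m, seed=0):
        """Return m random degree-2 polynomials over GF(2) in n variables,
        each represented as (quadratic_terms, linear_terms, constant)."""
        rng = random.Random(seed)
        system = []
        for _ in range(m):
            quad = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.getrandbits(1)]
            lin = [i for i in range(n) if rng.getrandbits(1)]
            system.append((quad, lin, rng.getrandbits(1)))
        return system

    def evaluate(poly, x):
        """Evaluate one polynomial at a 0/1 point; all arithmetic is mod 2 (XOR/AND)."""
        quad, lin, const = poly
        val = const
        for i, j in quad:
            val ^= x[i] & x[j]
        for i in lin:
            val ^= x[i]
        return val

    def solve_by_exhaustive_search(system, n):
        """2^n evaluations; the paper bounds its algorithm by about n^2 * 2^(0.815n) for d = 2."""
        for x in itertools.product((0, 1), repeat=n):
            if all(evaluate(p, x) == 0 for p in system):
                return x
        return None  # a random system may simply have no common zero

    if __name__ == "__main__":
        n = 12
        eqs = random_quadratic_system(n, m=n)
        print(solve_by_exhaustive_search(eqs, n))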

    New Planar P-time Computable Six-Vertex Models and a Complete Complexity Classification

    We discover new P-time computable six-vertex models on planar graphs beyond Kasteleyn's algorithm for counting planar perfect matchings. We further prove that there are no more: together, they exhaust all P-time computable six-vertex models on planar graphs, assuming #P is not P. This leads to the following exact complexity classification: for every parameter setting in $\mathbb{C}$ for the six-vertex model, the partition function is either (1) computable in P-time for every graph, or (2) #P-hard for general graphs but computable in P-time for planar graphs, or (3) #P-hard even for planar graphs. The classification has an explicit criterion. The new P-time cases in (2) provably cannot be subsumed by Kasteleyn's algorithm. They are obtained by a non-local connection to #CSP, defined in terms of a "loop space". This is the first substantive advance toward a planar Holant classification with not necessarily symmetric constraints. We introduce the Möbius transformation on $\mathbb{C}$ as a powerful new tool in hardness proofs for counting problems. Comment: 61 pages, 16 figures. An extended abstract appears in SODA 202
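
    As a concrete reminder of the object being classified, the sketch below (illustrative only; it is not the paper's algorithm or its dichotomy criterion) evaluates the six-vertex partition function by brute force on a tiny 4-regular multigraph, with all six vertex weights set to 1 so that it simply counts Eulerian orientations. The graph and the function name are assumptions made for illustration.

    import itertools

    def six_vertex_partition_all_weights_one(edges, num_vertices):
        """edges: list of (u, v) pairs; every vertex must have degree 4.
        An orientation assigns each edge a direction: 0 means u -> v, 1 means v -> u.
        With all six vertex weights equal to 1, the partition function counts the
        orientations obeying the ice rule (in-degree = out-degree = 2 everywhere)."""
        total = 0
        for orientation in itertools.product((0, 1), repeat=len(edges)):
            indeg = [0] * num_vertices
            for (u, v), d in zip(edges, orientation):
                indeg[v if d == 0 else u] += 1
            if all(x == 2 for x in indeg):
                total += 1
        return total

    if __name__ == "__main__":
        # Two vertices joined by four parallel edges: a minimal 4-regular multigraph.
        edges = [(0, 1)] * 4
        print(six_vertex_partition_all_weights_one(edges, 2))  # C(4, 2) = 6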

    CNF Satisfiability in a Subspace and Related Problems

    We introduce the problem of finding a satisfying assignment to a CNF formula that must further belong to a prescribed input subspace. Equivalent formulations of the problem include finding a point outside a union of subspaces (the Union-of-Subspace Avoidance (USA) problem), and finding a common zero of a system of polynomials over 𝔽₂, each of which is a product of affine forms. We focus on the case of k-CNF formulas (the k-Sub-Sat problem). Clearly, k-Sub-Sat is no easier than k-SAT, and might be harder. Indeed, via simple reductions we show that 2-Sub-Sat is NP-hard, and W[1]-hard when parameterized by the co-dimension of the subspace. We also prove that the optimization version Max-2-Sub-Sat is NP-hard to approximate better than the trivial 3/4 ratio, even on satisfiable instances. On the algorithmic front, we investigate fast exponential algorithms which give non-trivial savings over brute-force algorithms. We give a simple branching algorithm with running time (1.5)^r for 2-Sub-Sat, where r is the subspace dimension, as well as an O^*(1.4312^n) time algorithm, where n is the number of variables. Turning to k-Sub-Sat for k ≥ 3, while known algorithms for solving a system of degree-k polynomial equations already imply a solution with running time ≈ 2^{r(1-1/(2k))}, we explore a more combinatorial approach. Based on an analysis of critical variables (a key notion underlying the randomized k-SAT algorithm of Paturi, Pudlak, and Zane), we give an algorithm with running time ≈ {n choose ≤ t} 2^{n-n/k}, where n is the number of variables and t is the co-dimension of the subspace. This improves upon the running time of the polynomial equations approach for small co-dimension. Our combinatorial approach also achieves polynomial space, in contrast to the algebraic approach that uses exponential space. We also give a PPZ-style algorithm for k-Sub-Sat with running time ≈ 2^{n-n/(2k)}. This algorithm is in fact oblivious to the structure of the subspace, and extends when the subspace-membership constraint is replaced by any constraint for which partial satisfying assignments can be efficiently completed to a full satisfying assignment. Finally, for systems of O(n) polynomial equations in n variables over 𝔽₂, we give a fast exponential algorithm when each polynomial has bounded-degree irreducible factors (but can otherwise have large degree), using a degree reduction trick.
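
    To make the problem statement concrete, the following sketch (illustrative only; it is the 2^r brute force over the subspace, not the paper's branching or PPZ-style algorithms) enumerates the points of an r-dimensional affine subspace of GF(2)^n and checks a CNF on each point. All names and the toy instance are assumptions.

    import itertools

    def subspace_points(basis, offset):
        """Yield every point of offset + span(basis) over GF(2): 2^r points for r basis vectors."""
        for coeffs in itertools.product((0, 1), repeat=len(basis)):
            point = list(offset)
            for c, b in zip(coeffs, basis):
                if c:
                    point = [p ^ v for p, v in zip(point, b)]
            yield tuple(point)

    def satisfies_cnf(cnf, point):
        """cnf: list of clauses; a literal is +i (x_i true) or -i (x_i false), 1-indexed."""
        return all(
            any((point[abs(l) - 1] == 1) == (l > 0) for l in clause)
            for clause in cnf
        )

    def sub_sat_bruteforce(cnf, basis, offset):
        """2^r brute force over the subspace; the paper's algorithms improve on this baseline."""
        for point in subspace_points(basis, offset):
            if satisfies_cnf(cnf, point):
                return point
        return None

    if __name__ == "__main__":
        # (x1 or x2) and (not x1 or x3), restricted to the affine subspace
        # {offset + a*b1 + b*b2 : a, b in GF(2)} inside GF(2)^3.
        cnf = [[1, 2], [-1, 3]]
        basis = [[1, 0, 1], [0, 1, 0]]
        offset = [0, 0, 0]
        print(sub_sat_bruteforce(cnf, basis, offset))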

    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly those of high performance, and the related decoding algorithms, especially those of low computational complexity. The work is composed of different pieces, but the main components are developed within the following two main themes. Ideas of message passing are applied to recover the erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, especially for recovering the erasures left unsolved by the recovery algorithm. A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with a reduced computational complexity. A further study on the marginal number of correctable erasures by the In-place algorithm determines a lower bound on the average number of correctable erasures. Following the spirit of searching for the most likely codeword based on the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach the ML performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose the product packetisation structure to reconcile the computational complexity of the In-place algorithm. Combined with the proposed product packetisation structure, the computational complexity is less than the quadratic complexity bound. We then extend this to the Rayleigh fading channel to handle both errors and erasures. By concatenating an outer code, such as a BCH code, the product-packetised RS codes with the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
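
    The recovery step described above is, at its core, the classical peeling/belief-propagation erasure decoder; the sketch below is a minimal illustration of that core idea only, not the thesis's matrix representation or the In-place algorithm. The parity checks and received word are made-up examples.

    def peel_erasures(H, received):
        """H: list of parity checks, each a list of bit positions whose XOR must be 0.
        received: list of 0/1/None, where None marks an erasure. Returns the (partially)
        recovered word; any remaining None means the peeling decoder got stuck."""
        word = list(received)
        progress = True
        while progress:
            progress = False
            for check in H:
                erased = [i for i in check if word[i] is None]
                if len(erased) == 1:
                    # The single unknown bit equals the XOR of the known bits in this check.
                    known_xor = 0
                    for i in check:
                        if word[i] is not None:
                            known_xor ^= word[i]
                    word[erased[0]] = known_xor
                    progress = True
        return word

    if __name__ == "__main__":
        # Three illustrative parity checks over bit positions 0..6 and a word with two erasures.
        H = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 2, 3, 6]]
        received = [1, None, 0, 1, None, 0, 0]
        print(peel_erasures(H, received))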

    Algebraic Attacks on RAIN and AIM Using Equivalent Representations

    Designing novel symmetric-key primitives for advanced protocols like secure multiparty computation (MPC), fully homomorphic encryption (FHE) and zero-knowledge proof systems (ZK) has been an important research topic in recent years. Many such existing primitives adopt quite different design strategies from conventional block ciphers. Notable features include that many of these ciphers are defined over a large finite field, and that a power map is commonly used to construct the nonlinear component due to its efficiency in these applications as well as its strong resistance against differential and linear cryptanalysis. In this paper, we target the MPC-friendly ciphers AIM and RAIN used for the post-quantum signature schemes AIMer (CCS 2023 and NIST PQC Round 1 Additional Signatures) and Rainier (CCS 2022), respectively. Specifically, we can find equivalent representations of 2-round RAIN and full-round AIM, respectively, which make them vulnerable to either the polynomial method, the crossbred algorithm, or the fast exhaustive search attack. Consequently, we can break 2-round RAIN with the 128/192/256-bit key in only 2^111/2^170/2^225 bit operations. For full-round AIM with the 128/192/256-bit key, we could break them in 2^136.2/2^200.7/2^265 bit operations, which are equivalent to about 2^115/2^178/2^241 calls of the underlying primitives. In particular, our analysis indicates that AIM does not reach the security levels required by the NIST competition.
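
    For readers unfamiliar with the design style mentioned above, the sketch below shows, in a heavily simplified and assumed form, what a power-map nonlinear layer over a binary field looks like. It uses GF(2^8) with the AES reduction polynomial only as a small stand-in; RAIN and AIM operate over much larger fields, and nothing here reflects their actual round structure.

    def gf256_mul(a, b, modulus=0x11B):
        """Carry-less multiplication in GF(2^8), reduced modulo x^8 + x^4 + x^3 + x + 1."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            if a & 0x100:
                a ^= modulus
            b >>= 1
        return result

    def gf256_pow(x, e):
        """Square-and-multiply exponentiation in GF(2^8): the shape of a power-map S-box."""
        result = 1
        base = x
        while e:
            if e & 1:
                result = gf256_mul(result, base)
            base = gf256_mul(base, base)
            e >>= 1
        return result

    if __name__ == "__main__":
        # A toy power map x -> x^3 applied to a few field elements (illustration only).
        print([gf256_pow(x, 3) for x in (0x01, 0x02, 0x53)])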

    A STUDY OF LINEAR ERROR CORRECTING CODES

    Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. The two methods generate, in addition to some new cyclic iteratively decodable codes, the well-known Euclidean and projective geometry codes. Their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes, for short block lengths, converge considerably well under iterative decoding. It is also shown that for some of these codes, maximum-likelihood performance may be achieved by a modified belief propagation decoder which uses a different subset of the codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distance than the previously considered best known linear codes have been found. It is shown that, by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance and the number of codewords of a given Hamming weight of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed and, in conjunction with this, it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived and a novel, CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method, it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental redundancy communications system, and it is shown that sequences of good error correction codes, suitable for use in incremental redundancy communications systems, may be obtained using Constructions X and XX. Examples are given and their performances are presented in comparison to conventional CRC schemes.
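
    As a baseline for the minimum-distance computations discussed above, the sketch below (a naive enumeration, not the thesis's revolving-door, multi-threaded method) computes the minimum Hamming distance of a small binary linear code directly from a generator matrix; the [7,4] Hamming code is used only as a familiar example.

    import itertools

    def min_hamming_distance(G):
        """G: list of k generator rows, each a list of 0/1 of length n.
        Enumerates all 2^k - 1 nonzero codewords and returns the smallest weight,
        which for a linear code equals the minimum Hamming distance."""
        k = len(G)
        best = None
        for msg in itertools.product((0, 1), repeat=k):
            if not any(msg):
                continue
            codeword = [0] * len(G[0])
            for bit, row in zip(msg, G):
                if bit:
                    codeword = [c ^ r for c, r in zip(codeword, row)]
            weight = sum(codeword)
            if best is None or weight < best:
                best = weight
        return best

    if __name__ == "__main__":
        # Generator matrix of the [7,4] Hamming code (minimum distance 3).
        G = [
            [1, 0, 0, 0, 1, 1, 0],
            [0, 1, 0, 0, 1, 0, 1],
            [0, 0, 1, 0, 0, 1, 1],
            [0, 0, 0, 1, 1, 1, 1],
        ]
        print(min_hamming_distance(G))  # expected: 3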