Inverting Cryptographic Hash Functions via Cube-and-Conquer
MD4 and MD5 are seminal cryptographic hash functions proposed in the early 1990s.
MD4 consists of 48 steps and produces a 128-bit hash given a message of
arbitrary finite size. MD5 is a more secure 64-step extension of MD4. Both MD4
and MD5 are vulnerable to practical collision attacks, yet it is still not
realistic to invert them, i.e. to find a message given a hash. In 2007, the
39-step version of MD4 was inverted by reducing it to SAT and applying a CDCL solver along with so-called Dobbertin's constraints. As for MD5, in 2012
its 28-step version was inverted via a CDCL solver for one specified hash
without adding any additional constraints. In this study, Cube-and-Conquer (a
combination of CDCL and lookahead) is applied to invert step-reduced versions
of MD4 and MD5. For this purpose, two algorithms are proposed. The first one
generates inversion problems for MD4 by gradually modifying Dobbertin's
constraints. The second algorithm runs the cubing phase of Cube-and-Conquer with different cutoff thresholds to find the one that minimizes the estimated runtime of the conquer phase. This algorithm operates in two modes: (i)
estimating the hardness of a given propositional Boolean formula; (ii)
incomplete SAT-solving of a given satisfiable propositional Boolean formula.
While the first algorithm is focused on inverting step-reduced MD4, the second
one is not area-specific and so is applicable to a variety of classes of hard
SAT instances. In this study, 40-, 41-, 42-, and 43-step MD4 are inverted for
the first time via the first algorithm and the estimating mode of the second
algorithm. 28-step MD5 is inverted for four hashes via the incomplete SAT-solving mode of the second algorithm; for three of these hashes, this is done for the first time.
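A minimal sketch of the cutoff-selection idea behind the second algorithm (Python; not the authors' code). One plausible way to estimate the conquer-phase runtime is to solve a random sample of cubes and extrapolate; the solver binary "cadical", the helper cubes_for_cutoff, and the file name md4_43.cnf are assumptions for illustration only.

    import random
    import statistics
    import subprocess
    import time

    def read_dimacs(path):
        """Return (num_vars, clauses) from a DIMACS CNF file; clauses are lists of literal strings."""
        clauses, num_vars = [], 0
        for line in open(path):
            if not line.strip() or line.startswith(("c", "p")):
                if line.startswith("p"):
                    num_vars = int(line.split()[2])
                continue
            clauses.append(line.split()[:-1])  # drop the trailing 0
        return num_vars, clauses

    def estimate_conquer_time(cnf_path, cubes, cdcl_cmd, sample_size=20, seed=0):
        """Estimate the conquer-phase runtime for a given cube set: solve a uniform
        random sample of cubes with a CDCL solver and extrapolate the mean runtime."""
        if not cubes:
            return 0.0
        num_vars, clauses = read_dimacs(cnf_path)
        sample = random.Random(seed).sample(cubes, min(sample_size, len(cubes)))
        runtimes = []
        for cube in sample:
            with open("cube.cnf", "w") as out:  # original formula plus the cube as unit clauses
                out.write(f"p cnf {num_vars} {len(clauses) + len(cube)}\n")
                out.writelines(" ".join(cl) + " 0\n" for cl in clauses)
                out.writelines(f"{lit} 0\n" for lit in cube)
            start = time.perf_counter()
            subprocess.run(cdcl_cmd + ["cube.cnf"], capture_output=True)
            runtimes.append(time.perf_counter() - start)
        return len(cubes) * statistics.mean(runtimes)

    # Hypothetical usage: cubes_for_cutoff(n) is assumed to call a lookahead cuber
    # with cutoff threshold n and return its cubes as lists of literal strings.
    # best = min((500, 1000, 2000, 4000),
    #            key=lambda n: estimate_conquer_time("md4_43.cnf", cubes_for_cutoff(n), ["cadical"]))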
Data Structures Meet Cryptography: 3SUM with Preprocessing
This paper shows several connections between data structure problems and
cryptography against preprocessing attacks. Our results span data structure
upper bounds, cryptographic applications, and data structure lower bounds, as
summarized next.
First, we apply Fiat--Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space $S$ and (online) time $T$ for a preprocessing version of the $N$-input 3SUM problem where $S^3 \cdot T = \widetilde{O}(N^6)$.
This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure that solves this problem for $S = N^{2-\delta}$ and $T = N^{1-\delta}$ for any constant $\delta > 0$.
Secondly, we show equivalence between lower bounds for a broad class of
(static) data structure problems and one-way functions in the random oracle
model that resist a very strong form of preprocessing attack. Concretely, given
a random function $F \colon [N] \to [N]$ (accessed as an oracle) we show how to compile it into a function $G^F \colon [N^2] \to [N^2]$ which resists $S$-bit preprocessing attacks that run in query time $T$ where $ST = O(N^{2-\varepsilon})$ (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that $F$ itself can be more easily inverted, say with $N^{2/3}$-bit preprocessing in $N^{2/3}$ time. We also show that much stronger lower bounds follow from the hardness of
kSUM. Our results can be equivalently interpreted as security against
adversaries that are very non-uniform, or have large auxiliary input, or as
security in the face of a powerfully backdoored random oracle.
Thirdly, we give non-adaptive lower bounds for 3SUM and a range of geometric
problems which match the best known lower bounds for static data structure
problems.
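To make the preprocessing (indexing) version of 3SUM concrete, here is a trivial baseline sketch in Python (illustrative only, not the paper's Fiat--Naor-based construction): precompute all pairwise sums so that each query is a single lookup, i.e. roughly quadratic space and constant query time, whereas the paper's contribution is a nontrivial trade-off using far less space.

    class ThreeSumIndex:
        """Trivial baseline for 3SUM with preprocessing: store every pairwise sum
        of the input (not necessarily distinct elements), so the query
        "is z the sum of two input elements?" is one hash lookup
        (about N^2 space, O(1) query time)."""

        def __init__(self, elements):
            self.pair_sums = {a + b for a in elements for b in elements}

        def query(self, z):
            return z in self.pair_sums

    # Example: ThreeSumIndex([3, 7, 12, 41]).query(19) returns True, since 7 + 12 = 19.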
An Atypical Survey of Typical-Case Heuristic Algorithms
Heuristic approaches often do so well that they seem to pretty much always
give the right answer. How close can heuristic algorithms get to always giving
the right answer, without inducing seismic complexity-theoretic consequences?
This article first discusses how a series of results by Berman, Buhrman,
Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the
early 1970s through the early 1990s, explicitly or implicitly limited how well
heuristic algorithms can do on NP-hard problems. In particular, many desirable
levels of heuristic success cannot be obtained unless severe, highly unlikely
complexity class collapses occur. Second, we survey work initiated by Goldreich
and Wigderson, who showed how under plausible assumptions deterministic
heuristics for randomized computation can achieve a very high frequency of
correctness. Finally, we consider formal ways in which theory can help explain
the effectiveness of heuristics that solve NP-hard problems in practice.Comment: This article is currently scheduled to appear in the December 2012
issue of SIGACT New
On weak rotors, Latin squares, linear algebraic representations, invariant differentials and cryptanalysis of Enigma
From the 1920s until today it has been assumed that the rotors in Enigma cipher machines have no particular weakness or structure: a curious situation compared to the hundreds of papers on S-boxes and weak setups in block ciphers. In this paper we reflect on what is and is not normal for a cipher machine rotor, taking a truly random permutation as the reference point. Our research shows that most original wartime Enigma rotors ever made are not at all random permutations and conceal strong differential properties that are invariant under rotor rotation. We also exhibit linear/algebraic properties pertaining to the ring of integers modulo 26. Some rotors imitate a certain construction of a perfect quasigroup, which, however, only works when N is odd; most other rotors simply try to approximate this ideal situation. To the best of our knowledge these facts are new and were not studied before 2020.
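A minimal sketch (Python, not the paper's analysis code) of the kind of property discussed above: treat a rotor as a permutation of Z_26, compute its differential table, and note that the table is unchanged when the rotor is rotated, since rotation merely conjugates the permutation by a cyclic shift. The wiring below is the commonly cited wiring of wartime Enigma rotor I, quoted here as an assumption.

    def differential_table(perm):
        """perm: a permutation of 0..25. table[d][e] counts inputs x for which
        input difference d maps to output difference e (arithmetic mod 26)."""
        table = [[0] * 26 for _ in range(26)]
        for x in range(26):
            for d in range(1, 26):
                e = (perm[(x + d) % 26] - perm[x]) % 26
                table[d][e] += 1
        return table

    def rotate(perm, k):
        """The rotor advanced by k positions (one common convention):
        x -> perm[(x - k) mod 26] + k (mod 26), a conjugation by a cyclic shift."""
        return [(perm[(x - k) % 26] + k) % 26 for x in range(26)]

    # Commonly cited wiring of Enigma rotor I (assumption for illustration).
    rotor_i = [ord(c) - ord("A") for c in "EKMFLGDQVZNTOWYHXUSPAIBRCJ"]

    table = differential_table(rotor_i)
    print(max(max(row) for row in table))  # a truly random permutation keeps these counts small
    # The differential table is invariant under rotation:
    assert differential_table(rotate(rotor_i, 5)) == table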
Evaluating the Hardness of SAT Instances Using Evolutionary Optimization Algorithms
Propositional satisfiability (SAT) solvers are deemed to be among the most efficient reasoners and have been successfully used in a wide range of practical applications. Since this contrasts with the well-known NP-completeness of SAT, a number of attempts have been made in the recent past to assess the hardness of propositional formulas in conjunctive normal form (CNF). The present paper proposes a CNF formula hardness measure which is close in conceptual meaning to the one based on the notion of a backdoor set: in both cases, some subset B of the variables of a CNF formula is used to define the hardness of the formula w.r.t. this set. In contrast to the backdoor measure, the new measure does not demand polynomial decidability of the CNF formulas obtained by substituting assignments of the variables from B into the original formula. To estimate this measure, the paper suggests an adaptive $(\varepsilon, \delta)$-approximation probabilistic algorithm. The problem of looking for the subset of variables which provides the minimal hardness value is reduced to the optimization of a pseudo-Boolean black-box function. We apply evolutionary algorithms to this problem and demonstrate the applicability of the proposed notions and techniques on tests from several families of unsatisfiable CNF formulas.
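A minimal sketch of the Monte Carlo estimation behind such a hardness measure (Python; the PySAT toolkit and the specific solver are assumptions, and the adaptive sample-size control of the $(\varepsilon, \delta)$-approximation scheme is omitted): sample random assignments of the chosen variable set B, solve each simplified formula under assumptions with a CDCL solver, and scale the mean runtime by 2^|B|, the number of subproblems in the full decomposition.

    import random
    import statistics
    import time
    from pysat.formula import CNF        # assumption: the PySAT toolkit is installed
    from pysat.solvers import Glucose3

    def estimate_hardness(cnf_path, backdoor_vars, samples=100, seed=0):
        """Monte Carlo estimate of a formula's hardness w.r.t. the variable set B:
        mean solver runtime over random assignments of B, scaled by 2^|B|."""
        cnf = CNF(from_file=cnf_path)
        rng = random.Random(seed)
        runtimes = []
        with Glucose3(bootstrap_with=cnf.clauses) as solver:
            for _ in range(samples):
                # a uniformly random assignment of B, passed as assumption literals
                assumptions = [v if rng.random() < 0.5 else -v for v in backdoor_vars]
                start = time.perf_counter()
                solver.solve(assumptions=assumptions)
                runtimes.append(time.perf_counter() - start)
        return 2 ** len(backdoor_vars) * statistics.mean(runtimes)

    # An evolutionary search then treats B -> estimate_hardness(path, B) as the
    # pseudo-Boolean black-box objective and looks for a B with minimal value.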