10 research outputs found

    Ternary Syndrome Decoding with Large Weight

    The Syndrome Decoding problem is at the core of many code-based cryptosystems. In this paper, we study ternary Syndrome Decoding with large weight. This problem was introduced with the Wave signature scheme but has never been thoroughly studied. We perform an algorithmic study of this problem, which results in an update of the Wave parameters. On a more fundamental level, we show that ternary Syndrome Decoding with large weight is a significantly harder problem than binary Syndrome Decoding, which could have several applications for the design of code-based cryptosystems.
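
    To make the problem concrete, here is a minimal Python sketch (with made-up toy sizes, not the paper's parameters or algorithm) of what a ternary Syndrome Decoding instance and a valid large-weight solution look like: H is a parity-check matrix over F_3, s a syndrome, and a solution is a vector e of prescribed Hamming weight w with He = s.

import numpy as np

# Toy illustration of the ternary Syndrome Decoding problem, not the paper's algorithm.
def is_sd_solution(H, s, e, w):
    """e solves the instance if it has Hamming weight w and He = s over F_3."""
    return np.count_nonzero(e) == w and np.array_equal(H.dot(e) % 3, s % 3)

rng = np.random.default_rng(0)
n, k, w = 12, 4, 10                      # toy sizes; "large weight" means w close to n
H = rng.integers(0, 3, size=(n - k, n))  # random parity-check matrix over F_3
e = np.zeros(n, dtype=int)
support = rng.choice(n, size=w, replace=False)
e[support] = rng.integers(1, 3, size=w)  # nonzero entries are 1 or 2
s = H.dot(e) % 3                         # plant the syndrome so a solution exists
print(is_sd_solution(H, s, e, w))        # True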

    On Iterative Collision Search for LPN and Subset Sum

    Iterative collision search procedures play a key role in developing combinatorial algorithms for the subset sum and learning parity with noise (LPN) problems. In both scenarios, the single-list pair-wise iterative collision search finds the most solutions and offers the best efficiency. However, due to its complex probabilistic structure, to the best of our knowledge no rigorous analysis of it is available. As a result, theoretical works often resort to overly constrained and sub-optimal iterative collision search variants in exchange for analytic simplicity. In this paper, we present a rigorous analysis of the single-list pair-wise iterative collision search method and its applications in subset sum and LPN. In the LPN literature, the method is known as the LF2 heuristic. Besides LF2, we also present rigorous analyses of other LPN-solving heuristics and show that they work well when combined with LF2. Putting these together, we significantly narrow the gap between theoretical and heuristic algorithms for LPN.
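
    As a rough illustration (our own sketch, not the paper's analysis or code), one round of single-list pair-wise collision search in the LF2 style buckets samples on b bits and XORs every pair inside a bucket, so that the combined samples vanish on those b bits; the sample values and the parameter b below are arbitrary.

from collections import defaultdict
from itertools import combinations

def lf2_round(samples, b):
    """One pair-wise collision round: XOR all pairs that agree on the low b bits."""
    buckets = defaultdict(list)
    mask = (1 << b) - 1
    for x in samples:
        buckets[x & mask].append(x)
    out = []
    for bucket in buckets.values():
        for x, y in combinations(bucket, 2):  # every unordered pair in a bucket collides
            out.append(x ^ y)                 # the low b bits cancel to zero
    return out

# Toy 8-bit samples; real LPN solvers work with (a, c) pairs and much longer vectors.
print(lf2_round([0b10110101, 0b01100101, 0b11010010, 0b00001010], b=3))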

    The Approximate k-List Problem

    We study a generalization of the k-list problem, also known as the Generalized Birthday problem. In the k-list problem, one starts with k lists of binary vectors and has to find a set of vectors – one from each list – that sum to the all-zero target vector. In our generalized Approximate k-list problem, one has to find a set of vectors that sum to a vector of small Hamming weight ω. Thus, we relax the condition on the target vector and allow for some error positions. This in turn helps us to significantly reduce the size of the starting lists (which determines the memory consumption) and the running time, as a function of ω. For ω = 0, our algorithm achieves the original k-list run-time/memory consumption, whereas for ω = n/2 it has polynomial complexity. As in the k-list case, our Approximate k-list algorithm is defined for all k = 2^m, m > 1. Surprisingly, we also find an Approximate 3-list algorithm that improves on the runtime exponent of its 2-list counterpart for all 0 < ω < n/2. To the best of our knowledge, this is the first such improvement of some variant of the notoriously hard 3-list problem. As an application of our algorithm, we compute small-weight multiples of a given polynomial with more flexible degree than with Wagner’s algorithm from Crypto 2002 and with smaller time/memory consumption than with Minder and Sinclair’s algorithm from SODA 2009.
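
    For reference, the solution condition of the approximate problem for k = 2 can be stated in a few lines of Python (a brute-force check on toy lists of our own, not the paper's algorithm): a pair, one vector from each list, is a solution when the Hamming weight of its XOR is at most ω.

from itertools import product

def hamming_weight(x):
    return bin(x).count("1")

def approx_2list(L1, L2, omega):
    """Brute-force search for pairs whose XOR has Hamming weight at most omega."""
    return [(x, y) for x, y in product(L1, L2) if hamming_weight(x ^ y) <= omega]

# Toy 6-bit lists; omega = 0 recovers the exact 2-list (birthday) condition.
L1 = [0b110010, 0b001111]
L2 = [0b110000, 0b101010]
print(approx_2list(L1, L2, omega=2))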

    Subset-optimized BLS Multi-signature with Key Aggregation

    We propose a variant of the original Boneh, Drijvers, and Neven (Asiacrypt '18) BLS multi-signature aggregation scheme best suited to applications where the full set of potential signers is fixed and known and any subset I of this group can create a multi-signature over a message m. This setup is very common in proof-of-stake blockchains, where a 2f+1 majority of 3f validators signs transactions and/or blocks; our scheme is secure against rogue-key attacks without requiring a proof-of-key-possession mechanism. In our scheme, instead of randomizing the aggregated signatures, we have a one-time randomization phase of the public keys: each public key is replaced by a sticky randomized version (for which each participant can still compute the derived private key). The main benefit compared to the original Boneh et al. approach is that, since our randomization process happens only once and not per signature, we can have significant savings during aggregation and verification. Specifically, for a subset I of t signers, we save t exponentiations in G_2 at aggregation and t exponentiations in G_1 at verification, or vice versa, depending on which BLS mode we prefer: minPK (public keys in G_1) or minSig (signatures in G_1). Interestingly, our security proof requires a significant departure from the co-CDH-based proof of Boneh et al. When n (the size of the universal set of signers) is small, we prove our protocol secure in the Algebraic Group and Random Oracle models based on the hardness of the Discrete Log problem. For larger n, our proof also requires the Random Modular Subset Sum (RMSS) problem.
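
    The following toy Python sketch illustrates only the one-time key-randomization idea, using exponentiation modulo a prime as an insecure stand-in for the pairing groups of BLS; the modulus, generator, and hash-to-randomizer function are our own placeholders, not the paper's construction.

import hashlib

P = 2**127 - 1   # toy prime modulus, stand-in for a pairing-friendly group
G = 3            # toy generator
Q = P - 1        # exponents can be reduced mod P - 1

def randomizer(pk, all_pks):
    """r_i = H(pk_i, pk_1, ..., pk_n), computed once for the fixed signer set."""
    data = b"".join(x.to_bytes(16, "big") for x in [pk] + all_pks)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

sks = [11, 22, 33]
pks = [pow(G, sk, P) for sk in sks]
rs = [randomizer(pk, pks) for pk in pks]
rand_pks = [pow(pk, r, P) for pk, r in zip(pks, rs)]    # "sticky" one-time randomization
derived_sks = [(sk * r) % Q for sk, r in zip(sks, rs)]  # each signer still knows its secret key

# Aggregating keys for any subset I is now a plain product: no per-signature exponentiations.
subset = [0, 2]
apk = 1
for i in subset:
    apk = apk * rand_pks[i] % P
assert apk == pow(G, sum(derived_sks[i] for i in subset), P)
print("aggregated key equals the key of the summed derived secrets")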

    Securing Update Propagation with Homomorphic Hashing

    In database replication, ensuring consistency when propagating updates is a challenging and extensively studied problem. However, the problem of securing update propagation against malicious adversaries has received less attention in the literature. This consideration becomes especially relevant when sending updates across a large network of untrusted peers. In this paper we formalize the problem of secure update propagation and propose a system that allows a centralized distributor to propagate signed updates across a network while adding minimal overhead to each transaction. We show that our system is secure (in the random oracle model) against an attacker who can maliciously modify any update and its signature. Our approach relies on the use of a cryptographic primitive known as homomorphic hashing, introduced by Bellare, Goldreich, and Goldwasser. We make our study of secure update propagation concrete with an instantiation of the lattice-based homomorphic hash LtHash of Bellare and Micciancio. We provide a detailed security analysis of the collision resistance of LtHash, and we implement LtHash using a selection of parameters that gives at least 200 bits of security. Our implementation has been deployed to secure update propagation in production at Facebook, and is included in the Folly open-source library.
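
    The sketch below (Python, with deliberately tiny made-up parameters, far below the roughly 200-bit security level discussed above) shows the homomorphic-hashing idea behind an LtHash-style set hash: every record is expanded to a vector of 16-bit lanes, the database hash is the lane-wise sum modulo 2^16, and an update only has to add the vector of the new record and subtract the vector of the removed one, never rehashing the whole database.

import hashlib

LANES = 32  # toy lane count; the deployed LtHash uses a much wider state

def element_hash(record):
    """Expand a record into LANES 16-bit lanes."""
    digest = hashlib.blake2b(record, digest_size=2 * LANES).digest()
    return [int.from_bytes(digest[2 * i:2 * i + 2], "little") for i in range(LANES)]

def combine(state, record, sign=+1):
    """Add (sign=+1) or remove (sign=-1) one record from the set hash."""
    return [(s + sign * e) % (1 << 16) for s, e in zip(state, element_hash(record))]

empty = [0] * LANES
h1 = combine(combine(empty, b"alice"), b"bob")
h2 = combine(combine(empty, b"bob"), b"alice")
assert h1 == h2                                             # order-independent: it hashes the set
assert combine(h1, b"alice", -1) == combine(empty, b"bob")  # incremental removal
print("incremental set hash is consistent")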

    Relational Hash

    Traditional cryptographic hash functions allow one to easily check whether the original plaintexts are equal or not, given a pair of hash values. Probabilistic hash functions extend this concept: given a probabilistic hash of a value and the value itself, one can efficiently check whether the hash corresponds to the given value. However, given distinct probabilistic hashes of the same value, it is not possible to check whether they correspond to the same value. In this work we introduce a new cryptographic primitive called Relational Hash, with which, given a pair of (relational) hash values, one can determine whether the original plaintexts were related or not. We formalize various natural security notions for the Relational Hash primitive: one-wayness, twin one-wayness, unforgeability and oracle simulatability. We develop Relational Hash schemes for discovering linear relations among bit-vectors (elements of F_2^n) and F_p-vectors. Using the linear Relational Hash schemes, we develop Relational Hashes for detecting proximity in terms of Hamming distance. The proximity Relational Hash schemes can be adapted to a privacy-preserving biometric identification scheme, as well as a privacy-preserving biometric authentication scheme secure against passive adversaries.
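
    For orientation, the functionality of a linear Relational Hash can be summarized as follows (our paraphrase in generic notation, not the paper's exact formalization): there are hashing algorithms Hash_1, Hash_2 and a verification algorithm Verify such that, for all x, y, z in F_2^n,

        Verify(Hash_1(x), Hash_2(y), z) = 1   if and only if   x + y = z,

    while an individual hash value should reveal essentially nothing about its input beyond what such relation checks leak; the proximity variant replaces the exact linear check with a Hamming-distance threshold.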

    Improved Algorithms for the Approximate k-List Problem in Euclidean Norm

    We present an algorithm for the approximate k-List problem for the Euclidean distance that improves upon the Bai-Laarhoven-Stehle (BLS) algorithm from ANTS '16. The improvement stems from the observation that almost all the solutions to the approximate k-List problem form a particular configuration in n-dimensional space. Due to special properties of configurations, it is much easier to verify whether a k-tuple forms a configuration than to check whether it gives a solution to the k-List problem. Thus, phrasing the k-List problem as a problem of finding such configurations immediately gives a better algorithm. Furthermore, the search for configurations can be sped up using techniques from Locality-Sensitive Hashing (LSH). Stated in terms of configuration search, our LSH-like algorithm offers a broader picture of previous LSH algorithms. For the Shortest Vector Problem, our configuration-search algorithm results in an exponential improvement for memory-efficient sieving algorithms. For k = 3, it allows us to bring down the complexity of the BLS sieve algorithm on an n-dimensional lattice from 2^{0.4812n+o(n)} to 2^{0.3962n+o(n)} with the same space requirement 2^{0.1887n+o(n)}. Note that our algorithm beats the Gauss Sieve algorithm, with time resp. space requirements of 2^{0.415n+o(n)} resp. 2^{0.208n+o(n)}, while being easy to implement. Using LSH techniques, we can further reduce the time complexity down to 2^{0.3717n+o(n)} while retaining a memory complexity of 2^{0.1887n+o(n)}.
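
    A small numeric sketch (our illustration only, with arbitrary dimensions and tolerance) of the configuration idea: for unit vectors x_1, ..., x_k, the configuration is the matrix of pairwise inner products <x_i, x_j>, and a k-tuple is kept when that matrix is entry-wise close to a target configuration, which for balanced sieving tuples has off-diagonal entries -1/k.

import numpy as np

def in_configuration(vectors, target, eps):
    """vectors: (k, n) unit vectors; keep the tuple if its Gram matrix is eps-close to target."""
    gram = vectors @ vectors.T
    return bool(np.all(np.abs(gram - target) <= eps))

k, n, eps = 3, 50, 0.1                                               # toy dimensions and tolerance
rng = np.random.default_rng(1)
xs = rng.standard_normal((k, n))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)                      # normalize to unit length
balanced = np.full((k, k), -1.0 / k) + (1.0 + 1.0 / k) * np.eye(k)   # diagonal 1, off-diagonal -1/k
print(in_configuration(xs, balanced, eps))                           # random tuples rarely match the target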

    Cryptographic Hash Functions in Groups and Provable Properties

    We consider several "provably secure" hash functions that compute simple sums in a well-chosen group (G, *). Security properties of such functions provably translate in a natural way to computational problems in G that are simple to define and possibly also hard to solve. Given k disjoint lists L_i of group elements, the k-sum problem asks for g_i ∈ L_i such that g_1 * g_2 * ... * g_k = 1_G. Hardness of the problem in the respective groups follows from some "standard" assumptions used in public-key cryptology, such as the hardness of integer factoring, discrete logarithms, lattice reduction and syndrome decoding. We point out evidence that the k-sum problem may even be harder than the above problems. Two hash functions based on the group k-sum problem, SWIFFTX and FSB, were submitted to NIST as candidates for the future SHA-3 standard. Both submissions were supported by some sort of security proof. We show that the assessment of security levels provided in the proposals is not related to the proofs included. The main claims on security are supported exclusively by considerations about available attacks. By introducing "second-order" bounds on bounds on security, we expose the limits of such an approach to provable security. A problem with the way security is quantified does not necessarily mean a problem with security itself. Although FSB does have a history of failures, recent versions of the two functions above have resisted cryptanalytic efforts well. This evidence, as well as the several connections to more standard problems, suggests that the k-sum problem in some groups may be considered hard on its own and possibly lead to provable bounds on security. The complexity of the non-trivial tree algorithm is becoming a standard tool for measuring the associated hardness. We propose modifications to the multiplicative Very Smooth Hash and derive security from multiplicative k-sums, in contrast to the original reductions, which related to factoring or discrete logarithms. Although the original reductions remain valid, we measure security in a new, more aggressive way. This allows us to relax the parameters and hash faster. We obtain a function that is only three times slower than SHA-256 and is estimated to offer at least equivalent collision resistance. The speed can be doubled by the use of a special modulus; such a modified function is supported exclusively by the hardness of multiplicative k-sums modulo a power of two. Our efforts culminate in a new multiplicative k-sum function in finite fields that further generalizes the design of Very Smooth Hash. In contrast to the previous variants, the memory requirements of the new function are negligible. The fastest instance of the function expected to offer 128-bit collision resistance runs at 24 cycles per byte on an Intel Core i7 processor and approaches the 17.4 cycles-per-byte figure of SHA-256. The new functions proposed in this thesis do not provably achieve a usual security property such as preimage or collision resistance from a well-established assumption. They do, however, enjoy an unconditional, provable separation of inputs that collide: changes in input that are small with respect to a well-defined measure never lead to identical output in the compression function.
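
    As a point of reference for the multiplicative constructions discussed above, here is a toy Python rendering of the classical Very Smooth Hash compression step (tiny, insecure parameters chosen purely for illustration; the modified and finite-field functions proposed in the thesis are not shown): the state is squared and multiplied by the small primes selected by the next block of message bits, modulo an RSA-type modulus.

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]  # first k = 8 primes; real instances use many more
N = 104723 * 104729                    # toy modulus; a real instance uses an RSA-size modulus

def vsh_compress(bits):
    """Compress a bit string, len(PRIMES) bits per squaring step, VSH style."""
    assert len(bits) % len(PRIMES) == 0
    x = 1
    for j in range(0, len(bits), len(PRIMES)):
        x = x * x % N
        for p, b in zip(PRIMES, bits[j:j + len(PRIMES)]):
            if b:
                x = x * p % N
    return x

print(vsh_compress([1, 0, 1, 1, 0, 0, 1, 0,
                    0, 1, 1, 0, 1, 0, 0, 1]))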

    On Random High Density Subset Sums (Electronic Colloquium on Computational Complexity, Report No. 7, 2005)

    In the Subset Sum problem, we are given n integers a_1, ..., a_n and a target number t, and are asked to find a subset of the a_i's whose sum is t. A version of the Subset Sum problem is the Random Modular Subset Sum (RMSS) problem. In this version, the a_i's are generated randomly in the range [0, M), and we are asked to produce a subset of them whose sum is t (mod M). The hardness of RMSS depends on the relationship between the parameters M and n. When M = 2^{O(n^2)}, RMSS can be solved in polynomial time by a reduction to the shortest vector problem. When M = 2^{O(log n)}, the problem can be solved in polynomial time by dynamic programming, and recently an algorithm was proposed that solves the problem in polynomial time for M = 2^{O(log^2 n)}. In this work, we present an algorithm that solves the Random Modular Subset Sum problem for parameter M = 2^{n^ε}, ε < 1, in time (and space) 2^{O(n^ε / log n)}. As far as we know, this is the first algorithm that runs in time better than 2^{Ω(n^ε)} for arbitrary ε < 1.
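
    The dynamic-programming route mentioned above (feasible when M is as small as 2^{O(log n)}, i.e. polynomial in n) can be sketched in a few lines of Python; the instance below is a toy example of ours.

def modular_subset_sum(a, t, M):
    """Return a sublist of a summing to t mod M, or None; time roughly O(n * M)."""
    reach = {0: []}                    # residue -> one subset achieving it
    for x in a:
        new = dict(reach)
        for r, subset in reach.items():
            s = (r + x) % M
            if s not in new:
                new[s] = subset + [x]
        reach = new
    return reach.get(t % M)

print(modular_subset_sum([5, 9, 14, 3], t=1, M=21))   # [5, 14, 3], since 5 + 14 + 3 = 22 ≡ 1 (mod 21)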