Quantum attacks on Bitcoin, and how to protect against them
The key cryptographic protocols used to secure the internet and financial
transactions of today are all susceptible to attack by the development of a
sufficiently large quantum computer. One particular area at risk is
cryptocurrencies, a market currently worth over 150 billion USD. We investigate
the risk of Bitcoin, and other cryptocurrencies, to attacks by quantum
computers. We find that the proof-of-work used by Bitcoin is relatively
resistant to substantial speedup by quantum computers in the next 10 years,
mainly because specialized ASIC miners are extremely fast compared to the
estimated clock speed of near-term quantum computers. On the other hand, the
elliptic curve signature scheme used by Bitcoin is much more at risk, and could
be completely broken by a quantum computer as early as 2027, by the most
optimistic estimates. We analyze an alternative proof-of-work called Momentum,
based on finding collisions in a hash function, that is even more resistant to
speedup by a quantum computer. We also review the available post-quantum
signature schemes to see which one would best meet the security and efficiency
requirements of blockchain applications.
Comment: 21 pages, 6 figures. For a rough update on the progress of Quantum
devices and prognostications on time from now to break Digital signatures,
see https://www.quantumcryptopocalypse.com/quantum-moores-law
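The proof-of-work argument boils down to a rate comparison: Grover search saves a quadratic factor in hash evaluations, but each quantum "hash" is a deep circuit run at a slow logical clock. A back-of-envelope sketch, where the difficulty, ASIC hash rate, quantum clock, and gate depth are all illustrative assumptions rather than figures from the paper:

```python
# Back-of-envelope comparison of classical ASIC mining vs Grover search.
# All numeric parameters below are illustrative assumptions, not values
# taken from the paper.

import math

DIFFICULTY = 2 ** 70     # expected classical hash evaluations per block (assumed)
ASIC_RATE = 14e12        # hashes per second for one mining ASIC (assumed)

# A Grover miner needs ~ (pi/4) * sqrt(DIFFICULTY) iterations, but each
# iteration evaluates the hashing oracle as a long serial circuit.
QUANTUM_CLOCK = 50e6     # logical gate rate in Hz (assumed)
GATES_PER_ORACLE = 1e6   # serial gate depth per Grover iteration (assumed)

classical_seconds = DIFFICULTY / ASIC_RATE
grover_iters = (math.pi / 4) * math.sqrt(DIFFICULTY)
quantum_seconds = grover_iters * GATES_PER_ORACLE / QUANTUM_CLOCK

print(f"classical: {classical_seconds:.3g} s, quantum: {quantum_seconds:.3g} s")
# Despite the quadratic saving in oracle queries, the slow effective
# oracle rate leaves the quantum miner behind the ASIC.
```

With these (assumed) parameters the quantum miner is still slower per block than a single ASIC, which is the paper's qualitative point about near-term proof-of-work resistance.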
Attacks on quantum key distribution protocols that employ non-ITS authentication
We demonstrate how adversaries with unbounded computing resources can break
Quantum Key Distribution (QKD) protocols which employ a particular message
authentication code suggested previously. This authentication code, featuring
low key consumption, is not Information-Theoretically Secure (ITS): for
each message the eavesdropper intercepts, she can send a different
message from a set of messages that she can compute by finding collisions of
a cryptographic hash function. However, when this authentication code was
introduced it was shown to prevent straightforward Man-In-The-Middle (MITM)
attacks against QKD protocols.
In this paper, we prove that the set of messages that collide with any given
message under this authentication code contains with high probability a message
that has small Hamming distance to any other given message. Based on this fact
we present extended MITM attacks against different versions of BB84 QKD
protocols using the addressed authentication code; for three protocols we
describe every single action taken by the adversary. For all protocols the
adversary can obtain complete knowledge of the key, and for most protocols her
success probability in doing so approaches unity.
Since the attacks work against all authentication methods that allow
calculating colliding messages, the underlying building blocks of the presented
attacks expose the potential pitfalls arising as a consequence of non-ITS
authentication in QKD-postprocessing. We propose countermeasures, increasing
the eavesdropper's demand for computational power, and also prove necessary and
sufficient conditions for upgrading the discussed authentication code to the
ITS level.
Comment: 34 pages
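The attack's basic building block, finding a second message that collides with the intercepted one under the weak hash inside the authentication code, can be made concrete with a deliberately truncated hash. The 16-bit truncation and the message contents are illustrative assumptions; the sketch shows only the collision step, not the full Hamming-distance argument of the paper:

```python
# Sketch of the collision step behind the extended MITM attacks: an
# adversary with enough computing power finds a forgery m' != m that
# collides with m under the non-ITS hash, so both carry the same tag
# regardless of the secret key. A 16-bit truncated SHA-256 stands in
# for the weak hash (an illustrative assumption).

import hashlib
import itertools

def weak_hash(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()[:2]   # 16 bits: collisions are cheap

intercepted = b"basis choices: + x + + x x + x"
target = weak_hash(intercepted)

# Enumerate candidate forgeries (a counter appended to a chosen prefix)
# until one lands in the collision set of the intercepted message.
for i in itertools.count():
    forgery = b"basis choices chosen by Eve #" + str(i).encode()
    if weak_hash(forgery) == target:
        break

print(f"colliding forgery found after {i + 1} attempts")
```

With a 16-bit hash the search succeeds after roughly 2^16 attempts; an ITS authentication code removes this avenue entirely, which is the upgrade the paper characterizes.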
On Burst Error Correction and Storage Security of Noisy Data
Secure storage of noisy data for authentication purposes usually involves the
use of error-correcting codes. We propose a new model scenario involving burst
errors and present several constructions for it.
Comment: to be presented at MTNS 201
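A standard way to cope with burst errors is interleaving, which reshapes a run of consecutive errors into isolated errors that an ordinary random-error code can correct. The sketch below is a generic illustration of that idea with assumed parameters, not one of the paper's constructions:

```python
# Block interleaving: fill a DEPTH x WIDTH matrix row-wise, transmit it
# column-wise. A burst of up to DEPTH consecutive channel errors then
# lands in distinct rows, at most one error each. Generic illustration
# with assumed parameters, not the paper's constructions.

DEPTH, WIDTH = 5, 6
N = DEPTH * WIDTH

def interleave(bits):
    return [bits[r * WIDTH + c] for c in range(WIDTH) for r in range(DEPTH)]

def deinterleave(bits):
    out = [0] * N
    i = 0
    for c in range(WIDTH):
        for r in range(DEPTH):
            out[r * WIDTH + c] = bits[i]
            i += 1
    return out

data = [0] * N
channel = interleave(data)
for i in range(8, 12):              # a burst of 4 consecutive bit flips
    channel[i] ^= 1
received = deinterleave(channel)

# After deinterleaving, each length-WIDTH row sees at most one error,
# so a code correcting a single error per row recovers the data.
rows = [received[r * WIDTH:(r + 1) * WIDTH] for r in range(DEPTH)]
print([sum(row) for row in rows])
```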
Adversarially Robust Property-Preserving Hash Functions
Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing.
Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible. As observed in many recent works (incl. Mironov, Naor and Segev, STOC 2008; Hardt and Woodruff, STOC 2013; Naor and Yogev, CRYPTO 2015), such a correctness guarantee assumes that the adversary (who produces the offending inputs) has no information about the hash function, and is too weak in many scenarios.
We initiate the study of adversarial robustness for property-preserving hash functions, provide definitions, derive broad lower bounds due to a simple connection with communication complexity, and show the necessity of computational assumptions to construct such functions. Our main positive results are two candidate constructions of property-preserving hash functions (achieving different parameters) for the (promise) gap-Hamming property which checks if x and y are "too far" or "too close". Our first construction relies on generic collision-resistant hash functions, and our second on a variant of the syndrome decoding assumption on low-density parity-check codes.
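The non-robust baseline that these constructions improve upon can be illustrated with classic bit-sampling for Hamming distance: the digest of x is its restriction to a few random coordinates. This is a textbook sketch under assumed parameters, not either of the paper's (adversarially robust) constructions:

```python
# Bit-sampling sketch for the gap-Hamming property: compress a length-N
# input to its values on K random coordinates, then estimate Hamming
# distance by comparing digests coordinate-wise. A classic non-robust
# illustration; parameters are assumed.

import random

N, K = 1024, 128
rng = random.Random(0)
SAMPLE = rng.sample(range(N), K)      # the random "hash function" h

def h(x):
    return [x[i] for i in SAMPLE]

def estimated_distance(hx, hy):
    mismatches = sum(a != b for a, b in zip(hx, hy))
    return mismatches * N // K        # rescale to input length

x = [rng.randint(0, 1) for _ in range(N)]
y = x[:]
for i in rng.sample(range(N), 300):   # make y lie at distance 300 from x
    y[i] ^= 1

est = estimated_distance(h(x), h(y))
print(est)   # concentrates around the true distance 300 for these inputs
```

The robustness failure is visible here: an adversary who learns SAMPLE can plant all its flips outside the sampled coordinates, fooling the estimator completely. That adaptive setting is exactly what the paper's definitions address.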
On fuzzy syndrome hashing with LDPC coding
The last decades have seen a growing interest in hash functions that allow
some sort of tolerance, e.g. for the purpose of biometric authentication. Among
these, the syndrome fuzzy hashing construction allows one to securely store
biometric data and to perform user authentication without the need to share
any secret key. This paper analyzes this model, showing that it offers a
suitable protection against information leakage and several advantages with
respect to similar solutions, such as the fuzzy commitment scheme. Furthermore,
the design and characterization of LDPC codes to be used for this purpose are
addressed.
Comment: in Proceedings of the 4th International Symposium on Applied Sciences in
Biomedical and Communication Technologies (ISABEL), ACM 2011. This is the
author's version of the work. It is posted here by permission of ACM for your
personal use. Not for redistribution.
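The syndrome construction can be sketched with a toy binary code: only the syndrome of the enrolled biometric is stored, and a noisy reading authenticates exactly when the syndrome of the difference decodes to a low-weight error. The tiny length-8 code below is an illustrative assumption; the paper designs proper LDPC codes for this role:

```python
# Fuzzy syndrome hashing with a toy length-8 binary code: store only
# the syndrome of the enrolled biometric x; a noisy reading y
# authenticates iff the syndrome of x XOR y decodes to a weight <= 1
# error. Toy stand-in for the LDPC codes designed in the paper.

# Columns of H are the binary expansions of 1..8: distinct and nonzero,
# so the code corrects any single bit error.
H = [[(v >> b) & 1 for v in range(1, 9)] for b in range(4)]

def syndrome(x):
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

def unit(i, n=8):
    return [int(j == i) for j in range(n)]

# Syndromes that decode to an error of weight <= 1.
ACCEPT = {syndrome(unit(i)) for i in range(8)} | {(0, 0, 0, 0)}

def enroll(x):
    return syndrome(x)                 # helper data: no raw biometric stored

def authenticate(stored, y):
    # Linearity: syndrome(x) XOR syndrome(y) == syndrome(x XOR y).
    diff = tuple((a + b) % 2 for a, b in zip(stored, syndrome(y)))
    return diff in ACCEPT

x = [1, 0, 1, 1, 0, 0, 1, 0]           # enrolled biometric (example)
stored = enroll(x)
noisy = [xi ^ e for xi, e in zip(x, [0, 0, 0, 0, 1, 0, 0, 0])]    # 1 flip
imposter = [xi ^ e for xi, e in zip(x, [1, 1, 0, 1, 0, 0, 0, 1])]  # 4 flips

print(authenticate(stored, noisy), authenticate(stored, imposter))  # → True False
```

No secret key is shared: the stored syndrome alone does not determine x, while any reading within the code's error-correction radius still verifies.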
Low-Complexity Cryptographic Hash Functions
Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function.
The most common type of such hash functions is collision-resistant hash functions (CRH), which prevent an efficient attacker from finding a pair of inputs on which the function has the same output.
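The collision-finding game is easy to demonstrate on a weakened hash: truncating SHA-256 to n bits lets a birthday search succeed after roughly 2^(n/2) attempts, which is precisely what a full-strength CRH must make infeasible. The 24-bit truncation below is an illustrative assumption:

```python
# Birthday search against a truncated hash: with an n-bit output, a
# collision appears after roughly 2**(n/2) attempts. For full SHA-256
# the same search would take ~2**128 steps, which is what collision
# resistance means in practice. The 24-bit truncation is illustrative.

import hashlib
import itertools

N_BITS = 24

def trunc_hash(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()[:N_BITS // 8]

seen = {}
for i in itertools.count():
    msg = str(i).encode()
    d = trunc_hash(msg)
    if d in seen:                    # collision: two inputs, same output
        a, b = seen[d], msg
        break
    seen[d] = msg

print(f"collision after {i + 1} hashes")   # typically a few thousand here
```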
Fuzzy Authentication using Rank Distance
Fuzzy authentication allows authentication based on the fuzzy matching of two
objects, for example based on the similarity of two strings in the Hamming
metric, or on the similarity of two sets in the set difference metric. The aim of
this paper is to show other models and algorithms of secure fuzzy
authentication, which can be performed using the rank metric. A few schemes are
presented which can then be applied in different scenarios and applications.
Comment: to appear in Cryptography and Physical Layer Security, Lecture Notes
in Electrical Engineering, Springer
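The rank metric itself is simple to state: view each object as a matrix over GF(2) and take the distance between A and B to be rank(A - B). The sketch below, a brute-force GF(2) rank via row reduction, is only meant to make the metric concrete; it is not one of the paper's schemes:

```python
# Rank distance: represent objects as binary matrices; the distance
# between A and B is the GF(2) rank of A XOR B. Illustrative only, not
# one of the paper's authentication schemes.

def gf2_rank(matrix):
    # Gaussian elimination over GF(2); each row is packed into an int.
    rows = [int("".join(map(str, row)), 2) for row in matrix]
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            low = pivot & -pivot                       # lowest set bit
            rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def rank_distance(a, b):
    return gf2_rank([[x ^ y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)])

A = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
B = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]    # rows 2 and 3 swapped

# A XOR B has one zero row and two equal nonzero rows, so the distance
# is 1, even though A and B differ in many entries in the Hamming sense.
print(rank_distance(A, B))   # → 1
```

The example also shows why the rank metric behaves differently from the Hamming metric: structured, low-rank perturbations are "small" even when they touch many coordinates.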
Nearly Optimal Property Preserving Hashing
Property-preserving hashing (PPH) consists of a family of compressing hash functions h such that, for any two inputs x, y, we can correctly identify whether some property P(x, y) holds given only the digests h(x), h(y). In a basic PPH, correctness should hold with overwhelming probability over the choice of h when x, y are worst-case values chosen a priori and independently of h. In an adversarially robust PPH (RPPH), correctness must hold even when x, y are chosen adversarially and adaptively depending on h. Here, we study (R)PPH for the property that the Hamming distance between x and y is at most t.
The notion of (R)PPH was introduced by Boyle, LaVigne and Vaikuntanathan (ITCS '19), and further studied by Fleischhacker, Simkin (Eurocrypt '21) and Fleischhacker, Larsen, Simkin (Eurocrypt '22). In this work, we obtain improved constructions that are conceptually simpler, have nearly optimal parameters, and rely on more general assumptions than prior works. Our results are:
* We construct information-theoretic non-robust PPH for Hamming distance via syndrome list-decoding of linear error-correcting codes. We provide a lower bound showing that this construction is essentially optimal.
* We make the above construction robust with little additional overhead, by relying on homomorphic collision-resistant hash functions, which can be constructed from either the discrete-logarithm or the short-integer-solution assumptions. The resulting RPPH achieves improved compression compared to prior constructions, and is nearly optimal.
* We also show an alternate construction of RPPH for Hamming distance under the minimal assumption that standard collision-resistant hash functions exist. The compression is slightly worse than our optimized construction using homomorphic collision-resistance, but essentially matches the prior state-of-the-art constructions based on specific algebraic assumptions.
* Lastly, we study a new notion of randomized robust PPH (R2P2H) for Hamming distance, which relaxes RPPH by allowing the hashing algorithm itself to be randomized. We give an information-theoretic construction with optimal parameters.
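The first construction has a very direct shape: hash each input to its syndrome under a linear code, and use linearity, Hx XOR Hy = H(x XOR y), to decode the difference from the two digests alone. A toy sketch with assumed 12-bit parameters and distance threshold t = 1, illustrating the syndrome idea rather than the paper's list-decoding version:

```python
# Toy syndrome-based PPH for Hamming distance t = 1: the digest of x is
# its syndrome under a linear code; by linearity, the syndrome of
# x XOR y is computable from the two digests alone, and decoding it
# decides whether dist(x, y) <= t. A 12-bit toy with brute-force
# decoding, not the paper's syndrome list-decoding construction.

from itertools import combinations

N, T = 12, 1
# Columns of H: binary expansions of 1..12 (distinct, nonzero), so
# weight <= 1 errors have unique syndromes.
H = [[(v >> b) & 1 for v in range(1, N + 1)] for b in range(4)]

def pph_hash(x):
    # Digest: 4 bits instead of 12.
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

def eval_property(hx, hy):
    s = tuple((a + b) % 2 for a, b in zip(hx, hy))  # syndrome of x XOR y
    for w in range(T + 1):                          # brute-force decoding
        for pos in combinations(range(N), w):
            e = [int(i in pos) for i in range(N)]
            if pph_hash(e) == s:
                return True                         # dist(x, y) <= T
    return False

x = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
close = [xi ^ e for xi, e in zip(x, [0, 0, 1, 0] + [0] * 8)]            # dist 1
far = [xi ^ e for xi, e in zip(x, [1, 1, 0, 1, 0, 0, 0, 1] + [0] * 4)]  # dist 4

print(eval_property(pph_hash(x), pph_hash(close)),
      eval_property(pph_hash(x), pph_hash(far)))   # → True False
```

The non-robust caveat is also visible: a far pair whose difference happens to share a syndrome with a low-weight error would be misclassified, which is why the adversarial setting requires the additional collision-resistance machinery.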