
    Foundations, Properties, and Security Applications of Puzzles: A Survey

    Cryptographic algorithms have been used not only to create robust ciphertexts but also to generate cryptograms that, contrary to the classic goal of cryptography, are meant to be broken. These cryptograms, generally called puzzles, require a certain amount of resources to be solved, hence introducing a cost that is often expressed as a time delay, though it could involve other metrics as well, such as bandwidth. These powerful features have made puzzles the core of many security protocols, and they have acquired increasing importance in the IT security landscape. The concept of a puzzle has subsequently been extended to other types of schemes that do not use cryptographic functions, such as CAPTCHAs, which are used to discriminate humans from machines. Overall, puzzles have experienced renewed interest with the advent of Bitcoin, which uses a CPU-intensive puzzle as its proof of work. In this paper, we provide a comprehensive study of the most important puzzle construction schemes available in the literature, categorizing them according to several attributes, such as resource type, verification type, and applications. We redefine the term puzzle by collecting and integrating the scattered notions used in different works, so as to cover all the existing applications. Moreover, we provide an overview of the possible applications, identifying key requirements and different design approaches. Finally, we highlight the features and limitations of each approach, providing a useful guide for the future development of new puzzle schemes.
    Comment: This article has been accepted for publication in ACM Computing Surveys
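    To make the core mechanism concrete, the following is a minimal hash-preimage puzzle sketch: solving takes roughly 2^difficulty hash evaluations on average, while verification needs a single hash. The construction and parameter names are illustrative only and are not taken from the survey.

```python
# Minimal client-puzzle sketch (illustrative, not from the survey): the solver
# searches for a nonce whose SHA-256 output falls below a difficulty target,
# so solving costs ~2^difficulty hash calls while verification costs one hash.
import hashlib
import itertools

def solve(challenge: bytes, difficulty: int) -> int:
    target = 2 ** (256 - difficulty)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, difficulty: int, nonce: int) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty)

nonce = solve(b"example-challenge", difficulty=16)   # ~2^16 hashes on average
assert verify(b"example-challenge", 16, nonce)       # a single hash to verify
```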

    Cryptocurrency Egalitarianism: A Quantitative Approach

    Since the invention of Bitcoin one decade ago, numerous cryptocurrencies have sprung into existence. Among these, proof-of-work is the most common mechanism for achieving consensus, whilst a number of coins have adopted "ASIC-resistance" as a desirable property, claiming to be more "egalitarian", where egalitarianism refers to the power of each coin to participate in the creation of new coins. While proof-of-work consensus dominates the space, several new cryptocurrencies employ alternative consensus mechanisms, such as proof-of-stake, in which block minting opportunities are based on monetary ownership. A core criticism of proof-of-stake is that it is less egalitarian because it makes the rich richer, as opposed to proof-of-work, in which everyone can contribute according to their computational power. In this paper, we give the first quantitative definition of a cryptocurrency's egalitarianism. Based on our definition, we measure the egalitarianism of popular cryptocurrencies that may or may not employ ASIC-resistance, among them Bitcoin, Ethereum, Litecoin, and Monero. Our simulations show, as expected, that ASIC-resistance increases a cryptocurrency's egalitarianism. We also measure the egalitarianism of a stake-based protocol, Ouroboros, and of a hybrid proof-of-stake/proof-of-work cryptocurrency, Decred. We show that stake-based cryptocurrencies, under correctly selected parameters, can be perfectly egalitarian, perhaps contradicting folklore belief.
    Comment: 29 pages, 4 figures, Tokenomics 201
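    As a rough intuition for what a quantitative notion of egalitarianism can look like (a toy sketch under our own assumptions, not the paper's actual definition), one can compare the freshly minted coins earned per unit of invested capital across different investment levels: the flatter that ratio, the more egalitarian the system.

```python
# Toy sketch (not the paper's definition): measure how expected newly minted
# coins per unit of capital varies with the amount invested. A perfectly
# egalitarian system yields the same rewards-per-capital ratio at every
# investment level, i.e. zero spread of the ratio.
from statistics import pvariance

def egalitarianism_spread(capitals, reward_fn) -> float:
    ratios = [reward_fn(c) / c for c in capitals]
    return pvariance(ratios)          # lower spread = more egalitarian

capitals = [10, 100, 1_000, 10_000]

pos = lambda c: 0.05 * c              # idealized stake: rewards proportional to capital
pow_scale = lambda c: 0.05 * c ** 1.1 # stylized PoW with economies of scale

print(egalitarianism_spread(capitals, pos))        # 0.0  (perfectly egalitarian)
print(egalitarianism_spread(capitals, pow_scale))  # > 0  (favors larger investors)
```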

    Symmetrically and Asymmetrically Hard Cryptography

    The main efficiency metrics for a cryptographic primitive are its speed, its code size, and its memory complexity. For a variety of reasons, many algorithms have been proposed that, instead of optimizing these metrics, deliberately increase one of them. We present for the first time a unified framework for describing the hardness of a primitive along any of these three axes: code-hardness, time-hardness, and memory-hardness. This unified view allows us to present modular block cipher and sponge constructions which can have any of the three forms of hardness and can be used to build any higher-level symmetric primitive: hash function, PRNG, etc. We also formalize a new concept: asymmetric hardness. It creates two classes of users: common users have to compute a function with a certain hardness, while users knowing a secret can compute the same function in a far cheaper way. Functions with such an asymmetric hardness can be directly used in both our modular structures, thus allowing any symmetric primitive to be built with an asymmetric hardness. We also propose the first asymmetrically memory-hard function, Diodon. As illustrations of our framework, we introduce Whale and Skipper. Whale is a code-hard hash function which could be used as a key derivation function, and Skipper is the first asymmetrically time-hard block cipher.
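    The flavor of asymmetric hardness can be illustrated with the classic repeated-squaring time-lock idea (a sketch of the general concept only; it is not the paper's Diodon or Skipper constructions): everyone can evaluate the function by t sequential squarings, but whoever knows the factorization of the modulus can take a shortcut.

```python
# Sketch of asymmetric *time* hardness in the spirit of the RSW time-lock puzzle
# (illustrative; the paper's constructions differ). Common users compute
# x^(2^t) mod N with t inherently sequential squarings; knowing phi(N) lets a
# privileged user reduce the exponent and finish almost instantly.

p, q = 61, 53                      # toy primes; real use needs a large RSA modulus
N, phi = p * q, (p - 1) * (q - 1)
t, x = 1_000, 7                    # gcd(x, N) = 1, so Euler's theorem applies

def slow_eval(x: int, t: int, N: int) -> int:
    y = x % N
    for _ in range(t):             # t sequential squarings, no known shortcut
        y = y * y % N
    return y

def fast_eval(x: int, t: int, N: int, phi: int) -> int:
    e = pow(2, t, phi)             # trapdoor shortcut: reduce 2^t modulo phi(N)
    return pow(x, e, N)

assert slow_eval(x, t, N) == fast_eval(x, t, N, phi)
```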

    White-Box and Asymmetrically Hard Crypto Design

    In this talk we surveyed some of our recent work in the area of white-box cryptography, specifically the resource-hardness framework from Asiacrypt 2017 and its relation to incompressibility and weak-WBC.

    Time-Memory Tradeoff Attacks on the MTP Proof-of-Work Scheme

    Proof-of-work (PoW) schemes are cryptographic primitives with numerous applications, and in particular, they play a crucial role in maintaining consensus in cryptocurrency networks. Ideally, a cryptocurrency PoW scheme should have several desired properties, including efficient verification on the one hand and high memory consumption of the prover's algorithm on the other hand, making the scheme less attractive for implementation on dedicated hardware. At the USENIX Security Symposium 2016, Biryukov and Khovratovich presented a promising new PoW scheme called MTP (Merkle Tree Proof) that achieves essentially all desired PoW properties. As a result, MTP has received substantial attention from the cryptocurrency community. The scheme uses a Merkle hash tree construction over a large array of blocks computed by a memory-consuming (memory-hard) function. Despite the fact that only a small fraction of the memory is verified by the efficient verification algorithm, the designers claim that a cheating prover that uses a small amount of memory will suffer from a significant computational penalty. In this paper, we devise a sub-linear computation-memory tradeoff attack on MTP. We apply our attack to the concrete instance proposed by the designers, which uses the memory-hard function Argon2d and computes a proof by allocating 2 gigabytes of memory. The attack computes arbitrary malicious proofs using less than a megabyte of memory (about 1/3000 of the honest prover's memory) at a relatively mild computational penalty factor of 170. This is more than 55,000 times faster than what is claimed by the designers. The attack requires a one-time precomputation step of complexity 2^{64}, but its online cost increases only by a factor of less than 2 when spending 2^{48} precomputation time. The main idea of the attack is to exploit the fact that Argon2d accesses its memory in a way which is determined by its previous computations. This allows us to inject a small fraction of carefully selected memory blocks that manipulate Argon2d's memory access patterns, significantly weakening its memory-hardness.
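    The general mechanism MTP builds on, committing to a large memory array with a Merkle tree and letting the verifier spot-check only a few opened leaves, can be sketched as follows. This is a simplified illustration under our own conventions, not the actual MTP or Argon2d specification.

```python
# Simplified illustration (not the MTP specification): commit to an array of
# memory blocks with a Merkle tree, then verify a single opened leaf against
# the root instead of recomputing the whole array.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    if len(level) % 2:                       # duplicate last node on odd levels
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def open_leaf(leaves, index):
    """Return the authentication path for leaves[index]."""
    level, path, i = [h(leaf) for leaf in leaves], [], index
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[i ^ 1], i % 2))   # (sibling hash, node-is-right-child)
        level, i = _next_level(level), i // 2
    return path

def verify_leaf(root, leaf, path):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

blocks = [f"block-{i}".encode() for i in range(8)]   # stand-ins for memory blocks
root = merkle_root(blocks)
proof = open_leaf(blocks, 5)
assert verify_leaf(root, blocks[5], proof)
```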

    IoT Expunge: Implementing Verifiable Retention of IoT Data

    The growing deployment of Internet of Things (IoT) systems aims to ease the daily life of end-users by providing several value-added services. However, IoT systems may capture and store sensitive, personal data about individuals in the cloud, thereby jeopardizing user privacy. Emerging legislation, such as California's CalOPPA and the GDPR in Europe, supports strong privacy laws to protect an individual's data in the cloud. One such law relates to strict enforcement of data retention policies. This paper proposes a framework, entitled IoT Expunge, that allows sensor data providers to store data in cloud platforms that ensure enforcement of retention policies. Additionally, the cloud provider produces verifiable proofs of its adherence to the retention policies. Experimental results on a real-world smart-building testbed show that IoT Expunge imposes minimal overheads on the user to verify the data against data retention policies.
    Comment: This paper has been accepted at the 10th ACM Conference on Data and Application Security and Privacy (CODASPY), 202
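    As a rough intuition for retention enforcement with a checkable artifact (a toy sketch under our own assumptions, not the paper's protocol), a provider can purge records older than the retention window and publish a hash commitment over what remains, which an auditor can later compare against the retained data.

```python
# Toy sketch (not the IoT Expunge protocol): drop sensor records older than the
# retention window and publish a hash commitment over the surviving records.
import hashlib
import json
import time

RETENTION_SECONDS = 30 * 24 * 3600       # hypothetical 30-day retention policy

def enforce_retention(records, now=None):
    now = time.time() if now is None else now
    kept = [r for r in records if now - r["timestamp"] <= RETENTION_SECONDS]
    commitment = hashlib.sha256(
        json.dumps(kept, sort_keys=True).encode()
    ).hexdigest()
    return kept, commitment               # store `kept`, publish `commitment`

records = [
    {"sensor": "temp-1", "timestamp": time.time() - 100, "value": 21.5},
    {"sensor": "temp-1", "timestamp": time.time() - 90 * 24 * 3600, "value": 19.0},
]
kept, commitment = enforce_retention(records)
assert len(kept) == 1                     # the 90-day-old record was expunged
```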

    PURED: A unified framework for resource-hard functions

    Algorithm hardness can be described by five categories: hardness in computation, in sequential computation, in memory, in energy consumption (or bandwidth), and in code size. Similarly, hardness can be a concern for solving or for verifying, depending on the context, and can depend on a secret trapdoor or be universally hard. Two main lines of research have investigated such problems: cryptographic puzzles, which gained popularity thanks to blockchain consensus systems (where solving must be moderately hard, and verification either public or private), and white-box cryptography (where solving must be hard without knowledge of the secret key). In this work, we improve upon the classification framework proposed by Biryukov and Perrin at Asiacrypt 2017 and offer a unified hardness framework, PURED, that can be used for measuring all these kinds of hardness, both in solving and in verifying. We also propose three new constructions that fill gaps not previously covered by the literature (namely, a trapdoor proof of CMC, a trapdoor proof of code, and a challenge that is hard in sequential time but trapdoored in verification), and we analyse their hardness in the PURED framework.
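    The classification axes named in the abstract (resource type, solving versus verifying, trapdoored versus universal hardness) can be encoded as a small data structure, shown below. The enum values and example entries are illustrative and do not reflect the paper's formal definitions.

```python
# Minimal encoding of the hardness axes mentioned in the abstract (illustrative,
# not the PURED formalism): which resource is consumed, whether solving or
# verifying is the hard task, and whether a secret trapdoor makes it cheap.
from dataclasses import dataclass
from enum import Enum

class Resource(Enum):
    COMPUTATION = "computation"
    SEQUENTIAL = "sequential computation"
    MEMORY = "memory"
    ENERGY = "energy / bandwidth"
    CODE_SIZE = "code size"

class Role(Enum):
    SOLVE = "solve"
    VERIFY = "verify"

@dataclass
class HardnessClaim:
    scheme: str
    resource: Resource
    hard_for: Role
    trapdoored: bool              # True if a secret makes the hard task cheap

examples = [
    HardnessClaim("hashcash-style PoW", Resource.COMPUTATION, Role.SOLVE, False),
    HardnessClaim("memory-hard PoW",    Resource.MEMORY,      Role.SOLVE, False),
    HardnessClaim("white-box cipher",   Resource.CODE_SIZE,   Role.SOLVE, True),
]
for claim in examples:
    print(f"{claim.scheme}: {claim.resource.value}-hard to {claim.hard_for.value}, "
          f"trapdoor={claim.trapdoored}")
```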