    The Right to Vote Securely

    American elections currently run on outdated and vulnerable technology. Computer science researchers have shown that voting machines and other election equipment used in many jurisdictions are plagued by serious security flaws, or are even shipped with basic safeguards disabled. Making matters worse, it is unclear whether current law requires election authorities or companies to fix even the most egregious vulnerabilities in their systems, and whether voters have any recourse if they do not. This Article argues that election law can, does, and should ensure that the right to vote is a right to vote securely. First, it argues that constitutional voting rights doctrines already prohibit election practices that fail to meet a bare minimum threshold of security. But the bare minimum is not enough to protect modern election infrastructure against sophisticated threats. This Article thus proposes new statutory measures to bolster election security beyond the constitutional baseline, with technical provisions designed to change the course of insecure election practices that have become regrettably commonplace, and to standardize best practices drawn from state-of-the-art research on election security.

    Adaptively Secure Coin-Flipping, Revisited

    The full-information model was introduced by Ben-Or and Linial in 1985 to study collective coin-flipping: the problem of generating a common bounded-bias bit in a network of $n$ players with $t = t(n)$ faults. They showed that the majority protocol can tolerate $t = O(\sqrt{n})$ adaptive corruptions, and conjectured that this is optimal in the adaptive setting. Lichtenstein, Linial, and Saks proved that the conjecture holds for protocols in which each player sends a single bit. Their result has been the main progress on the conjecture in the last 30 years. In this work we revisit this question and ask: what about protocols involving longer messages? Can increased communication allow for a larger fraction of faulty players? We introduce a model of strong adaptive corruptions, where in each round the adversary sees all messages sent by honest parties and, based on the message content, decides whether to corrupt a party (and intercept his message) or not. We prove that any one-round coin-flipping protocol, regardless of message length, is secure against at most $\tilde{O}(\sqrt{n})$ strong adaptive corruptions. Thus, increased message length does not help in this setting. We then shed light on the connection between adaptive and strongly adaptive adversaries by proving that for any symmetric one-round coin-flipping protocol secure against $t$ adaptive corruptions, there is a symmetric one-round coin-flipping protocol secure against $t$ strongly adaptive corruptions. Returning to the standard adaptive model, we can now prove that any symmetric one-round protocol with arbitrarily long messages can tolerate at most $\tilde{O}(\sqrt{n})$ adaptive corruptions. At the heart of our results lie a novel use of the Minimax Theorem and a new technique for converting any one-round secure protocol into a protocol with messages of $\mathrm{polylog}(n)$ bits. This technique may be of independent interest.
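
    To make the threat model concrete, the toy simulation below (our own illustration, not code from the paper) pits the one-round majority protocol against an adaptive adversary that sees the honest coins before choosing whom to corrupt; the parameters and the simple corrupt-the-zeros strategy are our assumptions.

```python
# Toy simulation: majority coin-flipping vs. an adaptive adversary.
# Illustration only; parameters and adversary strategy are assumptions.
import random

def majority_with_adaptive_adversary(n: int, t: int) -> int:
    """Each of n players broadcasts a fair coin; the output is the
    majority bit. The adversary, having seen all coins, corrupts up to
    t players that sent 0 and flips their bits to 1."""
    bits = [random.randint(0, 1) for _ in range(n)]
    corrupted = 0
    for i in range(n):
        if corrupted == t:
            break
        if bits[i] == 0:
            bits[i] = 1
            corrupted += 1
    return int(sum(bits) > n // 2)

# With t ~ sqrt(n) corruptions the output is visibly biased toward 1
# (roughly Phi(2) ~ 0.98 here), matching the O(sqrt(n)) tolerance of
# the majority protocol discussed above.
n, trials = 2_001, 1_000
t = int(n ** 0.5)  # ~44
freq = sum(majority_with_adaptive_adversary(n, t) for _ in range(trials)) / trials
print(f"Pr[output = 1] with t = {t} adaptive corruptions: {freq:.2f}")
```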

    How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts

    In this work, we examine the feasibility of secure and undetectable point-to-point communication when an adversary (e.g., a government) can read all encrypted communications of surveillance targets. We consider a model where the only permitted method of communication is via a government-mandated encryption scheme, instantiated with government-mandated keys. Parties cannot simply encrypt ciphertexts of some other encryption scheme, because citizens caught trying to communicate outside the government's knowledge (e.g., by encrypting strings which do not appear to be natural language plaintexts) will be arrested. The one guarantee we suppose is that the government mandates an encryption scheme which is semantically secure against outsiders: a perhaps reasonable supposition when a government might consider it advantageous to secure its people's communication against foreign entities. But then, what good is semantic security against an adversary that holds all the keys and has the power to decrypt? We show that even in the pessimistic scenario described, citizens can communicate securely and undetectably. In our terminology, this translates to a positive statement: all semantically secure encryption schemes support subliminal communication. Informally, this means that there is a two-party protocol between Alice and Bob where the parties exchange ciphertexts of what appears to be a normal conversation even to someone who knows the secret keys and thus can read the corresponding plaintexts. And yet, at the end of the protocol, Alice will have transmitted her secret message to Bob. Our security definition requires that the adversary not be able to tell whether Alice and Bob are just having a normal conversation using the mandated encryption scheme, or are using it for subliminal communication. Our topics may be thought to fall broadly within the realm of steganography. However, we deal with the non-standard setting of an adversarially chosen distribution of cover objects (i.e., a stronger-than-usual adversary), and we take advantage of the fact that our cover objects are ciphertexts of a semantically secure encryption scheme to bypass impossibility results which we show for broader classes of steganographic schemes. We give several constructions of subliminal communication schemes under the assumption that key exchange protocols with pseudorandom messages exist (such as Diffie-Hellman, which in fact has truly random messages).
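
    The classic rejection-sampling trick conveys the flavor of such schemes. The sketch below is our own illustration of that general technique, not the paper's construction: `mandated_encrypt` is a hypothetical stand-in for the government-mandated randomized scheme, and we assume Alice and Bob already share a secret (in the paper's setting a shared key is instead established via a key-exchange protocol with pseudorandom messages).

```python
# Subliminal bit embedding by rejection sampling over fresh encryptions.
# Illustration only: mandated_encrypt is a hypothetical placeholder, and
# the shared secret is assumed (the paper derives it via key exchange).
import hashlib
import hmac
import os

SHARED_SECRET = os.urandom(32)  # assumed pre-shared between Alice and Bob

def mandated_encrypt(plaintext: bytes) -> bytes:
    # Stand-in for the mandated randomized encryption scheme: fresh
    # randomness makes every ciphertext of the same plaintext distinct.
    return os.urandom(16) + plaintext  # NOT secure; illustration only

def embed_bit(cover_plaintext: bytes, bit: int) -> bytes:
    """Re-encrypt the innocuous cover message until a keyed hash of the
    ciphertext equals the desired bit (2 attempts expected per bit)."""
    while True:
        ct = mandated_encrypt(cover_plaintext)
        if (hmac.new(SHARED_SECRET, ct, hashlib.sha256).digest()[0] & 1) == bit:
            return ct

def extract_bit(ct: bytes) -> int:
    return hmac.new(SHARED_SECRET, ct, hashlib.sha256).digest()[0] & 1

# Alice's visible conversation is normal; an adversary who decrypts sees
# only "see you at noon", yet Bob recovers the hidden bit.
ct = embed_bit(b"see you at noon", 1)
assert extract_bit(ct) == 1
```

    Because the mandated scheme is randomized, a rejection-sampled ciphertext is distributed like an honestly generated one, which is what makes detection hard; the paper makes this precise even against adversarially chosen cover distributions.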

    The Superlinearity Problem in Post-Quantum Blockchains

    The proof-of-work mechanism by which many blockchain-based protocols achieve consensus may be undermined by the use of quantum computing in mining, even when all cryptographic primitives are replaced with post-quantum secure alternatives. First, we offer an impossibility result: we prove that quantum (Grover) speedups in solving a large, natural class of proof-of-work puzzles cause an inevitable incentive incompatibility in mining, by distorting the reward structure of mining in proof-of-work-based protocols such as Bitcoin. We refer to such distortion as the Superlinearity Problem. Our impossibility result suggests that for robust post-quantum proof-of-work-based consensus, we may need to look beyond standard cryptographic models. We thus propose a proof-of-work design in a random-beacon model, which is tailored to bypass the earlier impossibility. We conclude with a discussion of open problems, and of the challenges of integrating our new proof-of-work scheme into decentralised consensus protocols under realistic conditions.
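
    A back-of-the-envelope calculation conveys the distortion. The sketch below is our own illustration, not the paper's model: it uses the textbook Grover success probability $\sin^2((2q+1)\theta)$ with $\sin^2(\theta) = \varepsilon$, where $\varepsilon$ (a hypothetical value here) is the fraction of the nonce space that solves the puzzle.

```python
# Why Grover mining rewards are superlinear in compute: success probability
# grows quadratically in the number of queries, while a classical miner's
# grows linearly. Illustration only; parameters are hypothetical.
import math

def grover_success(q: int, eps: float) -> float:
    """Success probability after q Grover iterations when a fraction eps
    of the search space solves the puzzle."""
    theta = math.asin(math.sqrt(eps))
    return math.sin((2 * q + 1) * theta) ** 2

eps = 1e-12  # hypothetical puzzle difficulty
for q in (10_000, 20_000, 40_000):
    print(f"{q:>6} queries: Grover {grover_success(q, eps):.2e}, "
          f"classical {q * eps:.2e}")

# Doubling the queries quadruples the Grover miner's success probability
# but only doubles the classical miner's: reward per unit of work grows
# with miner size, the Superlinearity Problem described above.
```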

    How Practical is Public-Key Encryption Based on LPN and Ring-LPN?

    We conduct a study of public-key cryptosystems based on variants of the Learning Parity with Noise (LPN) problem. The main LPN variant in consideration was introduced by Alekhnovich (FOCS 2003), and we describe several improvements to the originally proposed scheme, inspired by similar existing variants of Regev's LWE-based cryptosystem. To achieve further efficiency, we propose the first public-key cryptosystem based on the Ring-LPN problem, a more recently introduced LPN variant that yields substantial improvements in both time and space. We also introduce a variant of this problem, the transposed Ring-LPN problem; our public-key scheme based on it is even more efficient. For all cases, we compute the parameters required for various security levels in practice, given the best currently known attacks. Our conclusion is that the basic LPN-based scheme is in several respects not competitive with existing practical schemes, as the public key, ciphertexts, and encryption time become very large already at 80-bit security. On the other hand, the scheme based on transposed Ring-LPN is far better in all these respects. Although the public key and ciphertexts are still larger than for, say, RSA at comparable security levels, they are not prohibitively large; moreover, for decryption, the scheme outperforms RSA at security levels of 112 bits or more. The plain Ring-LPN-based scheme is less efficient, however. Thus, LPN-based public-key cryptography seems more promising for practical use than has generally been assumed so far.
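
    For readers unfamiliar with the underlying assumption, the snippet below (our own illustration, with toy parameters far below any real security level) generates an LPN instance: the schemes studied here rest on the hardness of distinguishing $(A, As + e)$ from uniform.

```python
# Generating a toy LPN instance: (A, b = A*s + e mod 2) with sparse noise.
# Illustration only; n, m, tau are toy parameters, not secure choices.
import numpy as np

rng = np.random.default_rng(0)
n, m, tau = 128, 512, 0.125

A = rng.integers(0, 2, size=(m, n))    # public random matrix over GF(2)
s = rng.integers(0, 2, size=n)         # secret vector
e = (rng.random(m) < tau).astype(int)  # Bernoulli(tau) noise
b = (A @ s + e) % 2                    # the LPN samples

# Without e, Gaussian elimination recovers s in polynomial time; the noise
# is what makes the problem conjecturally hard. Ring-LPN replaces the
# unstructured matrix A with multiplication in a polynomial ring, so the
# public data shrinks from m*n bits to O(n), saving both time and space.
print(A.shape, b.shape)
```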

    Data Structures Meet Cryptography: 3SUM with Preprocessing

    This paper shows several connections between data structure problems and cryptography against preprocessing attacks. Our results span data structure upper bounds, cryptographic applications, and data structure lower bounds, as summarized next. First, we apply Fiat–Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space $S$ and (online) time $T$ for a preprocessing version of the $N$-input 3SUM problem where $S^3 \cdot T = \widetilde{O}(N^6)$. This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure that solves this problem for $S = N^{2-\delta}$ and $T = N^{1-\delta}$ for any constant $\delta > 0$. Secondly, we show equivalence between lower bounds for a broad class of (static) data structure problems and one-way functions in the random oracle model that resist a very strong form of preprocessing attack. Concretely, given a random function $F \colon [N] \to [N]$ (accessed as an oracle), we show how to compile it into a function $G^F \colon [N^2] \to [N^2]$ which resists $S$-bit preprocessing attacks that run in query time $T$ where $ST = O(N^{2-\varepsilon})$ (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that $F$ itself can be more easily inverted, say with $N^{2/3}$-bit preprocessing in $N^{2/3}$ time. We also show that much stronger lower bounds follow from the hardness of kSUM. Our results can be equivalently interpreted as security against adversaries that are very non-uniform, or have large auxiliary input, or as security in the face of a powerfully backdoored random oracle. Thirdly, we give non-adaptive lower bounds for 3SUM and a range of geometric problems which match the best known lower bounds for static data structure problems.
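
    To anchor the trade-off, the baseline below (our own illustration, not the paper's data structure) solves 3SUM-with-preprocessing at one extreme: storing all pairwise sums gives $S = \widetilde{O}(N^2)$ and $T = O(1)$, which already meets $S^3 \cdot T = \widetilde{O}(N^6)$; the paper's Fiat–Naor-based structure achieves the same product smoothly at lower space, which is what refutes the conjectured barrier.

```python
# One endpoint of the 3SUM-with-preprocessing trade-off: precompute all
# pairwise sums (S ~ N^2 words) and answer queries in O(1) expected time.
# Illustration only; the paper's structure trades space for query time.
from itertools import product

def preprocess(A: list[int], B: list[int]) -> set[int]:
    """Offline phase: store every a + b (space O(N^2))."""
    return {a + b for a, b in product(A, B)}

def query(pair_sums: set[int], c: int) -> bool:
    """Online phase: is there (a, b) with a + b + c = 0? Time O(1)."""
    return -c in pair_sums

A, B = [1, 5, -3], [2, 4, -1]
table = preprocess(A, B)
print(query(table, -3))   # True: 1 + 2 + (-3) = 0
print(query(table, 100))  # False
```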

    AUDIT: Practical Accountability of Secret Processes

    The US federal court system is exploring ways to improve the accountability of electronic surveillance, an opaque process often involving cases sealed from public view and tech companies subject to gag orders against informing surveilled users. One judge has proposed publicly releasing some metadata about each case on a paper cover sheet as a way to balance the competing goals of (1) secrecy, so the target of an investigation does not discover and sabotage it, and (2) accountability, to assure the public that surveillance powers are not misused or abused. Inspired by the courts' accountability challenge, we illustrate how accountability and secrecy are simultaneously achievable when modern cryptography is brought to bear. Our system improves configurability while preserving secrecy, offering new tradeoffs potentially more palatable to the risk-averse court system. Judges, law enforcement, and companies publish commitments to surveillance actions, argue in zero-knowledge that their behavior is consistent, and compute aggregate surveillance statistics by multi-party computation (MPC). We demonstrate that these primitives perform efficiently at the scale of the federal judiciary. To do so, we implement a hierarchical form of MPC that mirrors the hierarchy of the court system. We also develop statements in succinct zero-knowledge (SNARKs) whose specificity can be tuned to calibrate the amount of information released. All told, our proposal not only offers the court system a flexible range of options for enhancing accountability in the face of necessary secrecy, but also yields a general framework for accountability in a broader class of secret information processes.
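
    The aggregate-statistics piece is the easiest to sketch. The snippet below is our own minimal illustration of additive secret sharing, not the AUDIT implementation (which uses a hierarchical MPC mirroring the court hierarchy, plus SNARKs for the zero-knowledge claims): each court splits its private tally into random shares, and only the grand total is ever reconstructed.

```python
# Aggregating private surveillance counts with additive secret sharing.
# Minimal sketch; the real system is hierarchical and adds zero-knowledge
# proofs of consistency. Counts and party layout are hypothetical.
import secrets

P = 2**61 - 1  # public prime modulus

def share(value: int, n_parties: int) -> list[int]:
    """Split value into n_parties random shares summing to value mod P."""
    parts = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

counts = [17, 42, 5]  # each court's private number of surveillance orders
n = len(counts)
all_shares = [share(c, n) for c in counts]

# Party i collects the i-th share from every court and publishes only its
# partial sum; combining the partials reveals the total and nothing else.
partials = [sum(all_shares[c][i] for c in range(n)) % P for i in range(n)]
print(sum(partials) % P)  # 64, without exposing any single court's count
```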

    Everything is a Race and Nakamoto Always Wins

    Nakamoto invented the longest chain protocol, and claimed its security by analyzing the private double-spend attack, a race between the adversary and the honest nodes to grow a longer chain. But is it the worst attack? We answer the question in the affirmative for three classes of longest chain protocols, designed for different consensus models: 1) Nakamoto's original Proof-of-Work protocol; 2) the Ouroboros and SnowWhite Proof-of-Stake protocols; 3) the Chia Proof-of-Space protocol. As a consequence, an exact characterization of the maximum tolerable adversary power is obtained for each protocol, as a function of the average block time normalized by the network delay. The security analysis of these protocols is performed in a unified manner by a novel method of reducing all attacks to a race between the adversary and the honest nodes.
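
    Nakamoto's own analysis of this race reduces to a gambler's-ruin calculation: an attacker mining a private chain with fraction $q$ of the total power catches up from $z$ blocks behind with probability $(q/(1-q))^z$ when $q < 1/2$. The snippet below (our illustration of that well-known whitepaper bound) tabulates it.

```python
# The private double-spend race: probability an attacker with power
# fraction q ever closes a deficit of z blocks (Bitcoin whitepaper bound).
def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q  # honest fraction of mining power
    return 1.0 if q >= p else (q / p) ** z

for q in (0.10, 0.30, 0.45):
    probs = [catch_up_probability(q, z) for z in range(1, 6)]
    print(q, [f"{pr:.2e}" for pr in probs])

# Success decays exponentially in the confirmation depth z for q < 1/2;
# the result above shows that, for these longest-chain protocols, no
# attack does better than this race.
```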