
    Multi-party Poisoning through Generalized p-Tampering

    In a poisoning attack against a learning algorithm, an adversary tampers with a fraction of the training data $T$ with the goal of increasing the classification error of the constructed hypothesis/model over the final test distribution. In the distributed setting, $T$ might be gathered gradually from $m$ data providers $P_1,\dots,P_m$ who generate and submit their shares of $T$ in an online way. In this work, we initiate a formal study of $(k,p)$-poisoning attacks in which an adversary controls $k\in[m]$ of the parties, and, even for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). For $k=m$, this model becomes the traditional notion of poisoning, and for $p=1$ it coincides with the standard notion of corruption in multi-party computation. We prove that if there is an initial constant error for the generated hypothesis $h$, there is always a $(k,p)$-poisoning attacker who can decrease the confidence of $h$ (to have a small error), or alternatively increase the error of $h$, by $\Omega(p \cdot k/m)$. Our attacks can be implemented in polynomial time given samples from the correct data, and they use no wrong labels if the original distributions are not noisy. At a technical level, we prove a general lemma about biasing bounded functions $f(x_1,\dots,x_n)\in[0,1]$ through an attack model in which each block $x_i$ might be controlled by an adversary with marginal probability $p$ in an online way. When the probabilities are independent, this coincides with the model of $p$-tampering attacks, so we call our model generalized $p$-tampering. We prove the power of such attacks by incorporating ideas from the context of coin-flipping attacks into the $p$-tampering model, generalizing the results in both of these areas.
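
    The biasing lemma stated above can be made concrete with a small Monte Carlo experiment. The sketch below is a toy illustration, not the paper's attack: blocks are single bits, the bounded function $f$ is simply the fraction of ones, and a greedy online tamperer controls each block with marginal probability $p$, choosing (when in control) the value with the larger estimated conditional expectation of $f$. The helper names (cond_mean, sample_run) and all parameter choices are illustrative assumptions.

```python
import random

def f(bits):
    # Toy bounded function in [0,1]: the fraction of ones.
    return sum(bits) / len(bits)

def cond_mean(prefix, n, trials=200):
    # Monte Carlo estimate of E[f | prefix] when the remaining blocks are honest (uniform).
    total = 0.0
    for _ in range(trials):
        suffix = [random.randint(0, 1) for _ in range(n - len(prefix))]
        total += f(prefix + suffix)
    return total / trials

def sample_run(n, p, tamper):
    # One online generation of (x_1, ..., x_n); each block is adversarially
    # controlled with independent (marginal) probability p.
    x = []
    for _ in range(n):
        if tamper and random.random() < p:
            # Greedy tamperer: keep the value with the larger conditional mean.
            x.append(max((0, 1), key=lambda b: cond_mean(x + [b], n)))
        else:
            x.append(random.randint(0, 1))
    return f(x)

def average(n=10, p=0.2, runs=300, tamper=False):
    return sum(sample_run(n, p, tamper) for _ in range(runs)) / runs

if __name__ == "__main__":
    print(f"E[f] honest   ~ {average(tamper=False):.3f}")
    print(f"E[f] tampered ~ {average(tamper=True):.3f}")
```

    For this particular $f$ the observed shift is much larger than the worst-case $\Omega(p \cdot k/m)$ guarantee; the only purpose of the sketch is to make the online, marginal-probability-$p$ control model concrete.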

    On Oblivious Amplification of Coin-Tossing Protocols

    We consider the problem of amplifying two-party coin-tossing protocols: given a protocol where it is possible to bias the common output by at most $\rho$, we aim to obtain a new protocol where the output can be biased by at most $\rho^\star < \rho$. We rule out the existence of a natural type of amplifiers called oblivious amplifiers for every $\rho^\star < \rho$. Such amplifiers ignore the way that the underlying $\rho$-bias protocol works and can only invoke an oracle that provides $\rho$-bias bits. We provide two proofs of this impossibility. The first is by a reduction to the impossibility of deterministic randomness extraction from Santha-Vazirani sources. The second is a direct proof that is more general and also rules out certain types of asymmetric amplification. In addition, it gives yet another proof for the Santha-Vazirani impossibility.
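
    The first impossibility proof goes through the impossibility of deterministic randomness extraction from Santha-Vazirani sources, which can be demonstrated concretely on a small instance. The sketch below is an illustration under stated assumptions (bits as the SV blocks, XOR as the candidate extractor, $\delta = 0.1$): an exact dynamic program computes how far an adaptive SV adversary can push the extractor's output probability away from 1/2. The names max_bias and xor_extractor are illustrative.

```python
from itertools import product

DELTA = 0.1  # SV parameter: each bit equals 1 w.p. in [1/2 - DELTA, 1/2 + DELTA]

def xor_extractor(bits):
    # A fixed deterministic candidate extractor (parity of the input bits).
    return sum(bits) % 2

def max_bias(f, n, prefix=()):
    # Exact DP: the largest Pr[f = 1] an adaptive SV adversary can force,
    # given the bits fixed so far; the optimal conditional distribution of
    # the next bit is always at an endpoint of the allowed interval.
    if len(prefix) == n:
        return float(f(prefix))
    hi = max_bias(f, n, prefix + (1,))
    lo = max_bias(f, n, prefix + (0,))
    q = 0.5 + DELTA if hi >= lo else 0.5 - DELTA
    return q * hi + (1 - q) * lo

if __name__ == "__main__":
    n = 8
    honest = sum(xor_extractor(x) for x in product((0, 1), repeat=n)) / 2 ** n
    print(f"Pr[XOR = 1] with honest uniform bits : {honest:.3f}")
    print(f"Pr[XOR = 1] against best SV adversary: {max_bias(xor_extractor, n):.3f}")
```

    Swapping in any other deterministic f shows the same effect: no fixed function extracts an almost-unbiased bit from every SV source.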

    Immunization against complete subversion without random oracles

    We seek constructions of general-purpose immunizers that take arbitrary cryptographic primitives and transform them into ones that withstand a powerful "malicious but proud" adversary, who attempts to break security by possibly subverting the implementation of all algorithms (including the immunizer itself!), while trying not to be detected. This question is motivated by the recent evidence of cryptographic schemes being intentionally weakened, or designed together with hidden backdoors, e.g., with the goal of mass surveillance. Our main result is a subversion-secure immunizer in the plain model that works for a fairly large class of deterministic primitives, i.e., schemes where a secret (but tamperable) random source is used to generate the keys and the public parameters, whereas all other algorithms are deterministic. The immunizer relies on an additional independent source of public randomness, which is used to sample a public seed. Assuming the public source is untamperable, and that the subversion of the algorithms is chosen independently of the seed, we can instantiate our immunizer from any one-way function. In case the subversion is allowed to depend on the seed, and the public source is still untamperable, we obtain an instantiation from collision-resistant hash functions. In the more challenging scenario where the public source is also tamperable, we additionally need to assume that the initial cryptographic primitive has sub-exponential security. Previous work in the area only obtained subversion-secure immunization for very restricted classes of primitives, often in weaker models of subversion and using random oracles.
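
    One natural way to picture the role of the public seed is sketched below: the (possibly subverted) secret random source is combined with an independent public seed before being fed to the deterministic key-generation algorithm. This is only a schematic guess at the shape of such a seeded immunizer, not the paper's construction; SHA-256 and every function name here are stand-ins, and none of the paper's actual assumptions (one-way functions, collision resistance, sub-exponential security) are modeled.

```python
import hashlib
import os

def subverted_random_source() -> bytes:
    # Stand-in for the secret, tamperable random source (could be backdoored).
    return os.urandom(32)

def public_seed() -> bytes:
    # Stand-in for the independent public source used to sample the seed.
    return os.urandom(32)

def keygen_deterministic(coins: bytes) -> bytes:
    # Stand-in for the primitive's deterministic key generation on fixed coins.
    return hashlib.sha256(b"keygen" + coins).digest()

def immunized_keygen(r: bytes, seed: bytes) -> bytes:
    # Schematic immunizer: hash the secret source together with the public
    # seed, so a subversion chosen independently of the seed cannot steer
    # the coins that key generation actually consumes.
    coins = hashlib.sha256(seed + r).digest()
    return keygen_deterministic(coins)

if __name__ == "__main__":
    print(immunized_keygen(subverted_random_source(), public_seed()).hex())
```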

    Blockwise p-Tampering Attacks on Cryptographic Primitives, Extractors, and Learners

    Austrin, Chung, Mahmoody, Pass and Seth (Crypto '14) studied the notion of bitwise $p$-tampering attacks over randomized algorithms in which an efficient 'virus' gets to control each bit of the randomness with independent probability $p$ in an online way. The work of Austrin et al. showed how to break certain 'privacy primitives' (e.g., encryption, commitments, etc.) through bitwise $p$-tampering, by giving a bitwise $p$-tampering biasing attack that increases the average $E[f(U_n)]$ of any efficient function $f \colon \{0,1\}^n \to [-1,+1]$ by $\Omega(p \cdot Var[f(U_n)])$, where $Var[f(U_n)]$ is the variance of $f(U_n)$. In this work, we revisit and extend the bitwise tampering model of Austrin et al. to the blockwise setting, where blocks of randomness become tamperable with independent probability $p$. Our main result is an efficient blockwise $p$-tampering attack that biases the average $E[f(X)]$ of any efficient function $f$ mapping arbitrary $X$ to $[-1,+1]$ by $\Omega(p \cdot Var[f(X)])$ regardless of how $X$ is partitioned into individually tamperable blocks $X=(X_1,\dots,X_n)$. Relying on previous works, our main biasing attack immediately implies efficient attacks against the privacy primitives as well as seedless multi-source extractors, in a model where the attacker gets to tamper with each block (or source) of the randomness with independent probability $p$. Further, we show how to increase the classification error of deterministic learners in the so-called 'targeted poisoning' attack model under Valiant's adversarial noise. In this model, an attacker has a 'target' test data point $d$ in mind and wishes to increase the error of classifying $d$ while she gets to tamper with each training example with independent probability $p$, in an online way.
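
    The targeted-poisoning application can be illustrated with a toy experiment. In the sketch below each training example is tamperable with independent probability $p$; when in control, the attacker submits a correctly labeled but adversarially selected sample (the largest of several honest draws from the same class), dragging a simple midpoint-threshold learner's boundary past a fixed target point $d$. The learner, the Gaussian class distributions, and the best-of-16 selection rule are illustrative assumptions, not the paper's attack.

```python
import random

def train_threshold(examples):
    # Toy learner: classify x as class 1 iff x > midpoint of the class means.
    xs0 = [x for x, y in examples if y == 0]
    xs1 = [x for x, y in examples if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def honest_example():
    y = random.randint(0, 1)            # class 0 ~ N(0,1), class 1 ~ N(2,1)
    return random.gauss(2.0 * y, 1.0), y

def tampered_example():
    # Correctly labeled but adversarially selected: the largest of 16 honest
    # draws from the same class, which pushes the learned threshold upward.
    y = random.randint(0, 1)
    return max(random.gauss(2.0 * y, 1.0) for _ in range(16)), y

def error_on_target(p, n=400, target=1.3, trials=200):
    # Fraction of training runs in which the target point d (true label 1)
    # ends up misclassified as class 0.
    errors = 0
    for _ in range(trials):
        data = [tampered_example() if random.random() < p else honest_example()
                for _ in range(n)]
        errors += target <= train_threshold(data)
    return errors / trials

if __name__ == "__main__":
    print(f"error on target d, p = 0.0: {error_on_target(0.0):.2f}")
    print(f"error on target d, p = 0.3: {error_on_target(0.3):.2f}")
```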

    The chaining lemma and its application

    We present a new information-theoretic result which we call the Chaining Lemma. It considers a so-called "chain" of random variables, defined by a source distribution $X^{(0)}$ with high min-entropy and a number (say, $t$ in total) of arbitrary functions $(T_1,\dots,T_t)$ which are applied in succession to that source to generate the chain $X^{(0)} \rightarrow X^{(1)} \rightarrow \dots \rightarrow X^{(t)}$, where $X^{(i)} = T_i(X^{(i-1)})$. Intuitively, the Chaining Lemma guarantees that, if the chain is not too long, then either (i) the entire chain is "highly random", in that every variable has high min-entropy; or (ii) it is possible to find a point $j$ ($1 \le j \le t$) in the chain such that, conditioned on the end of the chain, i.e. $X^{(j)},\dots,X^{(t)}$, the preceding part $X^{(0)},\dots,X^{(j-1)}$ remains highly random. We think this is an interesting information-theoretic result which is intuitive but nevertheless requires rigorous case analysis to prove. We believe that the above lemma will find applications in cryptography. We give an example of this: namely, we show an application of the lemma to protect essentially any cryptographic scheme against memory-tampering attacks. We allow several tampering requests, and the tampering functions can be arbitrary; however, they must be chosen from a set of functions of bounded size that is fixed a priori.
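
    A schematic way to write down the dichotomy described above, with the quantitative entropy thresholds and the admissible chain length left as unspecified parameters (they are not stated in this abstract):

```latex
% Chain: X^{(0)} -> X^{(1)} -> ... -> X^{(t)}, each step an arbitrary function T_i.
\[
  X^{(0)} \xrightarrow{\;T_1\;} X^{(1)} \xrightarrow{\;T_2\;} \cdots
  \xrightarrow{\;T_t\;} X^{(t)}, \qquad X^{(i)} = T_i\bigl(X^{(i-1)}\bigr).
\]
% Either every variable in the chain retains high min-entropy,
\[
  \text{(i)} \quad H_\infty\bigl(X^{(i)}\bigr) \ge k \quad \text{for all } 0 \le i \le t,
\]
% or some prefix stays highly random even conditioned on the end of the chain
% (thresholds k, k' and the bound on t are parameters of the lemma).
\[
  \text{(ii)} \quad \exists\, j \in \{1,\dots,t\}: \;
  \widetilde{H}_\infty\bigl(X^{(j-1)} \,\big|\, X^{(j)},\dots,X^{(t)}\bigr) \ge k'.
\]
```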

    How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts

    In this work, we examine the feasibility of secure and undetectable point-to-point communication when an adversary (e.g., a government) can read all encrypted communications of surveillance targets. We consider a model where the only permitted method of communication is via a government-mandated encryption scheme, instantiated with government-mandated keys. Parties cannot simply encrypt ciphertexts of some other encryption scheme, because citizens caught trying to communicate outside the government's knowledge (e.g., by encrypting strings which do not appear to be natural language plaintexts) will be arrested. The one guarantee we suppose is that the government mandates an encryption scheme which is semantically secure against outsiders: a perhaps reasonable supposition when a government might consider it advantageous to secure its people's communication against foreign entities. But then, what good is semantic security against an adversary that holds all the keys and has the power to decrypt? We show that even in the pessimistic scenario described, citizens can communicate securely and undetectably. In our terminology, this translates to a positive statement: all semantically secure encryption schemes support subliminal communication. Informally, this means that there is a two-party protocol between Alice and Bob where the parties exchange ciphertexts of what appears to be a normal conversation even to someone who knows the secret keys and thus can read the corresponding plaintexts. And yet, at the end of the protocol, Alice will have transmitted her secret message to Bob. Our security definition requires that the adversary not be able to tell whether Alice and Bob are just having a normal conversation using the mandated encryption scheme, or they are using the mandated encryption scheme for subliminal communication. Our topics may be thought to fall broadly within the realm of steganography. However, we deal with the non-standard setting of an adversarially chosen distribution of cover objects (i.e., a stronger-than-usual adversary), and we take advantage of the fact that our cover objects are ciphertexts of a semantically secure encryption scheme to bypass impossibility results which we show for broader classes of steganographic schemes. We give several constructions of subliminal communication schemes under the assumption that key exchange protocols with pseudorandom messages exist (such as Diffie-Hellman, which in fact has truly random messages).
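
    A classic way to make "a ciphertext that secretly carries a bit" concrete is rejection sampling over the encryption randomness. The toy below is not the paper's construction (which avoids any pre-shared subliminal secret and instead runs a key-exchange protocol with pseudorandom messages over the channel); it assumes a pre-shared key k_sub and a stand-in randomized "mandated" scheme, purely to illustrate that an adversary who can decrypt still cannot tell which of the many valid ciphertexts of the same plaintext was chosen.

```python
import hashlib
import os

def mandated_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Stand-in for a randomized, semantically secure, government-mandated scheme.
    nonce = os.urandom(16)
    pad = hashlib.sha256(key + nonce).digest()[: len(plaintext)]
    return nonce + bytes(a ^ b for a, b in zip(plaintext, pad))

def embed_bit(enc_key: bytes, k_sub: bytes, plaintext: bytes, bit: int) -> bytes:
    # Re-encrypt the innocuous plaintext with fresh randomness until a keyed
    # hash of the ciphertext carries the desired bit; every attempt is a
    # perfectly normal ciphertext of the same conversation.
    while True:
        ct = mandated_encrypt(enc_key, plaintext)
        if hashlib.sha256(k_sub + ct).digest()[0] & 1 == bit:
            return ct

def extract_bit(k_sub: bytes, ct: bytes) -> int:
    return hashlib.sha256(k_sub + ct).digest()[0] & 1

if __name__ == "__main__":
    enc_key, k_sub = os.urandom(32), os.urandom(32)
    ct = embed_bit(enc_key, k_sub, b"see you at lunch", 1)
    print(extract_bit(k_sub, ct))  # -> 1
```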

    Efficient public-key cryptography with bounded leakage and tamper resilience

    We revisit the question of constructing public-key encryption and signature schemes with security in the presence of bounded leakage and memory tampering attacks. For signatures we obtain the first construction in the standard model; for public-key encryption we obtain the first construction free of pairings (avoiding non-interactive zero-knowledge proofs). Our constructions are based on generic building blocks and, as we show, also admit efficient instantiations under fairly standard number-theoretic assumptions. The model of bounded tamper resistance was recently put forward by Damgård et al. (Asiacrypt 2013) as an attractive path to achieve security against arbitrary memory tampering attacks without making hardware assumptions (such as the existence of a protected self-destruct or key-update mechanism), the only restriction being on the number of allowed tampering attempts (which is a parameter of the scheme). This makes it possible to circumvent known impossibility results for unrestricted tampering (Gennaro et al., TCC 2010), while still capturing realistic tampering attacks.

    Tamper resilient circuits

    Finding efficient methods for protecting logic circuits that implement cryptographic systems exposed to physical attacks is one of the open problems of modern cryptography. Concretely, we assume that the circuit is represented by a directed acyclic graph G(V, E), each node of which corresponds to a logic gate or is an input or output node, and each edge of which corresponds to a wire of the circuit. In addition, the graph contains a set of nodes V' that represent the secret key of the cryptographic algorithm. Computation is defined as a breadth-first traversal of the graph. The security model considers adversaries who can corrupt the computation by interacting with elements of the set $E \cup V$, with the ultimate goal of extracting the secret key. The goal is therefore to find efficient methods of protecting the computation by transforming the graph G into a graph G' that satisfies the following properties: (i) the computation represented by G is identical to that of G'; (ii) with high probability, an adversary's attack is detected during the computation and leads to erasure of the cryptographic key. The purpose of this thesis is thus the theoretical study and construction of efficient circuit-protection transformations against implementation attacks.

    This dissertation studies the effect of gate-tampering attacks against cryptographic circuits. The proposed adversarial model is motivated by the plausibility of tampering directly with circuit gates and by the increasing use of tamper-resilient gates among the known constructions that are shown to be resilient against wire-tampering adversaries. We prove that gate-tampering is strictly stronger than wire-tampering. On the one hand, we show that there is a gate-tampering strategy that perfectly simulates any given wire-tampering strategy. On the other, we construct families of circuits over which it is impossible for any wire-tampering attacker to simulate a certain gate-tampering attack (that we explicitly construct). We also provide a tamper-resilience impossibility result that applies to both gate- and wire-tampering adversaries and relates the amount of tampering to the depth of the circuit. Finally, we show that defending against gate-tampering attacks is feasible by appropriately abstracting and analyzing the circuit compiler of Ishai et al. in a manner which may be of independent interest. Specifically, we first introduce a class of compilers that, assuming certain well-defined tamper-resilience characteristics against a specific class of attackers, can be shown to produce tamper-resilient circuits against that same class of attackers. Then, we describe a compiler in this class for which we prove that it possesses the necessary tamper-resilience characteristics against gate-tampering attackers.
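
    The circuit model above, and the claim that gate-tampering can simulate wire-tampering, can be pictured in a few lines of code. The sketch below evaluates a two-gate circuit given as a DAG in topological order, applies a toggle attack to one internal wire, and exhibits a gate-tampering strategy (the toggle absorbed into the consuming gate's function) that reproduces it on every input. The circuit, wire names, and tampering functions are illustrative assumptions, not constructions from the thesis.

```python
from itertools import product

# Each gate: (output wire, boolean function, input wires); topological order.
CIRCUIT = [
    ("w3", lambda a, b: a & b, ("w1", "w2")),   # AND gate
    ("w4", lambda a, b: a ^ b, ("w1", "w3")),   # XOR gate; w4 is the output wire
]

def evaluate(inputs, wire_tamper=None, gate_tamper=None):
    wires = dict(inputs)
    for name, func, in_wires in CIRCUIT:
        if gate_tamper:                      # replace the gate's function
            func = gate_tamper.get(name, func)
        args = [wires[w] for w in in_wires]
        if wire_tamper:                      # tamper values carried on wires
            args = [wire_tamper.get(w, lambda v: v)(v) for w, v in zip(in_wires, args)]
        wires[name] = func(*args)
    return wires[CIRCUIT[-1][0]]             # value on the output wire

if __name__ == "__main__":
    toggle_w3 = {"w3": lambda v: 1 - v}          # a wire-tampering attack
    simulate = {"w4": lambda a, b: a ^ (1 - b)}  # gate tamper absorbing the toggle
    for x1, x2 in product((0, 1), repeat=2):
        ins = {"w1": x1, "w2": x2}
        assert evaluate(ins, wire_tamper=toggle_w3) == evaluate(ins, gate_tamper=simulate)
    print("gate tampering reproduces the toggled-wire attack on all inputs")
```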

    Privacy Amplification with Tamperable Memory via Non-malleable Two-source Extractors

    We extend the classical problem of privacy amplification to a setting where the active adversary, Eve, is also allowed to fully corrupt the internal memory (which includes the shared randomness and the local randomness tape) of one of the honest parties, Alice and Bob, before the execution of the protocol. We require that either one of Alice or Bob detects tampering, or they agree on a shared key that is indistinguishable from the uniform distribution to Eve. We obtain the following results: (1) We give a privacy amplification protocol via low-error non-malleable two-source extractors with one source having low min-entropy. In particular, this implies the existence of such (non-efficient) protocols; (2) We show that even slight improvements to the state-of-the-art explicit non-malleable two-source extractors would lead to explicit low-error, low min-entropy two-source extractors, thereby resolving a long-standing open question. This suggests that obtaining (information-theoretically secure) explicit non-malleable two-source extractors for (1) might be hard; (3) We present explicit constructions of low-error, low min-entropy non-malleable two-source extractors in the CRS model of (Garg, Kalai, Khurana, Eurocrypt 2020), assuming either the quasi-polynomial hardness of DDH or the existence of nearly-optimal collision-resistant hash functions; (4) We instantiate our privacy amplification protocol with the above-mentioned non-malleable two-source extractors in the CRS model, leading to explicit, computationally secure protocols. This is not immediate from (1) because in the computational setting we need to make sure that, in particular, all randomness sources remain samplable throughout the proof. This requires upgrading the assumption of quasi-polynomial hardness of DDH to sub-exponential hardness of DDH. We emphasize that each of the first three results can be read independently.