
    Machine Learning Models that Remember Too Much

    Machine learning (ML) is becoming a commodity. Numerous ML frameworks and services are available to data holders who are not ML experts but want to train predictive models on their data. It is important that ML models trained on sensitive inputs (e.g., personal images or documents) not leak too much information about the training data. We consider a malicious ML provider who supplies model-training code to the data holder, does not observe the training, but then obtains white- or black-box access to the resulting model. In this setting, we design and implement practical algorithms, some of them very similar to standard ML techniques such as regularization and data augmentation, that "memorize" information about the training dataset in the model, while the model remains as accurate and predictive as a conventionally trained one. We then explain how the adversary can extract the memorized information from the model. We evaluate our techniques on standard ML tasks for image classification (CIFAR10), face recognition (LFW and FaceScrub), and text analysis (20 Newsgroups and IMDB). In all cases, we show how our algorithms create models that have high predictive power yet allow accurate extraction of subsets of their training data.
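    As a hedged illustration of the white-box setting described above, the sketch below shows one way such memorization could work (an assumption for illustration, not the authors' implementation): hide training-data bits in the low-order mantissa bits of trained float32 parameters and recover them later, leaving the weights essentially unchanged. The array shapes, bit budget, and helper names are invented for the example.

```python
# Minimal sketch (assumed, not the paper's code): "LSB-style" memorization.
# Hide secret bits in the low-order bits of float32 model parameters after
# training, then recover them with white-box access to the parameters.
import numpy as np

def encode_bits_into_params(params, secret_bits, lsb=8):
    """Overwrite the `lsb` least-significant mantissa bits of each float32
    parameter with secret bits (len(secret_bits) must be a multiple of lsb
    and fit into the parameter array); the perturbation is tiny, so accuracy
    is essentially preserved."""
    flat = params.astype(np.float32).ravel().copy()
    ints = flat.view(np.uint32)
    mask = np.uint32((1 << lsb) - 1)
    chunks = secret_bits.reshape(-1, lsb)                # lsb bits per parameter
    values = (chunks * (1 << np.arange(lsb))).sum(axis=1).astype(np.uint32)
    ints[: len(values)] = (ints[: len(values)] & ~mask) | values
    return ints.view(np.float32).reshape(params.shape)

def decode_bits_from_params(params, n_bits, lsb=8):
    """Read the hidden bits back out of the low-order parameter bits."""
    ints = params.astype(np.float32).ravel().view(np.uint32)
    bits = (ints[:, None] >> np.arange(lsb)) & 1
    return bits.ravel()[:n_bits].astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(256, 128)).astype(np.float32)   # toy "model"
    secret = rng.integers(0, 2, size=10_000, dtype=np.uint8)   # bits to hide
    stego = encode_bits_into_params(weights, secret)
    assert np.array_equal(decode_bits_from_params(stego, len(secret)), secret)
    print("max parameter change:", np.max(np.abs(stego - weights)))
```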

    Stronger Attacks on Causality-Based Key Agreement

    Remarkably, it has been shown that, in principle, security proofs for quantum key-distribution (QKD) protocols can be independent of assumptions about the devices used and even of the fact that the adversary is limited by quantum theory. All that is required instead is the absence of any hidden information flow between the laboratories, a condition that can be enforced either by shielding or by space-time causality. All known schemes for such Causal Key Distribution (CKD) that offer noise tolerance (and hence must use privacy amplification as a crucial step) require multiple devices carrying out measurements in parallel on each end of the protocol, where the number of devices grows with the desired level of security. We investigate the power of the adversary against more practical schemes, in which each party uses a single device carrying out measurements consecutively. We provide a novel construction of attacks that is strictly more powerful than the best known attacks and has the potential to settle, in the negative, the question of whether such practical CKD schemes are possible.

    Quantum Cryptography Based Solely on Bell's Theorem

    Information-theoretic key agreement is impossible to achieve from scratch and must be based on some - ultimately physical - premise. In 2005, Barrett, Hardy, and Kent showed that unconditional security can be obtained in principle based on the impossibility of faster-than-light signaling; however, their protocol is inefficient and cannot tolerate any noise. While their key-distribution scheme uses quantum entanglement, its security relies only on the impossibility of superluminal signaling, rather than on the correctness and completeness of quantum theory. In particular, the resulting security is device independent. Here we introduce a new protocol that is efficient in terms of both classical and quantum communication and that can tolerate noise in the quantum channel. We prove that it offers device-independent security under the sole assumption that certain non-signaling conditions are satisfied. Our main insight is that the XOR of a number of bits that are only partially secret according to the non-signaling conditions turns out to be highly secret. Note that similar statements have been well known in classical contexts. Earlier results had indicated that amplification of such non-signaling-based privacy is impossible to achieve if the non-signaling condition only holds between events on Alice's and Bob's sides. Here, we show that the situation changes completely if such a separation is given within each of the laboratories.
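    To see why XORing partially secret bits helps, the following is a hedged classical illustration (the piling-up effect), not the paper's non-signaling argument, which is considerably more subtle; the bias value and function names are assumptions for the example.

```python
# Classical illustration only: XORing independent, partially secret bits
# drives the adversary's guessing advantage toward zero exponentially.
import random

def xor_advantage(eps, n):
    """Advantage over 1/2 of guessing the XOR of n independent bits, each of
    which the adversary guesses correctly with probability 1/2 + eps
    (piling-up lemma: 2^(n-1) * eps^n)."""
    return 0.5 * (2 * eps) ** n

def simulate(eps, n, trials=200_000, seed=1):
    """Empirical advantage when the adversary XORs its per-bit guesses."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        bits = [rng.randint(0, 1) for _ in range(n)]
        guesses = [b if rng.random() < 0.5 + eps else 1 - b for b in bits]
        correct += (sum(bits) % 2) == (sum(guesses) % 2)
    return correct / trials - 0.5

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"n={n}: predicted {0.5 + xor_advantage(0.2, n):.4f}, "
              f"simulated {0.5 + simulate(0.2, n):.4f}")
```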

    Best Effort and Practice Activation Codes

    Activation Codes (ACs) are used in many different digital services and are known by many different names, including voucher, e-coupon, and discount code. In this paper we focus on a specific class of ACs that are short, human-readable, fixed-length, and represent value. Even though this class of codes is used extensively, there are no general guidelines for the design of Activation Code schemes. We discuss different methods that are used in practice and propose BEPAC, a new Activation Code scheme that provides both authenticity and confidentiality. The small message space of activation codes introduces some problems, which are illustrated by an adaptive chosen-plaintext attack (CPA-2) on a general 3-round Feistel network of size 2^(2n). This attack recovers the complete permutation from at most 2^(n+2) plaintext-ciphertext pairs. For this reason, BEPAC is designed in such a way that authenticity and confidentiality are independent properties, i.e. loss of confidentiality does not imply loss of authenticity.
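    For reference, the sketch below shows a generic balanced 3-round Feistel permutation on 2n-bit blocks, i.e. the structure that the CPA-2 attack above targets; the keyed-hash round function, half-block size, and names are illustrative assumptions, not part of BEPAC.

```python
# Generic balanced Feistel permutation on 2n-bit blocks (illustrative only).
import hashlib

N = 16                       # half-block size n in bits; block size 2n = 32
MASK = (1 << N) - 1

def round_fn(key, r, half):
    """Keyed pseudorandom round function F_r: n bits -> n bits (assumed)."""
    digest = hashlib.sha256(key + bytes([r]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") & MASK

def feistel_encrypt(key, block, rounds=3):
    left, right = (block >> N) & MASK, block & MASK
    for r in range(rounds):
        left, right = right, left ^ round_fn(key, r, right)
    return (left << N) | right

def feistel_decrypt(key, block, rounds=3):
    left, right = (block >> N) & MASK, block & MASK
    for r in reversed(range(rounds)):
        left, right = right ^ round_fn(key, r, left), left
    return (left << N) | right

if __name__ == "__main__":
    key, pt = b"demo-key", 0x1234ABCD
    ct = feistel_encrypt(key, pt)
    assert feistel_decrypt(key, ct) == pt
    print(hex(ct))
```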

    Secure key storage with PUFs

    Nowadays, people carry around devices (cell phones, PDAs, bank passes, etc.) that have a high value. That value is often contained in the data stored on them or lies in the services the devices can grant access to (by using secret identification information stored on them). These devices often operate in hostile environments, and their protection level is not adequate to deal with that situation. Bank passes and credit cards contain a magnetic stripe where identification information is stored. In the case of bank passes, a PIN is additionally required to withdraw money from an ATM (Automated Teller Machine). On various occasions, it has been shown that by placing a small coil in the reader, the magnetic information stored in the stripe can easily be copied and used to produce a cloned card. Together with eavesdropping on the PIN (by listening to the keypad or recording it with a camera), an attacker can easily impersonate the legitimate owner of the bank pass by using the cloned card in combination with the eavesdropped PIN.

    To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression

    As deep neural networks (DNNs) become widely used, pruned and quantised models are becoming ubiquitous on edge devices; such compressed DNNs are popular for lowering computational requirements. Meanwhile, recent studies show that adversarial samples can be effective at making DNNs misclassify. We therefore investigate the extent to which adversarial samples are transferable between uncompressed and compressed DNNs. We find that adversarial samples remain transferable for both pruned and quantised models. For pruning, the adversarial samples generated from heavily pruned models remain effective on uncompressed models. For quantisation, we find that the transferability of adversarial samples is highly sensitive to integer precision. Partially supported with funds from Bosch-Forschungsstiftung im Stifterverband.
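    A minimal sketch of how such a transferability check might look, assuming a toy fully connected model, random data, magnitude pruning, and a one-step FGSM attack (none of which are the paper's exact setup):

```python
# Toy sketch: craft FGSM adversarial examples on a magnitude-pruned copy of a
# network and check whether they also fool the uncompressed original.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def magnitude_prune_(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights of every Linear layer in place."""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            w = m.weight.data
            k = int(sparsity * w.numel())
            if k > 0:
                thresh = w.abs().flatten().kthvalue(k).values
                w[w.abs() <= thresh] = 0.0

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM perturbation of x against `model`."""
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))

    pruned = copy.deepcopy(model)
    magnitude_prune_(pruned, sparsity=0.9)

    x_adv = fgsm(pruned, x, y)            # attack crafted on the compressed model
    acc = lambda m, inp: (m(inp).argmax(1) == y).float().mean().item()
    print("accuracy of original model on clean inputs:      ", acc(model, x))
    print("accuracy of original model on transferred attack:", acc(model, x_adv))
```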