3 research outputs found

    Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning

    Full text link
    Deep Learning has recently become hugely popular in machine learning, providing significant improvements in classification accuracy in the presence of highly structured and large databases. Researchers have also considered the privacy implications of deep learning. Models are typically trained in a centralized manner, with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed in which parties locally train their deep learning structures and share only a subset of the parameters in an attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process, which allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level DP applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
    Comment: ACM CCS'17, 16 pages, 18 figures
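
    To make the attack mechanics concrete, below is a minimal PyTorch sketch of the GAN-side loop the abstract describes: the adversary uses the shared collaborative model as the discriminator and trains a generator until its outputs resemble the victim's private class. Everything here is illustrative rather than the paper's code: the MNIST-like 28x28 input shape, the stand-in global_model, and the hypothetical TARGET_CLASS label are all assumptions.

        # Illustrative sketch of the GAN-based inference attack (not the paper's code).
        # Assumptions: 28x28 grayscale inputs, a 10-class shared model, and the
        # victim's private data concentrated in class TARGET_CLASS.
        import torch
        import torch.nn as nn

        TARGET_CLASS = 3   # hypothetical label the adversary wants to reconstruct
        LATENT_DIM = 100

        # Generator: maps random noise to fake "private" samples.
        generator = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

        # Stand-in for the shared model the adversary downloads each round;
        # in the attack it doubles as the discriminator.
        global_model = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, 10),   # 10 classes; one belongs to the victim
        )

        opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
        loss_fn = nn.CrossEntropyLoss()

        for round_ in range(100):
            # In the real protocol the adversary would refresh global_model here
            # with the parameters newly shared by the other participants.
            z = torch.randn(64, LATENT_DIM)
            fake = generator(z).view(-1, 1, 28, 28)
            logits = global_model(fake)
            # Push generated samples toward the victim's class, so over the
            # rounds they drift toward the victim's private data distribution.
            target = torch.full((64,), TARGET_CLASS, dtype=torch.long)
            loss = loss_fn(logits, target)
            opt_g.zero_grad()
            loss.backward()
            opt_g.step()

    The key design point, per the abstract, is that the adversary never needs direct access to the victim's data: participating honestly in the real-time parameter exchange is enough to keep the discriminator current.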

    Privacy-Oriented Cryptography (Dagstuhl Seminar 12381)

    No full text
    This report documents the program of Dagstuhl Seminar 12381, "Privacy-Oriented Cryptography", which took place at Schloss Dagstuhl on September 16-21, 2012. As the first Dagstuhl seminar explicitly aimed at bringing together the cryptography and privacy research communities, it attracted a large number of participants, many of whom were new to Dagstuhl. In total, the seminar was attended by 39 international researchers working in different areas of cryptography and privacy, from academia, industry, and governmental organizations. The seminar included many interactive talks on novel, so-far unpublished results, aimed at the design, analysis, and practical deployment of cryptographic mechanisms for protecting the privacy of users and data. It also featured two panel discussions addressing various approaches towards provable privacy, as well as challenges and success stories in the practical deployment of existing cryptographic privacy-oriented techniques.