10 research outputs found
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
Deep Learning has recently become hugely popular in machine learning,
providing significant improvements in classification accuracy on large,
highly structured datasets.
Researchers have also considered privacy implications of deep learning.
Models are typically trained in a centralized manner with all the data being
processed by the same training algorithm. If the data is a collection of users'
private data, including habits, personal pictures, geographical positions,
interests, and more, the centralized server will have access to sensitive
information that could potentially be mishandled. To tackle this problem,
collaborative deep learning models have recently been proposed where parties
locally train their deep learning structures and only share a subset of the
parameters in an attempt to keep their respective training sets private.
Parameters can also be obfuscated via differential privacy (DP) to make
information extraction even more challenging, as proposed by Shokri and
Shmatikov at CCS'15.
Unfortunately, we show that any privacy-preserving collaborative deep
learning is susceptible to a powerful attack that we devise in this paper. In
particular, we show that a distributed, federated, or decentralized deep
learning approach is fundamentally broken and does not protect the training
sets of honest participants. The attack we developed exploits the real-time
nature of the learning process that allows the adversary to train a Generative
Adversarial Network (GAN) that generates prototypical samples of the targeted
training set that was meant to be private (the samples generated by the GAN are
intended to come from the same distribution as the training data).
Interestingly, we show that record-level DP applied to the shared parameters of
the model, as suggested in previous work, is ineffective (i.e., record-level DP
is not designed to address our attack).
Comment: ACM CCS'17, 16 pages, 18 figures
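The adversarial round described in this abstract can be sketched at a structural level. The following is an illustrative stub, not the authors' implementation: the names (`StubGenerator`, `train_on`, `adversary_round`) are assumptions, and the actual GAN training steps are replaced by placeholders.

```python
# A minimal structural sketch of the GAN attack loop: the shared model doubles
# as the discriminator, and the adversary injects mislabeled generated samples
# so honest participants reveal finer-grained information in later rounds.

class StubGenerator:
    def generate(self, discriminator, target_class):
        # Real attack: gradient-train the generator until the discriminator
        # (the shared model) classifies its output as `target_class`.
        return [f"synthetic-sample-of-class-{target_class}"] * 4

def train_on(params, samples, label):
    # Real attack: one local SGD pass; here we only record what was injected.
    update = dict(params)
    update["last_injected"] = (label, len(samples))
    return update

def adversary_round(shared_params, generator, target_class, fake_class):
    discriminator = dict(shared_params)   # copy of the collaborative model
    samples = generator.generate(discriminator, target_class)
    # Mislabeling the generated samples forces the victim to work harder to
    # separate the classes, leaking more detail about the target class.
    return train_on(discriminator, samples, label=fake_class)

update = adversary_round({"w": 0.1}, StubGenerator(),
                         target_class=3, fake_class=7)
print(update["last_injected"])  # (7, 4)
```

The key structural point is that the adversary never needs direct access to the victim's data; participating in the rounds is enough.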
FedComm: Federated Learning as a Medium for Covert Communication
Proposed as a solution to mitigate the privacy implications related to the
adoption of deep learning, Federated Learning (FL) enables large numbers of
participants to successfully train deep neural networks without having to
reveal the actual private training data. To date, a substantial amount of
research has investigated the security and privacy properties of FL, resulting
in a plethora of innovative attack and defense strategies. This paper
thoroughly investigates the communication capabilities of an FL scheme. In
particular, we show that a party involved in the FL process can use FL
as a covert communication medium to send an arbitrary message. We introduce
FedComm, a novel multi-system covert-communication technique that enables
robust sharing and transfer of targeted payloads within the FL framework. Our
extensive theoretical and empirical evaluations show that FedComm provides a
stealthy communication channel, with minimal disruptions to the training
process. Our experiments show that FedComm successfully delivers 100% of a
payload on the order of kilobits before the FL procedure converges. Our
evaluation also shows that FedComm is independent of the application domain and
the neural network architecture used by the underlying FL scheme.
Comment: 18 pages
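As a rough illustration of how a model update could carry a covert payload, here is a toy spread-spectrum scheme: each payload bit is multiplied by a pseudorandom ±1 chip sequence and added to the update at low amplitude, and the receiver recovers bits by correlating against the same sequence. The constants and the gradient model are illustrative assumptions, not FedComm's actual parameters.

```python
import random

CHIPS = 64   # spreading-sequence length per payload bit
GAIN = 0.01  # small amplitude, so training is barely disturbed

def spreading_seq(seed, n):
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(gradient, bits, seed=42):
    """Add a low-amplitude spread version of `bits` to the update."""
    seq = spreading_seq(seed, CHIPS * len(bits))
    out = list(gradient)
    for i, bit in enumerate(bits):
        sign = 1.0 if bit else -1.0
        for j in range(CHIPS):
            out[i * CHIPS + j] += GAIN * sign * seq[i * CHIPS + j]
    return out

def extract(gradient, nbits, seed=42):
    """Recover each bit from the sign of its correlation with the sequence."""
    seq = spreading_seq(seed, CHIPS * nbits)
    bits = []
    for i in range(nbits):
        corr = sum(gradient[i * CHIPS + j] * seq[i * CHIPS + j]
                   for j in range(CHIPS))
        bits.append(1 if corr > 0 else 0)
    return bits

rng = random.Random(0)
gradient = [rng.gauss(0.0, 0.001) for _ in range(CHIPS * 8)]
payload = [1, 0, 1, 1, 0, 0, 1, 0]
print(extract(embed(gradient, payload), 8))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

Because the chip sequences are pseudorandom and the gain is small, the embedded signal looks like ordinary gradient noise to other participants, while the correlation at the receiver sums coherently over 64 chips per bit.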
No Place to Hide that Bytes won't Reveal: Sniffing Location-Based Encrypted Traffic to Track a User's Position
News reports in recent years have indicated that several intelligence
agencies are able to monitor large networks or entire portions of the Internet
backbone. Such a powerful adversary has only recently been considered in the
academic literature. In this paper, we propose a new adversary model for
Location Based Services (LBSs). The model takes into account an unauthorized
third party, different from the LBS provider itself, that wants to infer the
location and monitor the movements of an LBS user. We show that such an
adversary can extrapolate the position of a target user merely by analyzing
the size and timing of the encrypted traffic exchanged between that user and
the LBS provider. We performed a thorough analysis of a widely deployed
location based app that comes pre-installed with many Android devices:
GoogleNow. The results are encouraging and highlight the importance of devising
more effective countermeasures against powerful adversaries to preserve the
privacy of LBS users.
Comment: 14 pages, 9th International Conference on Network and System Security (NSS 2015)
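The fingerprinting idea in this abstract can be sketched with a toy classifier: summarize an encrypted trace by coarse size/timing features and match it against previously observed per-location fingerprints. The feature set, distance metric, and fingerprint database below are illustrative assumptions, not the paper's actual methodology.

```python
from math import sqrt

def feature_vector(packet_sizes, inter_arrival_times):
    """Summarize a trace as (total bytes, packet count, mean gap in seconds)."""
    total = sum(packet_sizes)
    count = len(packet_sizes)
    mean_gap = sum(inter_arrival_times) / max(len(inter_arrival_times), 1)
    return (total, count, mean_gap)

def euclidean(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def infer_location(trace, fingerprints):
    """Return the candidate location with the closest stored fingerprint."""
    vec = feature_vector(*trace)
    return min(fingerprints, key=lambda loc: euclidean(vec, fingerprints[loc]))

# Toy fingerprint database: location -> previously observed feature vector.
fingerprints = {
    "downtown": (5200.0, 12.0, 0.08),
    "airport":  (9100.0, 25.0, 0.03),
}
trace = ([420, 400, 430, 410] * 3, [0.08] * 11)
print(infer_location(trace, fingerprints))  # downtown
```

Note that the eavesdropper never decrypts anything: ciphertext lengths and timestamps alone are enough to separate the candidates.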
Adversarial Scratches: Deployable Attacks to CNN Classifiers
A growing body of work has shown that deep neural networks are susceptible to
adversarial examples. These take the form of small perturbations applied to the
model's input which lead to incorrect predictions. Unfortunately, most
literature focuses on visually imperceptible perturbations applied to
digital images that often are, by design, impossible to deploy against
physical targets. We present Adversarial Scratches: a novel L0 black-box attack, which
takes the form of scratches in images, and which possesses much greater
deployability than other state-of-the-art attacks. Adversarial Scratches
leverage Bézier curves to reduce the dimension of the search space and
possibly constrain the attack to a specific location. We test Adversarial
Scratches in several scenarios, including a publicly available API and images
of traffic signs. Results show that our attack often achieves a higher fooling
rate than other deployable state-of-the-art methods, while requiring
significantly fewer queries and modifying very few pixels.
Comment: This paper stems from 'Scratch that! An Evolution-based Adversarial Attack against Neural Networks', for which an arXiv preprint is available at arXiv:1912.02316. Further studies led to a complete overhaul of the work, resulting in this paper. This work was submitted for review in Pattern Recognition (Elsevier).
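The Bézier parameterization mentioned in the abstract can be sketched concretely: a scratch is encoded by three control points plus a color, so the black-box search runs over a handful of numbers instead of every pixel. The rasterization below is an illustrative simplification, not the paper's exact procedure.

```python
def quadratic_bezier(p0, p1, p2, t):
    """Point on a quadratic Bézier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

def scratch_pixels(p0, p1, p2, steps=50):
    """Integer pixel coordinates touched by the scratch."""
    pts = set()
    for i in range(steps + 1):
        x, y = quadratic_bezier(p0, p1, p2, i / steps)
        pts.add((round(x), round(y)))
    return pts

def apply_scratch(image, pixels, color):
    """Overwrite the scratch pixels in a 2-D image (list of row lists)."""
    for x, y in pixels:
        if 0 <= y < len(image) and 0 <= x < len(image[0]):
            image[y][x] = color
    return image

pixels = scratch_pixels((0, 0), (16, 30), (31, 0))
image = apply_scratch([[0] * 32 for _ in range(32)], pixels, color=255)
# Only the thin curve is modified -- a small L0 perturbation.
print(len(pixels), "of", 32 * 32, "pixels changed")
```

Confining the perturbation to a curve is what makes the attack physically deployable: a real scratch or sticker can approximate it, unlike dense pixel-level noise.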
Using gravitational force in terrain optimization problems
Optimization algorithms are designed for difficult problems that may require enormous amounts of space or computational time. Such algorithms produce solutions that are optimal while remaining faithful to real-life environments, where nothing is perfectly precise and errors are possible at every instant. Finding the lowest point of a terrain is a challenge in its own right, especially when dealing with an unknown area. To tackle this problem, we propose exploiting gravitational force, since it is proportional to proximity to the Earth's center. Across the unknown terrain, we spread agents, capable of communicating with one another, at randomly generated positions. Each agent then calculates the gravity variation with altitude at its position. Since we seek the minimum point of the terrain, once every agent has computed its gravity variation, the highest gravity is identified. This value is communicated to the other agents, which start moving toward the agent currently located in the area of highest gravity variation. The agents move toward the high-gravity agent according to a heuristic coefficient. Along their paths they may encounter terrain points where the gravitational force is stronger, which redirects the other agents toward the newly found position. This continues until the agents find an optimal minimum. Our test results so far have been very promising. We aim to develop the algorithm further in order to increase its efficiency and efficacy. We strongly believe that such an algorithm could be used to reach unexplored areas of the ocean floor, or to search for minerals while minimizing the search area in optimal time.
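The agent scheme described above can be sketched as follows. The terrain function, the heuristic coefficient, and the small exploration noise are illustrative assumptions standing in for real gravity measurements.

```python
import random

def altitude(x, y):
    # Toy terrain whose true minimum sits at (3, -2).
    return (x - 3) ** 2 + (y + 2) ** 2

def gravity_search(n_agents=20, steps=100, coeff=0.3, seed=1):
    rng = random.Random(seed)
    # Agents are spread at randomly generated positions.
    agents = [(rng.uniform(-10, 10), rng.uniform(-10, 10))
              for _ in range(n_agents)]
    for _ in range(steps):
        # Lowest altitude == strongest gravity variation; the best agent's
        # position is communicated to everyone.
        best = min(agents, key=lambda p: altitude(*p))
        # Each agent moves a fraction `coeff` of the way toward the best one.
        agents = [(x + coeff * (best[0] - x), y + coeff * (best[1] - y))
                  for x, y in agents]
        # Small random exploration, so a stronger-gravity point found along
        # the way can redirect the swarm.
        agents = [(x + rng.gauss(0, 0.1), y + rng.gauss(0, 0.1))
                  for x, y in agents]
    return min(agents, key=lambda p: altitude(*p))

x, y = gravity_search()
print(round(x, 1), round(y, 1))  # should land near the true minimum (3, -2)
```

The contraction step models agents converging on the strongest-gravity position, while the noise models encountering better points along the path, mirroring the two behaviors the abstract describes.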
MaleficNet: Hiding Malware into Deep Neural Networks Using Spread-Spectrum Channel Coding
The training and development of good deep learning models is often a challenging task, leading individuals (developers, researchers, and practitioners alike) to use third-party models residing in public repositories, usually fine-tuning these models to their needs with little to no effort. Despite its undeniable benefits, this practice can open new attack vectors. In this paper, we demonstrate the feasibility and effectiveness of one such attack: malware embedding in deep learning models. We push the boundaries of the current state of the art by introducing MaleficNet, a technique that combines spread-spectrum channel coding with error-correction techniques to inject malicious payloads into the parameters of deep neural networks, all while causing no degradation to the model's performance and successfully bypassing state-of-the-art detection and removal mechanisms. We believe this work will raise awareness of these new, dangerous, camouflaged threats, assist the research community and practitioners in evaluating the capabilities of modern machine learning architectures, and pave the way for research targeting the detection and mitigation of such threats.
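To illustrate why an error-correction layer matters when a payload rides inside model parameters, here is the simplest possible example: a 3x repetition code with majority-vote decoding, which survives isolated bit flips (e.g. from later fine-tuning of the weights). This is a deliberately simplified stand-in, not MaleficNet's actual channel-coding pipeline.

```python
def encode(bits, r=3):
    """Repeat each payload bit r times."""
    return [b for b in bits for _ in range(r)]

def decode(chips, r=3):
    """Majority-vote each group of r chips back into one bit."""
    return [1 if sum(chips[i:i + r]) * 2 > r else 0
            for i in range(0, len(chips), r)]

payload = [1, 0, 1, 1, 0]
coded = encode(payload)
coded[4] ^= 1            # one chip corrupted, e.g. by weight perturbation
print(decode(coded))     # [1, 0, 1, 1, 0] -- the single flip is corrected
```

Real schemes use far more efficient codes, but the principle is the same: redundancy lets the extractor recover the payload exactly even when individual parameter perturbations corrupt part of the embedded signal.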
Adversarial scratches: Deployable attacks to CNN classifiers
A growing body of work has shown that deep neural networks are susceptible to adversarial examples. These take the form of small perturbations applied to the model's input which lead to incorrect predictions. Unfortunately, most literature focuses on visually imperceptible perturbations applied to digital images that often are, by design, impossible to deploy against physical targets.
We present Adversarial Scratches: a novel L0 black-box attack, which takes the form of scratches in images, and which possesses much greater deployability than other state-of-the-art attacks. Adversarial Scratches leverage Bézier curves to reduce the dimension of the search space and possibly constrain the attack to a specific location.
We test Adversarial Scratches in several scenarios, including a publicly available API and images of traffic signs. Results show that our attack achieves a higher fooling rate than other deployable state-of-the-art methods, while requiring significantly fewer queries and modifying very few pixels.