2,767 research outputs found
Exploiting vulnerabilities of deep neural networks for privacy protection
Adversarial perturbations can be added to images to protect their content from unwanted inferences. These perturbations may, however, be ineffective against classifiers that were not seen during the generation of the perturbation, or against defenses based on re-quantization, median filtering or JPEG compression. To address these limitations, we present an adversarial attack that is specifically designed to protect visual content against unseen classifiers and known defenses. We craft perturbations using an iterative process that is based on the Fast Gradient Sign Method and that randomly selects a classifier and a defense in each iteration. This randomization prevents undesirable overfitting to a specific classifier or defense. We validate the proposed attack in both targeted and untargeted settings on the private classes of the Places365-Standard dataset. Using ResNet18, ResNet50, AlexNet and DenseNet161 as classifiers, the performance of the proposed attack exceeds that of eleven state-of-the-art attacks.
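The randomized iterative procedure described above can be sketched in a few lines. The linear "classifiers", toy "defenses", and finite-difference gradient below are illustrative stand-ins, not the paper's actual models or transforms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each "classifier" is a linear score w . x, and each
# "defense" is an input transform applied before scoring.
classifiers = [rng.normal(size=8) for _ in range(3)]
defenses = [
    lambda v: v,                                        # no defense
    lambda v: np.convolve(v, np.ones(3) / 3, "same"),   # smoothing filter
    lambda v: np.clip(v, -2.0, 2.0),                    # value clipping
]

def score_grad(x, w, defense, h=1e-4):
    """Finite-difference gradient of w . defense(x) with respect to x."""
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (w @ defense(x + e) - w @ defense(x - e)) / (2 * h)
    return g

def randomized_ifgsm(x, eps=0.5, alpha=0.05, steps=20):
    """Iterative FGSM where each step draws a random (classifier, defense)
    pair, so the perturbation does not overfit to any single one."""
    x_adv = x.copy()
    for _ in range(steps):
        w = classifiers[rng.integers(len(classifiers))]
        defense = defenses[rng.integers(len(defenses))]
        x_adv = x_adv + alpha * np.sign(score_grad(x_adv, w, defense))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the eps-ball
    return x_adv

x = rng.normal(size=8)
x_adv = randomized_ifgsm(x)
```

The final clip keeps the perturbation bounded regardless of which classifier/defense pair was drawn; the randomization per step is the overfitting-prevention idea the abstract describes.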
Machine Learning in IoT Security: Current Solutions and Future Challenges
The future Internet of Things (IoT) will have a deep economic, commercial
and social impact on our lives. The participating nodes in IoT networks are
usually resource-constrained, which makes them attractive targets for cyber
attacks. In this regard, extensive efforts have been made to address the
security and privacy issues in IoT networks primarily through traditional
cryptographic approaches. However, the unique characteristics of IoT nodes
render the existing solutions insufficient to encompass the entire security
spectrum of the IoT networks. This is, at least in part, because of the
resource constraints, heterogeneity, massive real-time data generated by the
IoT devices, and the extensively dynamic behavior of the networks. Therefore,
Machine Learning (ML) and Deep Learning (DL) techniques, which are able to
provide embedded intelligence in the IoT devices and networks, are leveraged to
cope with different security problems. In this paper, we systematically review
the security requirements, attack vectors, and the current security solutions
for the IoT networks. We then shed light on the gaps in these security
solutions that call for ML and DL approaches. We also discuss in detail the
existing ML and DL solutions for addressing different security problems in IoT
networks. Finally, based on the detailed investigation of the existing
solutions in the literature, we discuss future research directions for
ML- and DL-based IoT security.
PerfWeb: How to Violate Web Privacy with Hardware Performance Events
The browser history reveals highly sensitive information about users, such as
financial status, health conditions, or political views. Private browsing modes
and anonymity networks are consequently important tools to preserve the privacy
not only of regular users but in particular of whistleblowers and dissidents.
Yet, in this work we show how a malicious application can infer opened websites
from Google Chrome in Incognito mode and from Tor Browser by exploiting
hardware performance events (HPEs). In particular, we analyze the browsers'
microarchitectural footprint with the help of advanced Machine Learning
techniques: k-Nearest Neighbors, Decision Trees, Support Vector Machines,
and in contrast to previous literature also Convolutional Neural Networks. We
profile 40 different websites, 30 of the top Alexa sites and 10 whistleblowing
portals, on two machines featuring an Intel and an ARM processor. By monitoring
retired instructions, cache accesses, and bus cycles for at most 5 seconds, we
manage to classify the selected websites with a success rate of up to 86.3%.
The results show that hardware performance events can clearly undermine the
privacy of web users. We therefore propose mitigation strategies that impede
our attacks and still allow legitimate use of HPEs.
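The attack pipeline above (hardware-counter traces in, website label out) can be illustrated with a toy k-Nearest Neighbors classifier. The per-site "HPE profiles" and noise model below are synthetic stand-ins invented for illustration, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for HPE traces: each website gets a characteristic
# vector of counter readings (e.g. retired instructions, cache accesses,
# bus cycles sampled over time).
n_sites, trace_len = 4, 30
site_profiles = rng.normal(size=(n_sites, trace_len)) * 5.0

def sample_trace(site):
    """One noisy observation of a site's microarchitectural footprint."""
    return site_profiles[site] + rng.normal(scale=0.5, size=trace_len)

# Training set: 20 labeled traces per site.
X_train = np.array([sample_trace(s) for s in range(n_sites) for _ in range(20)])
y_train = np.repeat(np.arange(n_sites), 20)

def knn_predict(x, k=5):
    """k-Nearest Neighbors vote on Euclidean distance between traces."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

trials = [(s, knn_predict(sample_trace(s))) for s in range(n_sites) for _ in range(10)]
accuracy = sum(true == pred for true, pred in trials) / len(trials)
```

With a clear signal-to-noise ratio the nearest-neighbor vote recovers the site reliably; the real attack faces far noisier counters, which is why the paper also evaluates stronger learners such as CNNs.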
Tiresias: Predicting Security Events Through Deep Learning
With the increased complexity of modern computer attacks, there is a need for
defenders not only to detect malicious activity as it happens, but also to
predict the specific steps that will be taken by an adversary when performing
an attack. However, this is still an open research problem, and previous
research in predicting malicious events only looked at binary outcomes (e.g.,
whether an attack would happen or not), but not at the specific steps that an
attacker would undertake. To fill this gap we present Tiresias, a system that
leverages Recurrent Neural Networks (RNNs) to predict future events on a
machine, based on previous observations. We test Tiresias on a dataset of 3.4
billion security events collected from a commercial intrusion prevention
system, and show that our approach is effective in predicting the next event
that will occur on a machine with a precision of up to 0.93. We also show that
the models learned by Tiresias are reasonably stable over time, and provide a
mechanism that can identify sudden drops in precision and trigger a retraining
of the system. Finally, we show that the long-term memory typical of RNNs
is key to performing event prediction, and that simpler methods are not up
to the task.
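The core mechanism, an RNN whose hidden state summarizes the full event history before emitting a distribution over the next event, can be sketched as follows. The weights here are randomly initialized and untrained, and the vocabulary size is invented; Tiresias trains such recurrent weights on real security-event logs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy event vocabulary: integer IDs for security events seen on a machine.
vocab, hidden = 6, 16

# Single-layer RNN parameters (untrained random initialization).
Wxh = rng.normal(scale=0.1, size=(hidden, vocab))
Whh = rng.normal(scale=0.1, size=(hidden, hidden))
Why = rng.normal(scale=0.1, size=(vocab, hidden))

def predict_next(events):
    """Run the RNN over a sequence of event IDs and return a probability
    distribution over the next event. The hidden state h carries the whole
    history forward, which is what lets RNNs outperform short-memory
    (e.g. fixed-order Markov) baselines on long event sequences."""
    h = np.zeros(hidden)
    for e in events:
        x = np.zeros(vocab)
        x[e] = 1.0                      # one-hot encode the current event
        h = np.tanh(Wxh @ x + Whh @ h)  # fold the event into the state
    logits = Why @ h
    p = np.exp(logits - logits.max())   # softmax over next-event IDs
    return p / p.sum()

p = predict_next([0, 3, 3, 1, 5])
```

Training would fit `Wxh`, `Whh`, `Why` to maximize the probability of each observed next event; the forward pass above is the prediction step the abstract refers to.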