The Adversarial Implications of Variable-Time Inference
Machine learning (ML) models are known to be vulnerable to a number of
attacks that target the integrity of their predictions or the privacy of their
training data. To carry out these attacks, a black-box adversary must typically
possess the ability to query the model and observe its outputs (e.g., labels).
In this work, we demonstrate, for the first time, that such
decision-based attacks can be enhanced. To accomplish this, we present an approach that
exploits a novel side channel in which the adversary simply measures the
execution time of the algorithm used to post-process the predictions of the ML
model under attack. The leakage of inference-state elements into algorithmic
timing side channels has never been studied before, and we have found that it
can contain rich information that facilitates superior timing attacks that
significantly outperform attacks based solely on label outputs. In a case
study, we investigate leakage from the non-maximum suppression (NMS) algorithm,
which plays a crucial role in the operation of object detectors. In our
examination of the timing side-channel vulnerabilities associated with this
algorithm, we identified the potential to enhance decision-based attacks. We
demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage
to successfully evade object detection using adversarial examples, and perform
dataset inference. Our experiments show that our adversarial examples exhibit
superior perturbation quality compared to a decision-based attack. In addition,
we present a new threat model in which dataset inference based solely on timing
leakage is performed. To address the timing leakage vulnerability inherent in
the NMS algorithm, we explore the potential and limitations of implementing
constant-time inference passes as a mitigation strategy.
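The root cause described above can be illustrated with a minimal sketch (not the authors' attack code): a textbook greedy NMS loop whose iteration count depends on how many candidate boxes survive suppression, so its wall-clock time leaks information about the detector's internal state even when the final labels are identical.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def iou(box, boxes):
    """IoU between one (x1, y1, x2, y2) box and an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: the while-loop runs a data-dependent number of times."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Suppress boxes that overlap the current winner too much.
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

# Execution time grows with the number of candidate boxes the model
# emits, which is exactly the state a timing adversary can measure.
for n in (10, 2000):
    tl = rng.uniform(0, 416, size=(n, 2))
    boxes = np.concatenate([tl, tl + rng.uniform(5, 50, size=(n, 2))], axis=1)
    scores = rng.uniform(size=n)
    t0 = time.perf_counter()
    nms(boxes, scores)
    print(f"{n:5d} candidates -> {(time.perf_counter() - t0) * 1e3:.2f} ms")
```

A constant-time mitigation, as the abstract suggests, would have to pad this loop to a fixed iteration count regardless of how many detections the image produces.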
PerfWeb: How to Violate Web Privacy with Hardware Performance Events
The browser history reveals highly sensitive information about users, such as
financial status, health conditions, or political views. Private browsing modes
and anonymity networks are consequently important tools to preserve the privacy
not only of regular users but in particular of whistleblowers and dissidents.
Yet, in this work we show how a malicious application can infer opened websites
from Google Chrome in Incognito mode and from Tor Browser by exploiting
hardware performance events (HPEs). In particular, we analyze the browsers'
microarchitectural footprint with the help of advanced Machine Learning
techniques: k-Nearest Neighbors, Decision Trees, Support Vector Machines,
and in contrast to previous literature also Convolutional Neural Networks. We
profile 40 different websites, 30 of the top Alexa sites and 10 whistleblowing
portals, on two machines featuring an Intel and an ARM processor. By monitoring
retired instructions, cache accesses, and bus cycles for at most 5 seconds, we
manage to classify the selected websites with a success rate of up to 86.3%.
The results show that hardware performance events can clearly undermine the
privacy of web users. We therefore propose mitigation strategies that impede
our attacks and still allow legitimate use of HPEs.
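The classification pipeline the abstract describes can be sketched as follows. This is an illustrative stand-in, not the paper's code: the synthetic `sample_trace` profiles replace real hardware event counts (which on Linux would come from `perf_event_open` or the `perf` tool), while the featurization-plus-k-NN step mirrors the simplest of the classifiers the authors evaluate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trace(site_id, n_samples=50):
    """Synthetic stand-in for one browsing session's HPE samples:
    rows are time steps, columns are (retired instructions,
    cache accesses, bus cycles); each site has its own profile."""
    base = np.array([[1000, 200, 50],
                     [1400, 120, 90],
                     [ 800, 400, 30]], dtype=float)[site_id]
    return base + rng.normal(0, 20, size=(n_samples, 3))

def featurize(trace):
    """Summarize a counter time series as per-event mean and std."""
    return np.concatenate([trace.mean(axis=0), trace.std(axis=0)])

# Training set: 20 profiled visits to each of 3 sites.
X, y = [], []
for site in range(3):
    for _ in range(20):
        X.append(featurize(sample_trace(site)))
        y.append(site)
X, y = np.array(X), np.array(y)

def knn_predict(x, k=5):
    """Majority vote among the k nearest training fingerprints."""
    dists = np.linalg.norm(X - x, axis=1)
    votes = y[np.argsort(dists)[:k]]
    return int(np.bincount(votes).argmax())

# Classify a fresh, unlabeled visit.
pred = knn_predict(featurize(sample_trace(1)))
```

The same pipeline generalizes to the paper's other classifiers by swapping the k-NN step for a decision tree, SVM, or CNN over the raw counter traces.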