How Unique is Your .onion? An Analysis of the Fingerprintability of Tor Onion Services
Recent studies have shown that Tor onion (hidden) service websites are
particularly vulnerable to website fingerprinting attacks due to their limited
number and sensitive nature. In this work we present a multi-level feature
analysis of onion site fingerprintability, considering three state-of-the-art
website fingerprinting methods and 482 Tor onion services, making this the
largest analysis of this kind completed on onion services to date.
Prior studies typically report average performance results for a given
website fingerprinting method or countermeasure. We investigate which sites are
more or less vulnerable to fingerprinting and which features make them so. We
find that there is a high variability in the rate at which sites are classified
(and misclassified) by these attacks, implying that average performance figures
may not be informative of the risks that website fingerprinting attacks pose to
particular sites.
We analyze the features exploited by the different website fingerprinting
methods and discuss what makes onion service sites more or less easily
identifiable, both in terms of their traffic traces as well as their webpage
design. We study misclassifications to understand how onion service sites can
be redesigned to be less vulnerable to website fingerprinting attacks. Our
results also inform the design of website fingerprinting countermeasures and
their evaluation considering disparate impact across sites.

Comment: Accepted by ACM CCS 2017
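The paper's central observation, that average figures hide large per-site differences in risk, can be illustrated with a short sketch. The labels and predictions below are invented toy data; only the per-site recall computation is the point.

```python
from collections import defaultdict

def per_site_recall(true_labels, predicted_labels):
    """Recall (true-positive rate) per site, rather than one averaged figure."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p in zip(true_labels, predicted_labels):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return {site: hits[site] / totals[site] for site in totals}

# Toy traces: site "a" is fingerprinted reliably, site "b" only half the time.
y_true = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_pred = ["a", "a", "a", "a", "b", "a", "a", "b"]
rates = per_site_recall(y_true, y_pred)
# Average accuracy is 75%, yet the per-site rates diverge: a=1.0, b=0.5.
```

An operator of site "b" and an operator of site "a" face very different risks, which the 75% average alone would not reveal.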
PerfWeb: How to Violate Web Privacy with Hardware Performance Events
The browser history reveals highly sensitive information about users, such as
financial status, health conditions, or political views. Private browsing modes
and anonymity networks are consequently important tools to preserve the privacy
not only of regular users but in particular of whistleblowers and dissidents.
Yet, in this work we show how a malicious application can infer opened websites
from Google Chrome in Incognito mode and from Tor Browser by exploiting
hardware performance events (HPEs). In particular, we analyze the browsers'
microarchitectural footprint with the help of advanced Machine Learning
techniques: k-th Nearest Neighbors, Decision Trees, Support Vector Machines,
and in contrast to previous literature also Convolutional Neural Networks. We
profile 40 different websites, 30 of the top Alexa sites and 10 whistleblowing
portals, on two machines featuring an Intel and an ARM processor. By monitoring
retired instructions, cache accesses, and bus cycles for at most 5 seconds, we
manage to classify the selected websites with a success rate of up to 86.3%.
The results show that hardware performance events can clearly undermine the
privacy of web users. We therefore propose mitigation strategies that impede
our attacks and still allow legitimate use of HPEs.
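A minimal sketch of the classification step, assuming the hardware performance events have already been sampled (e.g. via a profiling tool) and reduced to per-interval count vectors. The site labels and counter values here are invented, and the real attack uses the paper's full classifier suite; this only shows the nearest-neighbour voting idea on such vectors.

```python
import math

def euclidean(a, b):
    """Distance between two HPE count vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training traces.
    `train` is a list of (label, vector) pairs; each vector holds hypothetical
    normalized counts of retired instructions, cache accesses, bus cycles."""
    neighbours = sorted(train, key=lambda lv: euclidean(lv[1], query))[:k]
    labels = [lbl for lbl, _ in neighbours]
    return max(set(labels), key=labels.count)

# Toy fingerprints for two site classes (values are made up).
train = [
    ("alexa_news",   [10.0, 5.0, 2.0]),
    ("alexa_news",   [11.0, 5.5, 2.1]),
    ("whistleblow",  [3.0, 9.0, 7.0]),
    ("whistleblow",  [2.5, 8.5, 7.5]),
]
pred = knn_predict(train, [10.5, 5.2, 2.0], k=3)
```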
Automated Website Fingerprinting through Deep Learning
Several studies have shown that the network traffic that is generated by a
visit to a website over Tor reveals information specific to the website through
the timing and sizes of network packets. By capturing traffic traces between
users and their Tor entry guard, a network eavesdropper can leverage this
meta-data to reveal which website Tor users are visiting. The success of such
attacks heavily depends on the particular set of traffic features that are used
to construct the fingerprint. Typically, these features are manually engineered
and, as such, any change introduced to the Tor network can render these
carefully constructed features ineffective. In this paper, we show that an
adversary can automate the feature engineering process, and thus automatically
deanonymize Tor traffic by applying our novel method based on deep learning. We
collect a dataset comprised of more than three million network traces, which is
the largest dataset of web traffic ever used for website fingerprinting, and
find that the performance achieved by our deep learning approaches is
comparable to known methods which include various research efforts spanning
over multiple years. The obtained success rate exceeds 96% for a closed world
of 100 websites and 94% for our biggest closed world of 900 classes. In our
open world evaluation, the most performant deep learning model is 2% more
accurate than the state-of-the-art attack. Furthermore, we show that the
implicit features automatically learned by our approach are far more resilient
to dynamic changes of web content over time. We conclude that the ability to
automatically construct the most relevant traffic features and perform accurate
traffic recognition makes our deep learning based approach an efficient,
flexible and robust technique for website fingerprinting.

Comment: To appear in the 25th Symposium on Network and Distributed System Security (NDSS 2018)
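The key idea, learning traffic features with convolutional filters rather than engineering them by hand, can be sketched with a single 1D convolution over a +1/-1 packet-direction sequence. The trace and the filter below are toy values, not learned weights; a real model stacks many such filters and trains them end to end.

```python
def conv1d(seq, kernel):
    """Valid-mode 1D convolution over a packet-direction sequence."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

# A +1/-1 direction trace: a burst of outgoing then incoming packets.
trace = [1, 1, 1, -1, -1, -1, -1, 1, 1, -1]
# A hypothetical filter that fires on outgoing-to-incoming transitions.
edge_filter = [1, -1]
activations = conv1d(trace, edge_filter)
feature = max(activations)  # max-pooling over the activation map
```

In a trained network the filter weights are learned from the data, which is why the resulting features adapt to changes in web content instead of going stale like hand-crafted ones.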
k-fingerprinting: a Robust Scalable Website Fingerprinting Technique
Website fingerprinting enables an attacker to infer which web page a client
is browsing through encrypted or anonymized network connections. We present a
new website fingerprinting technique based on random decision forests and
evaluate performance over standard web pages as well as Tor hidden services, on
a larger scale than previous works. Our technique, k-fingerprinting, performs
better than current state-of-the-art attacks even against website
fingerprinting defenses, and we show that it is possible to launch a website
fingerprinting attack in the face of a large amount of noisy data. We can
correctly determine which of 30 monitored hidden services a client is visiting
with 85% true positive rate (TPR), a false positive rate (FPR) as low as 0.02%,
from a world size of 100,000 unmonitored web pages. We further show that error
rates vary widely between web resources, and thus some patterns of use will be
predictably more vulnerable to attack than others.

Comment: 17 pages
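A rough sketch of k-fingerprinting's open-world step under heavy simplification: each "tree" of the forest is reduced to a single decision stump, so a trace's fingerprint becomes the tuple of leaves it reaches, and pages too far (in Hamming distance) from every monitored fingerprint are rejected as unmonitored. All feature indices, thresholds, and labels below are hypothetical.

```python
def leaf_vector(features, stumps):
    """Fingerprint of a trace: which leaf it reaches in each (index, threshold)
    stump. The real technique uses leaves of full random decision trees."""
    return tuple(int(features[i] > t) for i, t in stumps)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def classify_open_world(features, stumps, monitored, max_dist):
    """Match against monitored fingerprints; beyond max_dist -> unmonitored."""
    fp = leaf_vector(features, stumps)
    label, ref = min(monitored.items(), key=lambda kv: hamming(fp, kv[1]))
    return label if hamming(fp, ref) <= max_dist else "unmonitored"

# Hypothetical features: (total_packets, num_bursts, fraction_incoming).
stumps = [(0, 1000), (1, 5), (2, 0.5)]
monitored = {"onion_a": (1, 0, 1), "onion_b": (0, 1, 0)}
hit = classify_open_world((1500, 3, 0.9), stumps, monitored, max_dist=0)
miss = classify_open_world((1500, 9, 0.9), stumps, monitored, max_dist=0)
```

Tightening `max_dist` trades true positives for a lower false-positive rate, which is the knob behind figures like 85% TPR at 0.02% FPR.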
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces
Website Fingerprinting (WF) is a type of traffic analysis attack that enables
a local passive eavesdropper to infer the victim's activity, even when the
traffic is protected by a VPN or an anonymity system like Tor. Leveraging a
deep-learning classifier, a WF attacker can gain over 98% accuracy on Tor
traffic. In this paper, we explore a novel defense, Mockingbird, based on the
idea of adversarial examples that have been shown to undermine machine-learning
classifiers in other domains. Since the attacker gets to design and train his
attack classifier based on the defense, we first demonstrate that a
straightforward technique for generating adversarial-example-based traces fails
to protect against an attacker using adversarial training for robust
classification. We then propose Mockingbird, a technique for generating traces
that resists adversarial training by moving randomly in the space of viable
traces and not following more predictable gradients. The technique drops the
accuracy of the state-of-the-art attack hardened with adversarial training from
98% to 42-58% while incurring only 58% bandwidth overhead. The attack accuracy
is generally lower than state-of-the-art defenses, and much lower when
considering Top-2 accuracy, while incurring lower bandwidth overheads.

Comment: 18 pages, 13 figures, and 8 tables. Accepted in IEEE Transactions on Information Forensics and Security (TIFS)
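The random-walk idea, moving a trace toward randomly chosen target traces rather than along a gradient the attacker could anticipate, can be sketched as follows. The burst sizes, candidate traces, and step fraction are invented; the one property the sketch preserves is that a padding defense can only add dummy cells, never remove real ones.

```python
import random

def step_toward(current, target, frac, floor):
    # Shift each burst size a fraction of the way toward the target trace,
    # never dropping below the real trace: padding adds cells, never removes.
    return [max(f, c + frac * (t - c))
            for c, t, f in zip(current, target, floor)]

def randomized_defense(trace, candidates, rounds, rng, frac=0.5):
    """Each round picks a random target trace and moves toward it, so the
    perturbation follows no fixed gradient that adversarial training on the
    attacker's side could learn to undo."""
    current = [float(x) for x in trace]
    for _ in range(rounds):
        target = rng.choice(candidates)
        current = step_toward(current, target, frac, trace)
    return current

rng = random.Random(7)
trace = [2, 6, 1, 3]                       # hypothetical per-burst cell counts
candidates = [[8, 6, 4, 3], [2, 12, 1, 9]]
defended = randomized_defense(trace, candidates, rounds=4, rng=rng)
# Every defended burst is at least as large as the original burst.
```

The bandwidth overhead of such a scheme is the total padding added, i.e. the gap between the defended and original cell counts summed over all bursts.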