A Systematic Approach to Developing and Evaluating Website Fingerprinting Defenses
Fingerprinting attacks have emerged as a serious threat against privacy mechanisms, such as SSL, Tor, and encrypted tunnels. Researchers have proposed numerous attacks and defenses, and the Tor project now includes both network- and browser-level defenses against these attacks, but published defenses have high overhead, poor security, or both. This paper (1) systematically analyzes existing attacks and defenses to understand which traffic features convey the most information (and therefore are most important for defenses to hide), (2) proves lower bounds on the bandwidth costs of any defense that achieves a given level of security, (3) presents a mathematical framework for evaluating the performance of fingerprinting attacks and defenses in the open world, given their closed-world performance, and (4) presents a new defense, Tamaraw, that achieves a better security/bandwidth trade-off than any previously proposed defense. Our feature-based analysis provides clear directions to defense designers on which features need to be hidden. Our lower bounds on bandwidth costs help us understand the limits of fingerprinting defenses and determine how close we are to “success”. Our open-world/closed-world connection enables researchers to perform simpler closed-world experiments and predict open-world performance. Tamaraw provides an “existence proof” for efficient, secure defenses.
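The defense family the abstract describes (fixing packet timing and padding trace lengths so defended traces collapse into a few indistinguishable shapes) can be sketched as follows. The function name, the fixed-rate interval, and the 100-packet padding multiple are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch of a Tamaraw-style constant-rate padding defense (illustrative;
# parameter values are assumptions, not the paper's exact settings).

def tamaraw_pad(trace_len: int, rate_interval: float, pad_multiple: int = 100):
    """Return (padded_len, send_times) for a trace of `trace_len` packets.

    Packets are emitted on a fixed clock, and the total count is padded up
    to the next multiple of `pad_multiple`, so defended traces of similar
    length look identical on the wire.
    """
    # Round the packet count up to the next multiple of pad_multiple.
    padded_len = -(-trace_len // pad_multiple) * pad_multiple
    # Every packet (real or dummy) is sent at a fixed interval.
    send_times = [i * rate_interval for i in range(padded_len)]
    return padded_len, send_times

padded, times = tamaraw_pad(trace_len=473, rate_interval=0.02)
# 473 real packets are padded to 500; the extra 27 dummy packets are the
# bandwidth cost the paper's lower bounds reason about.
```

The security/bandwidth trade-off falls out of `pad_multiple`: a larger multiple merges more sites into each indistinguishable length class but wastes more dummy packets.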
Automated Website Fingerprinting through Deep Learning
Several studies have shown that the network traffic that is generated by a
visit to a website over Tor reveals information specific to the website through
the timing and sizes of network packets. By capturing traffic traces between
users and their Tor entry guard, a network eavesdropper can leverage this
meta-data to reveal which website Tor users are visiting. The success of such
attacks heavily depends on the particular set of traffic features that are used
to construct the fingerprint. Typically, these features are manually engineered
and, as such, any change introduced to the Tor network can render these
carefully constructed features ineffective. In this paper, we show that an
adversary can automate the feature engineering process, and thus automatically
deanonymize Tor traffic by applying our novel method based on deep learning. We
collect a dataset comprised of more than three million network traces, which is
the largest dataset of web traffic ever used for website fingerprinting, and
find that the performance achieved by our deep learning approaches is
comparable to known methods which include various research efforts spanning
over multiple years. The obtained success rate exceeds 96% for a closed world
of 100 websites and 94% for our biggest closed world of 900 classes. In our
open world evaluation, the most performant deep learning model is 2% more
accurate than the state-of-the-art attack. Furthermore, we show that the
implicit features automatically learned by our approach are far more resilient
to dynamic changes of web content over time. We conclude that the ability to
automatically construct the most relevant traffic features and perform accurate
traffic recognition makes our deep learning based approach an efficient,
flexible and robust technique for website fingerprinting.Comment: To appear in the 25th Symposium on Network and Distributed System
Security (NDSS 2018
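The contrast the abstract draws between hand-engineered features and learned ones can be illustrated with the usual input representation for such attacks: a fixed-length sequence of packet directions. The toy one-filter convolution below is a stand-in for the paper's deep networks, not their architecture; all names are illustrative.

```python
# Toy illustration of learned-feature extraction over a Tor traffic trace
# (a stand-in for a deep network, not the paper's architecture).

def to_direction_sequence(packet_sizes, length=10):
    """Encode a trace as +1 (outgoing) / -1 (incoming), padded or truncated
    to a fixed length -- a common input format for deep-learning WF attacks."""
    seq = [1 if s > 0 else -1 for s in packet_sizes[:length]]
    return seq + [0] * (length - len(seq))

def conv1d(seq, kernel):
    """One 1-D convolution (valid padding): the operation a CNN stacks and
    *learns*, replacing manually engineered traffic features."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

trace = [512, -1500, -1500, 512, -1500]    # signed packet sizes
seq = to_direction_sequence(trace)          # [1, -1, -1, 1, -1, 0, ...]
features = conv1d(seq, kernel=[1, -1, 1])   # one learned-style filter
```

Because the filters are learned from traffic rather than fixed in advance, a change to the Tor network shifts the learned weights at retraining time instead of invalidating a hand-built feature set, which is the resilience the abstract reports.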
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces
Website Fingerprinting (WF) is a type of traffic analysis attack that enables
a local passive eavesdropper to infer the victim's activity, even when the
traffic is protected by a VPN or an anonymity system like Tor. Leveraging a
deep-learning classifier, a WF attacker can gain over 98% accuracy on Tor
traffic. In this paper, we explore a novel defense, Mockingbird, based on the
idea of adversarial examples that have been shown to undermine machine-learning
classifiers in other domains. Since the attacker gets to design and train his
attack classifier based on the defense, we first demonstrate that a
straightforward technique for generating adversarial-example-based traces fails
to protect against an attacker using adversarial training for robust
classification. We then propose Mockingbird, a technique for generating traces
that resists adversarial training by moving randomly in the space of viable
traces and not following more predictable gradients. The technique drops the
accuracy of the state-of-the-art attack hardened with adversarial training from
98% to 42-58% while incurring only 58% bandwidth overhead. The attack accuracy
is generally lower than state-of-the-art defenses, and much lower when
considering Top-2 accuracy, while incurring lower bandwidth overheads.
Comment: 18 pages, 13 figures and 8 tables. Accepted in IEEE Transactions on Information Forensics and Security (TIFS).
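The abstract's key idea, moving randomly through the space of viable traces rather than following a classifier's gradient, can be sketched as below. The function, step logic, and per-position packet-count representation are illustrative assumptions; the paper works on burst-sequence representations of traces.

```python
import random

# Sketch of Mockingbird's random-target idea: repeatedly move the defended
# trace toward randomly chosen traces of *other* sites, instead of
# following a predictable gradient an adversarially trained classifier
# could anticipate. Names and step logic are illustrative assumptions.

def perturb_trace(trace, target_pool, steps=3, step_frac=0.5, rng=random):
    """Return a padded copy of `trace` (per-position packet counts).

    Only additions are allowed -- a defense can inject dummy packets but
    cannot drop real ones -- so each position only moves up toward the
    randomly chosen target.
    """
    out = list(trace)
    for _ in range(steps):
        target = rng.choice(target_pool)   # a random trace to imitate
        for i in range(min(len(out), len(target))):
            if target[i] > out[i]:         # pad only, never remove
                out[i] += int(step_frac * (target[i] - out[i]))
    return out
```

The randomness is the point: because each defended trace takes a different path through viable-trace space, adversarial retraining cannot learn a single perturbation pattern to undo.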
Measuring Information Leakage in Website Fingerprinting Attacks and Defenses
Tor provides low-latency anonymous and uncensored network access against a
local or network adversary. Due to the design choice to minimize traffic
overhead (and increase the pool of potential users) Tor allows some information
about the client's connections to leak. Attacks using (features extracted from)
this information to infer the website a user visits are called Website
Fingerprinting (WF) attacks. We develop a methodology and tools to measure the
amount of leaked information about a website. We apply this tool to a
comprehensive set of features extracted from a large set of websites and WF
defense mechanisms, allowing us to make more fine-grained observations about WF
attacks and defenses.
Comment: In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS '18).
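A simplified version of the leakage measurement the abstract describes is the mutual information I(W; F) between the website label W and a traffic feature F, estimated from empirical counts. The paper's methodology handles continuous features with density estimation; this plug-in estimator over discrete observations is only an illustration.

```python
from collections import Counter
from math import log2

# Simplified information-leakage measure for one discretized traffic
# feature: the mutual information I(W; F) between website label W and
# feature value F, estimated from empirical joint counts (an illustration,
# not the paper's estimator, which handles continuous features).

def mutual_information(pairs):
    """pairs: list of (website, feature_value) observations."""
    n = len(pairs)
    joint = Counter(pairs)
    pw = Counter(w for w, _ in pairs)
    pf = Counter(f for _, f in pairs)
    return sum((c / n) * log2((c / n) / ((pw[w] / n) * (pf[f] / n)))
               for (w, f), c in joint.items())

# A feature that uniquely identifies each of 2 equally likely sites
# leaks exactly 1 bit:
obs = [("siteA", 10), ("siteA", 10), ("siteB", 99), ("siteB", 99)]
# mutual_information(obs) == 1.0
```

Ranking features by such a leakage score is what enables the fine-grained comparison of attacks and defenses the abstract refers to: a defense is effective against a feature exactly insofar as it drives that feature's leakage toward zero bits.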
How Unique is Your .onion? An Analysis of the Fingerprintability of Tor Onion Services
Recent studies have shown that Tor onion (hidden) service websites are
particularly vulnerable to website fingerprinting attacks due to their limited
number and sensitive nature. In this work we present a multi-level feature
analysis of onion site fingerprintability, considering three state-of-the-art
website fingerprinting methods and 482 Tor onion services, making this the
largest analysis of this kind completed on onion services to date.
Prior studies typically report average performance results for a given
website fingerprinting method or countermeasure. We investigate which sites are
more or less vulnerable to fingerprinting and which features make them so. We
find that there is a high variability in the rate at which sites are classified
(and misclassified) by these attacks, implying that average performance figures
may not be informative of the risks that website fingerprinting attacks pose to
particular sites.
We analyze the features exploited by the different website fingerprinting
methods and discuss what makes onion service sites more or less easily
identifiable, both in terms of their traffic traces as well as their webpage
design. We study misclassifications to understand how onion service sites can
be redesigned to be less vulnerable to website fingerprinting attacks. Our
results also inform the design of website fingerprinting countermeasures and
their evaluation considering disparate impact across sites.
Comment: Accepted by ACM CCS 2017.
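The abstract's warning that average performance figures can hide per-site risk is easy to make concrete: compute classification rates per site rather than one aggregate. The helper below is an illustrative sketch, not the paper's tooling.

```python
from collections import defaultdict

# Per-site classification rates (illustrative helper, not the paper's
# tooling): the same predictions that yield a reassuring average can
# leave individual sites far more exposed.

def per_site_rates(true_labels, predicted_labels):
    """Map each site to the fraction of its visits correctly classified."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p in zip(true_labels, predicted_labels):
        totals[t] += 1
        hits[t] += (t == p)
    return {site: hits[site] / totals[site] for site in totals}

truth = ["a", "a", "b", "b"]
preds = ["a", "a", "b", "a"]
# Average accuracy is 75%, but site "b" is identified only 50% of the time.
```

Reporting this distribution, rather than its mean, is the disparate-impact evaluation the abstract argues countermeasure designers should adopt.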