How Unique is Your .onion? An Analysis of the Fingerprintability of Tor Onion Services
Recent studies have shown that Tor onion (hidden) service websites are
particularly vulnerable to website fingerprinting attacks due to their limited
number and sensitive nature. In this work we present a multi-level feature
analysis of onion site fingerprintability, considering three state-of-the-art
website fingerprinting methods and 482 Tor onion services, making this the
largest analysis of this kind completed on onion services to date.
Prior studies typically report average performance results for a given
website fingerprinting method or countermeasure. We investigate which sites are
more or less vulnerable to fingerprinting and which features make them so. We
find that there is a high variability in the rate at which sites are classified
(and misclassified) by these attacks, implying that average performance figures
may not be informative of the risks that website fingerprinting attacks pose to
particular sites.
We analyze the features exploited by the different website fingerprinting
methods and discuss what makes onion service sites more or less easily
identifiable, both in terms of their traffic traces as well as their webpage
design. We study misclassifications to understand how onion service sites can
be redesigned to be less vulnerable to website fingerprinting attacks. Our
results also inform the design of website fingerprinting countermeasures and
their evaluation, considering disparate impact across sites.
Comment: Accepted by ACM CCS 201
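As an illustrative aside to the point above about average figures: per-site classification rates can diverge sharply even when the overall accuracy looks uniform. This is a minimal sketch with invented labels, not data from the paper.

```python
# Made-up example: average accuracy hides that site A is fingerprinted every
# time while site B is misclassified more often than not.
from collections import defaultdict

def per_site_recall(true_labels, predicted_labels):
    """Fraction of each site's traces that the attack classified correctly."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, guess in zip(true_labels, predicted_labels):
        totals[truth] += 1
        if truth == guess:
            hits[truth] += 1
    return {site: hits[site] / totals[site] for site in totals}

# Site A: 5/5 correct. Site B: 2/5 correct. Average accuracy: a
# reassuring-looking 70%, despite A being fully exposed.
truth = ["A"] * 5 + ["B"] * 5
guess = ["A"] * 5 + ["B", "B", "A", "A", "A"]
rates = per_site_recall(truth, guess)
print(rates)  # {'A': 1.0, 'B': 0.4}
```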
k-fingerprinting: a Robust Scalable Website Fingerprinting Technique
Website fingerprinting enables an attacker to infer which web page a client
is browsing through encrypted or anonymized network connections. We present a
new website fingerprinting technique based on random decision forests and
evaluate performance over standard web pages as well as Tor hidden services, on
a larger scale than previous works. Our technique, k-fingerprinting, performs
better than current state-of-the-art attacks even against website
fingerprinting defenses, and we show that it is possible to launch a website
fingerprinting attack in the face of a large amount of noisy data. We can
correctly determine which of 30 monitored hidden services a client is visiting
with an 85% true positive rate (TPR) and a false positive rate (FPR) as low as
0.02%, against a world of 100,000 unmonitored web pages. We further show that error
rates vary widely between web resources, and thus some patterns of use will be
predictably more vulnerable to attack than others.
Comment: 17 pages
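The random-forest attack above operates on statistics extracted from a traffic trace. The features sketched below (counts and ordering statistics over packet directions) are an illustrative simplification of that style of feature set, not the paper's exact one.

```python
# Hypothetical feature extraction from a traffic trace; in attacks of this
# style, vectors like this are fed to a random-forest classifier.
def trace_features(trace):
    """trace: list of signed packet sizes; positive = outgoing, negative = incoming."""
    outgoing = [p for p in trace if p > 0]
    incoming = [p for p in trace if p < 0]
    return {
        "total_packets": len(trace),
        "outgoing_packets": len(outgoing),
        "incoming_packets": len(incoming),
        "outgoing_fraction": len(outgoing) / len(trace) if trace else 0.0,
        "total_bytes": sum(abs(p) for p in trace),
        # Position of the first outgoing packet, -1 if there is none.
        "first_outgoing_index": next((i for i, p in enumerate(trace) if p > 0), -1),
    }

trace = [512, -1500, -1500, 512, -1500]
print(trace_features(trace))
```

Such summary statistics are robust to padding noise in a way raw packet sequences are not, which is one reason forest-based attacks degrade gracefully against defenses.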
Measuring Information Leakage in Website Fingerprinting Attacks and Defenses
Tor provides low-latency anonymous and uncensored network access against a
local or network adversary. Due to the design choice to minimize traffic
overhead (and increase the pool of potential users), Tor allows some information
about the client's connections to leak. Attacks using (features extracted from)
this information to infer the website a user visits are called Website
Fingerprinting (WF) attacks. We develop a methodology and tools to measure the
amount of leaked information about a website. We apply this tool to a
comprehensive set of features extracted from a large set of websites and WF
defense mechanisms, allowing us to make more fine-grained observations about WF
attacks and defenses.
Comment: In Proceedings of the 2018 ACM SIGSAC Conference on Computer and
Communications Security (CCS '18)
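The leakage measurement described above can be illustrated in miniature: how many bits does observing one discretized feature reveal about which site was visited? A standard way to express this is mutual information, I(site; feature) = H(site) - H(site | feature). The observations below are invented for illustration; the paper's tooling handles continuous features and far larger feature sets.

```python
# Toy mutual-information computation over (site, feature_value) observations.
from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def leakage_bits(pairs):
    """pairs: list of (site, feature_value) observations."""
    h_site = entropy(Counter(site for site, _ in pairs).values())
    # Conditional entropy H(site | feature), weighted over feature values.
    by_feature = {}
    for site, feat in pairs:
        by_feature.setdefault(feat, []).append(site)
    h_cond = sum(
        len(sites) / len(pairs) * entropy(Counter(sites).values())
        for sites in by_feature.values()
    )
    return h_site - h_cond

# A feature that perfectly separates two equally likely sites leaks 1 bit;
# a constant feature leaks nothing.
print(leakage_bits([("a", "small"), ("a", "small"), ("b", "big"), ("b", "big")]))  # 1.0
print(leakage_bits([("a", "x"), ("b", "x")]))  # 0.0
```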
Automated Website Fingerprinting through Deep Learning
Several studies have shown that the network traffic that is generated by a
visit to a website over Tor reveals information specific to the website through
the timing and sizes of network packets. By capturing traffic traces between
users and their Tor entry guard, a network eavesdropper can leverage this
meta-data to reveal which website Tor users are visiting. The success of such
attacks heavily depends on the particular set of traffic features that are used
to construct the fingerprint. Typically, these features are manually engineered
and, as such, any change introduced to the Tor network can render these
carefully constructed features ineffective. In this paper, we show that an
adversary can automate the feature engineering process, and thus automatically
deanonymize Tor traffic by applying our novel method based on deep learning. We
collect a dataset comprised of more than three million network traces, which is
the largest dataset of web traffic ever used for website fingerprinting, and
find that the performance achieved by our deep learning approaches is
comparable to that of known methods, which represent various research efforts
spanning multiple years. The obtained success rate exceeds 96% for a closed world
of 100 websites and 94% for our biggest closed world of 900 classes. In our
open world evaluation, the most performant deep learning model is 2% more
accurate than the state-of-the-art attack. Furthermore, we show that the
implicit features automatically learned by our approach are far more resilient
to dynamic changes of web content over time. We conclude that the ability to
automatically construct the most relevant traffic features and perform accurate
traffic recognition makes our deep learning based approach an efficient,
flexible and robust technique for website fingerprinting.
Comment: To appear in the 25th Symposium on Network and Distributed System
Security (NDSS 2018)
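The core idea behind automated feature learning can be shown in a bare-bones form: a 1D convolution slid over the packet-direction sequence (+1 outgoing, -1 incoming) responds to local patterns such as download bursts without anyone engineering them by hand. Real attacks stack many learned layers; the single fixed kernel here is purely illustrative.

```python
# Minimal 1D convolution plus ReLU and global max pooling, in plain Python.
def conv1d(sequence, kernel):
    width = len(kernel)
    return [
        sum(sequence[i + j] * kernel[j] for j in range(width))
        for i in range(len(sequence) - width + 1)
    ]

def relu_max(activations):
    """Global max pooling after ReLU: one scalar feature per kernel."""
    return max(max(a, 0) for a in activations)

# This kernel fires most strongly on a run of three incoming packets
# (a download burst); in a trained network its weights would be learned.
burst_detector = [-1, -1, -1]
trace = [1, -1, -1, -1, 1, -1]
feature = relu_max(conv1d(trace, burst_detector))
print(feature)  # 3
```

Because the kernels are learned from data rather than hand-specified, a change to the Tor network shifts the learned weights rather than invalidating a fixed feature list.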
Web Tracking: Mechanisms, Implications, and Defenses
This article surveys the existing literature on the methods currently used
by web services to track users online, as well as their purposes,
implications, and possible user defenses. A significant majority of reviewed
articles and web resources are from years 2012-2014. Privacy seems to be the
Achilles' heel of today's web. Web services make continuous efforts to obtain
as much information as they can about the things we search, the sites we visit,
the people we contact, and the products we buy. Tracking is usually
performed for commercial purposes. We present five main groups of methods used for
user tracking, which are based on sessions, client storage, client cache,
fingerprinting, or yet other approaches. A special focus is placed on
mechanisms that use web caches, operational caches, and fingerprinting, as these
tend to employ particularly varied and creative methodologies. We also
show how the users can be identified on the web and associated with their real
names, e-mail addresses, phone numbers, or even street addresses. We show why
tracking is being used and its possible implications for the users (price
discrimination, assessing financial credibility, determining insurance
coverage, government surveillance, and identity theft). For each of the
tracking methods, we present possible defenses. Apart from describing the
methods and tools that protect personal data from being tracked,
we also present several tools that were used for research purposes - their main
goal is to discover how and by which entity the users are being tracked on
their desktop computers or smartphones, provide this information to the users,
and visualize it in an accessible and easy to follow way. Finally, we present
the currently proposed future approaches to track the user and show that they
can potentially pose significant threats to the users' privacy.
Comment: 29 pages, 212 references
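Of the tracking categories the survey covers, fingerprinting is the easiest to sketch: stable browser attributes are combined into an identifier that re-recognizes a user with no cookie or client-side storage at all. The attribute set below is a made-up minimal example; real trackers combine dozens of signals (canvas rendering, fonts, plugins, and more).

```python
# Hypothetical browser-fingerprinting sketch: hash a canonical serialization
# of reported browser attributes into a short identifier.
import hashlib

def browser_fingerprint(attributes):
    """attributes: dict of attribute name -> value reported by the browser."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "UTC+1",
    "language": "en-US",
}
# The same attributes always yield the same identifier, so the visitor is
# recognizable across sites and sessions without any stored state.
print(browser_fingerprint(visitor))
```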
PerfWeb: How to Violate Web Privacy with Hardware Performance Events
The browser history reveals highly sensitive information about users, such as
financial status, health conditions, or political views. Private browsing modes
and anonymity networks are consequently important tools to preserve the privacy
not only of regular users but in particular of whistleblowers and dissidents.
Yet, in this work we show how a malicious application can infer opened websites
from Google Chrome in Incognito mode and from Tor Browser by exploiting
hardware performance events (HPEs). In particular, we analyze the browsers'
microarchitectural footprint with the help of advanced Machine Learning
techniques: k-Nearest Neighbors, Decision Trees, Support Vector Machines,
and, in contrast to previous literature, also Convolutional Neural Networks. We
profile 40 different websites, 30 of the top Alexa sites and 10 whistleblowing
portals, on two machines featuring an Intel and an ARM processor. By monitoring
retired instructions, cache accesses, and bus cycles for at most 5 seconds, we
manage to classify the selected websites with a success rate of up to 86.3%.
The results show that hardware performance events can clearly undermine the
privacy of web users. We therefore propose mitigation strategies that impede
our attacks while still allowing legitimate use of HPEs.
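The classification step described above can be illustrated in a toy form: each page visit yields a vector of hardware-event counts (retired instructions, cache accesses, bus cycles), and a nearest-neighbor match against labelled profiles guesses the site. The counts below are invented, and the paper's actual models (SVMs, CNNs) are far stronger than this single-neighbor sketch.

```python
# Toy nearest-neighbor classifier over hardware-performance-event vectors.
def nearest_site(profiles, observed):
    """profiles: list of (site, count_vector); observed: count_vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda p: sq_dist(p[1], observed))[0]

profiles = [
    ("news.example",  [9.1e9, 2.3e8, 1.1e7]),   # hypothetical HPE counts
    ("forum.example", [4.0e9, 0.9e8, 0.6e7]),
]
print(nearest_site(profiles, [8.8e9, 2.1e8, 1.0e7]))  # news.example
```

In practice the event counts would be sampled over time rather than totaled, giving a sequence that the paper's CNN models exploit.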