Invisible Pixels Are Dead, Long Live Invisible Pixels!
Privacy on the world wide web has deteriorated ever since the 1990s. The tracking of browsing habits by third parties has been at the center of this deterioration. Web cookies and so-called web beacons have been the classical ways to implement third-party tracking. Due to the introduction of more sophisticated technical tracking solutions and other fundamental transformations, classical image-based web beacons might be expected to have lost their appeal. According to a sample of over thirty thousand images collected from popular websites, this paper shows that such an assumption is a fallacy: classical 1 x 1 images are still commonly used for third-party tracking in the contemporary world wide web. While it seems that ad-blockers are unable to fully block these classical image-based tracking beacons, the paper further demonstrates that even limited information suffices to accurately separate third-party 1 x 1 images from other images. An average classification accuracy of 0.956 is reached in the empirical experiment. With these results, the paper contributes to the ongoing attempts to better understand the lack of privacy in the world wide web and the means by which the situation might eventually be improved.
Comment: Forthcoming in the 17th Workshop on Privacy in the Electronic Society (WPES 2018), Toronto, Canada.
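The classification result lends itself to a brief illustration. The following is a minimal sketch, assuming scikit-learn and a handful of hypothetical metadata features; the paper's actual feature set and model are not reproduced here:

    # Minimal sketch: classify third-party 1x1 tracking pixels from simple
    # image metadata. The feature set and the random forest are illustrative
    # assumptions, not the paper's exact setup.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Each row: [width, height, file_size_bytes, is_third_party]
    X = np.array([
        [1, 1, 43, 1],         # classic invisible tracking beacon
        [1, 1, 35, 1],
        [300, 250, 18234, 1],  # ordinary third-party ad image
        [640, 480, 95321, 0],  # first-party content image
    ])
    y = np.array([1, 1, 0, 0])  # 1 = tracking pixel, 0 = other image

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("mean accuracy:", cross_val_score(clf, X, y, cv=2).mean())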
Measuring Information Leakage in Website Fingerprinting Attacks and Defenses
Tor provides low-latency anonymous and uncensored network access against a local or network adversary. Due to the design choice to minimize traffic overhead (and increase the pool of potential users), Tor allows some information about the client's connections to leak. Attacks that use (features extracted from) this information to infer the website a user visits are called Website Fingerprinting (WF) attacks. We develop a methodology and tools to measure the amount of information leaked about a website. We apply these tools to a comprehensive set of features extracted from a large set of websites and WF defense mechanisms, allowing us to make more fine-grained observations about WF attacks and defenses.
Comment: In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS '18).
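The notion of measuring leaked information can be illustrated with a mutual-information estimate between a single traffic feature and the visited website. A minimal sketch, using scikit-learn's k-NN estimator and simulated data as stand-ins for the paper's own methodology and real traces:

    # Sketch: estimate how much one feature (total bytes transferred)
    # leaks about the visited website. The k-NN mutual-information
    # estimator and simulated data are stand-ins, not the paper's method.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    n_sites, visits = 5, 200
    labels = np.repeat(np.arange(n_sites), visits)
    # Each site has a characteristic transfer size plus noise.
    sizes = 50_000 * (labels + 1) + rng.normal(0, 20_000, labels.size)

    mi = mutual_info_classif(sizes.reshape(-1, 1), labels, random_state=0)
    print(f"estimated leakage: {mi[0]:.2f} nats "
          f"(maximum possible: {np.log(n_sites):.2f} nats)")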
Hypothesis Testing Interpretations and Renyi Differential Privacy
Differential privacy is a de facto standard in data privacy, with applications in the public and private sectors. One way to explain differential privacy, which is particularly appealing to statisticians and social scientists, is by means of its statistical hypothesis testing interpretation. Informally, one cannot effectively test whether a specific individual has contributed her data by observing the output of a private mechanism: no test can have both high significance and high power.
In this paper, we identify conditions under which a privacy definition given in terms of a statistical divergence satisfies a similar interpretation. These conditions are useful for analyzing the distinguishability power of divergences, and we use them to study the hypothesis testing interpretation of some relaxations of differential privacy based on Renyi divergence. This analysis also results in an improved conversion rule between these definitions and differential privacy.
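For context, the Renyi divergence and the standard conversion rule that the paper improves upon (due to Mironov, 2017) can be stated as follows; the improved rule itself is in the paper:

    % Renyi divergence of order \alpha > 1:
    D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}
        \log \mathbb{E}_{x \sim Q}\!\left[\Big(\frac{P(x)}{Q(x)}\Big)^{\alpha}\right]
    % A mechanism M satisfies (\alpha, \varepsilon)-RDP if
    %   D_\alpha(M(D) \| M(D')) \le \varepsilon  for all neighboring D, D'.
    % Standard conversion (Mironov, 2017): (\alpha, \varepsilon)-RDP implies
    %   (\varepsilon + \log(1/\delta)/(\alpha - 1), \delta)-DP
    %   for every \delta \in (0, 1).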
Non-Binding (Designated Verifier) Signature
We argue that there are scenarios in which plausible deniability might be desired for a digital signature scheme. For instance, the non-repudiation property of conventional signature schemes is problematic in designing an Instant Messaging system (WPES 2004). In this paper, we formally define a non-binding signature scheme in which the Signer is able to disavow her own signature if she wants to, but the Verifier is not able to dispute a signature generated by the Signer. That is, the Signer is able to convince a third-party Judge that she is the owner of a signature without disclosing her secret information. We propose a signature scheme that is non-binding and unforgeable. Our signature scheme is post-quantum secure if the underlying cryptographic primitives are post-quantum secure. In addition, a modification to our non-binding signature scheme leads to an Instant Messaging system that is of independent interest.
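The definition suggests a four-algorithm interface. A minimal sketch of that interface (all names and types are illustrative and do not reproduce the paper's concrete construction):

    # Interface sketch of a non-binding (designated verifier) signature
    # scheme, as implied by the abstract. Illustrative only; this is not
    # the paper's construction.
    from typing import Protocol, Tuple

    class NonBindingSignature(Protocol):
        def keygen(self) -> Tuple[bytes, bytes]:
            """Return a (public_key, secret_key) pair for one party."""
            ...

        def sign(self, sk_signer: bytes, pk_verifier: bytes,
                 message: bytes) -> bytes:
            """Signer produces a signature aimed at a designated Verifier."""
            ...

        def verify(self, sk_verifier: bytes, pk_signer: bytes,
                   message: bytes, sig: bytes) -> bool:
            """Verifier checks the signature but, by design, cannot use it
            to dispute it against the Signer (it is non-binding)."""
            ...

        def prove_ownership(self, sk_signer: bytes, message: bytes,
                            sig: bytes) -> bytes:
            """Signer convinces a Judge that she owns (or disavows) the
            signature without disclosing her secret information."""
            ...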
Low-latency mix networks for anonymous communication
Every modern online application relies on the network layer to transfer information, which exposes the metadata associated with digital communication. This metadata carries information as meaningful as the content of the communication itself and allows eavesdroppers to uniquely identify users and their activities. Hence, by observing IP addresses and analyzing network traffic patterns, a malicious entity can deanonymize most online communications. While content confidentiality has made significant progress over the years, existing solutions for anonymous communication which protect network metadata still have severe limitations, including centralization, limited security, poor scalability, and high latency. As the importance of online privacy increases, building low-latency communication systems with strong security guarantees becomes a pressing need. Therefore, in this thesis, we address the problem of building multi-purpose anonymous networks that protect communication privacy. To this end, we design Loopix, a novel mix network which guarantees communication unlinkability and supports applications with various latency and bandwidth constraints. Loopix offers better security properties than any existing solution for anonymous communication while at the same time being scalable and low-latency. Furthermore, we also explore the problem of active attacks and malicious infrastructure nodes, and propose Miranda, a mechanism that mitigates them efficiently.
In the second part of this thesis, we show that mix networks may be used as a building block in the design of a private notification system, which enables fast and low-cost online notifications. Moreover, its privacy properties benefit from an increasing number of users, meaning that the system can scale to millions of clients at a lower cost than any alternative solution.
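Part of Loopix's unlinkability comes from Poisson mixing: each mix holds every message for an independent, exponentially distributed delay, so departure order is decoupled from arrival order. A minimal sketch of this mechanism (hop count and mean delay are illustrative, not Loopix's parameters):

    # Sketch of per-hop exponential delays as used in Poisson mixing.
    import heapq
    import random

    def route_delay(n_hops: int, mean_delay: float) -> float:
        """Total delay over a route: one exponential sample per hop."""
        return sum(random.expovariate(1.0 / mean_delay) for _ in range(n_hops))

    random.seed(0)
    events = []
    for msg_id in range(5):
        arrival = float(msg_id)  # messages enter one second apart
        exit_time = arrival + route_delay(n_hops=3, mean_delay=2.0)
        heapq.heappush(events, (exit_time, msg_id))

    # Departure order no longer matches the arrival order 0, 1, 2, 3, 4.
    while events:
        t, msg_id = heapq.heappop(events)
        print(f"message {msg_id} exits at t={t:.2f}s")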
Design and Implementation of Algorithms for Traffic Classification
Traffic analysis is the practice of using inherent characteristics of a network flow, such as the timings, sizes, and orderings of its packets, to derive sensitive information about it. Traffic analysis techniques are used because the extensive adoption of encryption and content-obfuscation mechanisms makes it impossible to infer information about flows by analyzing their content. In this thesis, we use traffic analysis to infer sensitive information for different objectives and applications. Specifically, we investigate three applications: p2p cryptocurrencies, flow correlation, and messaging applications. Our goal is to tailor, for each of these applications, traffic analysis algorithms that best capture the intrinsic characteristics of its network traffic; the objective of traffic analysis also differs across these applications. In Bitcoin, our goal is to evaluate the resilience of Bitcoin traffic to blocking by powerful entities such as governments and ISPs. Bitcoin and similar cryptocurrencies play an important role in electronic commerce and other trust-based distributed systems because of their significant advantages over traditional currencies, including open access to global e-commerce. It is therefore essential for consumers and industry to have reliable access to their Bitcoin assets. We also examine stepping-stone attacks for flow correlation. A stepping stone is a host that an attacker uses to relay her traffic to hide her identity. We introduce two fingerprinting systems, TagIt and FINN. TagIt embeds a secret fingerprint into a flow by moving packets to specific time intervals, whereas FINN utilizes DNNs to embed the fingerprint by changing the inter-packet delays (IPDs) of the flow. In messaging applications, we analyze the WhatsApp messaging service to determine whether its traffic leaks any sensitive information, such as the identities of members in a particular conversation, to adversaries who watch the encrypted traffic. The privacy of these messaging applications is essential because they provide an environment to discuss politically sensitive subjects, making them a target of government surveillance and censorship in totalitarian countries. We take two technical approaches to designing our traffic analysis techniques. The increasing use of DNN-based classifiers inspires our first direction: we train DNN classifiers to perform a specific traffic analysis task. Our second approach is to inspect and model the shape of traffic in the target application and design a statistical classifier for the expected shape of traffic. DNN-based methods are useful when the network is complex and the underlying noise of the traffic is not linear; these models also do not need meticulous analysis to extract features. However, deep learning techniques need a vast amount of training data to work well, so they are not beneficial when there is insufficient data available to train a generalized model. Statistical methods, on the other hand, have the advantage that they incur no training overhead.
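The flow-fingerprinting idea can be shown at its simplest: encode fingerprint bits as small perturbations of inter-packet delays. The sketch below is a deliberately simplified stand-in; TagIt and FINN use far more robust encodings than this:

    # Simplified illustration of flow fingerprinting via inter-packet
    # delays (IPDs): a '1' bit adds a small delay, a '0' bit adds none.
    import random

    AMPLITUDE = 0.005  # 5 ms perturbation per '1' bit (illustrative)

    def embed(ipds: list[float], bits: list[int]) -> list[float]:
        """Mark a flow by delaying packets according to the bit pattern."""
        return [d + AMPLITUDE * b for d, b in zip(ipds, bits)]

    def extract(unmarked: list[float], observed: list[float]) -> list[int]:
        # Assumes the detector can estimate the unmarked IPDs.
        return [1 if o - d > AMPLITUDE / 2 else 0
                for d, o in zip(unmarked, observed)]

    random.seed(1)
    ipds = [random.uniform(0.01, 0.05) for _ in range(8)]
    fingerprint = [1, 0, 1, 1, 0, 0, 1, 0]
    marked = embed(ipds, fingerprint)
    assert extract(ipds, marked) == fingerprint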
Privacy in the Smart City - Applications, Technologies, Challenges and Solutions
Many modern cities strive to integrate information technology into every aspect of city life to create so-called smart cities. Smart cities rely on a large number of application areas and technologies to realize complex interactions between citizens, third parties, and city departments. This overwhelming complexity is one reason why holistic privacy protection only rarely enters the picture. A lack of privacy can result in discrimination and social sorting, creating a fundamentally unequal society. To prevent this, we believe that a better understanding of smart cities and their privacy implications is needed. We therefore systematize the application areas, enabling technologies, privacy types, attackers, and data sources for the attacks, giving structure to the fuzzy term “smart city”. Based on our taxonomies, we describe existing privacy-enhancing technologies, review the state of the art in real cities around the world, and discuss promising future research directions. Our survey can serve as a reference guide, contributing to the development of privacy-friendly smart cities.