Systematizing Genome Privacy Research: A Privacy-Enhancing Technologies Perspective
Rapid advances in human genomics are enabling researchers to gain a better
understanding of the role of the genome in our health and well-being,
stimulating hope for more effective and cost-efficient healthcare. However,
this also prompts a number of security and privacy concerns stemming from the
distinctive characteristics of genomic data. To address them, a new research
community has emerged and produced a large number of publications and
initiatives.
In this paper, we rely on a structured methodology to contextualize and
provide a critical analysis of the current knowledge on privacy-enhancing
technologies used for testing, storing, and sharing genomic data, using a
representative sample of the work published in the past decade. We identify and
discuss limitations, technical challenges, and issues faced by the community,
focusing in particular on those that are inherently tied to the nature of the
problem and are harder for the community alone to address. Finally, we report
on the importance and difficulty of the identified challenges based on an
online survey of genome data privacy experts.
Comment: To appear in the Proceedings on Privacy Enhancing Technologies (PoPETs), Vol. 2019, Issue
StyleID: Identity Disentanglement for Anonymizing Faces
Privacy of machine learning models is one of the remaining challenges that
hinder the broad adoption of Artificial Intelligence (AI). This paper considers
this problem in the context of image datasets containing faces. Anonymization
of such datasets is becoming increasingly important due to their central role
in the training of autonomous cars, for example, and the vast amount of data
generated by surveillance systems. While most prior work de-identifies facial
images by modifying identity features in pixel space, we instead project the
image onto the latent space of a Generative Adversarial Network (GAN) model,
find the features that provide the biggest identity disentanglement, and then
manipulate these features in latent space, pixel space, or both. The main
contribution of the paper is the design of a feature-preserving anonymization
framework, StyleID, which protects the individuals' identity, while preserving
as many characteristics of the original faces in the image dataset as possible.
As part of the contribution, we present a novel disentanglement metric, three
complementing disentanglement methods, and new insights into identity
disentanglement. StyleID provides tunable privacy, has low computational
complexity, and is shown to outperform current state-of-the-art solutions.
Comment: Accepted to Privacy Enhancing Technologies Symposium (PETS), July 2023. Will appear in Proceedings on Privacy Enhancing Technologies (PoPETs), volume 1, 2023. 15 pages including references and appendix, 16 figures, 5 tables.
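The abstract above describes projecting face images into a GAN's latent space, finding the directions that carry the most identity information, and shifting latent codes along them. The following is a minimal, illustrative sketch of that idea using toy latent vectors; the function names, the PCA-style direction estimate, and the `strength` parameter are assumptions for illustration, not StyleID's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def identity_direction(latents, identities):
    """Estimate one identity-carrying direction in latent space as the
    top principal direction of the per-identity mean latent codes.
    (Illustrative stand-in for StyleID's disentanglement search.)"""
    means = np.stack([latents[identities == i].mean(axis=0)
                      for i in np.unique(identities)])
    centered = means - means.mean(axis=0)
    # Top right-singular vector = direction of greatest identity variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def anonymize(latent, direction, strength=3.0):
    """Shift a latent code along the identity direction. A larger
    strength gives stronger anonymization at the cost of preserving
    fewer original attributes -- the 'tunable privacy' trade-off."""
    return latent + strength * direction

# Toy data: 8 hypothetical subjects, 5 latent codes each, 16 dims.
latents = rng.normal(size=(40, 16))
identities = np.repeat(np.arange(8), 5)
d = identity_direction(latents, identities)
anon = anonymize(latents[0], d)
```

In a real pipeline the latent codes would come from GAN inversion of face images, and the shifted code would be decoded back to pixels; here the point is only the latent-space manipulation step.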
Systemization of Pluggable Transports for Censorship Resistance
An increasing number of countries implement Internet censorship at different
scales and for a variety of reasons. In particular, the link between the
censored client and entry point to the uncensored network is a frequent target
of censorship due to the ease with which a nation-state censor can control it.
A number of censorship resistance systems have been developed thus far to help
circumvent blocking on this link, which we refer to as link circumvention
systems (LCs). The variety and profusion of attack vectors available to a
censor have led to an arms race and a rapid pace of evolution in LCs.
Despite their inherent complexity and the breadth of work in this area,
there is no systematic way to evaluate link circumvention systems and compare
them against each other. In this paper, we (i) sketch an attack model to
comprehensively explore a censor's capabilities, (ii) present an abstract model
of an LC, a system that helps a censored client communicate with a server over
the Internet while resisting censorship, (iii) describe an evaluation stack
that underscores a layered approach to evaluate LCs, and (iv) systemize and
evaluate existing censorship resistance systems that provide link
circumvention. We highlight open challenges in the evaluation and development
of LCs and discuss possible mitigations.
Comment: Content from this paper was published in Proceedings on Privacy Enhancing Technologies (PoPETS), Volume 2016, Issue 4 (July 2016) as "SoK: Making Sense of Censorship Resistance Systems" by Sheharbano Khattak, Tariq Elahi, Laurent Simon, Colleen M. Swanson, Steven J. Murdoch and Ian Goldberg (DOI 10.1515/popets-2016-0028).
DeTorrent: An Adversarial Padding-only Traffic Analysis Defense
While anonymity networks like Tor aim to protect the privacy of their users,
they are vulnerable to traffic analysis attacks such as Website Fingerprinting
(WF) and Flow Correlation (FC). Recent implementations of WF and FC attacks,
such as Tik-Tok and DeepCoFFEA, have shown that the attacks can be effectively
carried out, threatening user privacy. Consequently, there is a need for
effective traffic analysis defense.
There are a variety of existing defenses, but most are either ineffective,
incur high latency and bandwidth overhead, or require additional
infrastructure. As a result, we aim to design a traffic analysis defense that
is efficient and highly resistant to both WF and FC attacks. We propose
DeTorrent, which uses competing neural networks to generate and evaluate
traffic analysis defenses that insert 'dummy' traffic into real traffic flows.
DeTorrent operates with moderate overhead and without delaying traffic. In a
closed-world WF setting, it reduces an attacker's accuracy by 61.5%, a
reduction 10.5% greater than that of the next-best padding-only defense.
Against the state-of-the-art FC attacker, DeTorrent reduces the true positive
rate at a fixed false positive rate to about 0.12, less than half that of the
next-best defense. We also demonstrate DeTorrent's practicality by deploying it
alongside the Tor network and find that it maintains its performance when
applied to live traffic.
Comment: Accepted to the 24th Privacy Enhancing Technologies Symposium (PETS 2024).
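The abstract above emphasizes that DeTorrent is padding-only: dummy packets are inserted into a flow, but real packets are never delayed. The sketch below illustrates that constraint with a toy injection routine; the function names and the Poisson dummy schedule are illustrative assumptions standing in for DeTorrent's learned generator network.

```python
import numpy as np

rng = np.random.default_rng(1)

def generator_schedule(duration, rate, size=512):
    """Toy stand-in for a learned padding generator: sample dummy
    packet times from a Poisson process with the given rate (pkts/s)."""
    n = rng.poisson(rate * duration)
    times = np.sort(rng.uniform(0, duration, n))
    return [(float(t), size) for t in times]

def inject_padding(trace, schedule):
    """Merge dummy packets into a real trace by timestamp.
    Padding-only: real packets keep their original timestamps.
    Each output packet is (timestamp, size, is_dummy)."""
    real = [(t, s, 0) for t, s in trace]
    dummy = [(t, s, 1) for t, s in schedule]
    return sorted(real + dummy)

# Toy trace of (timestamp, size) pairs.
trace = [(0.00, 1500), (0.03, 512), (0.10, 1500)]
padded = inject_padding(trace, generator_schedule(0.1, rate=50))

# Real packets are unchanged -- no added latency.
assert [(t, s) for t, s, flag in padded if flag == 0] == trace
```

In DeTorrent itself the schedule is produced by a generator trained adversarially against WF/FC discriminators; the padding-only merge step is what keeps the latency overhead at zero.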