Parking Tickets for Privacy-Preserving Pay-by-Phone Parking
This document is a postprint version of the paper presented at the 18th Workshop on Privacy in the Electronic Society (WPES'19), November 11, 2019, London (United Kingdom).

Traditionally, the payment required for parking in regulated areas has been made through parking meters. In recent years, several applications that allow drivers to make these payments from a mobile device have appeared.
In this paper we propose a privacy-preserving pay-by-phone parking system offering the same privacy as the traditional paper-based method, even assuming an internal attacker with full access to all the information managed by the system servers. Drivers' privacy is preserved without requiring them to trust any party. Furthermore, the system can tolerate that the mobile devices of drivers fall out of coverage while their cars are parked.
Invisible Pixels Are Dead, Long Live Invisible Pixels!
Privacy has deteriorated in the world wide web ever since the 1990s. The
tracking of browsing habits by different third-parties has been at the center
of this deterioration. Web cookies and so-called web beacons have been the
classical ways to implement third-party tracking. Due to the introduction of
more sophisticated technical tracking solutions and other fundamental
transformations, the use of classical image-based web beacons might be expected
to have lost its appeal. According to a sample of over thirty thousand images
collected from popular websites, this paper shows that such an assumption is a
fallacy: classical 1 x 1 images are still commonly used for third-party
tracking in the contemporary world wide web. While it seems that ad-blockers
are unable to fully block these classical image-based tracking beacons, the
paper further demonstrates that even limited information can be used to
accurately distinguish the third-party 1 x 1 images from other images. An average
classification accuracy of 0.956 is reached in the empirical experiment. With
these results the paper contributes to the ongoing attempts to better
understand the lack of privacy in the world wide web, and the means by which
the situation might eventually be improved.
Comment: Forthcoming in the 17th Workshop on Privacy in the Electronic Society (WPES 2018), Toronto, Canada.
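The dimension heuristic the paper studies can be sketched as a simple filter. This is an illustrative toy, not the paper's actual classifier: the record format, byte-size threshold, and sample data below are assumptions made for this example.

```python
# Illustrative sketch: flag candidate web beacons among fetched images.
# A 1x1 third-party image with a near-empty body is a likely tracking pixel.
# (Threshold and record fields are assumptions, not the paper's method.)

def is_tracking_pixel_candidate(width, height, is_third_party, body_len):
    """Return True if the image looks like a classical invisible beacon."""
    tiny = width <= 1 and height <= 1
    return tiny and is_third_party and body_len < 100  # beacons carry no payload

images = [
    {"width": 1, "height": 1, "third_party": True, "bytes": 43},         # classic 1x1 GIF
    {"width": 300, "height": 250, "third_party": True, "bytes": 20_000}, # ad banner
    {"width": 1, "height": 1, "third_party": False, "bytes": 35},        # first-party spacer
]

flagged = [img for img in images
           if is_tracking_pixel_candidate(img["width"], img["height"],
                                          img["third_party"], img["bytes"])]
print(len(flagged))  # → 1
```

The paper's point is that even such limited, content-free features (dimensions, party, size) already separate tracking pixels from ordinary images with high accuracy.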
On the Unicity of Smartphone Applications
Prior works have shown that the list of apps installed by a user reveals a lot
about user interests and behavior. These works rely on the semantics of the
installed apps and show that various user traits could be learnt automatically
using off-the-shelf machine-learning techniques. In this work, we focus on the
re-identifiability issue and thoroughly study the unicity of smartphone apps on
a dataset containing 54,893 Android users collected over a period of 7 months.
Our study finds that any 4 apps installed by a user are enough (more than 95% of the time) for the re-identification of the user in our dataset. As the complete
list of installed apps is unique for 99% of the users in our dataset, it can be
easily used to track/profile the users by a service such as Twitter that has
access to the whole list of installed apps of users. As our analyzed dataset is
small as compared to the total population of Android users, we also study how
unicity would vary with larger datasets. This work emphasizes the need for better privacy guards against the collection, use, and release of the list of installed apps.
Comment: 10 pages, 9 figures. Appeared at the ACM CCS Workshop on Privacy in the Electronic Society (WPES) 2015.
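The unicity measurement described above can be sketched with a toy dataset (a handful of hand-made users, not the paper's 54,893-user dataset): sample k installed apps from a random user and check whether that subset matches exactly one user.

```python
# Toy unicity estimate in the spirit of the paper's experiment.
# (User data and trial count are illustrative assumptions.)
import random

users = {  # user id -> set of installed app ids
    "u1": {"mail", "maps", "chess", "bank"},
    "u2": {"mail", "maps", "chess", "radio"},
    "u3": {"mail", "news", "yoga", "bank"},
}

def unicity(users, k, trials=1000, seed=0):
    """Fraction of sampled k-app subsets that pinpoint exactly one user."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        uid = rng.choice(list(users))
        apps = set(rng.sample(sorted(users[uid]), k))
        matches = sum(1 for installed in users.values() if apps <= installed)
        hits += (matches == 1)
    return hits / trials

print(unicity(users, k=2))  # some subsets are shared, so unicity < 1
print(unicity(users, k=4))  # each full 4-app list is unique here → 1.0
```

With real data the same procedure scales up: the paper reports that 4 apps suffice to re-identify a user more than 95% of the time.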
ClaimChain: Improving the Security and Privacy of In-band Key Distribution for Messaging
The social demand for email end-to-end encryption is barely supported by
mainstream service providers. Autocrypt is a new community-driven open
specification for e-mail encryption that attempts to respond to this demand. In
Autocrypt the encryption keys are attached directly to messages, and thus the
encryption can be implemented by email clients without any collaboration from the
providers. The decentralized nature of this in-band key distribution, however,
makes it prone to man-in-the-middle attacks and can leak the social graph of
users. To address this problem we introduce ClaimChain, a cryptographic
construction for privacy-preserving authentication of public keys. Users store
claims about their identities and keys, as well as their beliefs about others,
in ClaimChains. These chains form authenticated decentralized repositories that
enable users to prove the authenticity of both their keys and the keys of their
contacts. ClaimChains are encrypted, and therefore protect the stored
information, such as keys and contact identities, from prying eyes. At the same
time, ClaimChain implements mechanisms to provide strong non-equivocation
properties, discouraging malicious actors from distributing conflicting or
inauthentic claims. We implemented ClaimChain and we show that it offers
reasonable performance, low overhead, and authenticity guarantees.
Comment: Appears in the 2018 Workshop on Privacy in the Electronic Society (WPES'18).
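The core idea of an authenticated chain of claims can be illustrated with a minimal hash chain. This is a toy, not the paper's construction (real ClaimChains add encryption of claims and cross-referencing for non-equivocation); it only shows how each block commits to its predecessor, so the latest hash authenticates the whole history.

```python
# Minimal hash-chain sketch of the ClaimChain idea (illustrative only).
import hashlib
import json

def append_claim(chain, claim):
    """Append a claim block that commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"claim": claim, "prev": prev_hash}, sort_keys=True)
    chain.append({"claim": claim, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any rewritten claim breaks the chain."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"claim": block["claim"], "prev": prev},
                             sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain = []
append_claim(chain, {"key": "alice-pubkey-v1"})
append_claim(chain, {"contact": "bob", "key": "bob-pubkey-v1"})
print(verify(chain))                    # → True
chain[0]["claim"]["key"] = "evil-key"   # tampering with history
print(verify(chain))                    # → False
```

A reader who holds only the latest hash can thus detect any retroactive change to earlier claims, which is the property the full construction builds its equivocation defenses on.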
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
Training deep neural networks often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. Split learning aims to
address this concern by distributing the model among a client and a server. The
scheme supposedly provides privacy, since the server cannot see the clients'
models and inputs. We show that this is not true via two novel attacks. (1) We
show that an honest-but-curious split learning server, equipped only with the
knowledge of the client neural network architecture, can recover the input
samples and obtain a functionally similar model to the client model, without
being detected. (2) We show that if the client keeps hidden only the output
layer of the model to "protect" the private labels, the honest-but-curious
server can infer the labels with perfect accuracy. We test our attacks using
various benchmark datasets and against proposed privacy-enhancing extensions to
split learning. Our results show that plaintext split learning can pose serious
risks, ranging from data (input) privacy to intellectual property (model
parameters), and provide no more than a false sense of security.
Comment: Proceedings of the 21st Workshop on Privacy in the Electronic Society (WPES '22), November 7, 2022, Los Angeles, CA, USA.
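The setting these attacks target can be sketched in a few lines: the client computes the first layers and sends the intermediate ("smashed") activation to the server. The layer sizes, weights, and input below are made-up toy values, not anything from the paper.

```python
# Toy split-learning forward pass (illustrative values only).
def relu(v):
    return [max(0.0, x) for x in v]

def linear(x, W):  # W is a list of weight rows; output dim = len(W)
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# The client holds the first layer; the server holds the rest.
W_client = [[0.5, -0.2], [0.1, 0.9]]
W_server = [[1.0, -1.0]]

x = [0.3, 0.7]                       # private input, never sent directly
smashed = relu(linear(x, W_client))  # intermediate activation, sent to server
y = linear(smashed, W_server)        # server-side output

# The server never sees x, but `smashed` is a deterministic function of x
# given the client architecture -- the signal the UnSplit attacks invert.
print(smashed, y)
```

Plaintext split learning thus hands the server exactly the quantity the attacks need: an architecture-known function of the private input.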
Privacy Vulnerabilities in the Practices of Repairing Broken Digital Artifacts in Bangladesh
This paper presents a study on the privacy concerns associated with the practice of repairing broken digital objects in Bangladesh. Historically, repair of old or broken technologies has received less attention in ICTD scholarship than design, development, or use. As a result, the potential privacy risks associated with repair practices have remained mostly unaddressed. This paper describes our three-month-long ethnographic study that took place at ten major repair sites in Dhaka, Bangladesh. We show a variety of ways in which the privacy of an individual's personal data may be compromised during the repair process. We also examine people's perceptions around privacy in repair, and its connections with their broader social and cultural values. Finally, we discuss the challenges and opportunities for future research to strengthen the repair ecosystem in developing countries. Taken together, our findings contribute to the growing discourse around post-use cycles of technology.
Privacy in an Ambient World
Privacy is a prime concern in today's information society. To protect the privacy of individuals, enterprises must follow certain privacy practices while collecting or processing personal data. In this chapter we look at the setting where an enterprise collects private data on its website, processes it inside the enterprise, and shares it with partner enterprises. In particular, we analyse three different privacy systems that can be used in the different stages of this lifecycle. One of them is the recently introduced Audit Logic, which can be used to keep data private when it travels across enterprise boundaries. We conclude with an analysis of the features and shortcomings of these systems.