Rethinking Location Privacy for Unknown Mobility Behaviors
Location Privacy-Preserving Mechanisms (LPPMs) in the literature largely
assume that the users' data available for training wholly characterizes their
mobility patterns. Thus, they hardwire this information into their designs and
evaluate their privacy properties with these same data. In this paper, we aim
to understand the impact of this decision on the level of privacy these LPPMs
may offer in real life when the users' mobility data may be different from the
data used in the design phase. Our results show that, in many cases, training
data does not capture users' behavior accurately and, thus, the level of
privacy provided by the LPPM is often overestimated. To address this gap
between theory and practice, we propose to use blank-slate models for LPPM
design. Contrary to the hardwired approach, which assumes the users' behavior
is known, blank-slate models learn the users' behavior from their queries to
the service provider. We leverage this blank-slate approach to develop a new
family of LPPMs, which we call Profile Estimation-Based LPPMs. Using real
data, we empirically show that our proposal outperforms optimal
state-of-the-art mechanisms designed on sporadic hardwired models. In
non-sporadic location privacy scenarios, our method is better only if the
usage of the location privacy service is not continuous. It is our hope that
eliminating the need to bootstrap the mechanisms with training data, and
ensuring that the mechanisms are lightweight and easy to compute, will help
foster the integration of location privacy protections in deployed systems.
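The abstract's central idea can be illustrated with a toy sketch. All names here (the class, the output strategy) are hypothetical illustrations, not the paper's actual mechanism: a blank-slate profile starts from a uniform prior, is refined from each query the user issues, and the estimated profile then drives the obfuscation.

```python
from collections import Counter
import random

class BlankSlateProfile:
    """Hypothetical sketch: estimate a user's mobility profile online,
    from the user's own queries, with no training data at all."""

    def __init__(self, locations):
        self.locations = list(locations)
        # Blank slate: a uniform pseudo-count over the location grid.
        self.counts = Counter({loc: 1 for loc in self.locations})

    def update(self, observed_location):
        # Each query to the service provider refines the profile.
        self.counts[observed_location] += 1

    def profile(self):
        total = sum(self.counts.values())
        return {loc: c / total for loc, c in self.counts.items()}

def obfuscate(true_location, profile, rng=random):
    # Toy output strategy (an assumption, not the paper's design):
    # report a location drawn from the estimated profile, so that
    # frequently visited places mask the current one.
    locs = list(profile)
    weights = [profile[loc] for loc in locs]
    return rng.choices(locs, weights=weights, k=1)[0]
```

The point of the sketch is only the data flow: no training phase is needed, and the profile estimate improves as the user keeps querying.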
Privacy through uncertainty in location-based services
Location-Based Services (LBS) are becoming more prevalent. While they bring many benefits, they also pose real privacy risks. People are unwilling to give up these benefits, but can we reduce privacy risks without giving up on LBS entirely?
This paper explores the possibility of introducing uncertainty into location information when using an LBS, so as to reduce privacy risk while maintaining good quality of service. It also examines the current uses of uncertainty information in a selection of mobile applications.
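One common way to introduce uncertainty into a reported position is to perturb it within a chosen radius before it reaches the LBS. The abstract does not prescribe a mechanism, so the following is a minimal, assumed sketch: a point drawn uniformly from a disc around the true location, with a rough metres-to-degrees conversion.

```python
import math
import random

def add_location_uncertainty(lat, lon, radius_m, rng=random):
    """Illustrative sketch (not the paper's mechanism): report a point
    chosen uniformly at random within radius_m metres of the true
    position, trading location accuracy for privacy."""
    # sqrt on the radius keeps the density uniform over the disc,
    # rather than clustered near the centre.
    r = radius_m * math.sqrt(rng.random())
    theta = rng.uniform(0, 2 * math.pi)
    # Rough conversion: ~111,320 metres per degree of latitude; the
    # longitude scale shrinks with cos(latitude).
    dlat = (r * math.cos(theta)) / 111_320
    dlon = (r * math.sin(theta)) / (111_320 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

A larger radius gives the adversary less precise information but degrades service quality, which is exactly the trade-off the paper studies.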
Emerging privacy challenges and approaches in CAV systems
The growth of Internet-connected devices, Internet-enabled services and Internet of Things systems continues at a rapid pace, and their application to transport systems is heralded as game-changing. Numerous developing CAV (Connected and Autonomous Vehicle) functions, such as traffic planning, optimisation, management, safety-critical and cooperative autonomous driving applications, rely on data from various sources. The efficacy of these functions is highly dependent on the dimensionality, amount and accuracy of the data being shared. It holds, in general, that the greater the amount of data available, the greater the efficacy of the function. However, much of this data is privacy-sensitive, including personal, commercial and research data. Location data, and its correlation with identity and temporal data, can help infer other personal information, such as home/work locations, age, job, behavioural features, habits, and social relationships. This work categorises the emerging privacy challenges and solutions for CAV systems and identifies the knowledge gaps for future research aimed at minimising and mitigating privacy concerns without hampering the efficacy of these functions.
Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments
Decentralized systems are a subset of distributed systems where multiple
authorities control different components and no authority is fully trusted by
all. This implies that any component in a decentralized system is potentially
adversarial. We review fifteen years of research on decentralization and
privacy, and provide an overview of key systems, as well as key insights for
designers of future systems. We show that decentralized designs can enhance
privacy, integrity, and availability, but also require careful trade-offs in
terms of system complexity, properties provided, and degree of
decentralization. These trade-offs need to be understood and navigated by
designers. We argue that a combination of insights from cryptography,
distributed systems, and mechanism design, aligned with the development of
adequate incentives, is necessary to build scalable and successful
privacy-preserving decentralized systems.