Unlinkable content playbacks in a multiparty DRM system
We present a solution to the problem of privacy invasion in a multiparty
digital rights management scheme. (Roaming) users buy content licenses from a
content provider and play back the licensed content at any nearby content
distributor. Our approach, which does not need any trusted third party--in
contrast to most related work on privacy-preserving DRM--is based on a
re-encryption scheme that runs on any mobile Android device. Only a minor
security-critical part needs to be performed on the device's smartcard, which
could, for instance, be a SIM card.
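The abstract does not specify the re-encryption construction, so purely as an illustration, here is a toy BBS98-style proxy re-encryption sketch over a small prime-order group. All parameters and function names are assumptions, not the paper's scheme, and a real deployment would use a vetted cryptographic library and group:

```python
# Toy BBS98-style proxy re-encryption (illustrative only, not the
# paper's construction; do not use toy parameters in practice).
import secrets

p = 2039            # safe prime: p = 2q + 1
q = 1019            # prime order of the subgroup
g = 4               # 4 = 2^2 generates the order-q subgroup of QRs

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt(pk_a, m):
    # m is a group element; ciphertext is (pk_a^r, m * g^r)
    r = secrets.randbelow(q - 1) + 1
    return pow(pk_a, r, p), (m * pow(g, r, p)) % p

def rekey(sk_a, sk_b):
    # proxy re-encryption key sk_b / sk_a mod q
    return (sk_b * pow(sk_a, -1, q)) % q

def reencrypt(rk, ct):
    # (pk_a^r)^rk = pk_b^r, so the ciphertext now decrypts under sk_b
    c1, c2 = ct
    return pow(c1, rk, p), c2

def decrypt(sk, ct):
    c1, c2 = ct
    gr = pow(c1, pow(sk, -1, q), p)          # recover g^r
    return (c2 * pow(gr, -1, p)) % p
```

Note the proxy (the content distributor) never sees the plaintext or either secret key; it only applies `reencrypt` with the re-encryption key.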
Local Suppression and Splitting Techniques for Privacy Preserving Publication of Trajectories
Passengers information in public transport and privacy: Can anonymous tickets prevent tracking?
Modern public transportation companies often record large amounts of information. Privacy can be safeguarded by discarding nominal tickets or by introducing anonymization techniques. But is anonymity at all possible when everything is recorded? In this paper we discuss travel information management in the public transport scenario and present a revealing case study (relative to the city of Cesena, Italy), showing that even anonymous 10-ride bus tickets may betray a user's privacy expectations. We also propose a number of recommendations for the design and management of public transport information systems, aimed at preserving the users' privacy while retaining the useful analysis features enabled by e-ticketing technology.
Privacy in the Genomic Era
Genome sequencing technology has advanced at a rapid pace and it is now
possible to generate highly-detailed genotypes inexpensively. The collection
and analysis of such data has the potential to support various applications,
including personalized medical services. While the benefits of the genomics
revolution are trumpeted by the biomedical community, the increased
availability of such data has major implications for personal privacy; notably
because the genome has certain essential features, which include (but are not
limited to) (i) an association with traits and certain diseases, (ii)
identification capability (e.g., forensics), and (iii) revelation of family
relationships. Moreover, direct-to-consumer DNA testing increases the
likelihood that genome data will be made available in less regulated
environments, such as the Internet and for-profit companies. The problem of
genome data privacy thus resides at the crossroads of computer science,
medicine, and public policy. While computer scientists have addressed data
privacy for various data types, less attention has been dedicated to
genomic data. Thus, the goal of this paper is to provide a systematization of
knowledge for the computer science community. In doing so, we address some of
the (sometimes erroneous) beliefs of this field and we report on a survey we
conducted about genome data privacy with biomedical specialists. Then, after
characterizing the genome privacy problem, we review the state-of-the-art
regarding privacy attacks on genomic data and strategies for mitigating such
attacks, as well as contextualizing these attacks from the perspective of
medicine and public policy. This paper concludes with an enumeration of the
challenges for genome data privacy and presents a framework to systematize the
analysis of threats and the design of countermeasures as the field moves
forward.
Secure and privacy-respecting documentation for interactive manufacturing and quality assurance
The automated documentation of work steps is a requirement of many modern manufacturing processes. Especially for important procedures such as safety-critical screw connections or weld seams, the correct and complete execution of certain manufacturing steps must be properly supervised, e.g., by capturing video snippets of the worker that can be reviewed in hindsight. Without proper technical and organizational safeguards, such documentation data carries the potential for covert performance monitoring to the disadvantage of employees. Naïve documentation architectures interfere with data protection requirements and thus cannot expect acceptance by employees. In this paper we outline use cases for automated documentation and describe an exemplary system architecture of a workflow recognition and documentation system. We derive privacy protection goals that we address with a suitable security architecture based on hybrid encryption, secret sharing among multiple parties, and remote attestation of the system to prevent manipulation. We finally contribute an outlook on problems and possible solutions with regard to information that can leak through accessible metadata, and with regard to more modular system architectures, where more sophisticated remote attestation approaches are needed to ensure the integrity of distributed components.
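The secret-sharing component can be illustrated with a minimal n-of-n XOR scheme, in which every party must contribute its share before the documentation key can be reconstructed. This is a sketch under the assumption of plain XOR sharing; the paper's actual construction may differ:

```python
# Minimal n-of-n XOR secret sharing (assumed illustration, not the
# paper's exact scheme): all n shares are required to recover the key.
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list:
    """Split `secret` into n shares; any n-1 of them reveal nothing."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = reduce(xor_bytes, shares, secret)   # secret XOR all randoms
    return shares + [last]

def combine(shares: list) -> bytes:
    """XOR all shares together to reconstruct the secret."""
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)       # e.g. a key protecting video snippets
shares = split(key, 3)              # e.g. employer, works council, auditor
assert combine(shares) == key       # all three parties must cooperate
```

Because each share is uniformly random on its own, no single party (e.g. the employer alone) can decrypt the captured video snippets.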
A novel policy-driven reversible anonymisation scheme for XML-based services
Author's version of an article in the journal Information Systems, also available from the publisher at: http://dx.doi.org/10.1016/j.is.2014.05.007
This paper proposes a reversible anonymisation scheme for XML messages that supports fine-grained enforcement of XACML-based privacy policies. Reversible anonymisation means that information in XML messages is anonymised, while the information required to reverse the anonymisation is cryptographically protected in the messages. The policy can control access down to octet ranges of individual elements or attributes in XML messages. The reversible anonymisation protocol effectively implements a multi-level privacy and security approach, so that only authorised stakeholders can disclose confidential information up to the privacy or security level they are authorised for. The approach furthermore supports a shared-secret-based scheme, where stakeholders need to agree to disclose confidential information. Last, it supports time-limited access to private or confidential information. This opens up improved control of access to private or confidential information in XML messages used by a service-oriented architecture. The solution provides horizontally scalable confidentiality protection for certain types of big data applications, like XML databases, secure logging and data retention repositories.
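The core idea of carrying the reversal information inside the message can be sketched with Python's standard library: an element's text is replaced by a placeholder, and a toy SHA-256-keystream ciphertext of the original value is embedded as an attribute. This is a simplified assumption-laden illustration; the paper enforces XACML policies and uses proper cryptography, which this sketch does not attempt:

```python
# Toy reversible anonymisation of one XML element (illustration only;
# the keystream cipher and attribute layout are assumptions, not the
# paper's protocol).
import hashlib, secrets
import xml.etree.ElementTree as ET

def _stream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def anonymise(elem: ET.Element, key: bytes) -> None:
    nonce = secrets.token_bytes(16)
    data = elem.text.encode()
    ct = bytes(a ^ b for a, b in zip(data, _stream(key, nonce, len(data))))
    elem.set("enc", (nonce + ct).hex())   # reversal info stays in the message
    elem.text = "ANONYMISED"

def deanonymise(elem: ET.Element, key: bytes) -> None:
    blob = bytes.fromhex(elem.attrib.pop("enc"))
    nonce, ct = blob[:16], blob[16:]
    elem.text = bytes(
        a ^ b for a, b in zip(ct, _stream(key, nonce, len(ct)))).decode()

msg = ET.fromstring("<record><name>Alice</name></record>")
key = secrets.token_bytes(32)
anonymise(msg.find("name"), key)        # message can now be forwarded
deanonymise(msg.find("name"), key)      # only key holders can reverse it
assert msg.find("name").text == "Alice"
```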
Extending Helios Towards Private Eligibility Verifiability
We show how to extend the Helios voting system to provide eligibility verifiability without revealing who voted, a property we call private eligibility verifiability. The main idea is that real votes are hidden in a crowd of null votes that are cast by others but are indistinguishable from those of the eligible voter. This extended Helios scheme also improves Helios towards receipt-freeness.
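The null-vote idea relies on votes for 0 being indistinguishable from votes for 1 under randomized encryption, while still tallying correctly. A toy exponential-ElGamal sketch shows this; the group parameters and brute-force decoding are assumptions for illustration, and Helios additionally uses zero-knowledge proofs that this sketch omits:

```python
# Toy exponential-ElGamal tally: null ballots encrypt 0 and look
# identical to real ballots encrypting 1 (toy parameters; illustration
# only, not the Helios implementation).
import secrets

p, q, g = 2039, 1019, 4          # small safe-prime group
sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def enc(v):
    """Encrypt a vote v in {0, 1}; fresh randomness hides which it is."""
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (pow(g, v, p) * pow(pk, r, p)) % p

def tally(ballots):
    """Multiply ciphertexts, decrypt g^(sum of votes), decode the sum."""
    c1 = c2 = 1
    for a, b in ballots:
        c1, c2 = (c1 * a) % p, (c2 * b) % p
    gm = (c2 * pow(pow(c1, sk, p), -1, p)) % p
    for m in range(len(ballots) + 1):    # brute-force small discrete log
        if pow(g, m, p) == gm:
            return m

ballots = [enc(1), enc(0), enc(1), enc(0), enc(0)]  # 2 real, 3 null votes
assert tally(ballots) == 2
```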
Privacy-Friendly Mobility Analytics using Aggregate Location Data
Location data can be extremely useful to study commuting patterns and
disruptions, as well as to predict real-time traffic volumes. At the same time,
however, the fine-grained collection of user locations raises serious privacy
concerns, as this can reveal sensitive information about the users, such as
lifestyle, political and religious inclinations, or even identities. In this
paper, we study the feasibility of crowd-sourced mobility analytics over
aggregate location information: users periodically report their location, using
a privacy-preserving aggregation protocol, so that the server can only recover
aggregates -- i.e., how many, but not which, users are in a region at a given
time. We experiment with real-world mobility datasets obtained from the
Transport for London authority and the San Francisco Cabs network, and present
a novel methodology based on time series modeling that is geared to forecast
traffic volumes in regions of interest and to detect mobility anomalies in
them. In the presence of anomalies, we also make enhanced traffic volume
predictions by feeding our model with additional information from correlated
regions. Finally, we present and evaluate a mobile app prototype, called
Mobility Data Donors (MDD), in terms of computation, communication, and energy
overhead, demonstrating the real-world deployability of our techniques.
Comment: Published at ACM SIGSPATIAL 201
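The property that the server "can only recover aggregates" is commonly achieved with pairwise-cancelling random masks. The following is a simplified stand-in for whatever aggregation protocol the paper actually uses (the modulus and setup are assumptions):

```python
# Privacy-preserving additive aggregation via pairwise-cancelling masks
# (simplified illustration; not the paper's exact protocol).
import secrets

M = 2**32                      # arithmetic modulus for reports

def masked_reports(values):
    """Each user i reports value_i blinded with masks shared pairwise
    with every other user; the masks cancel in the server's sum."""
    n = len(values)
    # r[i][j] is a random mask agreed between users i and j (i < j)
    r = [[secrets.randbelow(M) for _ in range(n)] for _ in range(n)]
    reports = []
    for i, v in enumerate(values):
        masked = v
        for j in range(n):
            if i < j:
                masked = (masked + r[i][j]) % M
            elif i > j:
                masked = (masked - r[j][i]) % M
        reports.append(masked)
    return reports

presence = [1, 0, 1, 1]             # is each user in the region right now?
reports = masked_reports(presence)  # individually these look random
assert sum(reports) % M == sum(presence)   # server learns only "3 users"
```

Each individual report is uniformly distributed, so the server learns how many users are in a region but not which ones, matching the "how many, but not which" guarantee described above.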
Understanding the protection of privacy when counting subway travelers through anonymization
Public transportation, especially in large cities, is critical for livability. Counting passengers as they travel between stations is crucial to establishing and maintaining effective transportation systems. Various information and communication technologies, such as GPS, Bluetooth, and Wi-Fi, have been used to measure people's movements automatically. For public transportation applications, the automated fare collection (AFC) system has been widely adopted as a convenient method for measuring passengers, mainly because it is relatively easy to identify cards uniquely and, as such, to trace the movements of their holders. However, there are serious concerns regarding privacy infringements when deploying such technologies, to the extent that Europe's General Data Protection Regulation has forbidden straightforward deployment for measuring pedestrian dynamics unless explicit consent has been provided. As a result, privacy-preservation techniques (e.g., anonymization) must be used when deploying such systems. Against this backdrop, we investigate to what extent a recently developed anonymization technique, known as detection k-anonymity, can be adapted to count public transportation travelers while preserving privacy. In the case study, we tested our methods with data from Beijing subway trips. Results show the scenarios in which detection k-anonymity can be effectively applied and those in which it cannot. Due to the complicated relationship between the detection k-anonymity parameters, setting proper parameter values can be difficult, leading to inaccurate results. Furthermore, through detection k-anonymity, it is possible to count travelers between two locations with high accuracy. However, counting travelers from more than two locations leads to less accurate results.
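Detection k-anonymity itself is more involved, but the underlying k-anonymity counting idea can be sketched as releasing an origin-destination count only when at least k travelers share it. This is a simplified illustration with an assumed threshold, not the paper's method:

```python
# k-anonymous origin-destination counting (simplified stand-in for
# detection k-anonymity; threshold K is an assumption).
from collections import Counter

K = 5   # minimum group size before a count may be released

def od_counts(trips, k=K):
    """Count trips per (origin, destination) station pair and suppress
    any count below k, so no small, re-identifiable group is released."""
    counts = Counter((o, d) for o, d in trips)
    return {od: c for od, c in counts.items() if c >= k}

trips = [("A", "B")] * 7 + [("A", "C")] * 2   # hypothetical AFC records
released = od_counts(trips)
assert released == {("A", "B"): 7}            # ("A", "C") is suppressed
```

Suppression protects the two A-to-C travelers at the cost of undercounting, which mirrors the accuracy trade-off the abstract reports for multi-location counting.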