When and where do you want to hide? Recommendation of location privacy preferences with local differential privacy
In recent years, it has become easy to obtain location information quite
precisely. However, acquiring such information carries risks such as
individual identification and leakage of sensitive information, so the
privacy of location information must be protected. To this end, people
should know their location privacy preferences, that is, whether or not
they can release location information at each place and time. However, it is
not easy for each user to make such decisions, and it is troublesome to set
the privacy preference each time. We therefore propose a method that
recommends location privacy preferences to support this decision making.
Compared to the existing method, our method improves recommendation accuracy
by using matrix factorization and preserves privacy strictly via local
differential privacy, whereas the existing method provides no formal privacy
guarantee. In addition, we identify the best granularity of a location
privacy preference, that is, how to express this information for location
privacy protection. To evaluate and verify the utility of our method, we
integrated two existing datasets to create a richer dataset in terms of the
number of users. The results of an evaluation on this dataset confirm that
our method can predict location privacy preferences accurately and that it
provides a suitable way to define the location privacy preference.
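The local differential privacy guarantee mentioned above is typically obtained by having each user perturb their own data before it leaves the device. A minimal sketch using randomized response, a standard ε-LDP mechanism for a binary preference bit (the paper's actual mechanism and its integration with matrix factorization are not specified here; all function names are illustrative):

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report a binary preference (0/1) under epsilon-local differential privacy.

    With probability e^eps / (1 + e^eps) the true bit is reported;
    otherwise it is flipped. Each user perturbs locally before sending.
    """
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias the aggregated noisy reports to estimate the true fraction of 1s."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

A server that only ever sees the perturbed bits can still recover accurate population-level statistics via the debiasing step, which is what makes such mechanisms usable as input to a recommender.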
Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations
Latent factor models for recommender systems represent users and items as
low-dimensional vectors. Privacy risks of such systems have previously been studied
mostly in the context of recovery of personal information in the form of usage
records from the training data. However, the user representations themselves
may be used together with external data to recover private user information
such as gender and age. In this paper we show that user vectors calculated by a
common recommender system can be exploited in this way. We propose the
privacy-adversarial framework to eliminate such leakage of private information,
and study the trade-off between recommender performance and leakage both
theoretically and empirically using a benchmark dataset. An advantage of the
proposed method is that it also helps guarantee fairness of results, since all
implicit knowledge of a set of attributes is scrubbed from the representations
used by the model, and thus cannot enter into the decision making. We discuss
further applications of this method towards the generation of deeper and more
insightful recommendations.
Comment: International Conference on Pattern Recognition and Method
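A common way to realize such privacy-adversarial training is to update the user representation by descending the recommendation loss while ascending the adversary's loss (gradient reversal). A minimal sketch assuming a squared rating loss and a logistic adversary; the function names, losses, and vectors here are illustrative, not the paper's code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def user_gradient(u, v, r, w, a, lam):
    """Gradient-reversal update direction for a user vector u.

    u, v, w are equal-length lists: user vector, item vector, adversary weights.
    Descends the squared rating error (u.v - r)^2 / 2 while *ascending* the
    adversary's logistic loss for predicting the private attribute a from u,
    scaled by lam.
    """
    dot = sum(ui * vi for ui, vi in zip(u, v))
    grad_task = [(dot - r) * vi for vi in v]          # rating-loss gradient
    p = sigmoid(sum(wi * ui for wi, ui in zip(w, u)))
    grad_adv = [(p - a) * wi for wi in w]             # adversary-loss gradient
    # gradient reversal: subtract lam times the adversary gradient
    return [gt - lam * ga for gt, ga in zip(grad_task, grad_adv)]

def adversary_gradient(u, w, a):
    """Gradient of the adversary's logistic loss w.r.t. its weights w."""
    p = sigmoid(sum(wi * ui for wi, ui in zip(w, u)))
    return [(p - a) * ui for ui in u]
```

In training, the adversary and the model alternate: the adversary descends its own loss to predict the private attribute, while the user vectors are pushed in the opposite direction, so any information about the attribute is gradually scrubbed from the representations.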