
    Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations

    Latent factor models for recommender systems represent users and items as low-dimensional vectors. Privacy risks of such systems have previously been studied mostly in the context of recovery of personal information, in the form of usage records, from the training data. However, the user representations themselves may be used together with external data to recover private user information such as gender and age. In this paper we show that user vectors calculated by a common recommender system can be exploited in this way. We propose the privacy-adversarial framework to eliminate such leakage of private information, and study the trade-off between recommender performance and leakage both theoretically and empirically on a benchmark dataset. An advantage of the proposed method is that it also helps guarantee fairness of results: since all implicit knowledge of a set of attributes is scrubbed from the representations used by the model, it cannot enter into the decision making. We discuss further applications of this method towards the generation of deeper and more insightful recommendations.
    Comment: International Conference on Pattern Recognition and Method
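    The attribute-leakage attack and the scrubbing defence described above can be sketched in a few lines. Everything below is an illustrative toy, not the paper's implementation: the embeddings are synthetic, the adversary is a logistic regression trained by gradient ascent, and the scrub is iterated null-space projection, a simple linear stand-in for the paper's adversarial training.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup (sizes and the leakage pattern are assumptions): 200 users
    # with 8-dim latent vectors; a binary private attribute (e.g. gender)
    # leaks linearly through dimension 0.
    n, d = 200, 8
    attr = rng.integers(0, 2, size=n).astype(float)
    U = rng.normal(size=(n, d))
    U[:, 0] += 3.0 * attr

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

    def fit_adversary(X, y, steps=500, lr=0.2):
        """Logistic-regression adversary trying to predict y from X."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = sigmoid(X @ w)
            w += lr * X.T @ (y - p) / len(y)   # gradient ascent on log-likelihood
        return w

    def adversary_accuracy(X, y):
        w = fit_adversary(X, y)
        return float(np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5)))

    acc_before = adversary_accuracy(U, attr)   # leakage: well above chance

    # Scrubbing loop: repeatedly fit the attribute classifier and project
    # its weight direction out of the embeddings, so no linear probe can
    # recover the attribute from what remains.
    U_clean = U.copy()
    for _ in range(5):
        w = fit_adversary(U_clean, attr)
        w_hat = w / (np.linalg.norm(w) + 1e-12)
        U_clean -= np.outer(U_clean @ w_hat, w_hat)   # remove component along w

    acc_after = adversary_accuracy(U_clean, attr)
    print(f"adversary accuracy: before={acc_before:.2f} after={acc_after:.2f}")
    ```

    The same mechanism illustrates the fairness claim: once the attribute direction is removed, no downstream model consuming `U_clean` can condition on it, even implicitly.
    
    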

    Privacy risks in recommender systems


    Multi-Agent Modeling of Risk-Aware and Privacy-Preserving Recommender Systems

    Recent progress in the field of recommender systems has led to increases in accuracy and significant improvements in the personalization of recommendations. These results are generally achieved by gathering more user data and generating relevant insights from it. However, user privacy concerns are often underestimated, and recommendation risks are not usually addressed. In fact, many users are not sufficiently aware of what data is collected about them and how it is collected (e.g., whether third parties are collecting and selling their personal information). Research in the area of recommender systems should strive not only to achieve high accuracy of the generated recommendations but also to protect the user's privacy and to make recommender systems aware of the user's context, which involves the user's intentions and current situation. Research has established that a trade-off is required between accuracy, privacy, and risk in a recommender system, and that a recommender system is highly unlikely to satisfy all context-aware and privacy-preserving requirements completely. Nonetheless, a significant attempt can be made to describe a novel modeling approach that supports designing a recommender system encompassing some of these requirements. This thesis focuses on a multi-agent-based system model of recommender systems, introducing both privacy- and risk-related abstractions into traditional recommender systems and breaking the system down into three different subsystems. Such a description of the system can represent a subset of recommender systems that can be classified as both risk-aware and privacy-preserving. The applicability of the approach is illustrated by a case study involving a job recommender system in which the general design model is instantiated to represent the required domain-specific abstractions.

    Recommender systems and their ethical challenges

    This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders (as opposed to just the receivers of a recommendation) in assessing the ethical impacts of a recommender system.

    User's Privacy in Recommendation Systems Applying Online Social Network Data, A Survey and Taxonomy

    Recommender systems have become an integral part of many social networks, extracting knowledge from a user's personal and sensitive data both explicitly, with the user's knowledge, and implicitly. This trend has created major privacy concerns, as users are mostly unaware of what data is being used, how much of it, and how securely. In this context, several works have addressed privacy concerns around online social network data and its use by recommender systems. This paper surveys the main privacy concerns, measurements, and privacy-preserving techniques used in large-scale online social networks and recommender systems. It draws on prior work on security, privacy preservation, statistical modeling, and datasets to provide an overview of the technical difficulties and problems associated with privacy preservation in online social networks.
    Comment: 26 pages, IET book chapter on big data recommender system

    The Users' Perspective on the Privacy-Utility Trade-offs in Health Recommender Systems

    Privacy is a major good for users of personalized services such as recommender systems. When such services are applied in health informatics, users' privacy concerns may be amplified, but the possible utility of the services is also high. Despite the availability of technologies such as k-anonymity, differential privacy, privacy-aware recommendation, and personalized privacy trade-offs, little research has been conducted on users' willingness to share health data for use in such systems. In two conjoint-decision studies (sample size n=521), we investigate the importance and utility of privacy-preserving techniques related to the sharing of personal health data under k-anonymity and differential privacy. Users were asked to pick a preferred sharing scenario depending on the recipient of the data, the benefit of sharing, the type of data, and the parameterized privacy level. Users disagreed with sharing data for commercial purposes when it concerned mental illnesses and carried high de-anonymization risks, but showed little concern when data is used for scientific purposes and relates to physical illnesses. Suggestions for the development of health recommender systems are derived from the findings.
    Comment: 32 pages, 12 figures
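    As a rough illustration of the two parameterized techniques the study asks users about, the sketch below applies the Laplace mechanism to a count query and checks k-anonymity over a set of quasi-identifiers. The dataset, epsilon values, and age bucketing are invented for the example and are not taken from the study.

    ```python
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(42)

    # Hypothetical health-data scenario: each record is 1 if a user
    # reports a given condition, else 0.
    records = rng.integers(0, 2, size=1000)
    true_count = int(records.sum())

    def laplace_count(data, epsilon, rng):
        """epsilon-differentially-private release of a count. A counting
        query has sensitivity 1, so Laplace noise of scale 1/epsilon
        suffices; smaller epsilon = stronger privacy = noisier answer."""
        return float(data.sum() + rng.laplace(scale=1.0 / epsilon))

    for eps in (0.01, 0.1, 1.0):
        released = laplace_count(records, eps, rng)
        print(f"eps={eps}: true={true_count} released={released:.1f}")

    def is_k_anonymous(quasi_ids, k):
        """k-anonymity check: every combination of quasi-identifiers
        must be shared by at least k records."""
        return min(Counter(quasi_ids).values()) >= k

    # Coarsening ages into decades is one generalization step a data
    # publisher might take to reach a target k.
    ages = rng.integers(18, 80, size=1000)
    decades = (ages // 10).tolist()
    print("10-anonymous by decade:", is_k_anonymous(decades, 10))
    ```

    The `epsilon` parameter is exactly the kind of "parameterized privacy" knob the conjoint scenarios vary: the same query can be offered at several noise levels, each with a different utility for the recipient.
    
    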

    PrivacyCanary: Privacy-aware recommenders with adaptive input obfuscation

    Recommender systems are widely used by online retailers to promote products and content that are most likely to be of interest to a specific customer. In such systems, users often implicitly or explicitly rate products they have consumed, and some form of collaborative filtering is used to find other users with similar tastes to whom the products can be recommended. While users can benefit from more targeted and relevant recommendations, they are also exposed to greater risks of privacy loss, which can lead to undesirable financial and social consequences. The use of obfuscation techniques to preserve the privacy of user ratings is well studied in the literature. However, work on obfuscation typically assumes that all users uniformly apply the same level of obfuscation. In a heterogeneous environment, in which users adopt different levels of obfuscation based on their comfort level, the different levels may affect users in the system in different ways. In this work we consider such a situation and make the following contributions: (a) using an offline dataset, we evaluate the privacy-utility trade-off in a system where a varying portion of users adopt the privacy-preserving technique. Our study highlights the effects that each user's choices have, not only on their own experience but also on the utility that other users gain from the system; and (b) we propose PrivacyCanary, an interactive system that enables users to directly control the privacy-utility trade-off of the recommender system, achieving a desired accuracy while maximizing privacy protection by probing the system via a private (i.e., undisclosed to the system) set of items. We evaluate the performance of our system on an offline recommendations dataset and show its effectiveness in balancing a target recommender accuracy with user privacy, compared to approaches that focus on a fixed privacy level.
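    A minimal sketch of the heterogeneous-obfuscation setting: each user randomizes a fraction p of their ratings, and a per-item-mean predictor shows how system-wide utility degrades as p grows. Both the data generator and the replacement scheme are assumptions for illustration, not PrivacyCanary's actual protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy ratings (users x items) on a 1..5 scale: each item has a latent
    # quality, and ratings are noisy observations of it.
    n_users, n_items = 100, 50
    quality = rng.uniform(1.5, 4.5, size=n_items)
    R = np.clip(np.round(quality + rng.normal(0.0, 0.7, size=(n_users, n_items))), 1, 5)

    def obfuscate(ratings, p, rng):
        """Each rating is swapped for a uniform random rating in 1..5
        with probability p -- the user's chosen obfuscation level."""
        mask = rng.random(ratings.shape) < p
        noise = rng.integers(1, 6, size=ratings.shape).astype(float)
        return np.where(mask, noise, ratings)

    def item_mean_rmse(R_true, R_seen):
        """Utility proxy: predict every rating with the per-item mean
        computed from what the system actually sees (possibly
        obfuscated), scored against the true ratings."""
        pred = R_seen.mean(axis=0)
        return float(np.sqrt(np.mean((R_true - pred) ** 2)))

    # Utility for everyone degrades as users obfuscate more aggressively,
    # which is the system-wide externality the paper's study measures.
    for p in (0.0, 0.3, 0.8):
        print(f"p={p}: RMSE={item_mean_rmse(R, obfuscate(R, p, rng)):.3f}")
    ```

    PrivacyCanary's probing idea fits on top of this: a user holds back a private item set, observes the recommender's accuracy on it at their current p, and adjusts p until the accuracy meets their target.
    
    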