4,356 research outputs found

    An Accuracy-Assured Privacy-Preserving Recommender System for Internet Commerce

    Full text link
    Recommender systems, tools for predicting users' potential preferences from historical data and users' interests, are of increasing importance in various Internet applications such as online shopping. As a well-known recommendation method, neighbourhood-based collaborative filtering has attracted considerable attention recently. The risk of revealing users' private information during the filtering process has drawn noticeable research interest. Among the current solutions, probabilistic techniques have shown a powerful privacy-preserving effect. However, when facing a k Nearest Neighbour (kNN) attack, none of the existing methods provides a data utility guarantee, because of the global randomness they introduce. In this paper, to overcome the problem of recommendation accuracy loss, we propose a novel approach, Partitioned Probabilistic Neighbour Selection, which ensures a required prediction accuracy while maintaining high security against the kNN attack. We define the sum of the k neighbours' similarities as the accuracy metric α, and the number of user partitions across which we select the k neighbours as the security metric β. We generalise the k Nearest Neighbour attack to a βk Nearest Neighbours attack. Unlike the existing approach, which selects neighbours randomly across the entire candidate list, our method selects neighbours from each exclusive partition of size k with a decreasing probability. Theoretical and experimental analysis shows that, to provide an accuracy-assured recommendation, our Partitioned Probabilistic Neighbour Selection method yields a better trade-off between recommendation accuracy and system security.
    Comment: replacement for the previous version
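
    The partition-then-sample idea described in this abstract can be illustrated with a small sketch. The Python snippet below is only an assumption-laden illustration, not the authors' exact scheme: the function name, the geometric `decay` weighting, and the uniform draw within a partition are all choices made here for clarity.

```python
import random

def partitioned_neighbour_selection(candidates, similarity, k, beta, decay=0.5):
    """Pick k neighbours spread across beta exclusive partitions of size k,
    favouring more similar partitions. `similarity` maps a candidate id to its
    similarity with the active user; the geometric `decay` weighting is an
    assumption of this sketch, not the paper's exact selection probability."""
    # Rank candidates by similarity and keep the top beta*k, cut into partitions of size k.
    ranked = sorted(candidates, key=similarity, reverse=True)[:beta * k]
    partitions = [ranked[i * k:(i + 1) * k] for i in range(beta) if ranked[i * k:(i + 1) * k]]

    # Each partition receives a geometrically decreasing selection weight.
    weights = [decay ** i for i in range(len(partitions))]

    neighbours = set()
    target = min(k, len(ranked))
    while len(neighbours) < target:
        partition = random.choices(partitions, weights=weights, k=1)[0]
        neighbours.add(random.choice(partition))
    return list(neighbours)
```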

    Privacy-Friendly Collaboration for Cyber Threat Mitigation

    Full text link
    Sharing of security data across organizational boundaries has often been advocated as a promising way to enhance cyber threat mitigation. However, collaborative security faces a number of important challenges, including privacy, trust, and liability concerns with the potential disclosure of sensitive data. In this paper, we focus on data sharing for predictive blacklisting, i.e., forecasting attack sources based on past attack information. We propose a novel privacy-enhanced data sharing approach in which organizations estimate collaboration benefits without disclosing their datasets, organize into coalitions of allied organizations, and securely share data within these coalitions. We study how different partner selection strategies affect prediction accuracy by experimenting on a real-world dataset of 2 billion IP addresses and observe up to a 105% prediction improvement.
    Comment: This paper has been withdrawn as it has been superseded by arXiv:1502.0533
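
    The coalition-forming step lends itself to a toy sketch. The snippet below is a plain-text stand-in: it approximates collaboration benefit as the raw overlap between two organisations' observed attack-source sets and pairs partners greedily, whereas the paper performs this estimation without disclosing the datasets; all names here are hypothetical.

```python
from itertools import combinations

def greedy_coalitions(attack_logs):
    """Toy partner selection. `attack_logs` maps an organisation name to the set
    of attacker IPs it has observed. Benefit is approximated here by raw set
    overlap; the privacy-preserving estimation used in the paper is not modelled."""
    # Score every pair of organisations by the number of shared attack sources.
    scores = {
        (a, b): len(attack_logs[a] & attack_logs[b])
        for a, b in combinations(attack_logs, 2)
    }

    coalitions, assigned = [], set()
    # Greedily pair up the organisations with the largest estimated benefit.
    for (a, b), _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        if a in assigned or b in assigned:
            continue
        coalitions.append({a, b})
        assigned.update({a, b})

    # Organisations left unpaired keep working alone.
    coalitions.extend({org} for org in attack_logs if org not in assigned)
    return coalitions
```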

    Privacy-preserving collaborative recommendations based on random perturbations

    Get PDF
    © 2016 Elsevier Ltd. Collaborative recommender systems offer a solution to the information overload problem found in online environments such as e-commerce. However, the use of collaborative filtering, the most widely used recommendation method, gives rise to potential privacy issues, and the user ratings utilized in collaborative filtering systems to recommend products or services must be protected. The purpose of this research is to address the privacy concerns of collaborative filtering users while maintaining high accuracy of recommendations. This paper proposes a multi-level privacy-preserving method for collaborative filtering systems that perturbs each rating before it is submitted to the server. The perturbation method is based on multiple privacy levels, each with a different range of random values. Before each rating is submitted, the privacy level, and with it the perturbation range, is selected at random from a fixed set of privacy levels. The proposed privacy method has been evaluated experimentally, and the results show that user privacy can be protected at the cost of only a small decrease in utility, making the approach practical and effective.
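
    The per-rating perturbation step described here is straightforward to sketch. In the Python snippet below, the number of privacy levels, their perturbation ranges, and the uniform noise distribution are assumptions made for illustration, not the settings evaluated in the paper.

```python
import random

# Illustrative privacy levels: each level maps to a half-width of the
# perturbation range. Both the number of levels and the ranges are
# assumptions of this sketch.
PRIVACY_LEVELS = {1: 0.5, 2: 1.0, 3: 2.0}

def perturb_rating(rating, min_rating=1.0, max_rating=5.0):
    """Perturb a single rating on the client before it is sent to the server:
    draw a privacy level at random, add uniform noise from that level's range,
    and clip the result back onto the valid rating scale."""
    spread = PRIVACY_LEVELS[random.choice(list(PRIVACY_LEVELS))]
    noisy = rating + random.uniform(-spread, spread)
    return min(max(noisy, min_rating), max_rating)
```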

    A method for privacy-preserving collaborative filtering recommendations

    Get PDF
    With the continuous growth of the Internet and the progress of electronic commerce, the issues of product recommendation and privacy protection are becoming increasingly important. Recommender systems aim to solve the information overload problem by providing accurate recommendations of items to users. Collaborative filtering is the most widely used recommendation method for recommending items or users to other users in online environments. Additionally, collaborative filtering methods can be combined with a trust network, delivering to the user recommendations drawn both from a database of ratings and from users whom the requesting user knows and trusts. On the other hand, users have privacy concerns and are unwilling to submit the required information (e.g., ratings for products), which makes the recommender system unusable. In this paper, we propose (a) an approach to product recommendation that is based on collaborative filtering and combines the user's ratings network with their trust network to provide recommendations, and (b) "neighbourhood privacy", which employs a modified privacy-aware role-based access control model that can be applied to databases underlying recommender systems. Our proposed approach (1) protects user privacy with only a small decrease in the accuracy of the recommendations, (2) uses information from the trust network to increase the accuracy of the recommendations, and (3) provides privacy-preserving recommendations that are as accurate as those produced without the privacy-preserving approach or with the accuracy-increasing method applied.
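
    One way to picture how ratings and trust can be combined is the toy prediction function below. The linear blend with weight `w`, the function names, and the normalisation are assumptions of this sketch and not the paper's actual formulation.

```python
def predict_rating(user, item, ratings, similarity, trust, w=0.5):
    """Toy prediction blending rating similarity with trust.

    `ratings[v][i]` is v's rating of item i; `similarity(user, v)` is a
    rating-based similarity and `trust(user, v)` a score derived from the
    trust network. The linear blend with weight `w` is an assumption of
    this sketch."""
    numerator, denominator = 0.0, 0.0
    for v, items in ratings.items():
        if v == user or item not in items:
            continue
        # Weight each neighbour by a mix of rating similarity and trust.
        weight = w * similarity(user, v) + (1 - w) * trust(user, v)
        numerator += weight * items[item]
        denominator += abs(weight)
    return numerator / denominator if denominator else None
```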

    Fast Differentially Private Matrix Factorization

    Full text link
    Differentially private collaborative filtering is a challenging task in terms of both accuracy and speed. We present a simple algorithm that is provably differentially private while offering good performance, using a novel connection between differential privacy and Bayesian posterior sampling via Stochastic Gradient Langevin Dynamics. Owing to its simplicity, the algorithm lends itself to efficient implementation. By careful systems design and by exploiting the power law behavior of the data to maximize CPU cache bandwidth, we are able to generate 1024-dimensional models at a rate of 8.5 million recommendations per second on a single PC.
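
    To give a feel for the sampling mechanism mentioned here, the snippet below shows a single Stochastic Gradient Langevin Dynamics update for matrix factorization on one observed rating. It is only an illustration of SGLD under assumed hyperparameters; the calibration the paper uses to obtain its differential-privacy guarantee is not reproduced.

```python
import numpy as np

def sgld_step(U, V, i, j, r, lr=1e-3, reg=0.1, rng=None):
    """One SGLD update on a single observed rating r of user i for item j.
    U and V are the user and item factor matrices. The update is a regularised
    squared-error gradient step plus Gaussian noise whose variance equals the
    step size (the defining ingredient of SGLD); gradient clipping and noise
    calibration for differential privacy are omitted, and the hyperparameters
    are placeholders."""
    rng = rng or np.random.default_rng()
    err = r - U[i] @ V[j]
    grad_u = -err * V[j] + reg * U[i]
    grad_v = -err * U[i] + reg * V[j]
    U[i] = U[i] - lr * grad_u + rng.normal(0.0, np.sqrt(lr), size=U[i].shape)
    V[j] = V[j] - lr * grad_v + rng.normal(0.0, np.sqrt(lr), size=V[j].shape)
```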