7 research outputs found

    A privacy-protecting architecture for collaborative filtering via forgery and suppression of ratings

    Recommendation systems are information-filtering systems that help users deal with information overload. Unfortunately, current recommendation systems prompt serious privacy concerns. In this work, we propose an architecture that protects user privacy in such collaborative-filtering systems, in which users are profiled on the basis of their ratings. Our approach capitalizes on the combination of two perturbative techniques, namely the forgery and the suppression of ratings. In our scenario, users rate those items they have an opinion on. However, in order to avoid privacy risks, they may want to refrain from rating some of those items and/or rate some items that do not reflect their actual preferences. On the other hand, forgery and suppression may degrade the quality of the recommendation system. Motivated by this, we describe the implementation details of the proposed architecture and present a formulation of the optimal trade-off among privacy, forgery rate and suppression rate. Finally, we provide a numerical example that illustrates our formulation. Peer Reviewed. Postprint (published version).
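
    The trade-off among privacy, forgery rate and suppression rate can be made concrete with a small numerical sketch. The snippet below is an illustrative simplification under assumed profiles, not the paper's exact formulation: it builds the apparent rating histogram of a user who suppresses a fraction of genuine ratings and forges ratings drawn from the population's distribution, and measures privacy risk as the Kullback-Leibler divergence between the apparent profile and the population's profile, the criterion used in the companion works listed here. All profile values and rate choices are hypothetical.

    import numpy as np

    def kl_divergence(t, p):
        """Kullback-Leibler divergence D(t || p) in bits, ignoring zero-mass bins of t."""
        mask = t > 0
        return float(np.sum(t[mask] * np.log2(t[mask] / p[mask])))

    def apparent_profile(q, p, forgery_rate, suppression_rate):
        """Observed rating histogram after perturbation.

        q: user's genuine rating distribution over item categories
        p: population's rating distribution (forged ratings are drawn from it here)
        Simple mixing model: a fraction of genuine ratings is suppressed uniformly
        and forged ratings follow the population's distribution.
        """
        t = (1.0 - suppression_rate) * q + forgery_rate * p
        return t / t.sum()

    q = np.array([0.60, 0.25, 0.10, 0.05])   # hypothetical user profile
    p = np.array([0.25, 0.25, 0.25, 0.25])   # hypothetical population profile

    print("risk without perturbation:", kl_divergence(q, p))
    print("risk with forgery 0.2, suppression 0.3:",
          kl_divergence(apparent_profile(q, p, 0.2, 0.3), p))

    Raising either rate pushes the apparent profile toward the population's and the divergence toward zero, at the cost of recommendation quality, which is exactly the trade-off the architecture optimizes.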

    A privacy-protecting architecture for recommendation systems via the suppression of ratings

    Recommendation systems are information-filtering systems that help users deal with information overload. Unfortunately, current recommendation systems prompt serious privacy concerns. In this work, we propose an architecture that enables users to enhance their privacy in those systems that profile users on the basis of the items rated. Our approach capitalizes on a conceptually simple perturbative technique, namely the suppression of ratings. In our scenario, users rate those items they have an opinion on. However, in order to avoid being accurately profiled, they may want to refrain from rating certain items. Consequently, this technique protects user privacy to a certain extent, but at the cost of a degradation in the accuracy of the recommendation. We measure privacy risk as the Kullback-Leibler divergence between the user's and the population's rating distributions, a privacy criterion that we proposed in previous work. The justification of such a criterion is our second contribution. Concretely, we thoroughly interpret it by elaborating on the intimate connection between the celebrated method of entropy maximization and the use of entropies and divergences as measures of privacy. The ultimate purpose of this justification is to attempt to bridge the gap between the privacy and the information-theoretic communities by substantially adapting some technicalities of our original work to reach a wider audience not intimately familiar with information theory and the method of types. Lastly, we present a formulation of the optimal trade-off between privacy and suppression rate, which allows us to formally specify one of the functional blocks of the proposed architecture. Peer Reviewed. Preprint.
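
    Since the privacy criterion is the divergence between the user's apparent profile and the population's, a natural question is which ratings to suppress for a given suppression rate. The sketch below is a simple heuristic illustration of that idea (take mass from the categories where the user is over-represented), not the optimal policy derived in the paper; the profiles are hypothetical.

    import numpy as np

    def kl_divergence(t, p):
        """Kullback-Leibler divergence D(t || p) in bits."""
        mask = t > 0
        return float(np.sum(t[mask] * np.log2(t[mask] / p[mask])))

    def suppress(q, p, sigma):
        """Remove up to a fraction sigma of the user's rating mass, taken from
        the categories where the user exceeds the population (q_i > p_i)."""
        excess = np.maximum(q - p, 0.0)
        if excess.sum() == 0.0:
            return q
        budget = min(sigma, excess.sum())
        t = q - excess / excess.sum() * budget
        return t / t.sum()

    q = np.array([0.50, 0.30, 0.15, 0.05])   # hypothetical user profile
    p = np.array([0.25, 0.25, 0.25, 0.25])   # hypothetical population profile
    for sigma in (0.0, 0.1, 0.2, 0.3):
        print(f"suppression rate {sigma:.1f}: risk {kl_divergence(suppress(q, p, sigma), p):.3f} bits")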

    Zero-Knowledge Proof-of-Identity: Sybil-Resistant, Anonymous Authentication on Permissionless Blockchains and Incentive Compatible, Strictly Dominant Cryptocurrencies

    Zero-Knowledge Proof-of-Identity from trusted public certificates (e.g., national identity cards and/or ePassports; eSIM) is introduced here to permissionless blockchains in order to remove the inefficiencies of Sybil-resistant mechanisms such as Proof-of-Work (i.e., high energy and environmental costs) and Proof-of-Stake (i.e., capital hoarding and lower transaction volume). The proposed solution effectively limits the number of mining nodes a single individual would be able to run while keeping membership open to everyone, circumventing the impossibility of full decentralization and the blockchain scalability trilemma when instantiated on a blockchain with a consensus protocol based on the cryptographic random selection of nodes. Resistance to collusion is also considered. Solving one of the most pressing problems in blockchains, a zk-PoI cryptocurrency is proved to have the following advantageous properties:
    - an incentive-compatible protocol for the issuing of cryptocurrency rewards based on a unique Nash equilibrium;
    - strict domination of mining over all other PoW/PoS cryptocurrencies, so that the zk-PoI cryptocurrency becoming the preferred choice of miners is proved to be a Nash equilibrium and the evolutionarily stable strategy;
    - PoW/PoS cryptocurrencies are condemned to pay the Price of Crypto-Anarchy, redeemed by the optimal efficiency of zk-PoI as it implements the social optimum;
    - the circulation of a zk-PoI cryptocurrency Pareto-dominates other PoW/PoS cryptocurrencies;
    - the network effects arising from the social networks inherent to national identity cards and ePassports dominate those of PoW/PoS cryptocurrencies;
    - the lower costs of its infrastructure imply the existence of a unique equilibrium where it dominates other forms of payment.
    Comment: 2.1: Proof-of-Personhood Considered Harmful (and Illegal); 4.1.5: Absence of Active Authentication; 4.2.6: Absence of Active Authentication; 4.2.7: Removing Single Points of Failure; 4.3.2: Combining with Non-Zero-Knowledge Authentication; 4.4: Circumventing the Impossibility of Full Decentralization
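
    The Sybil-limiting idea (at most one mining node per certified individual, with membership still open to anyone holding a trusted certificate) can be sketched as follows. This is an illustration of the bookkeeping only, not the paper's construction: in the actual scheme the registrant proves in zero knowledge that a valid certificate underlies the token, so the certificate itself is never revealed; here a plain salted hash stands in for that proof, and all identifiers are hypothetical.

    import hashlib

    EPOCH_SALT = b"zk-poi-epoch-example"       # assumed per-epoch domain separator

    def nullifier(certificate_id: str) -> str:
        """One deterministic, certificate-bound token per person and epoch."""
        return hashlib.sha256(EPOCH_SALT + certificate_id.encode()).hexdigest()

    class MinerRegistry:
        """Admits at most one mining node per certificate-derived nullifier."""
        def __init__(self):
            self._nodes = {}                   # nullifier -> node public key

        def register(self, certificate_id: str, node_pubkey: str) -> bool:
            n = nullifier(certificate_id)
            if n in self._nodes:
                return False                   # second node by the same person: rejected
            self._nodes[n] = node_pubkey
            return True

    registry = MinerRegistry()
    print(registry.register("ID-123", "pk-A"))   # True  - first node admitted
    print(registry.register("ID-123", "pk-B"))   # False - Sybil attempt rejected
    print(registry.register("ID-456", "pk-C"))   # True  - a different individual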

    Privacy protection of user profiles in personalized information systems

    In recent times we are witnessing the emergence of a wide variety of information systems that tailor the information-exchange functionality to meet the specific interests of their users. Most of these personalized information systems capitalize on, or lend themselves to, the construction of profiles, either directly declared by a user or inferred from past activity. The ability of these systems to profile users is therefore what enables such intelligent functionality, but at the same time it is the source of serious privacy concerns. Although there exists a broad range of privacy-enhancing technologies aimed at mitigating many of those concerns, the fact is that their use is far from widespread. The main reason is that there is a certain ambiguity about these technologies and their effectiveness in terms of privacy protection. Besides, since these technologies normally come at the expense of system functionality and utility, it is challenging to assess whether the gain in privacy compensates for the cost in utility. Assessing the privacy provided by a privacy-enhancing technology is thus crucial to determine its overall benefit, to compare its effectiveness with other technologies, and ultimately to optimize it in terms of the privacy-utility trade-off posed. Considerable effort has consequently been devoted to investigating both privacy and utility metrics. However, most of these metrics are specific to concrete systems and adversary models, and hence are difficult to generalize or translate to other contexts. Moreover, in applications involving user profiles there are only a few proposals for the evaluation of privacy, and those that exist either are not appropriately justified or fail to justify the choice of metric.
    The first part of this thesis approaches the fundamental problem of quantifying user privacy. Firstly, we present a theoretical framework for privacy-preserving systems, endowed with a unifying view of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. Our theoretical analysis shows that numerous privacy metrics emerging from a broad spectrum of applications are bijectively related to this estimation error, which permits interpreting and comparing these metrics under a common perspective. Secondly, we tackle the issue of measuring privacy in the enthralling application of personalized information systems. Specifically, we propose two information-theoretic quantities as measures of the privacy of user profiles, and justify these metrics by building on Jaynes' rationale behind entropy-maximization methods and on fundamental results from the method of types and hypothesis testing.
    Equipped with quantifiable measures of privacy and utility, the second part of this thesis investigates privacy-enhancing, data-perturbative mechanisms and architectures for two important classes of personalized information systems. In particular, we study the elimination of tags in semantic-Web applications, and the combination of the forgery and the suppression of ratings in personalized recommendation systems. We design such mechanisms to achieve the optimal privacy-utility trade-off, in the sense of maximizing privacy for a desired utility, or vice versa. We proceed in a systematic fashion by drawing upon the methodology of multiobjective optimization. Our theoretical analysis finds a closed-form solution to the problem of optimal tag suppression and to the problem of optimal forgery and suppression of ratings. In addition, we provide an extensive theoretical characterization of the trade-off between the contrasting aspects of privacy and utility. Experimental results in real-world applications show the effectiveness of our mechanisms in terms of privacy protection, system functionality and data utility.
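
    The multiobjective methodology can be illustrated with a small scalarization that traces the privacy-utility trade-off numerically. The sketch below is not the thesis's closed-form solution: privacy risk is taken as the KL divergence of the perturbed profile from the population's, utility loss is identified with the suppression rate, and the profiles and weights are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    q = np.array([0.55, 0.25, 0.15, 0.05])   # hypothetical genuine user profile
    p = np.array([0.25, 0.25, 0.25, 0.25])   # hypothetical population profile

    def risk(s):
        """KL divergence (bits) of the profile after suppressing mass s_i per category."""
        t = np.maximum(q - s, 1e-12)
        t = t / t.sum()
        return float(np.sum(t * np.log2(t / p)))

    for lam in (0.1, 0.5, 0.9):              # weight on privacy versus utility
        res = minimize(lambda s: lam * risk(s) + (1 - lam) * s.sum(),
                       x0=np.zeros_like(q),
                       bounds=[(0.0, qi) for qi in q])
        print(f"weight {lam:.1f}: suppression rate {res.x.sum():.2f}, risk {risk(res.x):.3f} bits")

    Sweeping the weight traces an approximation of the Pareto frontier between privacy and utility, which is the object the closed-form analyses characterize exactly.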

    The SPARTA pseudonym and authorization system

    This paper deals with privacy-preserving (pseudonymized) access to a service resource. In such a scenario, two opposite needs seem to emerge. On one side, the service provider may want to control, in the first place, the user accessing its resources, i.e., without being forced to delegate the issuing of access permissions to third parties to meet privacy requirements. On the other side, it should be technically possible to trace back the real identity of a user upon dishonest behavior, and of course, this must necessarily be accomplished by an external authority distinct from the provider itself. The framework described in this paper aims at coping with these two opposite needs. This is accomplished through (i) a distributed third-party-based infrastructure devised to assign and manage pseudonym certificates, decoupled from (ii) a two-party procedure devised to bind an authorization permission to a pseudonym certificate with no third-party involvement. The latter procedure is based on a novel blind-signature approach which allows the provider to blindly verify, at service subscription time, that the user possesses the private key of the still-undisclosed pseudonym certificate, thus avoiding transferability of the authorization permission.
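
    The two-party binding step relies on a blind signature. Its core mechanics can be shown with a textbook RSA blind signature (toy parameters, purely illustrative, and not the paper's actual scheme, which additionally proves possession of the private key of the undisclosed pseudonym certificate): the provider signs a blinded value without ever seeing it, and the user unblinds the result into a valid signature.

    import math
    import secrets

    p, q = 61, 53                        # toy primes; never use such sizes in practice
    n, e = p * q, 17                     # public modulus and exponent
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

    def blind(m):
        """Blind the message with a random factor coprime to the modulus."""
        while True:
            r = secrets.randbelow(n - 2) + 2
            if math.gcd(r, n) == 1:
                return (m * pow(r, e, n)) % n, r   # blinded value and blinding factor

    def sign(blinded):                   # the signer never learns m
        return pow(blinded, d, n)

    def unblind(s_blinded, r):
        return (s_blinded * pow(r, -1, n)) % n

    m = 123                              # stand-in for the value being certified
    blinded, r = blind(m)
    signature = unblind(sign(blinded), r)
    assert pow(signature, e, n) == m     # verification: signature^e == m (mod n)
    print("blind signature verifies")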