
    Privacy and machine learning-based artificial intelligence: Philosophical, legal, and technical investigations

    This dissertation explores how machine learning-based artificial intelligence (ML-based AI) impacts information privacy, particularly analysing how inference as a process associated with ML affects information privacy. Furthermore, this research highlights the limitations of the General Data Protection Regulation (GDPR) in addressing issues concerning inference, and suggests design requirements to embed the value of privacy into systems.

    In its philosophical investigation, this dissertation distinguishes between various components and activities related to inference, including inferred information, AI models’ performance, and accessing anonymous information uncovered by ML models. Two aspects of privacy are considered: the descriptive, which pertains to its definition, and the normative, which relates to its value and the right to privacy. The investigation explores how inferred information affects the definition of privacy, the influence of AI models’ performance on the social value of privacy, and the implications of accessing information uncovered by ML models for group privacy, more precisely the group right to privacy.

    In its legal investigation, this dissertation examines the GDPR’s effectiveness in addressing privacy issues related to information inferred about or ascribed to a person as a member of a group, as well as information derived from inference about a group as a whole.

    In its technical investigation, this research proposes design requirements to embed the social value of privacy into systems. It develops a value hierarchy for privacy in which the highest layer examines the relationships between privacy and social autonomy, the middle layer identifies norms regarding promoting or protecting social autonomy, and the lowest layer translates those norms into design requirements.
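
    Read as a data structure, the value hierarchy is a three-layer tree leading from a value, through norms, down to design requirements. The short Python sketch below encodes that shape; the example norm and requirements are our own placeholders, not the dissertation's actual content.

        # A minimal encoding of a value hierarchy: value -> norms -> design requirements.
        # The concrete entries are illustrative placeholders only.
        value_hierarchy = {
            'value': 'privacy (as supporting social autonomy)',
            'norms': [
                {
                    'norm': 'limit what can be inferred about groups',
                    'design_requirements': [
                        'restrict model outputs to the stated purpose',
                        'document and review the inferences a model enables',
                    ],
                },
            ],
        }

        # Walk the hierarchy from norms down to requirements.
        for norm in value_hierarchy['norms']:
            print(norm['norm'])
            for requirement in norm['design_requirements']:
                print('  -', requirement)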

    Publishing Microdata with a Robust Privacy Guarantee

    Today, the publication of microdata poses a privacy threat. A vast body of research has striven to define the privacy condition that microdata should satisfy before release, and to devise algorithms that anonymize the data so as to achieve this condition. Yet no method proposed to date explicitly bounds the percentage of information an adversary gains after seeing the published data for each sensitive value therein. This paper introduces beta-likeness, an appropriately robust privacy model for microdata anonymization, along with two anonymization schemes designed therefor, one based on generalization and the other on perturbation. Our model postulates that an adversary's confidence in the likelihood of a certain sensitive-attribute (SA) value should not increase, in relative difference terms, by more than a predefined threshold. Our techniques aim to satisfy a given beta threshold with little information loss. We experimentally demonstrate that (i) our model provides an effective privacy guarantee in a way that predecessor models cannot, (ii) our generalization scheme is more effective and efficient in its task than methods adapting algorithms for the k-anonymity model, and (iii) our perturbation method outperforms a baseline approach. Moreover, we discuss in detail the resistance of our model and methods to attacks proposed in previous research.
    Comment: VLDB 2012
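
    As an illustration of the model (not of the paper's anonymization algorithms), the following Python sketch checks whether one equivalence class of an anonymized table satisfies the basic beta-likeness condition, taking the table-wide SA distribution as the adversary's prior; the function names and toy data are ours.

        from collections import Counter

        def distribution(values):
            """Relative frequency of each sensitive-attribute (SA) value."""
            counts = Counter(values)
            total = len(values)
            return {v: c / total for v, c in counts.items()}

        def satisfies_beta_likeness(table_sa, class_sa, beta):
            """Check basic beta-likeness for a single equivalence class.

            table_sa: SA values of the whole table (prior p_i)
            class_sa: SA values inside the equivalence class (posterior q_i)
            beta:     maximum allowed relative gain (q_i - p_i) / p_i
            """
            prior = distribution(table_sa)
            posterior = distribution(class_sa)
            for value, q in posterior.items():
                p = prior[value]          # every class value also occurs in the table
                if (q - p) / p > beta:    # adversary's relative confidence gain
                    return False
            return True

        # Toy example: the class over-represents 'flu' relative to the table,
        # so the relative gain (2/3 - 0.2) / 0.2 ~= 2.33 exceeds beta = 1.0.
        table = ['flu'] * 2 + ['hiv'] * 2 + ['cold'] * 6
        group = ['flu', 'flu', 'cold']
        print(satisfies_beta_likeness(table, group, beta=1.0))  # False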

    Privacy as a Part of the Preference Structure of Users’ App Buying Decision

    Information privacy and personal data in information systems are referred to as the ‘new oil’ of the 21st century. The mass adoption of smart mobile devices, sensor-enabled smart IoT devices, and mobile applications provides virtually endless possibilities for gathering users’ personal information. Previous research suggests that users attribute very little monetary value to their information privacy. The current paper assumes that users are unable to monetize their value of privacy due to its abstract nature and non-transparent context. By defining privacy as a crucial product attribute of mobile applications, the authors provide an approach to measuring the importance of privacy as part of users’ preference structure. The results of the conducted choice-based conjoint analysis emphasize the high relevance of privacy in users’ preference structure when downloading an app and provide an interesting contribution to theory and practice.
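
    To make the method concrete, here is a minimal Python sketch of how attribute importances are typically derived from the part-worth utilities a choice-based conjoint analysis estimates; the attributes, levels, and utility numbers are invented for illustration and are not the study's data.

        # Hypothetical part-worth utilities for three app attributes; in a real
        # study these come from a choice model fitted to respondents' choices.
        part_worths = {
            'price':   {'free': 0.9, '0.99 EUR': 0.2, '2.99 EUR': -1.1},
            'rating':  {'5 stars': 0.6, '3 stars': -0.6},
            'privacy': {'no data collected': 1.2, 'data shared with third parties': -1.4},
        }

        def relative_importance(part_worths):
            """Importance of an attribute = its utility range / sum of all ranges."""
            ranges = {a: max(u.values()) - min(u.values()) for a, u in part_worths.items()}
            total = sum(ranges.values())
            return {a: r / total for a, r in ranges.items()}

        for attribute, share in relative_importance(part_worths).items():
            print(f'{attribute}: {share:.0%}')
        # In this toy example, privacy comes out as the most important attribute (45%).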

    A Theory of Pricing Private Data

    Personal data has value both to its owner and to institutions that would like to analyze it. Privacy mechanisms protect the owner's data while releasing noisy versions of aggregate query results to analysts. But such strict protections of individuals' data have not yet found wide use in practice. Instead, Internet companies, for example, commonly provide free services in return for valuable sensitive information from users, which they exploit and sometimes sell to third parties. As awareness of the value of personal data increases, so does the drive to compensate the end user for her private information. The idea of monetizing private data can improve over the narrower view of hiding private data, since it empowers individuals to control their data through financial means. In this paper we propose a theoretical framework for assigning prices to noisy query answers, as a function of their accuracy, and for dividing the price amongst data owners who deserve compensation for their loss of privacy. Our framework adopts and extends key principles from both differential privacy and query pricing in data markets. We identify essential properties of the price function and micro-payments, and characterize valid solutions.
    Comment: 25 pages, 2 figures. Best Paper Award, to appear in the 16th International Conference on Database Theory (ICDT), 2013
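
    The paper characterizes valid price functions abstractly; the Python sketch below shows one simple instance under our own assumptions: a price that decreases with the variance of the noisy answer, and micro-payments split among owners in proportion to their privacy loss epsilon_i. All names and numbers are illustrative.

        def query_price(variance, scale=100.0):
            """Charge more for more accurate (lower-variance) answers.
            A valid price function must not increase with variance;
            price = scale / variance is one simple such choice (our assumption)."""
            return scale / variance

        def micro_payments(total_price, epsilons, platform_cut=0.1):
            """Split the analyst's payment among data owners in proportion
            to their privacy loss epsilon_i (an illustrative allocation rule)."""
            pool = total_price * (1 - platform_cut)
            total_eps = sum(epsilons.values())
            return {owner: pool * eps / total_eps for owner, eps in epsilons.items()}

        price = query_price(variance=4.0)                   # 25.0
        losses = {'alice': 0.5, 'bob': 0.25, 'carol': 0.25}
        print(micro_payments(price, losses))                # alice receives half the pool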

    Secure and Privacy-Preserving Average Consensus

    Average consensus is fundamental for distributed systems, since it underpins key functionalities of such systems, ranging from distributed information fusion and decision-making to decentralized control. In order to reach an agreement, existing average consensus algorithms require each agent to exchange explicit state information with its neighbors. This leads to the disclosure of private state information, which is undesirable in cases where privacy is of concern. In this paper, we propose a novel approach that enables secure and privacy-preserving average consensus in a decentralized architecture in the absence of any trusted third parties. By leveraging homomorphic cryptography, our approach can guarantee consensus to the exact value in a deterministic manner. The proposed approach is lightweight in computation and communication, and applicable to time-varying interaction topologies. A hardware implementation is presented to demonstrate the capability of our approach.
    Comment: 7 pages, 4 figures, paper is accepted to CPS-SPC'17
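
    The core homomorphic trick can be sketched in a few lines of Python with the python-paillier library (pip install phe): agent i publishes an encryption of -x_i under its own key, each neighbor j adds its state x_j inside the ciphertext, and only i can decrypt the resulting differences. This is a simplified illustration on a complete graph; the paper's actual protocol is more involved (for example, it must also handle time-varying topologies).

        from phe import paillier

        # One Paillier key pair per agent, generated once.
        keys = [paillier.generate_paillier_keypair(n_length=1024) for _ in range(3)]

        def consensus_step(states, epsilon=0.25):
            """One synchronous round of average consensus on a complete graph.
            Agent i learns only the differences x_j - x_i, never a neighbor's x_j."""
            n = len(states)
            new_states = []
            for i, x_i in enumerate(states):
                pub, priv = keys[i]
                c = pub.encrypt(-x_i)                                # Enc_i(-x_i), published by i
                diffs = [c + states[j] for j in range(n) if j != i]  # Enc_i(x_j - x_i), computed by each j
                gradient = sum(priv.decrypt(d) for d in diffs)       # only i holds the private key
                new_states.append(x_i + epsilon * gradient)
            return new_states

        x = [1.0, 5.0, 9.0]
        for _ in range(20):
            x = consensus_step(x)
        print(x)  # all states converge to the average, 5.0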