    Privacy-Aware Data Acquisition under Data Similarity in Regression Markets

    Data markets facilitate decentralized data exchange for applications such as prediction, learning, or inference. The design of these markets is challenged by varying privacy preferences as well as data similarity among data owners. Related works have often overlooked how data similarity impacts pricing and data value through statistical information leakage. We demonstrate that data similarity and privacy preferences are integral to market design and propose a query-response protocol using local differential privacy for a two-party data acquisition mechanism. In our regression data market model, we analyze strategic interactions between privacy-aware owners and the learner as a Stackelberg game over the asked price and privacy factor. Finally, we numerically evaluate how data similarity affects market participation and traded data value. Comment: Submitted to IEEE Transactions on Neural Networks and Learning Systems (submission version)
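
    As a concrete illustration of the query-response step this abstract describes, the sketch below shows a data owner perturbing a scalar response with the Laplace mechanism under local differential privacy before selling it. The function name, unit sensitivity, and example numbers are illustrative assumptions, not the paper's protocol.

    ```python
    import numpy as np

    def ldp_response(true_value: float, epsilon: float, sensitivity: float = 1.0) -> float:
        """Perturb one query response with the Laplace mechanism so that the
        data owner retains epsilon-local differential privacy."""
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # A stricter privacy factor (smaller epsilon) yields a noisier, hence
    # less valuable, response for the learner.
    print(ldp_response(3.2, epsilon=0.5))
    print(ldp_response(3.2, epsilon=5.0))
    ```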

    Privacy In Multi-Agent And Dynamical Systems

    The use of private data is pivotal for numerous services, including location-based ones, collaborative recommender systems, and social networks. Despite the utility these services provide, the usage of private data raises privacy concerns for their owners. Noise-injecting techniques, such as differential privacy, address these concerns by adding artificial noise such that an adversary with access to the published response cannot confidently infer the private data. Particularly in multi-agent and dynamical environments, privacy-preserving techniques need to be expressive enough to capture time-varying privacy needs, multiple data owners, and multiple data users. Current work in differential privacy assumes that a single response gets published and a single predefined privacy guarantee is provided. This work relaxes these assumptions by providing several problem formulations and their approaches. In the setting of a social network, a data owner has different privacy needs against different users. We design a coalition-free privacy-preserving mechanism that allows a data owner to diffuse their private data over a network. We also formulate the problem of multiple data owners that provide their data to multiple data users. Also, for time-varying privacy needs, we prove that, for a class of existing privacy-preserving mechanisms, it is possible to effectively relax privacy constraints gradually. Additionally, we provide a privacy-aware mechanism for time-varying private data, where we wish to protect only its current value. Finally, in the context of location-based services, we provide a mechanism where the strength of the privacy guarantees varies with the local population density. These contributions increase the applicability of differential privacy and set future directions for more flexible and expressive privacy guarantees.
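
    The last contribution, density-dependent location privacy, lends itself to a short sketch. The planar Laplace sampler below is the standard geo-indistinguishability noise; the rule scaling epsilon with local density (denser areas get a weaker, larger-epsilon guarantee, since the crowd itself hides the user) is an assumption made for illustration, not the dissertation's actual mapping.

    ```python
    import numpy as np

    def release_location(x, y, base_epsilon, density, ref_density=1.0):
        """Perturb a 2-D location with planar Laplace noise whose privacy
        level scales with local population density (assumed rule)."""
        epsilon = base_epsilon * density / ref_density
        # Planar Laplace: radius ~ Gamma(shape=2, scale=1/epsilon), uniform angle.
        r = np.random.gamma(shape=2.0, scale=1.0 / epsilon)
        theta = np.random.uniform(0.0, 2.0 * np.pi)
        return x + r * np.cos(theta), y + r * np.sin(theta)
    ```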

    Low-Rank Mechanism: Optimizing Batch Queries under Differential Privacy

    Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result, such that it is provably hard for the adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results while satisfying the privacy guarantees. Previous work, notably the matrix mechanism, has suggested that processing a batch of correlated queries as a whole can achieve considerable accuracy gains compared to answering them individually. However, as we point out in this paper, the matrix mechanism is mainly of theoretical interest; in particular, several inherent problems in its design limit its accuracy in practice, which almost never exceeds that of naive methods. In fact, we are not aware of any existing solution that can effectively optimize a query batch under differential privacy. Motivated by this, we propose the Low-Rank Mechanism (LRM), the first practical differentially private technique for answering batch queries with high accuracy, based on a low-rank approximation of the workload matrix. We prove that the accuracy provided by LRM is close to the theoretical lower bound for any mechanism to answer a batch of queries under differential privacy. Extensive experiments using real data demonstrate that LRM consistently outperforms state-of-the-art query processing solutions under differential privacy, by large margins. Comment: VLDB201
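
    A minimal sketch of the low-rank idea follows: factor the workload matrix, add Laplace noise to the few intermediate answers, and map them back through the factor. Note that LRM optimizes the factorization itself, whereas the sketch uses a plain truncated SVD as a stand-in, so this illustrates the principle rather than the paper's algorithm.

    ```python
    import numpy as np

    def lrm_answer(W, x, epsilon, rank):
        """Answer the query batch W @ x under epsilon-differential privacy
        via a low-rank factorization W ~= B @ L: noise is added to the
        `rank` intermediate answers L @ x instead of to every query."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        B = U[:, :rank] * s[:rank]          # m x rank
        L = Vt[:rank, :]                    # rank x n
        # L1 sensitivity of x -> L @ x for counting queries: max column L1-norm.
        sensitivity = np.abs(L).sum(axis=0).max()
        noise = np.random.laplace(scale=sensitivity / epsilon, size=rank)
        return B @ (L @ x + noise)
    ```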

    Social-Aware Clustered Federated Learning with Customized Privacy Preservation

    A key feature of federated learning (FL) is to preserve the data privacy of end users. However, potential privacy leakage still exists when exchanging gradients under FL. As a result, recent research often explores differential privacy (DP) approaches that add noise to the computed results to address privacy concerns at low overhead, which, however, degrades model performance. In this paper, we strike a balance between data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading to the cloud for global aggregation. By mixing model updates within a social group, adversaries can only eavesdrop on the combined social-layer results, not the private updates of individuals. We unfold the design of SCFL in three steps. i) Stable social cluster formation: considering users' heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. ii) Differentiated trust-privacy mapping: for clusters with low mutual trust, we design a customizable privacy preservation mechanism that adaptively sanitizes participants' model updates depending on social trust degrees. iii) Distributed convergence: a distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on the Facebook network and the MNIST/CIFAR-10 datasets validate that SCFL effectively enhances learning utility, improves user payoff, and enforces customizable privacy protection.
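
    The social-layer mixing step, plus a trust-dependent sanitization in the spirit of step ii), can be sketched as below. The clipping bound, trust threshold, and noise scale are illustrative assumptions rather than SCFL's actual trust-privacy mapping.

    ```python
    import numpy as np

    def cluster_upload(updates, trust, epsilon=1.0, clip=1.0):
        """Mix the raw model updates of one social cluster before upload, so
        the cloud only ever sees the sum; low-trust clusters additionally
        sanitize the mixture with Gaussian noise before it leaves the cluster."""
        clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
        mixed = np.sum(clipped, axis=0)
        if trust < 0.5:  # assumed trust threshold and noise scale
            mixed = mixed + np.random.normal(scale=clip / epsilon, size=mixed.shape)
        return mixed
    ```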

    Preserving differential privacy under finite-precision semantics

    The approximation introduced by finite-precision representation of continuous data can induce arbitrarily large information leaks even when the computation using exact semantics is secure. Such leakage can thus undermine design efforts aimed at protecting sensitive information. We focus here on differential privacy, an approach to privacy that emerged from the area of statistical databases and is now widely applied in other domains as well. In this approach, privacy is protected by adding noise to the values correlated with the private data. The typical mechanisms used to achieve differential privacy have been proved correct in the ideal case in which computations are made using infinite-precision semantics. In this paper, we analyze the situation at the implementation level, where the semantics is necessarily limited by finite precision, i.e., the representation of real numbers and the operations on them are rounded according to some level of precision. We show that in general there are violations of the differential privacy property, and we study the conditions under which we can still guarantee a limited (but, arguably, acceptable) variant of the property, under only a minor degradation of the privacy level. Finally, we illustrate our results on two examples: the standard Laplacian mechanism commonly used in differential privacy, and a bivariate version of it recently introduced in the setting of privacy-aware geolocation.
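
    The flavor of the problem, and of one common repair, fits in a few lines: sampling Laplace noise in floating point leaves exploitable gaps in the output distribution that exact-real analysis ignores, and snapping the noisy result to a coarse fixed grid is a simple mitigation in the same spirit as the paper's finite-precision analysis. The grid size below is an arbitrary choice, and a production-grade fix (e.g., a full snapping mechanism) involves additional clamping steps omitted here.

    ```python
    import numpy as np

    def snapped_laplace(value, epsilon, grid=2.0 ** -16):
        """Add Laplace noise in floating point, then snap the output to a
        fixed grid; the coarse output alphabet closes the gaps left by
        finite-precision sampling, at a small cost in accuracy."""
        noisy = value + np.random.laplace(scale=1.0 / epsilon)
        return grid * np.round(noisy / grid)
    ```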

    Privacy Preservation of Semantic Trajectory Databases using Query Auditing Techniques

    Existing approaches that publish anonymized spatiotemporal traces of mobile humans deal with the preservation of privacy operating under the assumption that most of the information in the original dataset can be disclosed without causing any privacy violation. However, an alternative strategy considers that data stays in-house at the hosting organization, and privacy-preserving mobility data management systems are in charge of privacy-aware sharing of the mobility data. Furthermore, human trajectories are nowadays enriched with semantic information by using background geographic information and/or user-provided data via location-based social media. This new type of representation of personal movements, as sequences of places visited by a person during his/her movement, poses even greater privacy violation threats. To facilitate privacy-aware sharing of mobility data, we design a semantic-aware MOD engine where all potential privacy breaches that may occur when answering a query are prevented through an auditing mechanism. Moreover, in order to improve the user friendliness and system functionality of the aforementioned engine, we propose the Zoom-Out algorithm as a distinct component, whose objective is to modify an initial query that cannot be answered due to a privacy violation into the 'nearest' query that can possibly be answered with 'safety'.
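
    The interplay between auditing and Zoom-Out suggests a simple control loop, sketched below with assumed callables (`violates_privacy`, `zoom_out`, `database.answer`); the real engine's breach detection and query-coarsening logic are of course far more involved.

    ```python
    def audited_answer(query, database, violates_privacy, zoom_out, max_steps=5):
        """Audit a query before execution; if answering it would breach
        privacy, repeatedly coarsen it ('zoom out') toward the nearest
        query that can be answered safely."""
        for _ in range(max_steps):
            if not violates_privacy(query, database):
                return database.answer(query)
            query = zoom_out(query)
        return None  # no safe variant found within the step budget
    ```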

    Privacy-Preserving Image Sharing via Sparsifying Layers on Convolutional Groups

    We propose a practical framework to address the problem of privacy-aware image sharing in large-scale setups. We argue that, while compactness is always desired at scale, this need is more severe when trying to furthermore protect privacy-sensitive content. We therefore encode images such that, on the one hand, representations are stored in the public domain without paying the huge cost of privacy protection, being ambiguated and hence leaking no discernible content from the images unless the attacker has access to a combinatorially expensive guessing mechanism. On the other hand, authorized users are provided with very compact keys that can easily be kept secure and used to disambiguate and faithfully reconstruct the corresponding access-granted images. We achieve this with a convolutional autoencoder of our design, where feature maps are passed independently through sparsifying transformations, providing multiple compact codes, each responsible for reconstructing different attributes of the image. The framework is tested on a large-scale database of images, with a public implementation available. Comment: Accepted as an oral presentation for ICASSP 202
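
    One way to picture a sparsifying transformation is a per-map top-k selection, as in the sketch below: the few retained activations form a compact code per feature map. This illustrates the general idea only; the paper's scheme additionally ambiguates the publicly stored part so that it leaks no discernible content without the key.

    ```python
    import numpy as np

    def sparsify_map(fmap: np.ndarray, k: int):
        """Sparsify one feature map by keeping its k strongest activations;
        the (indices, values) pair is the compact code for this map."""
        flat = fmap.ravel()
        idx = np.argsort(np.abs(flat))[-k:]          # k strongest activations
        code = (idx, flat[idx])                      # compact per-map code
        sparse = np.zeros_like(flat)
        sparse[idx] = flat[idx]
        return sparse.reshape(fmap.shape), code
    ```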

    Wireless Network Design and Optimization: From Social Awareness to Security

    A principal goal of this dissertation is to study wireless network design and optimization with a focus on two perspectives: 1) socially-aware mobile networking and computing; 2) security and privacy in wireless networking. Under this common theme, the dissertation is broadly organized into three parts. The first part studies socially-aware mobile networking and computing. First, it studies random access control and power control under a social group utility maximization (SGUM) framework. The socially-aware Nash equilibria (SNEs) are derived and analyzed. Then, it studies mobile crowdsensing under an incentive mechanism that exploits social trust assisted reciprocity (STAR). The efficacy of the STAR mechanism is thoroughly investigated. Next, it studies mobile users' data usage behaviors under the impact of social services and the wireless operator's pricing. Based on a two-stage Stackelberg game formulation, the user demand equilibrium (UDE) is analyzed in Stage II and the optimal pricing strategy is developed in Stage I, as sketched after this abstract. Last, it studies opportunistic cooperative networking under an optimal stopping framework with two-level decision-making. For both cases, with or without dedicated relays, the optimal relaying strategies are derived and analyzed. The second part studies radar sensor network coverage for physical security. First, it studies the placement of bistatic radar (BR) sensor networks for barrier coverage. The optimality of line-based placement is analyzed, and the optimal placement of BRs on a line segment is characterized. Then, it studies the coverage of radar sensor networks that exploit the Doppler effect. Based on a Doppler coverage model, an efficient method is devised to characterize Doppler-covered regions, and an algorithm is developed to find the minimum radar density required for Doppler coverage. The third part studies cyber security and privacy in socially-aware networking and computing. First, it studies random access control, cooperative jamming, and spectrum access under an extended SGUM framework that incorporates negative social ties. The SNEs are derived and analyzed. Then, it studies pseudonym change for personalized location privacy under the SGUM framework. The SNEs are analyzed, and an efficient algorithm is developed to find an SNE with desirable properties. Doctoral Dissertation, Electrical Engineering, 201
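
    The two-stage pricing game mentioned above can be solved by backward induction, as in this toy sketch: `demand_at` stands in for the Stage II user demand equilibrium (here an assumed linear form), and the grid search over prices replaces the dissertation's analytical Stage I solution.

    ```python
    import numpy as np

    def stackelberg_price(demand_at, prices=np.linspace(0.1, 10.0, 200)):
        """Backward induction: for each candidate price, Stage II yields the
        users' equilibrium demand; Stage I picks the revenue-maximizing price."""
        revenues = [p * demand_at(p) for p in prices]
        best = int(np.argmax(revenues))
        return prices[best], revenues[best]

    # Illustrative Stage II: a linear demand equilibrium.
    print(stackelberg_price(lambda p: max(0.0, 10.0 - 1.5 * p)))
    ```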

    Empowering users to control their privacy in context-aware systems through interactive consent

    Context-aware systems adapt their behaviour based on the context a user is in. Since context is potentially privacy-sensitive information, users should be empowered to control how much of their context they are willing to share, under what conditions, and for what purpose. We propose an interactive consent mechanism that allows this. It is interactive in the sense that users are asked for consent when a request for their context information is received. Our interactive consent mechanism complements a more traditional pre-configuration approach. We describe the architecture and implementation of our interactive consent mechanism, along with a use case.
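
    The complementary relationship between pre-configuration and interactive consent can be pictured as a simple request handler; all names here (`policy.lookup`, `prompt_user`, and so on) are hypothetical stand-ins for the paper's architecture.

    ```python
    def handle_context_request(requester, purpose, user, context_store, prompt_user):
        """Consult the user's pre-configured policy first; when it gives no
        decision, fall back to asking the user interactively and remember
        the answer for subsequent requests."""
        decision = user.policy.lookup(requester, purpose)      # pre-configuration
        if decision is None:
            decision = prompt_user(user, requester, purpose)   # interactive consent
            user.policy.remember(requester, purpose, decision)
        return context_store.read(user) if decision else None
    ```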