
    Privacy In Multi-Agent And Dynamical Systems

    The use of private data is pivotal for numerous services, including location-based services, collaborative recommender systems, and social networks. Despite the utility these services provide, the use of private data raises privacy concerns for its owners. Noise-injecting techniques such as differential privacy address these concerns by adding artificial noise so that an adversary with access to the published response cannot confidently infer the private data. In multi-agent and dynamical environments in particular, privacy-preserving techniques need to be expressive enough to capture time-varying privacy needs, multiple data owners, and multiple data users. Current work in differential privacy assumes that a single response is published and a single predefined privacy guarantee is provided. This work relaxes these assumptions through several problem formulations and corresponding approaches. In the setting of a social network, a data owner has different privacy needs against different users; we design a coalition-free privacy-preserving mechanism that allows a data owner to diffuse their private data over a network. We also formulate the problem of multiple data owners providing their data to multiple data users. For time-varying privacy needs, we prove that, for a class of existing privacy-preserving mechanisms, privacy constraints can be effectively relaxed in a gradual manner. Additionally, we provide a privacy-aware mechanism for time-varying private data, where we wish to protect only its current value. Finally, in the context of location-based services, we provide a mechanism where the strength of the privacy guarantee varies with the local population density. These contributions increase the applicability of differential privacy and set future directions for more flexible and expressive privacy guarantees.
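    As a quick illustration of the noise-injecting approach this abstract builds on, here is a minimal sketch of the standard Laplace mechanism, with the loop standing in for gradually relaxed privacy constraints. The function name and parameters are illustrative, not from the dissertation itself.

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release `value` with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
# Gradually relaxing privacy over time, in the spirit of the abstract:
# a larger epsilon means a weaker guarantee and therefore less noise.
for epsilon in (0.1, 0.5, 1.0):
    print(epsilon, laplace_mechanism(42.0, sensitivity=1.0, epsilon=epsilon, rng=rng))
```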

    Social-Aware Clustered Federated Learning with Customized Privacy Preservation

    A key feature of federated learning (FL) is preserving the data privacy of end users. However, potential privacy leakage still exists when exchanging gradients under FL. As a result, recent research often explores differential privacy (DP) approaches that add noise to the computed results to address privacy concerns with low overhead, which, however, degrades model performance. In this paper, we strike a balance between data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading to the cloud for global aggregation. By mixing model updates within a social group, adversaries can only eavesdrop on the social-layer combined results, not the privacy of individuals. We unfold the design of SCFL in three steps. i) Stable social cluster formation: considering users' heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. ii) Differentiated trust-privacy mapping: for clusters with low mutual trust, we design a customizable privacy preservation mechanism that adaptively sanitizes participants' model updates depending on social trust degrees. iii) Distributed convergence: a distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on the Facebook network and the MNIST/CIFAR-10 datasets validate that SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.
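    The core idea of the abstract, mixing raw updates inside a trusted cluster so the cloud only sees the combined result, can be sketched as below. This assumes plain size-weighted federated averaging; SCFL's exact aggregation rule is not given in the abstract.

```python
import numpy as np

def cluster_aggregate(updates):
    """Mix raw model updates inside a trusted social cluster; the cloud
    only ever sees this combined result, not individual gradients."""
    return np.mean(updates, axis=0)

def global_aggregate(cluster_results, cluster_sizes):
    """Federated averaging over cluster-level results, weighted by
    cluster size (an assumption; the paper's rule may differ)."""
    w = np.asarray(cluster_sizes, dtype=float)
    return np.average(cluster_results, axis=0, weights=w / w.sum())

# Toy gradient vectors for two social clusters of 3 and 2 users.
cluster_a = [np.array([0.2, -0.1]), np.array([0.4, 0.0]), np.array([0.3, 0.1])]
cluster_b = [np.array([-0.2, 0.5]), np.array([0.0, 0.3])]
mixed = [cluster_aggregate(cluster_a), cluster_aggregate(cluster_b)]
update = global_aggregate(mixed, [len(cluster_a), len(cluster_b)])
```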

    Preserving differential privacy under finite-precision semantics

    The approximation introduced by finite-precision representation of continuous data can induce arbitrarily large information leaks, even when the computation using exact semantics is secure. Such leakage can thus undermine design efforts aimed at protecting sensitive information. We focus here on differential privacy, an approach to privacy that emerged from the area of statistical databases and is now widely applied in other domains as well. In this approach, privacy is protected by adding noise to the values correlated with the private data. The typical mechanisms used to achieve differential privacy have been proved correct in the ideal case in which computations are made using infinite-precision semantics. In this paper, we analyze the situation at the implementation level, where the semantics is necessarily limited by finite precision, i.e., the representation of real numbers and the operations on them are rounded according to some level of precision. We show that in general there are violations of the differential privacy property, and we study the conditions under which we can still guarantee a limited (but, arguably, acceptable) variant of the property, under only a minor degradation of the privacy level. Finally, we illustrate our results on two examples: the standard Laplacian mechanism commonly used in differential privacy, and a bivariate version of it recently introduced in the setting of privacy-aware geolocation.
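    A hedged sketch of the mitigation direction this paper studies: add Laplace noise, then round the output onto a fixed public grid that does not depend on the secret, trading a small degradation of the privacy level for a safer output set. This is a simplification for illustration, not the paper's construction (real fixes, e.g. Mironov's snapping mechanism, also clamp the range and control floating-point rounding).

```python
import numpy as np

rng = np.random.default_rng(1)

def snapped_laplace(value: float, epsilon: float, grid: float = 2.0**-10) -> float:
    """Add Laplace noise for epsilon-DP, then snap the result to a fixed
    public grid so the set of publishable outputs is independent of the
    secret input. Toy sketch only; see the paper for the full analysis."""
    noisy = value + rng.laplace(0.0, 1.0 / epsilon)
    return float(np.round(noisy / grid) * grid)
```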

    Empowering users to control their privacy in context-aware systems through interactive consent

    Context-aware systems adapt their behaviour based on the context a user is in. Since context is potentially privacy-sensitive information, users should be empowered to control how much of their context they are willing to share, under what conditions, and for what purposes. We propose an interactive consent mechanism that allows this. It is interactive in the sense that users are asked for consent when a request for their context information is received. Our interactive consent mechanism complements the more traditional pre-configuration approach. We describe the architecture and implementation of our interactive consent mechanism, and a use case.
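    A minimal sketch of how such an interactive consent check could sit on top of a pre-configuration table is shown below. The class and method names are hypothetical, not the paper's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class ConsentManager:
    # Pre-configured decisions: (requester, purpose) -> allow?
    preconfigured: Dict[Tuple[str, str], bool] = field(default_factory=dict)

    def request_context(self, requester: str, purpose: str,
                        ask_user: Callable[[str, str], bool]) -> bool:
        """Answer a context request from a pre-set rule if one covers it;
        otherwise prompt the user interactively at request time."""
        key = (requester, purpose)
        if key in self.preconfigured:            # traditional pre-configuration path
            return self.preconfigured[key]
        decision = ask_user(requester, purpose)  # interactive consent
        self.preconfigured[key] = decision       # remember the user's answer
        return decision

mgr = ConsentManager(preconfigured={("calendar", "scheduling"): True})
mgr.request_context("weather-app", "location-based forecast",
                    ask_user=lambda r, p: input(f"Allow {r} to read your context for {p}? [y/N] ").strip().lower() == "y")
```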

    Wireless Network Design and Optimization: From Social Awareness to Security

    A principal goal of this dissertation is to study wireless network design and optimization with a focus on two perspectives: 1) socially-aware mobile networking and computing; 2) security and privacy in wireless networking. Under this common theme, the dissertation is broadly organized into three parts. The first part studies socially-aware mobile networking and computing. First, it studies random access control and power control under a social group utility maximization (SGUM) framework; the socially-aware Nash equilibria (SNEs) are derived and analyzed. Then, it studies mobile crowdsensing under an incentive mechanism that exploits social trust assisted reciprocity (STAR); the efficacy of the STAR mechanism is thoroughly investigated. Next, it studies mobile users' data usage behaviors under the impact of social services and the wireless operator's pricing; based on a two-stage Stackelberg game formulation, the user demand equilibrium (UDE) is analyzed in Stage II and the optimal pricing strategy is developed in Stage I. Last, it studies opportunistic cooperative networking under an optimal stopping framework with two-level decision-making; for the cases both with and without dedicated relays, the optimal relaying strategies are derived and analyzed. The second part studies radar sensor network coverage for physical security. First, it studies the placement of bistatic radar (BR) sensor networks for barrier coverage: the optimality of line-based placement is analyzed, and the optimal placement of BRs on a line segment is characterized. Then, it studies the coverage of radar sensor networks that exploit the Doppler effect: based on a Doppler coverage model, an efficient method is devised to characterize Doppler-covered regions, and an algorithm is developed to find the minimum radar density required for Doppler coverage. The third part studies cyber security and privacy in socially-aware networking and computing. First, it studies random access control, cooperative jamming, and spectrum access under an extended SGUM framework that incorporates negative social ties; the SNEs are derived and analyzed. Then, it studies pseudonym change for personalized location privacy under the SGUM framework; the SNEs are analyzed and an efficient algorithm is developed to find an SNE with desirable properties.
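    As a rough illustration of the SGUM framework that recurs throughout this dissertation: in the standard formulation, each user's social group utility is its own utility plus its neighbors' utilities weighted by social tie strength. The sketch below follows that standard form and is not code from the thesis.

```python
def social_group_utility(i, utilities, ties):
    """User i's social group utility: own utility plus neighbors'
    utilities weighted by tie strength s_ij; negative ties (as in the
    extended framework of the third part) enter with negative weight."""
    return utilities[i] + sum(s * utilities[j]
                              for (a, j), s in ties.items() if a == i)

u = [1.0, 0.5, 2.0]
ties = {(0, 1): 0.8, (0, 2): -0.3}       # one positive and one negative tie
print(social_group_utility(0, u, ties))  # 1.0 + 0.8*0.5 - 0.3*2.0 = 0.8
```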

    Exploring Consumers’ Attitudes of Smart TV Related Privacy Risks

    A number of privacy risks are inherent in the Smart TV ecosystem. It is likely that many consumers are unaware of these privacy risks; alternatively, they might be aware but consider the risks acceptable. To explore this, we carried out an online survey with 200 participants to determine whether consumers were aware of Smart TV related privacy risks. The responses revealed a meagre level of awareness. We also explored consumers' attitudes towards specific Smart TV related privacy risks, isolated a number of factors that influenced their rankings, and used these to develop awareness-raising messages. We tested these messages in an online survey with 155 participants. The main finding was that participants were generally unwilling to disconnect their Smart TVs from the Internet because they valued the Smart TV's Internet functionality more than their privacy. We subsequently evaluated the awareness-raising messages in a second survey with 169 participants, framing the question differently: we asked participants to choose between five different Smart TV Internet connection options, two of which retained functionality but entailed expending time and/or effort to preserve privacy.