
    Privacy-enhancing Aggregation of Internet of Things Data via Sensors Grouping

    Big data collection practices using Internet of Things (IoT) pervasive technologies are often privacy-intrusive and result in surveillance, profiling, and discriminatory actions against citizens, which in turn undermine citizens' participation in the development of sustainable smart cities. Nevertheless, real-time data analytics and aggregate information from IoT devices open up tremendous opportunities for managing smart city infrastructures. The privacy-enhancing aggregation of distributed sensor data, such as residential energy consumption or traffic information, is the research focus of this paper. Citizens can choose their privacy level by reducing the quality of the shared data, at the cost of lower accuracy in data analytics services. A baseline scenario is considered in which IoT sensor data are shared directly with an untrustworthy central aggregator. A grouping mechanism is introduced that improves privacy by first aggregating data at a group level, as opposed to sharing data directly with the central aggregator. Group-level aggregation obfuscates the sensor data of individuals, in a similar fashion to differential privacy and homomorphic encryption schemes, so that inferring privacy-sensitive information from single sensors becomes computationally harder than in the baseline scenario. The proposed system is evaluated using real-world data from two smart city pilot projects. Privacy under grouping increases while the accuracy of the baseline scenario is preserved. Intra-group influences on privacy by one group member on the others are measured, and fairness in privacy is found to be maximized between group members with similar privacy choices. Several grouping strategies are compared; grouping by proximity of privacy choices provides the highest privacy gains. The implications of this strategy for the design of incentive mechanisms are discussed.
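
    The grouping idea lends itself to a compact illustration. Below is a minimal Python sketch, not from the paper, in which each sensor degrades its reading according to its chosen privacy level and the aggregator only ever receives per-group sums; the names (`private_reading`, `group_aggregate`) and the Gaussian quality-reduction model are illustrative assumptions.

```python
import random

# Minimal sketch of group-level aggregation: each sensor first degrades its
# reading according to a chosen privacy level (here, additive Gaussian
# noise), groups sum their members' readings locally, and only group sums
# reach the central aggregator. All names are illustrative.

def private_reading(true_value: float, privacy_level: float) -> float:
    """Reduce data quality in proportion to the sensor's privacy choice."""
    return true_value + random.gauss(0.0, privacy_level)

def group_aggregate(readings: list[float]) -> float:
    """Aggregate at group level so individual readings are obfuscated."""
    return sum(readings)

# Example: 3 groups of residential energy readings (kWh).
groups = [
    [(1.2, 0.1), (0.9, 0.5), (1.4, 0.1)],   # (true value, privacy level)
    [(2.1, 0.3), (1.7, 0.3)],
    [(0.6, 0.0), (1.1, 0.2), (0.8, 0.4)],
]

group_sums = [
    group_aggregate([private_reading(v, p) for v, p in g]) for g in groups
]
# The aggregator only sees per-group sums, never single-sensor values.
estimate = sum(group_sums)
print(f"city-wide estimate: {estimate:.2f} kWh from {len(groups)} groups")
```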

    Enforcing Secure and Privacy-Preserving Information Brokering in Distributed Information Sharing

    Today's organizations face an increasing need for information sharing via on-demand access. Information Brokering Systems (IBSs) have been proposed to connect large-scale, loosely federated data sources via a brokering overlay, in which brokers make routing decisions to direct client queries to the requested data servers. Many existing IBSs assume that brokers are trusted and thus adopt only server-side access control for data confidentiality. However, the privacy of data location and data consumers can still be inferred from metadata (such as queries and access control rules) exchanged within the IBS, yet little attention has been paid to its protection. In this article, we propose a novel approach to preserve the privacy of the multiple stakeholders involved in the information brokering process. We are among the first to formally define two privacy attacks, namely the attribute-correlation attack and the inference attack, and we propose two countermeasure schemes, automaton segmentation and query segment encryption, to securely share the routing decision-making responsibility among a selected set of brokering servers. With comprehensive security analysis and experimental results, we show that our approach seamlessly integrates security enforcement with query routing to provide system-wide security with insignificant overhead.
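
    As a rough illustration of the query segment encryption idea, the hypothetical Python sketch below splits a path-style query into segments and encrypts each segment under a different broker's key, so no single broker sees the full query. The use of Fernet symmetric keys and the path syntax are stand-in assumptions; the article's actual scheme and its automaton segmentation are more involved.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative sketch of query segment encryption: a query is split into
# segments and each segment is encrypted for a different broker, so no
# single broker sees the whole query. Fernet keys are a stand-in here.

query = "/hospital/patient/record/diagnosis"
segments = [s for s in query.split("/") if s]

# One key per broker along the routing path (distributed out of band).
broker_keys = [Fernet.generate_key() for _ in segments]

# Encrypt segment i so only broker i can read it.
encrypted_query = [
    Fernet(k).encrypt(seg.encode()) for k, seg in zip(broker_keys, segments)
]

# Broker i decrypts only its own segment to make its routing decision.
for i, (k, token) in enumerate(zip(broker_keys, encrypted_query)):
    segment = Fernet(k).decrypt(token).decode()
    print(f"broker {i} routes on segment: {segment!r}")
```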

    Privacy preserving algorithms for newly emergent computing environments

    Privacy-preserving data usage ensures appropriate use of data without compromising sensitive information. Data privacy is a primary requirement, since customers' data is an asset to any organization and contains customers' private information. Data seclusion is not a viable way to keep data private: sharing data while keeping it private matters for many purposes, e.g., company welfare, research, and business. Industries where data privacy is mandatory include healthcare, aviation, education, federal law enforcement, and more. In this dissertation we focus on data privacy schemes in emerging fields of computer science, namely health informatics, data mining, distributed cloud, biometrics, and mobile payments. Linking and mining medical records across different medical service providers is important to the enhancement of health care quality, and under HIPAA regulation keeping medical records private is mandatory. In real-world health care databases, records may well contain errors, and linking error-prone data while preserving data privacy at the same time is very difficult. We introduce a privacy-preserving Error-Tolerant Linking Algorithm to enable record linkage for error-prone medical records. Mining frequent sequential patterns, such as patient paths and treatment patterns, across multiple medical sites helps to improve health care quality and research; we propose a privacy-preserving sequential pattern mining scheme across multiple medical sites. In a distributed cloud environment, resources are provided by users who are geographically distributed over a large area. Since resources are provided by regular users, data privacy and security are main concerns, so we propose a privacy-preserving data storage mechanism among different users in a distributed cloud. Managing secret keys for encryption is difficult in a distributed cloud; to protect secret keys we propose a multilevel threshold secret sharing mechanism (sketched below). Biometric authentication establishes user identity by means of the user's biometric traits. An individual's biometrics must be protected, since biometrics are unique and can be stolen or misused by an adversary. We present a secure and privacy-preserving biometric authentication scheme using a watermarking technique. Finally, mobile payments have become popular with the extensive use of mobile devices; mobile payment applications need to be very secure when performing transactions and at the same time need to be efficient. We design and develop a mobile application for secure mobile payments, focusing on the user's biometric authentication as well as secure bank transactions, and propose a novel privacy-preserving biometric authentication algorithm for secure mobile payments.
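
    Of the schemes listed, the threshold secret sharing component is the easiest to sketch. The toy Python implementation below shows a single-level (t, n) Shamir threshold sharing of an encryption key, under the assumption that the dissertation's multilevel scheme layers such thresholds; the field size, function names, and parameters are illustrative.

```python
import random

# Minimal (t, n) Shamir threshold sharing sketch for protecting an
# encryption key among distributed-cloud users: any t shares reconstruct
# the key, fewer reveal nothing. This shows only a single level.

PRIME = 2**127 - 1  # field large enough for a 16-byte key

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Evaluate a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = make_shares(key, t=3, n=5)
assert reconstruct(shares[:3]) == key      # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == key
print("3-of-5 reconstruction succeeded")
```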

    MUD-PQFed: Towards Malicious User Detection in Privacy-Preserving Quantized Federated Learning

    Federated Learning (FL), a distributed machine learning paradigm, has been adopted to mitigate privacy concerns for clients. Despite its appeal, various inference attacks can exploit shared plaintext model updates, which embed traces of clients' private information, leading to serious privacy concerns. To alleviate this privacy issue, cryptographic techniques such as Secure Multi-Party Computation and Homomorphic Encryption have been used for privacy-preserving FL. However, security issues in privacy-preserving FL itself are poorly elucidated and underexplored. This work is the first attempt to elucidate how trivially model corruption attacks can be performed on privacy-preserving FL based on lightweight secret sharing. We consider scenarios in which model updates are quantized to reduce communication overhead; in this case, an adversary can simply provide local parameters outside the legal range to corrupt the model. We then propose the MUD-PQFed protocol, which can precisely detect malicious clients performing such attacks and enforce fair penalties. By removing the contributions of detected malicious clients, the global model's utility is preserved at a level comparable to that of the baseline global model without the attack. Extensive experiments validate the protocol's effectiveness in maintaining baseline accuracy and detecting malicious clients in a fine-grained manner.
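
    The attack surface described here can be made concrete. In the hedged Python sketch below, b-bit quantization fixes a legal parameter range, a malicious client submits values outside it, and a simple range check flags the offender before aggregation. MUD-PQFed's actual detection operates on secret-shared values and enforces penalties; this toy version omits both.

```python
import numpy as np

# Illustrative sketch of the quantized-FL attack and a range-based check:
# with b-bit quantization the only legal parameter values are
# 0 .. 2**b - 1, so an adversary submitting values outside that range can
# silently corrupt an additively aggregated sum unless checked.

B = 8
LEGAL_MAX = 2**B - 1

def quantize(update: np.ndarray) -> np.ndarray:
    """Map a real-valued update onto the legal b-bit integer grid."""
    lo, hi = update.min(), update.max()
    return np.round((update - lo) / (hi - lo) * LEGAL_MAX).astype(np.int64)

rng = np.random.default_rng(0)
honest = [quantize(rng.normal(size=4)) for _ in range(3)]
malicious = np.array([10_000, -500, 3, 7])        # outside [0, 255]

clients = honest + [malicious]
flagged = [i for i, u in enumerate(clients)
           if u.min() < 0 or u.max() > LEGAL_MAX]
print("flagged clients:", flagged)                 # -> [3]

# Aggregate only unflagged contributions, preserving baseline utility.
aggregate = sum(u for i, u in enumerate(clients) if i not in flagged)
print("aggregate of honest clients:", aggregate)
```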

    Efficient Privacy Preserving Distributed Clustering Based on Secret Sharing

    In this paper, we propose a privacy-preserving distributed clustering protocol for horizontally partitioned data based on a very efficient homomorphic additive secret sharing scheme. The model we use for the protocol is novel in the sense that it utilizes two non-colluding third parties. We provide a brief security analysis of our protocol from an information-theoretic point of view, which is a stronger security model. We give communication and computation complexity analyses of our protocol alongside another protocol previously proposed for the same problem, and include experimental results for the computation and communication overhead of both protocols. Our protocol not only outperforms the other in execution time and communication overhead on data holders, but also uses a more efficient model for many data mining applications.
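
    The additive secret sharing primitive underlying such a protocol can be sketched in a few lines. In the illustrative Python example below, each data holder splits a local sum into two random shares for two non-colluding parties, so neither party learns anything alone while the shares still add up to the global sum, as needed, e.g., for a centroid update in distributed k-means. All names are assumptions, not the paper's notation.

```python
import random

MOD = 2**61 - 1  # arithmetic is done modulo a large prime

# Sketch of homomorphic additive secret sharing with two non-colluding
# parties: each data holder splits its local sum into two random shares,
# so neither party alone learns anything, yet shares add up correctly.

def share(value: int) -> tuple[int, int]:
    r = random.randrange(MOD)
    return r, (value - r) % MOD

# Three data holders, each with a local sum of one cluster coordinate.
local_sums = [120, 340, 275]

shares_a, shares_b = zip(*(share(v) for v in local_sums))

# Each party adds the shares it received (a purely local operation).
partial_a = sum(shares_a) % MOD
partial_b = sum(shares_b) % MOD

# Combining the two partial results reveals only the global sum.
global_sum = (partial_a + partial_b) % MOD
assert global_sum == sum(local_sums)
print("global coordinate sum:", global_sum)
```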

    Federated Learning with Reduced Information Leakage and Computation

    Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to learn a common model collaboratively without sharing local data. Although local data is not exposed directly, privacy concerns nonetheless exist, as clients' sensitive information can be inferred from intermediate computations. Moreover, such information leakage accumulates substantially over time as the same data is repeatedly used during the iterative learning process. As a result, it can be particularly difficult to balance the privacy-accuracy trade-off when designing privacy-preserving FL algorithms. In this paper, we introduce Upcycled-FL, a novel federated learning framework with first-order approximation applied at every even iteration. Under this framework, half of the FL updates incur no information leakage and require much less computation. We first conduct a theoretical analysis of the convergence rate of Upcycled-FL and then apply perturbation mechanisms to preserve privacy. Experiments on real-world data show that Upcycled-FL consistently outperforms existing methods on heterogeneous data, significantly improving the privacy-accuracy trade-off while reducing training time by 48% on average.
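
    One plausible reading of the even-iteration first-order approximation is a data-free extrapolation from the previous two iterates. The toy Python sketch below alternates a noisy data-dependent step (odd rounds) with such an extrapolation (even rounds); the exact update rule, the coefficient `lam`, and the toy objective are guesses for illustration, not the paper's formulation.

```python
import numpy as np

# Sketch of the alternating pattern: odd rounds run a normal
# data-dependent local step (where DP noise would be added), while even
# rounds reuse the previous two iterates via a first-order extrapolation
# and therefore touch no data, leaking nothing new.

def data_step(w: np.ndarray, grad_fn, lr=0.1, noise=0.01) -> np.ndarray:
    g = grad_fn(w) + np.random.normal(0, noise, w.shape)  # perturbed grad
    return w - lr * g

def upcycled_step(w_t: np.ndarray, w_prev: np.ndarray, lam=0.5) -> np.ndarray:
    return w_t + lam * (w_t - w_prev)    # no data access, no new leakage

grad_fn = lambda w: 2 * (w - 3.0)        # toy quadratic objective, min at 3
w_prev = w = np.zeros(2)
for t in range(1, 21):
    if t % 2:                            # odd: data-dependent update
        w_prev, w = w, data_step(w, grad_fn)
    else:                                # even: first-order upcycling
        w_prev, w = w, upcycled_step(w, w_prev)
print("final iterate:", w.round(3))      # approaches [3., 3.]
```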

    Secrecy-Preserving Reasoning Over Entailment Systems: Theory and Applications

    Privacy, copyright, security, and other concerns make it essential for many distributed web applications to support selective sharing of information while, at the same time, protecting sensitive knowledge. Secrecy-preserving reasoning refers to answering queries against a knowledge base whose inference uses sensitive knowledge without revealing it. We present a general framework for secrecy-preserving reasoning over arbitrary entailment systems. This framework enables reasoning with hierarchical ontologies, propositional logic knowledge bases (over arbitrary logics), and RDFS knowledge bases containing sensitive information that needs to be protected. We provide an algorithm that, given a knowledge base over an effectively enumerable entailment system and a secrecy set over it, defines a maximally informative secrecy-preserving reasoner. Secrecy-preserving mappings between knowledge bases, which allow reusing reasoners across knowledge bases, are also introduced.
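
    To make the notion concrete, the toy Python sketch below runs secrecy-preserving reasoning over a small subclass hierarchy: entailment is transitive closure, a secrecy envelope is greedily grown so that no two revealed answers jointly prove a protected fact, and protected queries return "Unknown" rather than "No". The greedy envelope choice is a simplifying assumption; the paper's algorithm characterizes the maximally informative reasoner in general.

```python
# Toy secrecy-preserving reasoner over a subclass hierarchy. Queries
# inside the envelope get "Unknown" rather than "No", so a protected fact
# stays indistinguishable from an unprovable one.

AXIOMS = {("flu", "infection"), ("infection", "disease"),
          ("disease", "condition")}
SECRETS = {("flu", "disease")}  # entailed, but must not be revealed

def closure(pairs):
    """Transitive closure: the full set of entailed subclass pairs."""
    closed, changed = set(pairs), True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

ENTAILED = closure(AXIOMS)

# Grow the envelope: whenever two visible answers would jointly entail a
# hidden fact, greedily hide the first premise as well.
envelope, changed = set(SECRETS), True
while changed:
    changed = False
    for a, b in ENTAILED - envelope:
        for c, d in ENTAILED - envelope:
            if b == c and (a, d) in envelope:
                envelope.add((a, b))
                changed = True

def query(sub, sup):
    if (sub, sup) in ENTAILED and (sub, sup) not in envelope:
        return "Yes"
    return "Unknown"            # never a hard "No" for secrets

print(query("flu", "infection"))     # "Unknown": hidden to protect the secret
print(query("infection", "disease")) # "Yes": safe to reveal
print(query("flu", "disease"))       # "Unknown": the protected entailment
```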