
    Blinder: End-to-end Privacy Protection in Sensing Systems via Personalized Federated Learning

    This paper proposes a sensor data anonymization model that is trained on decentralized data and strikes a desirable trade-off between data utility and privacy, even in heterogeneous settings where the sensor data have different underlying distributions. Our anonymization model, dubbed Blinder, is based on a variational autoencoder and one or more discriminator networks trained in an adversarial fashion. We use the model-agnostic meta-learning framework to adapt the anonymization model, trained via federated learning, to each user's data distribution. We evaluate Blinder under different settings and show that it provides end-to-end privacy protection on two IMU datasets at the cost of increasing privacy loss by up to 4.00% and decreasing data utility by up to 4.24%, compared to the state-of-the-art anonymization model trained on centralized data. We also showcase Blinder's ability to anonymize the radio-frequency sensing modality. Our experiments confirm that Blinder can obscure multiple private attributes at once, and that its power consumption and computational overhead are low enough for deployment on edge devices and smartphones for real-time anonymization of sensor data.
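    As a rough illustration of the adversarial training described above, the sketch below pits a plain autoencoder against a discriminator that tries to recover a private attribute from the anonymized output. All layer sizes, the loss weight, and the data shapes are illustrative assumptions; the paper's actual model (a variational autoencoder trained via federated learning and adapted with meta-learning) is not reproduced here.

```python
# Minimal sketch of adversarial anonymization: an autoencoder reconstructs
# sensor windows while a discriminator tries to recover a private attribute
# from the reconstruction. Sizes and weights are assumed, not from the paper.
import torch
import torch.nn as nn

WINDOW = 128 * 6  # e.g. 128 IMU samples x 6 channels (assumed shape)

class Anonymizer(nn.Module):  # stands in for Blinder's VAE
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(64, WINDOW))
    def forward(self, x):
        return self.dec(self.enc(x))

anon = Anonymizer()
disc = nn.Sequential(nn.Linear(WINDOW, 32), nn.ReLU(), nn.Linear(32, 2))
opt_a = torch.optim.Adam(anon.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
mse, xent = nn.MSELoss(), nn.CrossEntropyLoss()

x = torch.randn(32, WINDOW)        # a batch of sensor windows (synthetic)
priv = torch.randint(0, 2, (32,))  # private attribute labels (synthetic)

# 1) Discriminator step: learn to infer the private attribute.
opt_d.zero_grad()
d_loss = xent(disc(anon(x).detach()), priv)
d_loss.backward(); opt_d.step()

# 2) Anonymizer step: reconstruct well while fooling the discriminator.
opt_a.zero_grad()
out = anon(x)
a_loss = mse(out, x) - 0.5 * xent(disc(out), priv)  # 0.5: assumed weight
a_loss.backward(); opt_a.step()
```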

    Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments

    Decentralized systems are a subset of distributed systems where multiple authorities control different components and no authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We revisit fifteen years of research on decentralization and privacy, and provide an overview of key systems, as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.

    Differentially Private AirComp Federated Learning with Power Adaptation Harnessing Receiver Noise

    Over-the-air computation (AirComp)-based federated learning (FL) enables low-latency uploads and aggregation of machine learning models by exploiting simultaneous co-channel transmission and the resultant waveform superposition. This study aims to realize secure AirComp-based FL against privacy attacks in which malicious central servers infer clients' private data from aggregated global models. To this end, a differentially private AirComp-based FL scheme is designed, where the key idea is to harness the receiver noise that is inherently injected into the aggregated global models as a privacy-preserving perturbation, thereby preventing the inference of clients' private data. However, the variance of the inherent receiver noise is often uncontrollable, which makes it quite challenging to inject a noise perturbation appropriate for a desired privacy level. Hence, this study designs transmit power control across clients, wherein the received signal level is intentionally adjusted to control the effective noise perturbation, thereby achieving the desired privacy level. A higher privacy level requires lower transmit power, which indicates a tradeoff between the privacy level and the signal-to-noise ratio (SNR). To characterize this tradeoff, closed-form expressions for the SNR as a function of the privacy level are derived, and the tradeoff is demonstrated analytically. The analysis also shows that, among the configurable parameters, the number of participating clients is the key parameter for enhancing the received SNR under this tradeoff. The analytical results are validated through numerical evaluations.
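    To make the power/privacy tradeoff concrete, the back-of-the-envelope sketch below calibrates a common receive amplitude with the standard Gaussian-mechanism bound and shows how the SNR responds to the privacy level and the number of clients. The calibration formula, the amplitude and SNR models, and all numeric values are assumptions for illustration; they are not the paper's analysis.

```python
# Receiver noise acts as a Gaussian DP mechanism: scaling down the receive
# amplitude `a` scales the effective noise on the aggregated model up.
import math

def max_receive_amplitude(eps, delta, sensitivity, noise_std):
    """Largest per-client receive amplitude still giving (eps, delta)-DP
    under standard Gaussian-mechanism calibration (assumed model)."""
    required_std = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / eps
    # Effective noise std on the aggregate is noise_std / a, so we need
    # noise_std / a >= required_std, i.e. a <= noise_std / required_std.
    return noise_std / required_std

def snr_db(a, num_clients, noise_std):
    # Assume the K clients' signals align, so signal power grows as (K*a)^2.
    return 10 * math.log10((num_clients * a) ** 2 / noise_std ** 2)

a = max_receive_amplitude(eps=1.0, delta=1e-5, sensitivity=1.0, noise_std=0.1)
for k in (10, 100):
    print(f"K={k}: amplitude {a:.4f}, SNR {snr_db(a, k, 0.1):.1f} dB")
```

    Tightening epsilon shrinks the admissible amplitude and hence the SNR, while adding clients raises the SNR at a fixed privacy level, consistent with the tradeoff described in the abstract.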

    Local Differential Privacy In Smart Manufacturing: Application Scenario, Mechanisms and Tools

    To utilize the potential of machine learning and deep learning, enormous amounts of data are required. To find optimal solutions, it is beneficial to share and publish data sets. Because publicly released datasets have leaked private information and attackers have exposed sensitive information about individuals, the research field of differential privacy develops solutions to prevent such leaks in the future. Compared to other domains, applying differential privacy in the manufacturing context is very challenging. Manufacturing data contains sensitive information about companies and their process knowledge, products, and orders. Furthermore, data about individuals operating machines could be exposed, allowing their performance to be evaluated. This paper describes scenarios for how differential privacy can be used in the manufacturing context. In particular, it addresses the potential threats that arise when sharing manufacturing data, identifying different manufacturing parameters and their variable types. Simplified examples show how differentially private mechanisms can be applied to binary, numeric, and categorical variables, as well as time series. Finally, libraries that enable the productive use of differential privacy are presented.
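    As a concrete example of the kind of mechanisms the abstract mentions, the sketch below applies randomized response to a binary attribute and the Laplace mechanism to a numeric one. The epsilon values, sensitivities, and example variables are assumptions for illustration and are not drawn from the paper.

```python
# Two classic local-DP mechanisms for the variable types discussed above.
import math
import random

def randomized_response(bit: bool, eps: float) -> bool:
    """Binary attribute (e.g. 'machine fault occurred', an assumed example):
    report the true value with probability e^eps / (e^eps + 1), else flip."""
    p_keep = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p_keep else not bit

def laplace_mechanism(value: float, sensitivity: float, eps: float) -> float:
    """Numeric attribute (e.g. cycle time in seconds, an assumed example):
    add Laplace noise with scale sensitivity / eps."""
    scale = sensitivity / eps
    # Difference of two iid Exp(1) variables is Laplace(0, 1).
    noise = random.expovariate(1.0) - random.expovariate(1.0)
    return value + scale * noise

print(randomized_response(True, eps=1.0))
print(laplace_mechanism(42.0, sensitivity=5.0, eps=1.0))
```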

    Fingerprint Attack: Client De-Anonymization in Federated Learning

    Federated learning allows collaborative training without data sharing in settings where participants trust neither the central server nor one another. Privacy can be further improved by anonymizing the communication between the participants and the server through a shuffler, decoupling each participant's identity from their data. This paper examines whether such a defense is adequate to guarantee anonymity by proposing a novel fingerprinting attack on the gradients sent by the participants to the server. We show that clustering of gradients can easily break the anonymization in an empirical study of federated language-model training on two language corpora. We then show that training with differential privacy can provide a practical defense against our fingerprint attack.
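    A toy version of such a linking step is sketched below: synthetic per-client "gradients" from two rounds are matched by cosine similarity, undoing the shuffle. The synthetic data model and the similarity measure are assumptions for illustration; the paper's actual clustering procedure over language-model gradients is not reproduced, but the sketch shows why per-client gradient structure can defeat a shuffler.

```python
# Link shuffled round-2 updates back to round-1 updates by cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 1000
base = rng.normal(size=(n_clients, dim))           # per-client "signal"
round1 = base + 0.3 * rng.normal(size=base.shape)  # noisy update, round 1
round2 = base + 0.3 * rng.normal(size=base.shape)  # noisy update, round 2
perm = rng.permutation(n_clients)                  # shuffler hides identities
shuffled = round2[perm]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# For each shuffled round-2 update, pick the most similar round-1 update.
recovered = np.array([
    max(range(n_clients), key=lambda j: cosine(u, round1[j]))
    for u in shuffled
])
print("true mapping:", perm, "recovered:", recovered)
print("fraction de-anonymized:", np.mean(recovered == perm))
```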