
    Federated Learning with Bayesian Differential Privacy

    We consider the problem of reinforcing federated learning with formal privacy guarantees. We propose to employ Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, to provide sharper privacy loss bounds. We adapt the Bayesian privacy accounting method to the federated setting and suggest multiple improvements for more efficient privacy budgeting at different levels. Our experiments show a significant advantage over state-of-the-art differential privacy bounds for federated learning on image classification tasks, including a medical application, bringing the privacy budget below 1 at the client level and below 0.1 at the instance level. Lower amounts of noise also benefit model accuracy and reduce the number of communication rounds.
    Comment: Accepted at 2019 IEEE International Conference on Big Data (IEEE Big Data 2019). 10 pages, 2 figures, 4 tables. arXiv admin note: text overlap with arXiv:1901.0969
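    The abstract does not spell out the accounting mechanics, but the step it budgets for is the standard noisy federated aggregation round. Below is a minimal sketch of a DP-FedAvg-style round in Python; the clipping norm, noise multiplier, and function names are illustrative assumptions rather than the paper's implementation. A moments-style or Bayesian accountant would translate the noise level and number of rounds into privacy budgets like those quoted above.

```python
import numpy as np

def dp_federated_round(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One round of noisy federated averaging (DP-FedAvg-style sketch).

    Each client update is clipped to `clip_norm`, the clipped updates are
    averaged, and Gaussian noise is added. A privacy accountant converts
    `noise_multiplier` and the number of rounds into an (epsilon, delta) bound.
    All parameter values here are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))  # norm clipping
    avg = np.mean(clipped, axis=0)
    # Noise scale follows the per-client sensitivity clip_norm / n_clients.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```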

    A Novel Privacy-Preserved Recommender System Framework based on Federated Learning

    Recommender systems (RS) are currently an effective way to address information overload. To predict a user's next click, an RS needs to collect personal information and behavior data to build a comprehensive and profound perception of user preferences. However, such centrally collected data are privacy-sensitive, and any leakage may cause severe problems for both users and service providers. This paper proposes a novel privacy-preserved recommender system framework (PPRSF) that applies the federated learning paradigm so that the recommendation algorithm can be trained and carry out inference without centrally collecting users' private data. The PPRSF not only reduces the risk of privacy leakage and satisfies legal and regulatory requirements, but also allows various recommendation algorithms to be applied.
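    As a concrete illustration of training a recommender without centralizing private data, here is a minimal sketch of one federated matrix-factorization round in Python. This is a generic federated-recommendation pattern, not the PPRSF itself; the function names, learning rate, and data layout are assumptions for illustration. The key property is that the user vector never leaves the client; only item-embedding gradients are shared.

```python
import numpy as np

def local_mf_update(user_vec, item_emb, interactions, lr=0.05):
    """One client's local matrix-factorization step.

    `interactions` maps item ids to observed ratings. The user vector is
    updated on-device and kept private; only the item-embedding gradient
    is returned to the server.
    """
    item_grad = np.zeros_like(item_emb)
    for item_id, rating in interactions.items():
        err = rating - user_vec @ item_emb[item_id]
        item_grad[item_id] += -2 * err * user_vec       # gradient w.r.t. item vector
        user_vec += lr * 2 * err * item_emb[item_id]    # private update, stays local
    return user_vec, item_grad

def server_aggregate(item_emb, client_grads, lr=0.05):
    """Average the clients' item-embedding gradients and take a descent step."""
    mean_grad = np.mean(client_grads, axis=0)
    return item_emb - lr * mean_grad
```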

    A Probabilistic Framework for Modeling the Variability Across Federated Datasets of Heterogeneous Multi-View Observations

    We propose a novel federated learning paradigm to model data variability among heterogeneous clients in multi-centric studies. Our method is expressed through a hierarchical Bayesian latent variable model, where client-specific parameters are assumed to be realizations of a global distribution at the master level, which is in turn estimated to account for data bias and variability across clients. We show that our framework can be effectively optimized through expectation maximization over the latent master distribution and the clients' parameters. We tested our method on the analysis of multi-modal medical imaging data and clinical scores from distributed clinical datasets of patients affected by Alzheimer's disease. We demonstrate that our method is robust when data is distributed in either iid or non-iid manners: it quantifies the variability of data, views, and centers, while guaranteeing high-quality data reconstruction compared to state-of-the-art autoencoding models and federated learning schemes.
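    To make the hierarchy concrete, the sketch below runs EM on a toy one-dimensional instance of the same idea: each client's parameter is treated as a realization of a master Gaussian N(mu, tau^2), and the master alternates posterior inference for each client (E-step) with refitting the global distribution (M-step). The paper's model operates on multi-view latent variables; everything here, including the variable names and the Gaussian-Gaussian simplification, is an illustrative assumption.

```python
import numpy as np

def federated_em(client_means, client_vars, n_iters=50):
    """Toy EM for a hierarchical Gaussian model of client parameters.

    Each client i contributes a local estimate (mean, variance) of its
    parameter; the master assumes theta_i ~ N(mu, tau^2) and alternates
    posterior (E) and prior-refit (M) steps over the global distribution.
    """
    mu, tau2 = 0.0, 1.0
    client_means = np.asarray(client_means, dtype=float)
    client_vars = np.asarray(client_vars, dtype=float)
    for _ in range(n_iters):
        # E-step: posterior over each client's parameter given the global prior.
        post_var = 1.0 / (1.0 / tau2 + 1.0 / client_vars)
        post_mean = post_var * (mu / tau2 + client_means / client_vars)
        # M-step: refit the master distribution to the posterior estimates.
        mu = post_mean.mean()
        tau2 = (post_var + (post_mean - mu) ** 2).mean()
    return mu, tau2
```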