
    Federated Learning of Bayesian Neural Networks

    Although federated learning and Bayesian neural networks have each been researched, there are few implementations of federated learning for Bayesian neural networks. This thesis develops a federated learning training environment for Bayesian neural networks using Flower, a public code base. With it, a state-of-the-art architecture, the residual network, and Bayesian versions of it are explored. These architectures are then tested on independently and identically distributed (IID) datasets and on non-IID datasets derived from the Dirichlet distribution. Results show that the MC Dropout version of Bayesian neural networks can achieve state-of-the-art results (91% accuracy) for IID partitions of the CIFAR10 dataset through federated learning. When the partitions are non-IID, federated learning through inverse-variance aggregation of probabilistic weights performs as well as its deterministic counterpart, with roughly 83% accuracy. This shows that Bayesian neural networks can be federated and still achieve state-of-the-art results.
    Outstanding Thesis. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
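
    The inverse-variance aggregation mentioned in the abstract can be illustrated with a short sketch: each client reports a mean-field Gaussian posterior (mean and variance) per weight, and the server fuses them by precision weighting, so lower-variance clients contribute more. This is a minimal sketch under that assumption; the function and variable names are illustrative, not taken from the thesis or from Flower's API.

```python
import numpy as np

def inverse_variance_aggregate(client_posteriors):
    """Fuse per-client Gaussian posteriors (mu_k, var_k) for one weight tensor.

    Each client contributes precision 1/var_k; the fused posterior is the
    precision-weighted combination:
        var = 1 / sum_k 1/var_k,    mu = var * sum_k mu_k / var_k
    """
    eps = 1e-8                                    # guard against zero variance
    precisions = [1.0 / (var + eps) for _, var in client_posteriors]
    fused_var = 1.0 / np.sum(precisions, axis=0)
    fused_mu = fused_var * np.sum(
        [mu * p for (mu, _), p in zip(client_posteriors, precisions)], axis=0
    )
    return fused_mu, fused_var

# Three clients report posteriors for the same weight; the low-variance
# (high-confidence) clients dominate the fused mean.
clients = [(np.array([0.20]), np.array([0.01])),
           (np.array([0.25]), np.array([0.04])),
           (np.array([0.10]), np.array([0.02]))]
mu, var = inverse_variance_aggregate(clients)
```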

    Communication Size Reduction of Federated Learning using Neural ODE Models

    Federated learning is a machine learning approach in which data is not aggregated on a server but is instead trained locally at clients, for security and privacy. ResNet is a classic and representative neural network that succeeds in deepening networks by learning residual functions, adding each block's input to its output. In federated learning, the server and clients communicate to exchange weight parameters. Since ResNet has deep layers and a large number of parameters, the communication size becomes large. In this paper, we use Neural ODE as a lightweight model of ResNet to reduce communication size in federated learning. In addition, we newly introduce a flexible federated learning using Neural ODE models with different numbers of iterations, which correspond to ResNet models of different depths. Evaluation results on the CIFAR-10 dataset show that using Neural ODE reduces communication size by up to 92.4% compared to ResNet. We also show that the proposed flexible federated learning can merge models with different iteration counts or depths.
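
    The communication saving comes from weight tying: a Neural ODE block applies one shared function f for N solver steps, so its parameter count, and hence the payload exchanged with the server, is independent of the effective depth N, and clients running different step counts can still share identical weights. Below is a minimal PyTorch sketch of this idea using a simple forward-Euler solver; the class and parameter names are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class NeuralODEBlock(nn.Module):
    def __init__(self, channels, n_steps):
        super().__init__()
        self.n_steps = n_steps                  # effective "depth"
        self.f = nn.Sequential(                 # one set of weights, reused
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        h = 1.0 / self.n_steps                  # Euler step size
        for _ in range(self.n_steps):
            x = x + h * self.f(x)               # x' = f(x), forward Euler
        return x

# Two clients can run different iteration counts yet hold identical
# parameters, so the server can aggregate or merge their weights directly.
shallow = NeuralODEBlock(channels=64, n_steps=2)
deep = NeuralODEBlock(channels=64, n_steps=8)
deep.load_state_dict(shallow.state_dict())      # same weights, deeper model
out = deep(torch.randn(1, 64, 32, 32))
```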

    FedForgery: Generalized Face Forgery Detection with Residual Federated Learning

    With the continuous development of deep learning in the field of image generation models, a large number of vivid forged faces have been generated and spread on the Internet. These highly realistic forgeries pose a growing threat to societal security. Existing face forgery detection methods directly utilize publicly shared or centralized data for training, but ignore the privacy and security issues that arise when personal data cannot be centrally shared in real-world scenarios. Additionally, the differing distributions caused by diverse artifact types further hinder the forgery detection task. To address these problems, this paper proposes a novel generalized residual Federated learning for face Forgery detection (FedForgery). The designed variational autoencoder aims to learn robust discriminative residual feature maps to detect forged faces (with diverse or even unknown artifact types). Furthermore, a general federated learning strategy is introduced to construct a distributed detection model trained collaboratively across multiple decentralized local devices, which further boosts representation generalization. Experiments conducted on publicly available face forgery detection datasets demonstrate the superior performance of the proposed FedForgery. The designed novel generalized face forgery detection protocols and source code will be publicly available.
    Comment: The code is available at https://github.com/GANG370/FedForgery. The paper has been accepted by IEEE Transactions on Information Forensics & Security.
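
    One plausible reading of the residual-feature idea is that the autoencoder reconstructs the input face and the classifier operates on the reconstruction residual, which tends to concentrate generation artifacts. The sketch below reflects that assumption with illustrative module names; the authors' actual implementation is at https://github.com/GANG370/FedForgery.

```python
import torch
import torch.nn as nn

class ResidualForgeryDetector(nn.Module):
    def __init__(self, autoencoder, classifier):
        super().__init__()
        self.autoencoder = autoencoder   # e.g. a VAE trained to reconstruct faces
        self.classifier = classifier     # binary real-vs-forged head

    def forward(self, x):
        recon = self.autoencoder(x)      # reconstruction of the input face
        residual = x - recon             # artifact-sensitive residual map
        return self.classifier(residual)

# Toy usage with stand-in modules (nn.Identity replaces the paper's VAE).
detector = ResidualForgeryDetector(
    autoencoder=nn.Identity(),
    classifier=nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2)),
)
logits = detector(torch.randn(1, 3, 64, 64))   # real-vs-forged logits
```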

    Federated Neural Architecture Search

    To preserve user privacy while enabling mobile intelligence, techniques have been proposed to train deep neural networks on decentralized data. However, training over decentralized data makes neural architecture design, already a difficult task, even harder. The difficulty is further amplified when designing and deploying different neural architectures for heterogeneous mobile platforms. In this work, we propose integrating automatic neural architecture search into decentralized training as a new DNN training paradigm, Federated Neural Architecture Search (federated NAS). To deal with the primary challenge of limited on-client computational and communication resources, we present FedNAS, a highly optimized framework for efficient federated NAS. FedNAS exploits the key opportunity that model candidates need not be fully re-trained during the architecture search, and incorporates three key optimizations: parallel candidate training on partial clients, early dropping of candidates with inferior performance, and dynamic round numbers. Tested on large-scale datasets and typical CNN architectures, FedNAS achieves model accuracy comparable to a state-of-the-art NAS algorithm that trains with centralized data, and also reduces the client cost by up to two orders of magnitude compared to a straightforward design of federated NAS.
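
    The three optimizations suggest a server-side search loop along the following lines: each surviving candidate is trained on a partial subset of clients per round, the worst performers are dropped early, and rounds continue only until a single architecture remains. The sketch below is an assumed reconstruction with illustrative names, not the FedNAS API.

```python
import random

def federated_nas_search(candidates, clients, train_round, evaluate,
                         drop_fraction=0.5, clients_per_candidate=4):
    """Prune architecture candidates over federated training rounds.

    train_round(candidate, client_subset) runs one round of federated
    training; evaluate(candidate) returns a validation score.
    """
    rounds = 0
    while len(candidates) > 1:               # dynamic round count: stop when
        rounds += 1                          # one architecture survives
        for cand in candidates:
            # Each candidate trains on a partial client subset; a real
            # system would dispatch these in parallel.
            subset = random.sample(clients, clients_per_candidate)
            train_round(cand, subset)
        # Early-drop the worst-performing candidates this round.
        candidates.sort(key=evaluate, reverse=True)
        keep = max(1, int(len(candidates) * (1 - drop_fraction)))
        candidates = candidates[:keep]
    return candidates[0], rounds             # winning architecture, rounds used
```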