
    Leveraging Learning Metrics for Improved Federated Learning

    Currently, no learning schemes in the federated setting leverage the emerging research on explainable artificial intelligence (XAI), in particular the novel learning metrics that help determine how well a model is learning. One of these novel learning metrics is termed 'Effective Rank' (ER), which measures the Shannon entropy of the singular values of a matrix, thus providing a metric of how well a layer is mapping. By joining federated learning with the effective rank learning metric, this work will (1) give the first federated learning metric aggregation method, (2) show that effective rank is well suited to federated problems by outperforming the Federated Averaging baseline (Konečný et al., 2016), and (3) develop a novel weight-aggregation scheme relying on effective rank. Comment: Bachelor's thesis.
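
    As a hedged illustration of the metric described above: effective rank is commonly defined (Roy & Vetterli, 2007) as the exponential of the Shannon entropy of the normalized singular values; whether the thesis uses exactly this normalization is an assumption. A minimal NumPy sketch:

    ```python
    import numpy as np

    def effective_rank(weight_matrix: np.ndarray, eps: float = 1e-12) -> float:
        """Effective rank: exp of the Shannon entropy of the normalized singular values."""
        s = np.linalg.svd(weight_matrix, compute_uv=False)  # singular values
        p = s / (s.sum() + eps)                             # normalize to a distribution
        p = p[p > eps]                                      # drop zeros before taking logs
        return float(np.exp(-(p * np.log(p)).sum()))        # exp(Shannon entropy)
    ```

    For a layer whose singular spectrum is nearly uniform, the value approaches the full matrix rank; a rapidly decaying spectrum drives it toward 1, signalling a poorly utilized mapping.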

    Continual Local Training for Better Initialization of Federated Models

    Federated learning (FL) refers to the learning paradigm that trains machine learning models directly on decentralized systems of smart edge devices without transmitting the raw data, thereby avoiding heavy communication costs and privacy concerns. Given the typically heterogeneous data distributions in such settings, the popular FL algorithm Federated Averaging (FedAvg) suffers from weight divergence and thus cannot achieve competitive performance for the global model (denoted the initial performance in FL) compared to centralized methods. In this paper, we propose a local continual training strategy to address this problem. Importance weights are evaluated on a small proxy dataset on the central server and then used to constrain the local training. With this additional term, we alleviate weight divergence and continually integrate the knowledge from different local clients into the global model, which ensures better generalization ability. Experiments in various FL settings demonstrate that our method significantly improves the initial performance of federated models with little extra communication cost. Comment: This paper has been accepted to the 2020 IEEE International Conference on Image Processing (ICIP 2020).
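
    The importance-weighted constraint described above resembles an elastic-weight-consolidation-style penalty on the local objective; the names below (importance, lam) and the exact form of the regularizer are illustrative assumptions, not the paper's verified implementation. A PyTorch-style sketch:

    ```python
    import torch

    def constrained_local_loss(model, global_params, importance, task_loss, lam=0.1):
        # task_loss: the client's usual objective (e.g., cross-entropy), computed elsewhere.
        # importance: per-parameter weights estimated on the server's small proxy dataset.
        penalty = 0.0
        for name, p in model.named_parameters():
            penalty = penalty + (importance[name] * (p - global_params[name]) ** 2).sum()
        return task_loss + lam * penalty  # pulls local weights toward the global model
    ```

    The quadratic term penalizes drift of locally trained parameters away from the global model, with per-parameter strength set by the proxy-dataset importance estimates.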

    Privacy-preserving recommendation system using federated learning

    Federated learning is a form of distributed learning that leverages edge devices for training. It aims to preserve privacy by communicating users' learning parameters and gradient updates to the global server during training while keeping the actual data on the users' devices. The training on the global server is performed on these parameters instead of on user data directly, while fine-tuning of the model can be done locally on clients' devices. However, federated learning is not without its shortcomings, and in this thesis we present an overview of the learning paradigm and propose a new federated recommender system framework that utilizes homomorphic encryption. This results in a slight decrease in accuracy metrics but greatly increases user privacy. We also show that performing computations on encrypted gradients barely affects recommendation performance while ensuring a more secure means of communicating user gradients to and from the global server.
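
    As a hedged sketch of the core idea, using the python-paillier (phe) package for additively homomorphic encryption; the thesis's actual cryptosystem and parameters are not specified here, so treat the choices below as assumptions:

    ```python
    from phe import paillier  # python-paillier: additively homomorphic Paillier scheme

    pub, priv = paillier.generate_paillier_keypair(n_length=2048)

    # Each client encrypts its gradient entries before uploading them.
    client_grads = [[0.12, -0.05], [0.08, 0.02]]  # toy gradients from two clients
    encrypted = [[pub.encrypt(g) for g in grads] for grads in client_grads]

    # The server adds ciphertexts without ever seeing plaintext gradients.
    enc_sums = [sum(col, pub.encrypt(0.0)) for col in zip(*encrypted)]

    # Only the key holder can decrypt the aggregated (averaged) update.
    avg_grad = [priv.decrypt(c) / len(client_grads) for c in enc_sums]
    ```

    Because Paillier ciphertexts can be added without decryption, the server can aggregate updates while the individual gradients stay opaque, at the cost of the encryption and decryption overhead noted in the abstract.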

    Federated Learning Framework for IID and Non-IID datasets of Medical Images

    Advances in the field of machine learning have shown that it is an effective tool for solving real-world problems. This success is largely attributed to the availability of accessible data, which is not the case for many fields such as healthcare, a primary reason being the issue of privacy. Federated Learning (FL) is a technique that overcomes the limitation of data availability at a central location and allows machine learning models to be trained on private data or data that cannot be directly accessed. It allows the use of data to be decoupled from the governance (or control) over that data. In this paper, we present an easy-to-use framework that provides a complete pipeline letting researchers and end users train any model on image data from various sources in a federated manner. We also compare results between models trained in a federated fashion and models trained in a centralized fashion for Independent and Identically Distributed (IID) and non-IID datasets. The Intracranial Brain Hemorrhage dataset and the Pneumonia Detection dataset provided by the Radiological Society of North America (RSNA) are used for validating the FL framework and for the comparative analysis.
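
    The paper does not spell out here how its non-IID splits are generated; a common simulation choice, shown below as an assumption, is Dirichlet label skew. A NumPy sketch of both partitioning modes:

    ```python
    import numpy as np

    def partition(labels, n_clients, iid=True, alpha=0.5, seed=0):
        """Split sample indices across clients, uniformly (IID) or with Dirichlet label skew."""
        labels = np.asarray(labels)
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(labels))
        if iid:
            return np.array_split(idx, n_clients)
        shards = [[] for _ in range(n_clients)]
        for c in np.unique(labels):
            c_idx = idx[labels[idx] == c]                      # indices of class c
            props = rng.dirichlet(alpha * np.ones(n_clients))  # skewed class proportions
            cuts = (np.cumsum(props)[:-1] * len(c_idx)).astype(int)
            for shard, part in zip(shards, np.split(c_idx, cuts)):
                shard.extend(part.tolist())
        return [np.array(s) for s in shards]
    ```

    Smaller alpha values concentrate each class on fewer clients, producing the stronger heterogeneity under which FedAvg-style training typically degrades.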

    FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization

    Federated Learning (FL) has been successfully adopted for distributed training and inference of large-scale Deep Neural Networks (DNNs). However, DNNs are characterized by an extremely large number of parameters, thus yielding significant challenges in exchanging these parameters among distributed nodes and managing memory. Although recent DNN compression methods (e.g., sparsification, pruning) tackle such challenges, they do not holistically consider an adaptively controlled reduction of parameter exchange while maintaining high accuracy levels. We therefore contribute a novel FL framework (coined FedDIP) which combines (i) dynamic model pruning with error feedback to eliminate redundant information exchange, contributing to significant performance improvement, with (ii) incremental regularization that can achieve extreme sparsity of models. We provide a convergence analysis of FedDIP and report a comprehensive performance and comparative assessment against state-of-the-art methods using benchmark datasets and DNN models. Our results showcase that FedDIP not only controls model sparsity but efficiently achieves similar or better performance than other model pruning methods that adopt incremental regularization during distributed model training. The code is available at https://github.com/EricLoong/feddip. Comment: Accepted for publication at ICDM 2023 (full version on arXiv).
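
    A minimal sketch of magnitude pruning with error feedback, the first of the two mechanisms named above; FedDIP's actual pruning schedule and its incremental regularization are more elaborate, so this is an illustrative assumption rather than the paper's algorithm:

    ```python
    import numpy as np

    def prune_with_error_feedback(weights, residual, sparsity=0.99):
        """Keep the largest-magnitude (1 - sparsity) fraction; carry the error forward."""
        corrected = weights + residual                 # re-inject last round's pruning error
        k = max(1, int(round((1.0 - sparsity) * corrected.size)))
        thresh = np.partition(np.abs(corrected).ravel(), -k)[-k]
        pruned = np.where(np.abs(corrected) >= thresh, corrected, 0.0)
        return pruned, corrected - pruned              # (sparse update, new residual)
    ```

    Accumulating the pruned-away mass in the residual and re-injecting it in the next round keeps extreme sparsity from silently discarding gradient information.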

    Performance Optimization for Federated Person Re-identification via Benchmark Analysis

    Federated learning is a privacy-preserving machine learning technique that learns a shared model across decentralized clients. It can alleviate the privacy concerns of person re-identification, an important computer vision task. In this work, we apply federated learning to person re-identification (FedReID) and optimize its performance in the presence of the statistical heterogeneity found in real-world scenarios. We first construct a new benchmark to investigate the performance of FedReID. This benchmark consists of (1) nine datasets with different volumes sourced from different domains to simulate the heterogeneous situations found in reality, (2) two federated scenarios, and (3) an enhanced federated algorithm for FedReID. The benchmark analysis shows that the client-edge-cloud architecture, represented by the federated-by-dataset scenario, performs better than the client-server architecture in FedReID. It also reveals the bottlenecks of FedReID under real-world conditions, including the poor performance of large datasets caused by unbalanced weights in model aggregation, and challenges in convergence. We then propose two optimization methods: (1) to address the unbalanced-weight problem, we propose a new method that dynamically changes the weights according to the scale of model changes in clients in each training round; (2) to facilitate convergence, we adopt knowledge distillation to refine the server model with knowledge generated from client models on a public dataset. Experimental results demonstrate that our strategies achieve much better convergence with superior performance on all datasets. We believe that our work will inspire the community to further explore the implementation of federated learning for more computer vision tasks in real-world scenarios. Comment: ACMMM'20.
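
    As a hedged sketch of the first optimization above: weighting clients by the magnitude of their round-to-round model change instead of by dataset size. The exact change measure used in the paper (e.g., norm versus cosine distance) is an assumption here:

    ```python
    import numpy as np

    def aggregate_by_change(global_w, client_ws, eps=1e-12):
        """FedAvg-style aggregation, but coefficients follow each client's update magnitude."""
        # global_w, client_ws: flattened parameter vectors as NumPy arrays.
        changes = np.array([np.linalg.norm(w - global_w) for w in client_ws])
        coeffs = changes / (changes.sum() + eps)       # normalize to sum to 1
        return sum(c * w for c, w in zip(coeffs, client_ws))
    ```

    Unlike size-proportional FedAvg coefficients, this keeps one very large dataset from dominating aggregation when its model barely changed in a round.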