FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning
Recent advancements in federated learning (FL) seek to increase client-level
performance by fine-tuning client parameters on local data or personalizing
architectures for the local task. Existing methods for such personalization
either prune a global model or fine-tune it on a local client distribution.
These approaches, however, either personalize at the expense of important
global knowledge or predetermine the network layers to fine-tune, storing
global knowledge suboptimally within client models. Inspired by the lottery
ticket hypothesis, we first introduce a hypothesis for finding optimal client
subnetworks to fine-tune locally while keeping the remaining parameters
frozen. Building on this procedure, we then propose FedSelect, a novel FL
framework that directly personalizes both client subnetwork structure and
parameters, simultaneously discovering the parameters best suited for
personalization and the remaining parameters for global aggregation during
training. We show that this method achieves promising results on CIFAR-10.

Comment: We are still expanding this work.
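To make the selection mechanism concrete, below is a minimal PyTorch sketch of one way a lottery-ticket-style parameter split could work: entries that drift most from the global weights during local fine-tuning are marked as personal, and the rest remain global for server-side aggregation. The function names (select_personal_mask, merge), the drift-magnitude criterion, and the frac parameter are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch only: a lottery-ticket-style split of parameters into a
# personalized subnetwork and a globally aggregated remainder. The selection
# criterion (largest drift from the global weights after local fine-tuning)
# is an assumption, not FedSelect's published algorithm.
import torch


def select_personal_mask(global_params, local_params, frac=0.1):
    """Mark the `frac` of each tensor's entries that drifted most during
    local fine-tuning as personal; everything else stays global."""
    masks = {}
    for name, g in global_params.items():
        drift = (local_params[name] - g).abs()
        k = max(1, int(frac * drift.numel()))
        # Threshold at the k-th largest drift magnitude within this tensor.
        thresh = drift.flatten().topk(k).values.min()
        masks[name] = drift >= thresh
    return masks


def merge(global_params, local_params, masks):
    """Client model: personal entries keep their local values; the rest are
    overwritten by the (server-aggregated) global values."""
    return {
        name: torch.where(masks[name], local_params[name], global_params[name])
        for name in global_params
    }


if __name__ == "__main__":
    g = {"w": torch.zeros(4, 4)}   # stand-in global weights
    l = {"w": torch.randn(4, 4)}   # stand-in locally fine-tuned weights
    m = select_personal_mask(g, l, frac=0.25)
    print(merge(g, l, m)["w"])     # roughly a quarter of entries stay local
```

In a full FL round under this sketch, only the unmasked (global) entries would be communicated to the server for averaging, so the discovered mask determines both which parameters are personalized and what each client contributes to aggregation.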