Local or Global: Selective Knowledge Assimilation for Federated Learning with Limited Labels
Many existing FL methods assume clients with fully labeled data, while in
realistic settings clients have limited labels due to the expensive and
laborious labeling process. Clients' limited labeled local data often leads to
local models that generalize poorly to the clients' larger unlabeled local
data, for example due to a class-distribution mismatch between the labeled and
unlabeled data. As a result, clients may instead look to benefit from the
global model trained across clients to leverage their unlabeled data, but this
also becomes difficult due to data heterogeneity across clients. In our work,
we propose FedLabel, where clients selectively choose the local or global model
to pseudo-label their unlabeled data, depending on which is more of an expert
on the data. We further utilize both the local and global models' knowledge via
global-local consistency regularization which minimizes the divergence between
the two models' outputs when they have identical pseudo-labels for the
unlabeled data. Unlike other semi-supervised FL baselines, our method neither
requires experts beyond the local and global models nor requires additional
parameters to be communicated. We also do not assume any
server-labeled data or fully labeled clients. For both cross-device and
cross-silo settings, we show that FedLabel outperforms other semi-supervised FL
baselines by -, and even outperforms standard fully supervised FL
baselines ( labeled data) with only - of labeled data.
Comment: To appear in the proceedings of ICCV 202
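The selective pseudo-labeling and global-local consistency regularization described in the abstract can be sketched as follows. This is a simplified illustration under my own assumptions, not the paper's exact method: here "which model is more of an expert" is stood in for by prediction confidence, and the consistency term is a symmetric KL divergence over examples where the two models' pseudo-labels agree; the function names and the threshold are hypothetical.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def select_pseudo_labels(local_logits, global_logits, threshold=0.8):
    """For each unlabeled example, take the pseudo-label from whichever
    model is more confident (a stand-in for FedLabel's expertise
    criterion), and keep the example only if that confidence clears
    the threshold."""
    p_local = softmax(local_logits)
    p_global = softmax(global_logits)
    conf_local = p_local.max(axis=-1)
    conf_global = p_global.max(axis=-1)
    use_local = conf_local >= conf_global
    conf = np.where(use_local, conf_local, conf_global)
    labels = np.where(use_local,
                      p_local.argmax(axis=-1),
                      p_global.argmax(axis=-1))
    mask = conf >= threshold  # which examples get a pseudo-label at all
    return labels, mask, use_local

def consistency_loss(local_logits, global_logits):
    """Global-local consistency regularization: penalize divergence
    between the two models' outputs, but only on examples where their
    pseudo-labels (argmax predictions) agree."""
    p_l = softmax(local_logits)
    p_g = softmax(global_logits)
    agree = p_l.argmax(axis=-1) == p_g.argmax(axis=-1)
    if not agree.any():
        return 0.0
    kl = lambda p, q: (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(-1)
    # symmetric KL is one simple choice of divergence here
    return float((kl(p_l, p_g) + kl(p_g, p_l))[agree].mean())
```

In a training loop, the masked pseudo-labels would feed a standard cross-entropy term on the unlabeled data, with `consistency_loss` added as a regularizer.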
Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models
Foundation models (FMs) adapt well to specific domains or tasks with
fine-tuning, and federated learning (FL) enables the potential for
privacy-preserving fine-tuning of the FMs with on-device local data. For
federated fine-tuning of FMs, we consider FMs of small to medium size (at most
single-digit billions of parameters), referred to as on-device FMs (ODFMs):
models that can be deployed on devices for inference but can only be
fine-tuned with parameter-efficient methods. In our work, we tackle the data
and system heterogeneity problem of federated fine-tuning of ODFMs by proposing
a novel method using heterogeneous low-rank approximations (LoRAs), namely
HetLoRA. First, we show that the naive approach of using homogeneous LoRA ranks
across devices faces a trade-off between overfitting and slow convergence. We
thus propose HetLoRA, which allows heterogeneous ranks across client devices
and efficiently aggregates and distributes these heterogeneous LoRA modules. By
applying rank self-pruning locally and sparsity-weighted aggregation at the
server, HetLoRA combines the advantages of high and low-rank LoRAs, which
achieves improved convergence speed and final performance compared to
homogeneous LoRA. Furthermore, HetLoRA offers enhanced computation efficiency
compared to full fine-tuning, making it suitable for federated fine-tuning
across heterogeneous devices.
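The server-side handling of heterogeneous LoRA ranks can be sketched as below. This is a minimal illustration under my own assumptions, not the paper's exact algorithm: client updates of rank r are zero-padded to the maximum rank before averaging, each client is weighted by the Frobenius norm of its low-rank product (one simple reading of "sparsity-weighted aggregation"), and distribution back to a client truncates to its rank. Rank self-pruning is omitted, and all function names are hypothetical.

```python
import numpy as np

def pad_lora(B, A, r_max):
    """Zero-pad a rank-r LoRA pair (B: d x r, A: r x m) up to rank r_max,
    so pairs of different ranks can be summed elementwise."""
    d, r = B.shape
    m = A.shape[1]
    B_pad = np.zeros((d, r_max))
    A_pad = np.zeros((r_max, m))
    B_pad[:, :r] = B
    A_pad[:r, :] = A
    return B_pad, A_pad

def aggregate_hetlora(pairs):
    """Aggregate heterogeneous-rank LoRA pairs: weight each client's
    padded update by the Frobenius norm of its product B @ A, so clients
    whose updates carry more signal count more (a simplified
    sparsity-weighted aggregation)."""
    r_max = max(B.shape[1] for B, A in pairs)
    norms = np.array([np.linalg.norm(B @ A) for B, A in pairs])
    weights = norms / norms.sum()
    B_agg = sum(w * pad_lora(B, A, r_max)[0] for w, (B, A) in zip(weights, pairs))
    A_agg = sum(w * pad_lora(B, A, r_max)[1] for w, (B, A) in zip(weights, pairs))
    return B_agg, A_agg

def distribute(B_agg, A_agg, r_k):
    """Send the aggregate back to a rank-r_k client by truncating to the
    leading r_k columns/rows (matching the zero-padding convention)."""
    return B_agg[:, :r_k], A_agg[:r_k, :]
```

The padded shapes make aggregation a plain weighted sum, while truncation on distribution keeps each device's memory and compute budget at its own rank.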