Federated learning (FL) is a method for training a model with distributed data from
numerous participants, such as IoT devices. It inherently assumes uniform
computational capacity among participants. In practice, however, participants'
computational resources vary due to conditions such as differing energy
budgets or concurrent unrelated tasks. It is necessary to reduce the
computation overhead for participants with limited computational resources;
otherwise, they would be unable to complete the full training process. To address
this computational heterogeneity,
in this paper we propose a strategy for
estimating local models without computationally intensive iterations. Building on
this strategy, we propose Computationally Customized Federated Learning (CCFL), which
allows each participant to decide, in each round, whether to perform conventional
local training or model estimation based on its current computational
resources. Both theoretical analysis and extensive experiments indicate that
CCFL converges at the same rate as FedAvg, which operates without resource constraints.
Furthermore, CCFL can be viewed as a computation-efficient extension of FedAvg
that retains model performance while considerably reducing computation
overhead.
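
To make the per-round decision concrete, the following minimal sketch simulates CCFL-style training on synthetic data. Note that the random resource check, the least-squares local objective, and the estimator that reuses a client's most recent update direction are all illustrative assumptions for this sketch; the abstract does not specify the paper's actual estimation strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(global_w, X, y, lr=0.1, epochs=5):
    """Conventional local training: a few epochs of gradient descent
    on a least-squares objective (a stand-in for the client's loss)."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def estimate_local_model(global_w, prev_update):
    """Hypothetical estimation step: reuse the client's most recent
    update direction instead of running new iterations. The paper's
    actual estimator may differ; this is only illustrative."""
    return global_w + prev_update

# Synthetic clients, each with its own linear-regression data.
n_clients, dim = 5, 10
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(50, dim))
    w_true = rng.normal(size=dim)
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append({"X": X, "y": y, "prev_update": np.zeros(dim)})

w_global = np.zeros(dim)
for rnd in range(20):
    local_models = []
    for c in clients:
        # Hypothetical resource check: a client trains only when it
        # currently has the budget for full local iterations.
        has_budget = rng.random() > 0.3
        if has_budget:
            w_i = local_train(w_global, c["X"], c["y"])
            c["prev_update"] = w_i - w_global  # remember the direction
        else:
            w_i = estimate_local_model(w_global, c["prev_update"])
        local_models.append(w_i)
    # FedAvg-style aggregation over all participants.
    w_global = np.mean(local_models, axis=0)
```

In this sketch every participant contributes a local model in every round, so the server-side aggregation is unchanged from FedAvg; only how the local model is produced (training versus estimation) varies with each client's budget.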