    BePOCH: Improving federated learning performance in resource-constrained computing devices

    Inference with trained machine learning models is now possible on small computing devices, whereas only a few years ago it ran mostly in the cloud. The recent technique of Federated Learning now makes it possible to also train machine learning models on small devices, by distributing the computing effort of the training over many distributed machines. However, training on these low-capacity devices takes a long time and often consumes all of the device's available CPU resources. For Federated Learning to be practical on low-capacity devices, the training process must therefore not only target the highest accuracy but also reduce the training time and the resource consumption. In this paper, we present an approach that uses a dynamic epoch parameter in the model training. We propose the BePOCH (Best Epoch) algorithm to identify the best number of epochs per training round in Federated Learning. In experiments with medical datasets, we show that with the number of epochs suggested by BePOCH, training time and resource consumption decrease while the level of accuracy is maintained. BePOCH thus makes machine learning model training on low-capacity devices more feasible and, furthermore, decreases the overall resource consumption of the training process, an important aspect towards greener machine learning techniques.

    This work was partially funded by the Spanish Government under contracts PID2019-106774RB-C21, PCI2019-111850-2 (DiPET CHIST-ERA), PCI2019-111851-2 (LeadingEdge CHIST-ERA), and by the Generalitat de Catalunya as Consolidated Research Group 2017-SGR-990. Support was also given by the Agency for Electronic Communications (AEK) of North Macedonia.

    Peer Reviewed

    Postprint (author's final draft)
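    The abstract describes a federated training loop in which the number of local epochs is chosen dynamically per round rather than fixed. The sketch below illustrates that idea on a toy task; it is not the authors' code, and the concrete selection rule used here (shrink the epoch budget once the round-over-round loss improvement stalls) is a hypothetical stand-in, since the abstract does not specify BePOCH's actual rule. All names (make_client_data, local_train, best_epochs) and parameters (tol, lr) are illustrative assumptions.

    ```python
    # Illustrative sketch of FedAvg-style training with a dynamic
    # per-round epoch count, in the spirit of BePOCH (assumptions noted).
    import random

    def make_client_data(n=200, w_true=1.5, b_true=-0.3):
        """Synthetic 1-D linear data per client (toy task, an assumption)."""
        xs = [random.uniform(-1, 1) for _ in range(n)]
        ys = [w_true * x + b_true + random.gauss(0, 0.1) for x in xs]
        return xs, ys

    def local_train(w, b, data, epochs, lr=0.05):
        """Plain SGD on one client's data for the given number of epochs."""
        xs, ys = data
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                err = (w * x + b) - y
                w -= lr * err * x
                b -= lr * err
        return w, b

    def mse(w, b, data):
        xs, ys = data
        return sum(((w * x + b) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    def best_epochs(epochs, losses, min_epochs=1, tol=1e-4):
        """Hypothetical epoch-selection rule: if the last round barely
        reduced the loss, extra local epochs no longer pay off, so shrink
        the epoch budget to save CPU time on the constrained devices."""
        if len(losses) >= 2 and losses[-2] - losses[-1] < tol and epochs > min_epochs:
            return epochs - 1
        return epochs

    clients = [make_client_data() for _ in range(5)]
    test = make_client_data(n=500)
    w, b, epochs, losses = 0.0, 0.0, 5, []

    for rnd in range(15):
        # Every client starts from the global model and trains locally
        # for the dynamically chosen number of epochs.
        results = [local_train(w, b, d, epochs) for d in clients]
        # FedAvg-style aggregation: average the client parameters.
        w = sum(r[0] for r in results) / len(results)
        b = sum(r[1] for r in results) / len(results)
        losses.append(mse(w, b, test))
        epochs = best_epochs(epochs, losses)
        print(f"round {rnd:2d}  epochs={epochs}  test MSE={losses[-1]:.5f}")
    ```

    In this sketch the epoch budget starts high and decays as the global model converges, which mirrors the abstract's claim that fewer epochs per round can preserve accuracy while cutting training time and CPU use on the devices.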