Federated learning (FL) is a distributed machine learning paradigm in which a
large number of clients coordinate with a central server to learn a model
without sharing their own training data. A single central server, however, can
be insufficient due to connectivity problems with clients. This paper
introduces a decentralized federated learning (DFL) model based on the
stochastic gradient descent (SGD) algorithm as a more scalable approach to
improving learning performance in a network of agents with arbitrary topology.
Three scheduling policies for communication between the clients and the
parallel servers are proposed for DFL, and convergence, accuracy, and loss are
evaluated in a fully decentralized implementation of SGD. The experimental
results show that the proposed scheduling policies affect both the speed of
convergence and the final global model.

Comment: 32nd International Conference on Computer Theory and Applications
(ICCTA), Alexandria, Egypt, 202
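
To make the general setting concrete, below is a minimal sketch of
decentralized SGD with neighbor (gossip) averaging over an arbitrary
topology. It is illustrative only, not the paper's implementation: the
four-agent graph, the synthetic linear-regression data, and the use of
Metropolis-Hastings mixing weights are all assumptions made for the example.

```python
# Minimal sketch of decentralized SGD with gossip averaging.
# Topology, data, and hyperparameters are illustrative assumptions,
# not the experimental setup of the paper.
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary undirected topology as an adjacency list (a 4-agent ring).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
n_agents = len(neighbors)
dim = 5

# Each agent holds its own synthetic linear-regression data.
true_w = rng.normal(size=dim)
data = []
for _ in range(n_agents):
    X = rng.normal(size=(100, dim))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    data.append((X, y))

# Metropolis-Hastings weights give a doubly stochastic mixing matrix
# for any connected undirected graph.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in neighbors[i]:
        W[i, j] = 1.0 / (1 + max(len(neighbors[i]), len(neighbors[j])))
    W[i, i] = 1.0 - W[i].sum()

w = np.zeros((n_agents, dim))  # one parameter vector per agent
lr = 0.05
for step in range(200):
    # Local SGD step on a minibatch at every agent.
    grads = np.zeros_like(w)
    for i, (X, y) in enumerate(data):
        idx = rng.choice(len(y), size=10, replace=False)
        Xb, yb = X[idx], y[idx]
        grads[i] = Xb.T @ (Xb @ w[i] - yb) / len(yb)
    w -= lr * grads
    # Gossip averaging with neighbors: the communication round whose
    # timing and participants a scheduling policy would govern.
    w = W @ w

print("max deviation from true weights:", np.abs(w - true_w).max())
```

In this sketch every agent communicates every round; a scheduling policy of
the kind studied in the paper would instead select which links or agents
participate in each averaging step, which is what drives the differences in
convergence speed and final model quality.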