In the last decade, Federated Learning (FL) has gained prominence for training
collaborative models without sharing sensitive data. Since its inception,
Centralized FL (CFL) has been the most common approach in the literature, where
a central entity creates a global model. However, a centralized approach leads
to increased latency caused by communication bottlenecks, heightened
vulnerability to system failures, and trustworthiness concerns regarding the
entity responsible for creating the global model. Decentralized Federated
Learning (DFL) emerged to
address these concerns by promoting decentralized model aggregation and
minimizing reliance on centralized architectures. Nevertheless, despite the
work done on DFL, the literature has not (i) studied the main aspects
differentiating DFL and CFL; (ii) analyzed the DFL frameworks used to create and
evaluate new solutions; or (iii) reviewed the application scenarios using DFL.
Thus, this article identifies and analyzes the main fundamentals of DFL in
terms of federation architectures, topologies, communication mechanisms,
security approaches, and key performance indicators. Additionally, it explores
existing mechanisms to optimize critical DFL fundamentals. Then, it reviews and
compares the most relevant features of current DFL frameworks. After that, it
analyzes the most widely used DFL application scenarios,
identifying solutions based on the fundamentals and frameworks previously
defined. Finally, it studies the evolution of existing DFL solutions to provide
a list of trends, lessons learned, and open challenges.