Federated Learning (FL) has recently been receiving increasing attention
from the cybersecurity community as a way to collaboratively train deep
learning models on distributed profiles of cyber threats without disclosing
the training data. Nevertheless, the adoption of FL in cybersecurity is still
in its infancy, and a range of practical aspects have not yet been properly
addressed. Indeed, the Federated Averaging algorithm at the core of the FL concept
requires the availability of test data to control the FL process. Although this
might be feasible in some domains, the test network traffic of newly discovered
attacks cannot always be shared without disclosing sensitive information. In
this paper, we address the convergence of the FL process in dynamic
cybersecurity scenarios, where the trained model must be frequently updated
with the profiles of newly discovered attacks to provide all members of the
federation with the latest detection features. To this end, we propose FLAD
(adaptive Federated Learning Approach to DDoS attack detection), an FL solution
for cybersecurity applications based on an adaptive mechanism that orchestrates
the FL process by dynamically assigning more computation to those members whose
attack profiles are harder to learn, without the need to share any test data to monitor the
performance of the trained model. Using a recent dataset of DDoS attacks, we
demonstrate that FLAD outperforms the original FL algorithm in terms of
convergence time and accuracy across a range of unbalanced datasets of
heterogeneous DDoS attacks. We also show the robustness of our approach in a
realistic scenario in which we retrain the deep learning model multiple times
to introduce the profiles of new attacks into a pre-trained model.
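
To make the adaptive orchestration idea concrete, the following is a minimal sketch of one way a server could assign more local computation to the clients whose attack profiles are harder to learn, using only accuracies that clients report on their own local validation data (so no test traffic is ever shared). This is an illustration under stated assumptions, not FLAD's actual algorithm or API; names such as `assign_local_epochs` and `MAX_EPOCHS` are hypothetical.

```python
# Sketch of adaptive epoch allocation for federated training (assumption:
# each client reports the global model's accuracy on its own validation
# split; the server never sees any test data).
import numpy as np

MAX_EPOCHS = 5  # hypothetical cap on local epochs per round


def assign_local_epochs(local_accuracy: dict[str, float]) -> dict[str, int]:
    """Give more local epochs to clients with lower reported accuracy."""
    errors = {c: 1.0 - acc for c, acc in local_accuracy.items()}
    max_err = max(errors.values()) or 1.0  # guard against division by zero
    return {c: max(1, round(MAX_EPOCHS * err / max_err))
            for c, err in errors.items()}


def federated_average(client_weights: list[list[np.ndarray]]) -> list[np.ndarray]:
    """Plain layer-wise average of the clients' model parameters."""
    return [np.mean(layer, axis=0) for layer in zip(*client_weights)]


# Example round: client "c2" reports the weakest accuracy on its own
# validation traffic, so it receives the largest epoch budget.
reported = {"c0": 0.97, "c1": 0.90, "c2": 0.75}
print(assign_local_epochs(reported))  # -> {'c0': 1, 'c1': 2, 'c2': 5}
```

In this toy version, the epoch budget is proportional to each client's error relative to the worst-performing client, so members whose local attack profiles are already well modeled spend less computation while harder profiles receive more, which is the intuition behind the adaptive mechanism described in the abstract.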