Recent research indicates that federated learning (FL) systems are
vulnerable to a variety of security attacks. Although numerous defense strategies
have been proposed, most are designed to counter specific attack
patterns and lack adaptability, rendering them less effective against
uncertain or adaptive threats. To address this lack of adaptability, this work
models adversarial FL as a Bayesian Stackelberg Markov game (BSMG) between the
defender and the attacker. We further devise an
effective meta-learning technique to solve for the Stackelberg equilibrium,
leading to a resilient and adaptable defense. Experimental results suggest
that our meta-Stackelberg learning approach excels at combating strong model
poisoning and backdoor attacks of uncertain types.
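To give a rough sense of the meta-learning component described above, the Python sketch below pre-trains a defense parameter vector over a prior of attack types so that a few adaptation steps suffice against whichever attack is realized at deployment. It is an illustrative assumption throughout, not the paper's actual algorithm: the toy quadratic defense loss, the attack prior, the `adapt` helper, and the Reptile-style first-order meta-update all stand in for the BSMG simulator and equilibrium-seeking machinery the work actually uses.

```python
# Minimal sketch of meta-learning a defense over uncertain attack types.
# All quantities here are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Prior over attack types: each type shifts the optimum of a toy
# quadratic defense loss L(theta; a) = ||theta - a||^2.
attack_prior = [rng.normal(size=4) for _ in range(5)]

def defense_loss(theta, attack):
    return float(np.sum((theta - attack) ** 2))

def loss_grad(theta, attack):
    return 2.0 * (theta - attack)

def adapt(theta, attack, inner_lr=0.1, steps=3):
    """Inner loop: the defender adapts to one sampled attack type."""
    for _ in range(steps):
        theta = theta - inner_lr * loss_grad(theta, attack)
    return theta

# Outer loop: meta-train an initialization that adapts quickly to any
# attack drawn from the prior (first-order update in the spirit of
# Reptile, used here instead of exact second-order meta-gradients).
theta = np.zeros(4)
meta_lr = 0.05
for _ in range(500):
    attack = attack_prior[rng.integers(len(attack_prior))]
    adapted = adapt(theta, attack)
    theta = theta + meta_lr * (adapted - theta)  # move toward adapted params

# At deployment, a few inner steps against the realized (unknown) attack
# suffice because theta sits near all attack-specific optima.
test_attack = rng.normal(size=4)
print("pre-adapt loss :", defense_loss(theta, test_attack))
print("post-adapt loss:", defense_loss(adapt(theta, test_attack), test_attack))
```

In this toy setting, the meta-trained initialization yields a much lower loss after three adaptation steps than before them, mirroring (under the stated assumptions) how a pre-trained defense policy can adjust online to an attack of indeterminate type.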