Federated learning has been widely applied in autonomous driving, since it
enables vehicles to collaboratively train a model without sharing users' data.
However, data collected by autonomous vehicles are usually
non-independent and identically distributed (non-IID), which can hinder the
convergence of the learning process. In this paper, we propose a new
contrastive divergence loss that addresses the non-IID problem in autonomous
driving by reducing the impact of divergence factors from models transmitted
to each silo during its local learning process. We also
analyze the effects of contrastive divergence in various autonomous driving
scenarios, under multiple network infrastructures, and with different
centralized/distributed learning schemes. Extensive experiments on three
datasets demonstrate that our proposed contrastive divergence loss
improves performance over current state-of-the-art approaches.
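
The abstract does not give the loss formulation, so the sketch below is only a
rough illustration of how a divergence-reducing regularizer could be wired into
a silo's local update: the model received from peers or a server is frozen, and
a penalty discourages the local model from drifting away from it on local
(non-IID) data. The function and parameter names (local_update_step, lambda_cd)
and the KL-based penalty are assumptions for illustration, not the authors'
actual contrastive divergence loss.

```python
import torch
import torch.nn.functional as F

def local_update_step(local_model, transmitted_model, batch, optimizer,
                      lambda_cd=0.1):
    """One local training step with an illustrative divergence regularizer.

    Hypothetical stand-in: the 'divergence factor' of the transmitted model
    is approximated by the KL divergence between the two models' predictive
    distributions on the local batch.
    """
    inputs, targets = batch
    transmitted_model.eval()  # frozen copy received from peers/server

    logits_local = local_model(inputs)
    with torch.no_grad():
        logits_remote = transmitted_model(inputs)

    # Standard local task objective (classification assumed for simplicity).
    task_loss = F.cross_entropy(logits_local, targets)

    # Penalize divergence between the local model's predictions and those of
    # the transmitted model, damping non-IID drift (assumed formulation).
    cd_loss = F.kl_div(F.log_softmax(logits_local, dim=-1),
                       F.softmax(logits_remote, dim=-1),
                       reduction="batchmean")

    loss = task_loss + lambda_cd * cd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, lambda_cd trades off fitting the local data against staying
consistent with the transmitted model; how the paper balances these terms is
not stated in the abstract.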