Applications in vehicular networks stand to benefit from the vision of
beyond-5G and 6G technologies, such as ultra-dense network topologies, low
latency, and high data rates. Vehicular networks have long faced data privacy
concerns, which led to the advent of distributed learning techniques such as
federated learning. Although federated learning has mitigated data privacy
issues to some extent, the technique remains quite vulnerable to model
inversion and model poisoning attacks. We posit that the design of defense
mechanisms and attacks are two sides of the same coin.
Designing a defense against a vulnerability requires an attack that is
effective, challenging, and carries real-world implications. In this work, we
propose the simulated poisoning and inversion network (SPIN), which leverages
an optimization-based approach to reconstruct data from a differential model
trained by a vehicular node and intercepted during its transmission to the
roadside unit (RSU). We then train a generative adversarial network (GAN) to
improve the data generation with each passing round and each global update
received from the RSU.
Evaluation results show the qualitative and quantitative effectiveness of the
proposed approach. The attack initiated by SPIN can reduce accuracy by up to
22% on publicly available datasets while using just a single attacker. We
believe that simulating and revealing such attacks will help in designing
effective defense mechanisms.

Comment: 6 pages, 4 figures
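The inversion idea the abstract alludes to, recovering private training data from an intercepted gradient update, can be illustrated with a minimal toy sketch. This is an assumption-laden illustration of the general gradient-inversion principle, not the paper's SPIN method: for a single-layer softmax classifier, each row of the weight gradient is a scalar multiple of the input, so an eavesdropper can recover the input exactly from the transmitted gradients.

```python
# Toy illustration (assumption: single-layer softmax classifier, NumPy only)
# of gradient inversion: an attacker who intercepts a client's gradient
# update can reconstruct the private training input.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, k = 8, 3                      # input dimension, number of classes
W, b = rng.normal(size=(k, d)), rng.normal(size=k)
x = rng.normal(size=d)           # private training sample (unknown to attacker)
y = np.eye(k)[1]                 # one-hot label

# The client computes cross-entropy gradients and transmits them upstream.
p = softmax(W @ x + b)
grad_W = np.outer(p - y, x)      # dL/dW = (p - y) x^T
grad_b = p - y                   # dL/db = (p - y)

# Attacker intercepts (grad_W, grad_b): each row of dL/dW is a scalar
# multiple of x, so dividing by the matching bias gradient recovers x.
i = np.argmax(np.abs(grad_b))    # pick the best-conditioned row
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x, x_reconstructed))  # → True: the private input is recovered
```

For deeper models no such closed form exists, which is why optimization-based attacks instead fit a dummy input whose gradients match the intercepted ones; the sketch above only shows why gradient updates leak information in the first place.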