Federated Learning (FL) advances privacy-preserving
machine learning by decentralizing model training
to individual devices, ensuring data remains localized. However,
FL is not immune to privacy threats. This paper investigates the
specific risk of property inference attacks, where an adversary
infers whether a certain property is present in the training data,
even though this property is unrelated to the global model's training objective.
Through experiments with TensorFlow Federated, we replicate
and extend previous findings on property inference attacks,
validating their reproducibility, credibility, and robustness. Our
analysis reveals two key insights: property inference attacks are most effective during the initial training rounds, when model updates are largest, and the rarer the target property is in the dataset, the more effective the attack becomes, owing to the sharper contrast it induces in the model weights. Our results demonstrate that even without adversarial
modifications, significant privacy risks exist in FL systems,
highlighting the need for enhanced security measures to protect
sensitive information in FL environments.

This work was supported by the DANGER Strategic Project of Cybersecurity C062/23 and the ARTEMISA International Chair of Cybersecurity, funded by the Spanish National Institute of Cybersecurity through the European Union – NextGenerationEU and the Recovery, Transformation and Resilience Plan.