SGDE: Secure Generative Data Exchange for Cross-Silo Federated Learning
Privacy regulations, such as the GDPR, impose transparency and security as
design pillars for data processing algorithms. In this context, federated
learning is one of the most influential frameworks for privacy-preserving
distributed machine learning, achieving astounding results in many natural
language processing and computer vision tasks. Several federated learning
frameworks employ differential privacy to prevent private data leakage to
unauthorized parties and malicious attackers. Many studies, however, highlight
the vulnerabilities of standard federated learning to poisoning and inference,
thus raising concerns about potential risks for sensitive data. To address this
issue, we present SGDE, a generative data exchange protocol that improves user
security and machine learning performance in a cross-silo federation. The core
of SGDE is to share data generators with strong differential privacy guarantees
trained on private data instead of communicating explicit gradient information.
These generators synthesize an arbitrarily large amount of data that retain the
distinctive features of private samples but differ substantially. In this work,
SGDE is tested in a cross-silo federated network on image and tabular
datasets, using beta-variational autoencoders as data generators. The results
show that SGDE improves task accuracy and fairness, as well as resilience to
the most influential attacks on federated learning.
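The core exchange mechanism described above — each silo trains a generator on its private data, only generator parameters cross silo boundaries, and recipients sample synthetic data to augment their own training sets — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: a per-class Gaussian stands in for the beta-variational autoencoder, and the noise added to the shared statistics is only a placeholder for the paper's formal differential-privacy guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianGenerator:
    """Toy stand-in for a DP-trained beta-VAE: fits per-class statistics on a
    silo's private data and can synthesize arbitrarily many samples."""

    def __init__(self, noise_scale=0.1):
        # Crude perturbation of shared parameters; a placeholder for real DP.
        self.noise_scale = noise_scale
        self.stats = {}

    def fit(self, X, y):
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            sigma = Xc.std(axis=0) + 1e-6
            # Perturb the statistics that leave the silo; raw samples never do.
            mu = mu + rng.normal(0, self.noise_scale, size=mu.shape)
            self.stats[int(c)] = (mu, sigma)
        return self

    def sample(self, n_per_class):
        Xs, ys = [], []
        for c, (mu, sigma) in self.stats.items():
            Xs.append(rng.normal(mu, sigma, size=(n_per_class, mu.shape[0])))
            ys.append(np.full(n_per_class, c))
        return np.vstack(Xs), np.concatenate(ys)

# Two silos with disjoint private data; only the fitted generators are shared.
X0 = rng.normal([0.0, 0.0], 1.0, size=(200, 2)); y0 = np.zeros(200)
X1 = rng.normal([4.0, 4.0], 1.0, size=(200, 2)); y1 = np.ones(200)

gen1 = GaussianGenerator().fit(X1, y1)  # trained inside silo 1

# Silo 0 augments its local data with synthetic samples from silo 1's generator.
Xs, ys = gen1.sample(200)
X_train = np.vstack([X0, Xs]); y_train = np.concatenate([y0, ys])
print(X_train.shape)  # (400, 2)
```

Because only generator parameters are exchanged, the synthetic data volume can be scaled arbitrarily on the receiving side, which is what enables the accuracy and fairness gains the abstract reports.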
Deep Learning-based Target-To-User Association in Integrated Sensing and Communication Systems
In Integrated Sensing and Communication (ISAC) systems, matching the radar
targets with communication user equipments (UEs) is instrumental in several
communication tasks, such as proactive handover and beam prediction. In this
paper, we consider a radar-assisted communication system where a base station
(BS) is equipped with a multiple-input-multiple-output (MIMO) radar that has a
twofold aim: (i) to associate vehicular radar targets with vehicular
equipments (VEs) in the communication beamspace and (ii) to predict the
beamforming vector for each
VE from radar data. The proposed target-to-user (T2U) association consists of
two stages. First, vehicular radar targets are detected from range-angle
images, and, for each, a beamforming vector is estimated. Then, the inferred
per-target beamforming vectors are matched with the ones utilized at the BS for
communication to perform the T2U association. Joint multi-target
detection and beam inference is obtained by modifying the you only look once
(YOLO) model, which is trained over simulated range-angle radar images.
Simulation results over different urban vehicular mobility scenarios show that
the proposed T2U method provides a probability of correct association that
increases with the size of the BS antenna array, reflecting the corresponding
increase in the separability of the VEs in the beamspace. Moreover, we show
that the modified YOLO architecture can effectively perform both beam
prediction and radar target detection, with similar performance in mean average
precision on the latter across different antenna array sizes.
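The second stage described above — matching the per-target beamforming vectors inferred from radar against the ones in use at the BS — can be cast as a linear assignment problem. The sketch below is an assumed formulation, not the paper's exact procedure: each radar-target/VE pair is scored by beam correlation for a uniform linear array, and the assignment is solved with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

def steering(angle_deg, n_ant):
    """Steering vector of a uniform linear array with half-wavelength spacing."""
    k = np.arange(n_ant)
    phase = np.pi * k * np.sin(np.deg2rad(angle_deg))
    return np.exp(1j * phase) / np.sqrt(n_ant)

n_ant = 32
ve_angles = np.array([-30.0, 5.0, 40.0])  # directions of the VEs (hypothetical)
bs_beams = np.stack([steering(a, n_ant) for a in ve_angles])  # beams in use at the BS

# Radar-inferred beams: noisy angle estimates, arriving in unknown (shuffled) order.
perm = rng.permutation(len(ve_angles))
est_angles = ve_angles[perm] + rng.normal(0.0, 1.0, size=len(ve_angles))
radar_beams = np.stack([steering(a, n_ant) for a in est_angles])

# Score each (radar target, BS beam) pair by beam correlation |f_i^H w_j|;
# maximize total correlation by minimizing the negated cost matrix.
score = np.abs(radar_beams.conj() @ bs_beams.T)
rows, cols = linear_sum_assignment(-score)

# cols[i] is the VE assigned to radar target i; it recovers the hidden shuffle.
print(np.array_equal(cols, perm))  # True
```

With a larger antenna array the beams narrow, the off-diagonal entries of the score matrix shrink, and the assignment becomes more reliable — the same mechanism behind the abstract's observation that correct-association probability grows with array size.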