FedCIP: Federated Client Intellectual Property Protection with Traitor Tracking
Federated learning is an emerging privacy-preserving distributed machine
learning that enables multiple parties to collaboratively learn a shared model
while keeping each party's data private. However, federated learning faces two
main problems: semi-honest server privacy inference attacks and malicious
client-side model theft. To address privacy inference attacks, secure
aggregation with parameter encryption can be used. To address model theft,
watermark-based intellectual property (IP) protection schemes can verify model
ownership. Although such schemes help verify model ownership, they are not
sufficient to address continuous model theft by uncaught malicious clients in
federated learning. Moreover, existing IP protection schemes that can track
traitors are incompatible with federated learning secure aggregation.
Thus, in this paper, we propose Federated Client-side Intellectual Property
Protection (FedCIP), which is compatible with federated learning secure
aggregation and can track traitors. To the best of our knowledge, this is the
first IP protection scheme in federated learning that is compatible with secure
aggregation and offers traitor tracking.
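The idea of per-client watermarking with traitor identification can be illustrated with a toy sketch. Everything below (the sign-based embedding, the 64-bit keys, the 0.9 threshold) is a hypothetical stand-in, not FedCIP's actual construction, which additionally remains compatible with secure aggregation while this toy example is not:

```python
import numpy as np

def embed_watermark(params, key):
    """Force the signs of key-selected parameters to encode client-specific bits."""
    rng = np.random.default_rng(key)
    idx = rng.choice(params.size, size=64, replace=False)  # watermark positions
    bits = rng.integers(0, 2, size=64)                     # client-specific bits
    marked = params.copy()
    # + sign encodes bit 1, - sign encodes bit 0
    marked[idx] = np.abs(marked[idx]) * np.where(bits == 1, 1.0, -1.0)
    return marked

def verify_watermark(params, key, threshold=0.9):
    """Check what fraction of the key's bits survive in the model."""
    rng = np.random.default_rng(key)
    idx = rng.choice(params.size, size=64, replace=False)
    bits = rng.integers(0, 2, size=64)
    recovered = (params[idx] > 0).astype(int)
    return float(np.mean(recovered == bits)) >= threshold

params = np.random.default_rng(0).normal(size=1000)
marked = embed_watermark(params, key=42)
assert verify_watermark(marked, key=42)      # the embedding client's key matches
assert not verify_watermark(marked, key=7)   # an unrelated key matches only ~50%
```

Because each client's key selects different positions and bits, a stolen model can be traced back to the client whose key verifies, which is the traitor-tracking intuition.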
Federated Learning in Computer Vision
Federated Learning (FL) has recently emerged as a novel machine learning paradigm that preserves privacy and accounts for the distributed nature of the learning process in many real-world settings. Computer vision tasks deal with huge datasets that often raise critical privacy issues; therefore, many federated learning approaches have been proposed to exploit FL's distributed and privacy-preserving nature. Firstly, this paper introduces the different FL settings used in computer vision and the main challenges that need to be tackled. Then, it provides a comprehensive overview of the different strategies used for FL in vision applications and presents several approaches for image classification, object detection, semantic segmentation, and for focused settings in face recognition and medical imaging. For the various approaches, the considered FL setting, the employed data and methodologies, and the achieved results are thoroughly discussed.
SGDE: Secure Generative Data Exchange for Cross-Silo Federated Learning
Privacy regulation laws, such as GDPR, impose transparency and security as
design pillars for data processing algorithms. In this context, federated
learning is one of the most influential frameworks for privacy-preserving
distributed machine learning, achieving astounding results in many natural
language processing and computer vision tasks. Several federated learning
frameworks employ differential privacy to prevent private data leakage to
unauthorized parties and malicious attackers. Many studies, however, highlight
the vulnerabilities of standard federated learning to poisoning and inference,
thus raising concerns about potential risks for sensitive data. To address this
issue, we present SGDE, a generative data exchange protocol that improves user
security and machine learning performance in a cross-silo federation. The core
of SGDE is to share data generators with strong differential privacy guarantees
trained on private data instead of communicating explicit gradient information.
These generators synthesize an arbitrarily large amount of data that retain the
distinctive features of private samples but differ substantially. In this work,
SGDE is tested in a cross-silo federated network on images and tabular
datasets, exploiting beta-variational autoencoders as data generators. From the
results, the inclusion of SGDE improves task accuracy and fairness, as well as
resilience to the most influential attacks on federated learning.
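The exchange pattern described above, sharing generators rather than gradients, can be sketched minimally. Here a toy Gaussian generator with noise-perturbed statistics stands in for SGDE's beta-variational autoencoders with formal differential-privacy guarantees; all class and parameter names are illustrative assumptions:

```python
import numpy as np

class NoisyGaussianGenerator:
    """Toy per-silo generator: releases perturbed statistics, never raw data."""
    def __init__(self, data, noise_scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Perturb the released statistics; a real SGDE generator would carry
        # formal differential-privacy guarantees instead of this ad-hoc noise.
        self.mean = data.mean(axis=0) + rng.normal(0, noise_scale, data.shape[1])
        self.std = data.std(axis=0) + rng.normal(0, noise_scale, data.shape[1])

    def sample(self, n, seed=None):
        rng = np.random.default_rng(seed)
        return rng.normal(self.mean, np.abs(self.std), size=(n, self.mean.size))

# Two silos with private data; only the generators cross silo boundaries.
rng = np.random.default_rng(1)
silo_a = rng.normal(0.0, 1.0, size=(500, 4))
silo_b = rng.normal(3.0, 1.0, size=(500, 4))
generators = [NoisyGaussianGenerator(silo_a), NoisyGaussianGenerator(silo_b)]

# Each silo can now build an arbitrarily large pooled synthetic training set.
synthetic = np.vstack([g.sample(1000, seed=2) for g in generators])
assert synthetic.shape == (2000, 4)
```

The key property is that the synthetic pool reflects every silo's data distribution while explicit gradients, and the inference attacks they enable, never leave a silo.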
A Comparative Evaluation of FedAvg and Per-FedAvg Algorithms for Dirichlet Distributed Heterogeneous Data
In this paper, we investigate Federated Learning (FL), a paradigm of machine
learning that allows for decentralized model training on devices without
sharing raw data, thereby preserving data privacy. In particular, we compare
two strategies within this paradigm: Federated Averaging (FedAvg) and
Personalized Federated Averaging (Per-FedAvg), focusing on their performance
with Non-Identically and Independently Distributed (Non-IID) data. Our analysis
shows that the level of data heterogeneity, modeled using a Dirichlet
distribution, significantly affects the performance of both strategies, with
Per-FedAvg showing superior robustness in conditions of high heterogeneity. Our
results provide insights into the development of more effective and efficient
machine learning strategies in a decentralized setting. Comment: 6 pages, 5 figures, conference.
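The Dirichlet-based heterogeneity model used in this comparison is a standard recipe: per-class sample proportions are drawn from a Dirichlet distribution, where a small concentration parameter alpha yields highly heterogeneous clients and a large alpha approaches IID splits. A minimal sketch of such a partitioner (function name and defaults are my own, not the paper's code):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) class mixtures."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = np.where(labels == cls)[0]
        rng.shuffle(cls_idx)
        # Fraction of this class assigned to each client
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props) * len(cls_idx)).astype(int)[:-1]
        for client, chunk in enumerate(np.split(cls_idx, cuts)):
            clients[client].extend(chunk.tolist())
    return clients

labels = np.repeat(np.arange(10), 100)            # 10 classes, 100 samples each
parts = dirichlet_partition(labels, num_clients=5, alpha=0.1)
assert sum(len(p) for p in parts) == len(labels)  # every sample assigned once
```

With alpha=0.1 most clients end up dominated by a few classes, which is the high-heterogeneity regime where Per-FedAvg's personalization is reported to help most.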
Over-the-Air Federated Learning In Broadband Communication
Federated learning (FL) is a privacy-preserving distributed machine learning
paradigm that operates at the wireless edge. It enables clients to collaborate
on model training while keeping their data private from adversaries and the
central server. However, current FL approaches have limitations. Some rely on
secure multiparty computation, which can be vulnerable to inference attacks.
Others employ differential privacy, but this may lead to decreased test
accuracy when dealing with a large number of parties contributing small amounts
of data. To address these issues, this paper proposes a novel approach that
integrates federated learning seamlessly into the inner workings of MIMO
(Multiple-Input Multiple-Output) systems.
PrivacyFL: A simulator for privacy-preserving and secure federated learning
Federated learning is a technique that enables distributed clients to
collaboratively learn a shared machine learning model while keeping their
training data localized. This reduces data privacy risks, however, privacy
concerns still exist since it is possible to leak information about the
training dataset from the trained model's weights or parameters. Setting up a
federated learning environment, especially with security and privacy
guarantees, is a time-consuming process with numerous configurations and
parameters that can be manipulated. In order to help clients ensure that
collaboration is feasible and to check that it improves their model accuracy, a
real-world simulator for privacy-preserving and secure federated learning is
required. In this paper, we introduce PrivacyFL, which is an extensible, easily
configurable and scalable simulator for federated learning environments. Its
key features include latency simulation, robustness to client departure,
support for both centralized and decentralized learning, and configurable
privacy and security mechanisms based on differential privacy and secure
multiparty computation. In this paper, we motivate our research, describe the
architecture of the simulator and associated protocols, and discuss its
evaluation in numerous scenarios that highlight its wide range of functionality
and its advantages. Our paper addresses a significant real-world problem:
checking the feasibility of participating in a federated learning environment
under a variety of circumstances. It also has a strong practical impact because
organizations such as hospitals, banks, and research institutes, which have
large amounts of sensitive data and would like to collaborate, would greatly
benefit from having a system that enables them to do so in a privacy-preserving
and secure manner.
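The kind of configurable differential-privacy mechanism such a simulator lets clients toggle can be sketched as clip-and-noise on local updates before aggregation. The function names and parameters below are hypothetical illustrations, not PrivacyFL's actual API:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_sigma=0.1, seed=0):
    """Clip a client's model update to clip_norm, then add Gaussian noise."""
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, noise_sigma * clip_norm, update.shape)

def aggregate(updates):
    """Server-side averaging of the privatized client updates."""
    return np.mean(updates, axis=0)

# Ten simulated clients each privatize their update before sending it.
client_updates = [np.random.default_rng(i).normal(size=8) for i in range(10)]
private = [privatize_update(u, seed=i) for i, u in enumerate(client_updates)]
global_step = aggregate(private)
assert global_step.shape == (8,)
```

A simulator built around hooks like these makes it easy to sweep `clip_norm` and `noise_sigma` and observe the privacy/accuracy trade-off before committing to a real deployment.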
Over-the-Air Federated Learning in Satellite systems
Federated learning in satellites offers several advantages. Firstly, it
ensures data privacy and security, as sensitive data remains on the satellites
and is not transmitted to a central location. This is particularly important
when dealing with sensitive or classified information. Secondly, federated
learning allows satellites to collectively learn from a diverse set of data
sources, benefiting from the distributed knowledge across the satellite
network. Lastly, the use of federated learning reduces the communication
bandwidth requirements between satellites and the central server, as only model
updates are exchanged instead of raw data. By leveraging federated learning,
satellites can collaborate and continuously improve their machine learning
models while preserving data privacy and minimizing communication overhead.
This enables the development of more intelligent and efficient satellite
systems for various applications, such as Earth observation, weather
forecasting, and space exploration.