753 research outputs found
Trustworthy Federated Learning: A Survey
Federated Learning (FL) has emerged as a significant advancement in the field
of Artificial Intelligence (AI), enabling collaborative model training across
distributed devices while maintaining data privacy. As the importance of FL
increases, addressing trustworthiness issues in its various aspects becomes
crucial. In this survey, we provide an extensive overview of the current state
of Trustworthy FL, exploring existing solutions and well-defined pillars
relevant to Trustworthy FL. Despite the growth in literature on trustworthy
centralized Machine Learning (ML)/Deep Learning (DL), further efforts are
necessary to identify trustworthiness pillars and evaluation metrics specific
to FL models, as well as to develop solutions for computing trustworthiness
levels. We propose a taxonomy that encompasses three main pillars:
Interpretability, Fairness, and Security & Privacy. Each pillar represents a
dimension of trust, further broken down into different notions. Our survey
covers trustworthiness challenges at every level in FL settings. We present a
comprehensive architecture of Trustworthy FL, addressing the fundamental
principles underlying the concept, and offer an in-depth analysis of trust
assessment mechanisms. In conclusion, we identify key research challenges
related to every aspect of Trustworthy FL and suggest future research
directions. This comprehensive survey serves as a valuable resource for
researchers and practitioners working on the development and implementation of
Trustworthy FL systems, contributing to a more secure and reliable AI
landscape. Comment: 45 pages, 8 figures, 9 tables
Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms
The advent of federated learning has facilitated large-scale data exchange
amongst machine learning models while maintaining privacy. Despite its brief
history, federated learning is rapidly evolving to make wider use more
practical. One of the most significant advancements in this domain is the
incorporation of transfer learning into federated learning, which overcomes
fundamental constraints of primary federated learning, particularly in terms of
security. This chapter performs a comprehensive survey on the intersection of
federated and transfer learning from a security point of view. The main goal of
this study is to uncover potential vulnerabilities and defense mechanisms that
might compromise the privacy and performance of systems that use federated and
transfer learning. Comment: Accepted for publication in the edited book "Federated and
Transfer Learning", Springer, Cha
Data Valuation for Vertical Federated Learning: A Model-free and Privacy-preserving Method
Vertical Federated Learning (VFL) is a promising paradigm for predictive
analytics, empowering an organization (i.e., task party) to enhance its
predictive models through collaborations with multiple data suppliers (i.e.,
data parties) in a decentralized and privacy-preserving way. Despite the
fast-growing interest in VFL, the lack of effective and secure tools for
assessing the value of data owned by data parties hinders the application of
VFL in business contexts. In response, we propose FedValue, a
privacy-preserving, task-specific but model-free data valuation method for VFL,
which consists of a data valuation metric and a federated computation method.
Specifically, we first introduce a novel data valuation metric, namely
MShapley-CMI. The metric evaluates a data party's contribution to a predictive
analytics task without needing to execute a machine learning model, making
it well-suited for real-world applications of VFL. Next, we develop an
innovative federated computation method that calculates the MShapley-CMI value
for each data party in a privacy-preserving manner. Extensive experiments
conducted on six public datasets validate the efficacy of FedValue for data
valuation in the context of VFL. In addition, we illustrate the practical
utility of FedValue with a case study involving federated movie
recommendations.
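The abstract does not spell out the MShapley-CMI formula, but the idea it builds on, a Shapley value computed from a model-free coalition utility, can be sketched as follows. The utility function below is a hypothetical stand-in for the paper's conditional-mutual-information score, used only to make the example runnable:

```python
from itertools import combinations
from math import factorial

def shapley_values(parties, utility):
    """Exact Shapley value of each data party under a coalition utility.

    `utility` maps a frozenset of parties to a real number; in FedValue
    this role is played by a model-free, CMI-based score rather than a
    trained model's accuracy.
    """
    n = len(parties)
    values = {}
    for p in parties:
        others = [q for q in parties if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Classic Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (utility(s | {p}) - utility(s))
        values[p] = total
    return values

# Toy, symmetric utility (hypothetical): coalition value = size squared.
parties = ["A", "B", "C"]
phi = shapley_values(parties, lambda s: len(s) ** 2)
```

With a symmetric utility all three parties receive equal value, and the values sum to `utility(all) - utility(none)`, the efficiency property that makes Shapley-style metrics attractive for fair data valuation. The exact computation is exponential in the number of parties; practical schemes approximate or, as here with few parties, enumerate.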
Privacy-Preserving Data Falsification Detection in Smart Grids using Elliptic Curve Cryptography and Homomorphic Encryption
In an advanced metering infrastructure (AMI), the electric utility collects power consumption data from smart meters to improve energy optimization and provides detailed information on power consumption to electric utility customers. However, AMI is vulnerable to data falsification attacks, which organized adversaries can launch. Such attacks can be detected by analyzing customers' fine-grained power consumption data; however, analyzing customers' private data violates their privacy. Although homomorphic encryption-based schemes have been proposed to tackle the problem, their disadvantage is a long execution time. This paper proposes a new privacy-preserving data falsification detection scheme that shortens the execution time. We adopt elliptic curve cryptography (ECC)-based homomorphic encryption (HE), which never reveals customer power consumption data. HE is a form of encryption that permits computations on encrypted data without decryption, and ECC keeps those computations lightweight. Our experimental evaluation showed that the proposed scheme runs 18 times faster than the CKKS scheme, a common HE scheme.
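The detection scheme relies on additive homomorphism: ciphertexts can be combined so that the result decrypts to the sum of the plaintexts. A toy illustration of this property is "ElGamal in the exponent" over a small prime field; this is a generic sketch with deliberately insecure demo parameters, not the paper's ECC construction:

```python
import random

# Toy additively homomorphic ElGamal "in the exponent" over Z_p*.
# Real deployments (and the paper) use elliptic-curve groups with
# cryptographic parameters; these tiny numbers are for illustration only.
P = 1_000_003          # small prime modulus (insecure, demo only)
G = 5                  # base element

def keygen():
    sk = random.randrange(2, P - 1)
    pk = pow(G, sk, P)
    return sk, pk

def encrypt(pk, m):
    r = random.randrange(2, P - 1)
    # Ciphertext (g^r, g^m * pk^r): the message sits in the exponent.
    return (pow(G, r, P), pow(G, m, P) * pow(pk, r, P) % P)

def add(c1, c2):
    # Multiplying ciphertexts component-wise adds the plaintexts.
    return (c1[0] * c2[0] % P, c1[1] * c2[1] % P)

def decrypt(sk, c, max_m=10_000):
    gm = c[1] * pow(c[0], P - 1 - sk, P) % P   # recover g^m = c2 / c1^sk
    for m in range(max_m):                     # brute-force small discrete log
        if pow(G, m, P) == gm:
            return m
    raise ValueError("plaintext out of range")

sk, pk = keygen()
# The utility sums two meters' readings without seeing either one.
total = decrypt(sk, add(encrypt(pk, 120), encrypt(pk, 75)))   # -> 195
```

The same multiply-to-add trick carries over to elliptic-curve groups (where adding curve points adds the encoded readings), which is what makes the ECC variant both homomorphic and computationally light.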
Federated Machine Learning
In recent times, machine learning has transformed areas such as computer vision and speech recognition and processing. The implementation of machine learning is firmly built on data, yet that data must often be gathered in privacy-sensitive circumstances. Federated learning is an innovative area of the modern technological field that facilitates training models without gathering the data. Instead of transferring their data, clients cooperate to train a model by delivering only weight updates to the server. While this approach is better for privacy and more adaptable, in some circumstances it is very expensive.
This thesis introduces some of the fundamental theory, architecture, and procedures of federated machine learning and its potential in numerous applications. Optimisation methods and privacy-preserving mechanisms such as differential privacy are also reviewed.
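The weight-update exchange described above is what the standard FedAvg aggregation step formalises: the server averages client parameters, weighted by local dataset size, and raw data never leaves the clients. A minimal sketch (the names and toy values are illustrative):

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: combine client model weight vectors,
    weighted by each client's local dataset size. Only these weight
    vectors travel to the server, never the underlying data."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Three hypothetical clients with different amounts of local data.
weights = [[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]]
sizes = [10, 10, 20]
global_w = fedavg(weights, sizes)   # -> [3.5, 3.5]
```

The size weighting matters: the client holding half the data (20 of 40 samples) pulls the global model twice as hard as either of the others, which is why FedAvg behaves sensibly on unbalanced client datasets.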
SAFEFL: MPC-friendly Framework for Private and Robust Federated Learning
Federated learning (FL) has gained widespread popularity in a variety of industries due to its ability to train models locally on devices while preserving privacy. However, FL systems are susceptible to i) privacy inference attacks and ii) poisoning attacks, through which corrupt actors can compromise the system. Despite a significant amount of work tackling these attacks individually, their combination has received limited attention in the research community.
To address this gap, we introduce SAFEFL, a secure multiparty computation (MPC)-based framework designed to assess the efficacy of FL techniques in addressing both privacy inference and poisoning attacks. The heart of the SAFEFL framework is a communicator interface that enables PyTorch-based implementations to utilize the well-established MP-SPDZ framework, which implements various MPC protocols. The goal of SAFEFL is to facilitate the development of more efficient FL systems that can effectively address privacy inference and poisoning attacks.
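A building block common to the MPC protocols such frameworks rely on is additive secret sharing: each client splits its (integer-encoded) update into random shares so that no single server learns anything, yet the servers can still compute the aggregate. A generic sketch of the idea, not SAFEFL's or MP-SPDZ's actual API:

```python
import random

MOD = 2**32   # ring size; many MPC deployments compute modulo a power of two

def share(value, n_parties):
    """Split an integer into n additive shares modulo MOD.
    Any n-1 shares are uniformly random and reveal nothing."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Two computation servers aggregate three clients' integer-encoded updates.
updates = [7, 11, 5]
per_server = [share(u, 2) for u in updates]
# Each server sums only the shares it holds, never seeing a full update.
server_sums = [sum(col) % MOD for col in zip(*per_server)]
total = reconstruct(server_sums)   # -> 23
```

Because addition commutes with sharing, the servers' local sums reconstruct to the true aggregate; real protocols add on top of this the machinery (MACs, robust aggregation rules) needed to also withstand poisoning by corrupt participants.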