17 research outputs found
A Practical Cross-Device Federated Learning Framework over 5G Networks
The concept of federated learning (FL) was first proposed by Google in 2016.
Since then, FL has been widely studied for its applicability in various fields,
owing to its potential to make full use of data without compromising privacy.
However, limited by the capacity of wireless data transmission, the deployment
of federated learning on mobile devices has made slow progress in practice. The
development and commercialization of fifth-generation (5G) mobile networks has
shed some light on this. In this paper, we analyze the challenges of existing
federated learning schemes for mobile devices and propose a novel cross-device
federated learning framework that uses anonymous communication technology and
ring signatures to protect the privacy of participants while reducing the
computation overhead of mobile devices participating in FL. In addition, our
scheme implements a contribution-based incentive mechanism to encourage mobile
users to participate in FL. We also give a case study of autonomous driving.
Finally, we present the performance evaluation of the proposed scheme and
discuss some open issues in federated learning.
Comment: This paper has been accepted by IEEE Wireless Communications
Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems
Explainable Artificial Intelligence (XAI) enhances decision-making and improves rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) algorithms. In this paper, we focus on e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision support. Federated Machine Learning (FML) is a new and advanced technology that helps maintain privacy for Personal Health Records (PHR) and handle large amounts of medical data effectively. In this context, XAI, together with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy achieved with an epoch count of 5, a batch size of 16, and 5 clients, which yields a higher accuracy rate (19, 104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
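The abstract above reports results from a federated averaging run (5 clients, batch size 16, 5 epochs). A minimal sketch of the federated averaging (FedAvg) aggregation step, with illustrative client weights and sample counts that are not from the paper:

```python
# Minimal FedAvg sketch: the server combines client model weights as a
# data-size-weighted average. All names and values are illustrative.

def federated_average(client_weights, client_sizes):
    """client_weights: list of dicts mapping layer name -> list of floats;
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    averaged = {}
    for layer in client_weights[0]:
        n_params = len(client_weights[0][layer])
        averaged[layer] = [
            sum(w[layer][i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)
        ]
    return averaged

# Example: two clients, one layer of two parameters, sizes 1 and 3
clients = [{"dense": [1.0, 2.0]}, {"dense": [3.0, 4.0]}]
print(federated_average(clients, [1, 3]))  # {'dense': [2.5, 3.5]}
```

The weighting by local sample count is what distinguishes FedAvg from a plain mean; clients with more data pull the global model proportionally harder.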
5G Technology based Edge Computing in UAV Networks for Resource Allocation with Routing using Federated Learning Access Network and Trajectory Routing Protocol
UAVs (Unmanned Aerial Vehicles) are being utilised more frequently in Beyond Fifth Generation (B5G) wireless communication networks that are equipped with a high-computation paradigm and intelligent applications. Due to the growing number of IoT (Internet of Things) devices in smart environments, these networks have the potential to produce a sizeable volume of heterogeneous data. This research proposes a novel machine-learning technique for UAV-based edge computing resource allocation and routing. Here, the UAV-enabled MEC method for emerging IoT applications and the role of machine learning (ML) are analysed. In this research, UAV-assisted edge computing resource allocation is carried out using a Monte Carlo federated-learning-based access network. Routing through the UAV network is then carried out using a trajectory-based deterministic reinforcement collaborative routing protocol. We specifically conduct an experimental investigation of the tradeoff between the communication cost and the computation of the two possible methodologies. The key findings show that, despite the longer connection latency, the computation offloading strategy delivers significantly greater throughput than the edge computing approach.
Incentivized Federated Learning and Unlearning
To protect users' right to be forgotten in federated learning, federated
unlearning aims at eliminating the impact of leaving users' data on the global
learned model. Current research in federated unlearning has mainly concentrated
on developing effective and efficient unlearning techniques. However, the issue
of incentivizing valuable users to remain engaged and preventing their data
from being unlearned is still under-explored, yet important to the unlearned
model performance. This paper focuses on the incentive issue and develops an
incentive mechanism for federated learning and unlearning. We first
characterize the leaving users' impact on the global model accuracy and the
required communication rounds for unlearning. Building on these results, we
propose a four-stage game to capture the interaction and information updates
during the learning and unlearning process. A key contribution is to summarize
users' multi-dimensional private information into one-dimensional metrics to
guide the incentive design. We further investigate whether allowing federated
unlearning is beneficial to the server and users, compared to a scenario
without unlearning. Interestingly, users usually have a larger total payoff in
the scenario with higher costs, due to the server's excess incentives under
information asymmetry. The numerical results demonstrate the necessity of
unlearning incentives for retaining valuable leaving users, and also show that
our proposed mechanisms decrease the server's cost by up to 53.91\% compared to
state-of-the-art benchmarks.
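The paper's contribution is the incentive mechanism, but the underlying unlearning operation it builds on can be illustrated simply. One common approach (an assumption here, not necessarily the paper's method) is for the server to store each client's aggregated update so a leaving client's contribution can be re-aggregated out:

```python
# Hedged sketch of one simple federated-unlearning idea: if the server
# keeps each client's stored update, a leaving client's contribution can
# be removed by re-aggregating over the retained clients only.
# Illustrative only; the paper's four-stage game and incentive design
# are not modeled here.

def unlearn_client(client_updates, client_sizes, leaving):
    """Rebuild the global update from retained clients' stored updates."""
    kept = [i for i in range(len(client_updates)) if i != leaving]
    total = sum(client_sizes[i] for i in kept)
    dim = len(client_updates[0])
    return [
        sum(client_updates[i][j] * client_sizes[i] for i in kept) / total
        for j in range(dim)
    ]

updates = [[1.0, 0.0], [0.0, 2.0], [4.0, 4.0]]
sizes = [2, 2, 4]
print(unlearn_client(updates, sizes, leaving=2))  # [0.5, 1.0]
```

In practice exact unlearning usually requires extra retraining rounds, which is precisely the cost the paper's incentive mechanism weighs against retaining valuable users.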
Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design
Federated learning (FL) has achieved great success as a privacy-preserving
distributed training paradigm, where many edge devices collaboratively train a
machine learning model by sharing the model updates instead of the raw data
with a server. However, the heterogeneous computational and communication
resources of edge devices give rise to stragglers that significantly decelerate
the training process. To mitigate this issue, we propose a novel FL framework
named stochastic coded federated learning (SCFL) that leverages coded computing
techniques. In SCFL, before the training process starts, each edge device
uploads a privacy-preserving coded dataset to the server, which is generated by
adding Gaussian noise to the projected local dataset. During training, the
server computes gradients on the global coded dataset to compensate for the
missing model updates of the straggling devices. We design a gradient
aggregation scheme to ensure that the aggregated model update is an unbiased
estimate of the desired global update. Moreover, this aggregation scheme
enables periodical model averaging to improve the training efficiency. We
characterize the tradeoff between the convergence performance and privacy
guarantee of SCFL. In particular, a more noisy coded dataset provides stronger
privacy protection for edge devices but results in learning performance
degradation. We further develop a contract-based incentive mechanism to
coordinate such a conflict. The simulation results show that SCFL learns a
better model within the given time and achieves a better privacy-performance
tradeoff than the baseline methods. In addition, the proposed incentive
mechanism grants better training performance than the conventional Stackelberg
game approach.
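The coded-dataset construction described in the abstract (project the local data with a random matrix, then add Gaussian noise before uploading) can be sketched directly. Shapes, the projection scaling, and the noise level sigma below are illustrative assumptions, not values from the paper:

```python
# Sketch of a privacy-preserving coded dataset as described in the
# SCFL abstract: random projection of local samples plus Gaussian noise.
# A larger sigma gives stronger privacy but degrades learning, which is
# the tradeoff the paper's incentive mechanism coordinates.
import random

def coded_dataset(local_data, proj_dim, sigma, seed=0):
    rng = random.Random(seed)
    n, d = len(local_data), len(local_data[0])
    # Random projection matrix G of shape (proj_dim x n), Gaussian entries
    G = [[rng.gauss(0, 1.0 / proj_dim) for _ in range(n)]
         for _ in range(proj_dim)]
    coded = []
    for row in G:
        # Each coded sample is a random linear mix of the local samples...
        projected = [sum(row[i] * local_data[i][j] for i in range(n))
                     for j in range(d)]
        # ...perturbed with Gaussian noise before leaving the device.
        coded.append([x + rng.gauss(0, sigma) for x in projected])
    return coded

X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 local samples, 2 features
coded = coded_dataset(X, proj_dim=2, sigma=0.1)
print(len(coded), len(coded[0]))  # 2 2
```

The server can then compute gradients on these coded samples to stand in for a straggling device's missing update, as the abstract describes.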