20 research outputs found
Federated Deep Reinforcement Learning-based Bitrate Adaptation for Dynamic Adaptive Streaming over HTTP
In video streaming over HTTP, the bitrate adaptation selects the quality of
video chunks depending on the current network condition. Some previous works
have applied deep reinforcement learning (DRL) algorithms to determine the
chunk's bitrate from the observed states to maximize the quality-of-experience
(QoE). However, to build an intelligent model that can predict in various
environments, such as 3G, 4G, and WiFi, the states observed from
these environments must be sent to a server for training centrally. In this
work, we integrate federated learning (FL) into DRL-based rate adaptation to
train a model appropriate for different environments. The clients in the
proposed framework train their models locally and send only the weight updates
to the server. The simulations show that our federated DRL-based rate
adaptation, called FDRLABR, with different DRL algorithms, such as deep
Q-learning, advantage actor-critic, and proximal policy optimization, yields
better performance than traditional bitrate adaptation methods in various
environments.
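To make the federated training loop concrete, below is a minimal sketch of FedAvg-style weight aggregation for a policy network, assuming each client exposes a local_train step and the server performs a simple sample-weighted parameter average. The function names, the toy network shape, and the stubbed local update are illustrative assumptions; the paper's exact DRL updates and aggregation rule are not reproduced here.

    import numpy as np

    def local_train(weights, env_traces=None):
        """Stand-in for a client's local DRL update (e.g., DQN/A2C/PPO).
        Here we only simulate training by slightly perturbing the weights."""
        return [w + 0.01 * np.random.randn(*w.shape) for w in weights]

    def fedavg(client_weights, client_sizes):
        """Server step: sample-weighted average of client model weights."""
        total = float(sum(client_sizes))
        return [
            sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))
        ]

    # Toy run: 3 clients (e.g., 3G/4G/WiFi traces), one global round.
    global_w = [np.zeros((4, 2)), np.zeros(2)]   # tiny two-layer policy net
    sizes = [100, 80, 120]                       # training samples per client
    updates = [local_train(global_w) for _ in range(3)]
    global_w = fedavg(updates, sizes)

In a full system, each client would run its DRL algorithm on locally observed streaming states, and only the resulting weights (never the raw observations) would travel to the server, which is the privacy benefit the abstract highlights.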
A New Look and Convergence Rate of Federated Multi-Task Learning with Laplacian Regularization
Non-Independent and Identically Distributed (non-IID) data distribution
among clients is considered as the key factor that degrades the performance of
federated learning (FL). Several approaches to handle non-IID data, such as
personalized FL and federated multi-task learning (FMTL), are of great interest
to research communities. In this work, first, we formulate the FMTL problem
using Laplacian regularization to explicitly leverage the relationships among
the models of clients for multi-task learning. Then, we introduce a new view of
the FMTL problem, which, for the first time, shows that the formulated FMTL
problem can be used for conventional FL and personalized FL. We also propose
two algorithms, FedU and dFedU, to solve the formulated FMTL problem in
communication-centralized and decentralized schemes, respectively.
Theoretically, we prove that the convergence rates of both algorithms achieve
linear speedup for strongly convex objectives and sublinear speedup of order
1/2 for nonconvex objectives. Experimentally, we show that our algorithms
outperform the conventional algorithm FedAvg in FL settings, MOCHA in FMTL
settings, as well as pFedMe and Per-FedAvg in personalized FL settings.
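As a point of reference, a Laplacian-regularized FMTL objective of the kind described above is typically written as follows, where the notation is illustrative: F_k is client k's local loss, a_{kl} >= 0 encodes the relationship between clients k and l, and eta controls how strongly related models are pulled together:

    \min_{w_1, \ldots, w_N} \; \sum_{k=1}^{N} F_k(w_k)
        + \frac{\eta}{2} \sum_{k=1}^{N} \sum_{l=1}^{N} a_{kl} \, \| w_k - w_l \|^2

Intuitively, taking eta large on a fully connected graph forces all w_k toward a single shared model (conventional FL), while a small eta keeps the models personalized, which is consistent with the unified view the abstract describes.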
Channel Estimation in RIS-assisted Downlink Massive MIMO: A Learning-Based Approach
For downlink massive multiple-input multiple-output (MIMO) operating in
time-division duplex protocol, users can decode the signals effectively by only
utilizing the channel statistics as long as channel hardening holds. However,
in a reconfigurable intelligent surface (RIS)-assisted massive MIMO system, the
propagation channels may be less hardened due to the extra random fluctuations
of the effective channel gains. To address this issue, we propose a
learning-based method that trains a neural network to learn a mapping between
the received downlink signal and the effective channel gains. The proposed
method does not require any downlink pilots and statistical information of
interfering users. Numerical results show that, in terms of mean-square error
of the channel estimation, our proposed learning-based method outperforms the
state-of-the-art methods, especially when the line-of-sight (LoS) paths are
dominated by non-LoS paths with a low level of channel hardening, e.g., in the
cases of small numbers of RIS elements and/or base station antennas.
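As an illustration of the mapping described above, here is a minimal sketch of a neural network regressing effective channel gains from received downlink signals. The architecture, dimensions, loss, and synthetic training pairs are assumptions for illustration only, not the paper's actual model or data.

    import torch
    import torch.nn as nn

    # Toy dimensions: 2*T real values per received block (real/imag parts),
    # K effective channel gains to estimate.
    T, K = 64, 4

    model = nn.Sequential(
        nn.Linear(2 * T, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, K),                 # regress effective channel gains
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()                 # the paper evaluates MSE of estimates

    # Synthetic stand-in for (received signal, effective gain) pairs.
    y = torch.randn(1024, 2 * T)           # received downlink samples
    g = torch.randn(1024, K)               # target effective channel gains

    for epoch in range(10):
        opt.zero_grad()
        loss = loss_fn(model(y), g)
        loss.backward()
        opt.step()

The key design point from the abstract is that the inputs are quantities the user already has (its received signal), so no downlink pilots or interference statistics are required at inference time.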
A New Look and Convergence Rate of Federated Multitask Learning With Laplacian Regularization
Non-independent and identically distributed (non-IID) data distribution among clients is considered as the key factor that degrades the performance of federated learning (FL). Several approaches to handle non-IID data, such as personalized FL and federated multitask learning (FMTL), are of great interest to research communities. In this work, first, we formulate the FMTL problem using Laplacian regularization to explicitly leverage the relationships among the models of clients for multitask learning. Then, we introduce a new view of the FMTL problem, which, for the first time, shows that the formulated FMTL problem can be used for conventional FL and personalized FL. We also propose two algorithms, FedU and decentralized FedU (dFedU), to solve the formulated FMTL problem in communication-centralized and decentralized schemes, respectively. Theoretically, we prove that the convergence rates of both algorithms achieve linear speedup for strongly convex objectives and sublinear speedup of order 1/2 for nonconvex objectives. Experimentally, we show that our algorithms outperform the conventional algorithms FedAvg, FedProx, SCAFFOLD, and AFL in FL settings, MOCHA in FMTL settings, as well as pFedMe and Per-FedAvg in personalized FL settings.
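Since this journal version also covers the decentralized scheme, the following is a minimal sketch of a dFedU-style round in which each client first trains locally and then mixes its model with its graph neighbors. The neighbor graph, mixing weight, and local step are simplified assumptions in the spirit of the algorithm, not its exact updates.

    import numpy as np

    # Illustrative graph: 3 clients on a line (0-1, 1-2).
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    eta, lr = 0.1, 0.01   # mixing strength and local learning rate

    def local_sgd_step(w, grad_fn):
        """One local gradient step on the client's own loss (stand-in)."""
        return w - lr * grad_fn(w)

    def dfedu_round(ws, grad_fns):
        """Local training followed by Laplacian-style neighbor mixing."""
        half = [local_sgd_step(w, g) for w, g in zip(ws, grad_fns)]
        return [
            half[k] + eta * sum(half[l] - half[k] for l in neighbors[k])
            for k in range(len(half))
        ]

    # Toy non-IID setup: each client's quadratic loss has a different optimum.
    targets = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
    grads = [lambda w, t=t: 2 * (w - t) for t in targets]
    ws = [np.zeros(1) for _ in range(3)]
    for _ in range(100):
        ws = dfedu_round(ws, grads)

No server appears in this loop: each client only exchanges models with its neighbors, which is the distinguishing feature of the decentralized scheme.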
On the Generalization of Wasserstein Robust Federated Learning
In federated learning, participating clients typically possess non-i.i.d.
data, posing a significant challenge to generalization to unseen distributions.
To address this, we propose a Wasserstein distributionally robust optimization
scheme called WAFL. Leveraging its duality, we frame WAFL as an empirical
surrogate risk minimization problem, and solve it using a local SGD-based
algorithm with convergence guarantees. We show that the robustness of WAFL is
more general than related approaches, and the generalization bound is robust to
all adversarial distributions inside the Wasserstein ball (ambiguity set).
Since the center location and radius of the Wasserstein ball can be suitably
modified, WAFL shows its applicability not only in robustness but also in
domain adaptation. Through empirical evaluation, we demonstrate that WAFL
generalizes better than the vanilla FedAvg in non-i.i.d. settings, and is more
robust than other related methods in distribution shift settings. Further,
using benchmark datasets, we show that WAFL is capable of generalizing to
unseen target domains.
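For context, the duality that turns the robust objective into an empirical surrogate is the standard strong duality result for Wasserstein distributionally robust optimization, stated here in generic notation that may differ from the paper's:

    \sup_{Q:\, W_c(Q, P_n) \le \rho} \mathbb{E}_{Q}\big[\ell(\theta; \xi)\big]
        = \inf_{\lambda \ge 0} \Big\{ \lambda \rho
          + \frac{1}{n} \sum_{i=1}^{n} \sup_{\xi} \big( \ell(\theta; \xi) - \lambda \, c(\xi, \xi_i) \big) \Big\}

Here P_n is the empirical distribution of the n local samples, c is the transport cost defining the Wasserstein metric, and rho is the radius of the ambiguity set. Minimizing the right-hand side over theta is an empirical surrogate risk minimization of the kind the abstract describes, and it is amenable to local SGD because the objective decomposes over samples.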
Federated Learning Over Wireless Networks: Convergence Analysis and Resource Allocation
There is an increasing interest in a fast-growing machine learning technique called Federated Learning (FL), in which the model training is distributed over mobile user equipment (UEs), exploiting UEs' local computation and training data. Despite its advantages such as preserving data privacy, FL still has challenges of heterogeneity across UEs' data and physical resources. To address these challenges, we first propose FEDL, a FL algorithm which can handle heterogeneous UE data without further assumptions except strongly convex and smooth loss functions. We provide a convergence rate characterizing the trade-off between local computation rounds of each UE to update its local model and global communication rounds to update the FL global model. We then employ FEDL in wireless networks as a resource allocation optimization problem that captures the trade-off between FEDL convergence wall clock time and energy consumption of UEs with heterogeneous computing and power resources. Even though the wireless resource allocation problem of FEDL is non-convex, we exploit this problem's structure to decompose it into three sub-problems and analyze their closed-form solutions as well as insights into problem design. Finally, we empirically evaluate the convergence of FEDL with PyTorch experiments, and provide extensive numerical results for the wireless resource allocation sub-problems. Experimental results show that FEDL outperforms the vanilla FedAvg algorithm in terms of convergence rate and test accuracy in various settings