AoI-Delay Tradeoff in Mobile Edge Caching: A Mixed-Order Drift-Plus-Penalty Algorithm
We consider a scheduling problem in a Mobile Edge Caching (MEC) network,
where a base station (BS) uploads messages from multiple source nodes (SNs) and
transmits them to mobile users (MUs) via downlinks, aiming to jointly optimize
the average service Age of Information (AoI) and service delay over MUs. This
problem is formulated as a challenging sequential decision-making problem with
discrete-valued, linearly constrained design variables. To solve this
problem, we first approximate its achievable region by characterizing its
superset and subset. The superset is derived based on the rate stability
theorem, while the subset is obtained using a novel stochastic policy. We also
validate that this subset is substantially identical to the achievable region
when the number of scheduling resources is large. Additionally, we propose a
sufficient condition to check the existence of the solution to the problem.
Then, we propose the mixed-order drift-plus-penalty algorithm that uses a
dynamic programming (DP) method to optimize the summation over a linear and
quadratic Lyapunov drift and a penalty term, to handle the product term over
different queue backlogs in the objective function. Finally, by associating the
proposed algorithm with the stochastic policy, we demonstrate that it achieves
a tunable tradeoff between the average AoI and the average delay.
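The scheduling idea can be illustrated with a toy drift-plus-penalty loop. In each slot, the scheduler serves the source node with the largest mixed-order index combining a linear backlog drift, a quadratic backlog drift, and an age penalty scaled by a control parameter V. The exact index in the paper differs; the weighting below is an assumption for illustration only.

```python
def pick_source(queues, ages, V):
    """Greedy drift-plus-penalty selection: serve the source whose
    mixed-order index (linear + quadratic backlog drift, plus an
    AoI penalty scaled by V) is largest. Toy weighting, not the
    paper's exact expression."""
    def index(i):
        q, a = queues[i], ages[i]
        return q + q * q + V * a  # linear drift + quadratic drift + penalty
    return max(range(len(queues)), key=index)

def step(queues, ages, arrivals, V):
    """One scheduling slot: serve the chosen source node (one unit of
    backlog, AoI reset to 1), age all other sources, then admit new
    arrivals into the queues."""
    i = pick_source(queues, ages, V)
    queues[i] = max(0, queues[i] - 1)
    ages = [1 if j == i else a + 1 for j, a in enumerate(ages)]
    queues = [q + r for q, r in zip(queues, arrivals)]
    return queues, ages, i
```

Raising V puts more weight on the AoI penalty relative to queue stability, which is the knob behind the AoI-delay tradeoff the abstract refers to.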
Multi-Carrier NOMA-Empowered Wireless Federated Learning with Optimal Power and Bandwidth Allocation
Wireless federated learning (WFL) suffers from an uplink communication
bottleneck, limiting the number of users that can upload their local models in each
global aggregation round. This paper presents a new multi-carrier
non-orthogonal multiple-access (MC-NOMA)-empowered WFL system under an adaptive
learning setting of Flexible Aggregation. Since a WFL round accommodates both
local model training and uploading for each user, the use of Flexible
Aggregation allows the users to train different numbers of iterations per
round, adapting to their channel conditions and computing resources. The key
idea is to use MC-NOMA to concurrently upload the local models of the users,
thereby extending the local model training times of the users and increasing
participating users. A new metric, namely, Weighted Global Proportion of
Trained Mini-batches (WGPTM), is analytically established to measure the
convergence of the new system. We then maximize the WGPTM to improve the
convergence of the new system by jointly optimizing the
transmit powers and subchannel bandwidths. This nonconvex problem is converted
equivalently to a tractable convex problem and solved efficiently using
variable substitution and Cauchy's inequality. As corroborated experimentally
using a convolutional neural network and an 18-layer residual network,
proposed MC-NOMA WFL can efficiently reduce communication delay, increase local
model training times, and accelerate the convergence by over 40%, compared to
its existing alternative.
Comment: 33 pages, 16 figures
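The timing benefit of concurrent uploads can be sketched with a toy round-time budget. Under orthogonal access, uploads are serialized, so every user's training window shrinks by the total upload time; with MC-NOMA, uploads overlap, so each user loses only its own upload time. This is a deliberately simplified model; the paper additionally optimizes transmit powers and subchannel bandwidths, which this sketch ignores.

```python
def local_training_budget(round_time, upload_times, concurrent):
    """Time each user can spend on local training within one WFL round.

    concurrent=True models MC-NOMA (uploads overlap in time), so user k
    sacrifices only its own upload time. concurrent=False models a
    TDMA-like baseline where uploads are serialized, so every user's
    training window shrinks by the sum of all upload times."""
    if concurrent:
        return [round_time - t for t in upload_times]
    total = sum(upload_times)
    return [round_time - total for _ in upload_times]
```

With a 10-second round and upload times of 2 s and 3 s, the concurrent budget is [8, 7] seconds of training versus [5, 5] when serialized, which is the mechanism behind the extended training times and larger participation the abstract claims.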
Federated Learning in Wireless Networks
Artificial intelligence (AI) is transitioning from a long development period into reality. Notable instances like AlphaGo, Tesla’s self-driving cars, and the recent innovation of ChatGPT stand as widely recognized exemplars of AI applications. These examples collectively enhance the quality of human life. An increasing number of AI applications are expected to integrate seamlessly into our daily lives, further enriching our experiences.
Although AI has demonstrated remarkable performance, it is accompanied by numerous challenges. At the forefront of AI’s advancement lies machine learning (ML), a cutting-edge technique that acquires knowledge by emulating the human brain’s cognitive processes. Like humans, ML requires a substantial amount of data to build its knowledge repository. Computational capabilities have surged in alignment with Moore’s law, leading to the realization of cloud computing services like Amazon AWS. Presently, we find ourselves in the era of the Internet of Things (IoT), characterized by the ubiquitous presence of smartphones, smart speakers, and intelligent vehicles. This landscape facilitates decentralizing data processing tasks, shifting them from the cloud to local devices. At the same time, a growing emphasis on privacy protection has emerged, as individuals are increasingly concerned about sharing personal data with corporate giants such as Google and Meta. Federated learning (FL) is a new distributed machine learning paradigm. It fosters a scenario where clients collaborate by sharing learned models rather than raw data, thus safeguarding client data privacy while providing a collaborative and resilient model.
FL has promised to address privacy concerns. However, it still faces many challenges, particularly within wireless networks. Within the FL landscape, four main challenges stand out: high communication costs, system heterogeneity, statistical heterogeneity, and privacy and security. When many clients participate in the learning process, and the wireless communication resources remain constrained, accommodating all participating clients becomes very complex. The contemporary realm of deep learning relies on models encompassing millions and, in some cases, billions of parameters, exacerbating communication overhead when transmitting these parameters. The heterogeneity of the system manifests itself across device disparities, deployment scenarios, and connectivity capabilities. Simultaneously, statistical heterogeneity encompasses variations in data distribution and model composition. Furthermore, the distributed architecture makes FL susceptible to attacks inside and outside the system.
This dissertation presents a suite of algorithms designed to address these challenges effectively. New communication schemes are introduced, including Non-Orthogonal Multiple Access (NOMA), over-the-air computation, and approximate communication. These techniques are coupled with gradient compression, client scheduling, and power allocation, each significantly mitigating communication overhead. Implementing asynchronous FL is a suitable remedy for the intricate issue of system heterogeneity. Independent and identically distributed (IID) and non-IID data in statistical heterogeneity are considered in all scenarios. Finally, the aggregation of model updates and individual client model initialization collaboratively address security and privacy issues.
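The aggregation step that several of these chapters build on can be sketched as a FedAvg-style weighted average, where each client's model contributes in proportion to the local work it completed (e.g., the number of trained mini-batches, in the spirit of Flexible Aggregation). Models are represented here as flat lists of floats; this is a minimal sketch, not the dissertation's exact aggregation rule.

```python
def aggregate(models, num_batches):
    """FedAvg-style weighted aggregation: each client's model is
    weighted by the number of mini-batches it trained locally, so
    clients that completed more work contribute more to the global
    model. `models` is a list of equal-length parameter vectors."""
    total = sum(num_batches)
    dim = len(models[0])
    return [sum(w * m[k] for w, m in zip(num_batches, models)) / total
            for k in range(dim)]
```

For two clients with models [1.0, 2.0] and [3.0, 4.0] that trained 1 and 3 mini-batches respectively, the aggregate is [2.5, 3.5], pulled toward the client that did more local training.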