Dynamic Resource Management in Integrated NOMA Terrestrial-Satellite Networks using Multi-Agent Reinforcement Learning
This study introduces a resource allocation framework for integrated
satellite-terrestrial networks to address their delay and energy-efficiency
challenges. The framework leverages locally deployed cache pools and
non-orthogonal multiple access (NOMA) to reduce time delays and improve energy
efficiency. Our proposed approach uses a multi-agent deep deterministic policy
gradient (MADDPG) algorithm to optimize user association, cache design, and
transmission power control. The approach comprises two phases: User Association
and Power Control, where users are treated as agents, and Cache Optimization,
where the satellite (BS) is considered the agent. Through extensive
simulations, we demonstrate that our approach
surpasses conventional single-agent deep reinforcement learning algorithms in
addressing cache design and resource allocation challenges in integrated
terrestrial-satellite networks. Specifically, our proposed approach achieves
significantly higher energy efficiency and reduced time delays compared to
existing methods.
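The two-phase multi-agent setup described in this abstract can be sketched in miniature. The toy below swaps the paper's MADDPG for independent tabular Q-learning (one table per agent) and uses an entirely hypothetical reward function; it only illustrates the structure of user agents plus a satellite caching agent sharing a team reward:

```python
import itertools
import math
import random
from collections import defaultdict

# Miniature two-phase sketch: phase 1 - each user agent picks a transmit-power
# level; phase 2 - one satellite agent picks which files to cache. Independent
# tabular Q-learning stands in for MADDPG; the reward is hypothetical.
random.seed(0)
N_USERS, POWER_LEVELS, FILES, CACHE_SIZE = 3, 4, 5, 2
ALPHA, EPS = 0.1, 0.2

def energy_efficiency(powers, cache):
    # Hypothetical reward: throughput grows with log of power, the cost is
    # total transmit power, and each cached file saves some backhaul energy.
    rate = sum(math.log2(2 + p) for p in powers)
    return rate / (1 + sum(powers)) + 0.1 * len(cache)

user_q = [defaultdict(float) for _ in range(N_USERS)]  # phase 1: user agents
sat_q = defaultdict(float)                             # phase 2: satellite agent
cache_actions = list(itertools.combinations(range(FILES), CACHE_SIZE))

def pick(qtab, actions):
    if random.random() < EPS:                 # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: qtab[a])

for episode in range(2000):
    powers = tuple(pick(user_q[u], range(POWER_LEVELS)) for u in range(N_USERS))
    cache = pick(sat_q, cache_actions)
    r = energy_efficiency(powers, cache)      # shared team reward
    for u in range(N_USERS):                  # one-step (bandit-style) updates
        user_q[u][powers[u]] += ALPHA * (r - user_q[u][powers[u]])
    sat_q[cache] += ALPHA * (r - sat_q[cache])
```

In the full method each agent would also condition on a state (channel gains, request history) and use actor-critic networks; the tabular version keeps only the agent decomposition.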
A Survey on UAV-enabled Edge Computing: Resource Management Perspective
Edge computing facilitates low-latency services at the network's edge by
distributing computation, communication, and storage resources within the
geographic proximity of mobile and Internet-of-Things (IoT) devices. The recent
advancement in Unmanned Aerial Vehicles (UAVs) technologies has opened new
opportunities for edge computing in military operations, disaster response, or
remote areas where traditional terrestrial networks are limited or unavailable.
In such environments, UAVs can be deployed as aerial edge servers or relays to
facilitate edge computing services. This form of computing is also known as
UAV-enabled Edge Computing (UEC), which offers several unique benefits such as
mobility, line-of-sight, flexibility, computational capability, and
cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices
are typically very limited in the context of UEC. Efficient resource management
is, therefore, a critical research challenge in UEC. In this article, we
present a survey on the existing research in UEC from the resource management
perspective. We identify a conceptual architecture, different types of
collaborations, wireless communication models, research directions, key
techniques and performance indicators for resource management in UEC. We also
present a taxonomy of resource management in UEC. Finally, we identify and
discuss some open research challenges that can stimulate future research
directions for resource management in UEC. Comment: 36 pages, Accepted to ACM CSU
Delay Constrained Resource Allocation for NOMA Enabled Satellite Internet of Things with Deep Reinforcement Learning
With the ever-increasing requirement of transferring data to and from smart
users over a wide area, satellite Internet of Things (S-IoT) networks have
emerged as a promising paradigm for providing cost-effective solutions for
remote and disaster areas. Taking into account the diverse link qualities and
delay quality-of-service (QoS) requirements of S-IoT devices, we introduce a
power-domain non-orthogonal multiple access (NOMA) scheme in downlink S-IoT
networks to enhance resource utilization efficiency, and we employ the concept
of effective capacity to capture the delay-QoS requirements of S-IoT traffic.
First, resource allocation among NOMA users is formulated with the aim of
maximizing the sum effective capacity of the S-IoT network while meeting the
minimum capacity constraint of each user. Because the initial optimization
problem is intractable and non-convex, especially with large numbers of user
pairs in NOMA-enabled S-IoT, this paper employs a deep reinforcement learning
(DRL) algorithm for dynamic resource allocation. Specifically, the channel
conditions and/or delay-QoS requirements of NOMA users are carefully selected
as the state, according to exact closed-form expressions as well as low-SNR
and high-SNR approximations. A deep Q network is first adopted to output the
optimum power allocation coefficients for all users, and then learns to adjust
the allocation policy by updating the weights of the neural networks using
gained experiences. Simulation results demonstrate that, with a proper
discount factor, reward design, and training mechanism, the proposed DRL-based
power allocation scheme can output optimal or near-optimal actions in each
time slot and thus provides performance superior to that of a fixed power
allocation strategy and an orthogonal multiple access (OMA) scheme.
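The effective-capacity objective this abstract optimizes is easy to illustrate numerically. The sketch below Monte-Carlo-estimates it for a two-user downlink NOMA pair and grid-searches the weak user's power share, standing in for the paper's deep Q network; the fading model, SNR, and delay exponent are all hypothetical:

```python
import math
import random

# Effective capacity: EC = -(1/theta) * ln E[exp(-theta * R)], where theta is
# the delay-QoS exponent and R the random service rate. As theta -> 0 this
# approaches the mean rate; larger theta penalizes rate variability.
def effective_capacity(rates, theta):
    return -math.log(sum(math.exp(-theta * r) for r in rates) / len(rates)) / theta

def noma_rates(alpha, snr=10.0, n=5000):
    """Two-user downlink power-domain NOMA with hypothetical Rayleigh fading:
    the weak user (power share alpha) treats the strong user's signal as
    noise; the strong user cancels the weak user's signal via SIC."""
    weak, strong = [], []
    for _ in range(n):
        g_w = random.expovariate(1.0)   # weak user's channel power gain
        g_s = random.expovariate(0.25)  # strong user: 4x better mean gain
        weak.append(math.log2(1 + alpha * snr * g_w / ((1 - alpha) * snr * g_w + 1)))
        strong.append(math.log2(1 + (1 - alpha) * snr * g_s))
    return weak, strong

random.seed(1)
theta = 0.5
# Grid search over the weak user's power share, standing in for the DQN.
best_alpha = max((a / 10 for a in range(5, 10)),
                 key=lambda a: sum(effective_capacity(r, theta)
                                   for r in noma_rates(a)))
```

Per-user minimum-capacity constraints, as in the formulated problem, would enter as a feasibility filter on the candidate power shares before taking the maximum.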
An Optimized Multi-Layer Resource Management in Mobile Edge Computing Networks: A Joint Computation Offloading and Caching Solution
Nowadays, data caching is being adopted at an exponential rate as a high-speed
data storage layer in mobile edge computing networks that employ flow-control
methodologies. This study shows how to discover the best architecture for
backhaul networks with caching capability using a distributed offloading
technique. The article uses a continuous power flow analysis to obtain the
optimum load constraints, wherein the power of macro base stations with
various caching capacities is supplied by either an intelligent grid network
or renewable energy systems. This work proposes ubiquitous connectivity
between users at the cell edge and offloading of the macro cells so as to
provide features the macro cell itself cannot deliver, such as extreme changes
in the required user data rate and energy efficiency. The offloading framework
is then recast into a neural weighted framework that accounts for the
convergence and Lyapunov stability requirements of mobile edge computing under
Karush-Kuhn-Tucker (KKT) optimization constraints in order to obtain accurate
solutions. Cell-layer performance is analyzed both at the boundary and at the
center of the cells. The analytical and simulation results show that the
suggested method outperforms other energy-saving techniques. Moreover,
compared with other solutions studied in the literature, the proposed approach
shows a two- to three-fold increase in both the throughput of cell-edge users
and the aggregate throughput per cluster.
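As a concrete instance of the Karush-Kuhn-Tucker machinery this abstract invokes, classic water-filling power allocation fits in a few lines: for a sum-rate objective under a total power budget, the KKT conditions reduce to p_i = max(0, mu - 1/g_i). The channel gains and budget below are hypothetical, and this is only an illustration of KKT-based allocation, not the paper's full neural framework:

```python
# Water-filling via bisection on the water level mu. For the objective
# sum_i log(1 + g_i * p_i) subject to sum_i p_i <= P, the KKT conditions
# give p_i = max(0, mu - 1/g_i); we bisect mu until the budget is met.
def water_filling(gains, p_total, iters=60):
    lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > p_total:
            hi = mu          # over budget: lower the water level
        else:
            lo = mu          # under budget: raise the water level
    return [max(0.0, mu - 1.0 / g) for g in gains]

# Hypothetical channel gains: the strongest channel receives the most power
# and a sufficiently weak channel may receive none at all.
powers = water_filling([2.0, 1.0, 0.25], p_total=3.0)
```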
Five Facets of 6G: Research Challenges and Opportunities
Whilst the fifth-generation (5G) systems are being rolled out across the
globe, researchers have turned their attention to the exploration of radical
next-generation solutions. At this early evolutionary stage we survey five main
research facets of this field, namely {\em Facet~1: next-generation
architectures, spectrum and services, Facet~2: next-generation networking,
Facet~3: Internet of Things (IoT), Facet~4: wireless positioning and sensing,
as well as Facet~5: applications of deep learning in 6G networks.} In this
paper, we provide a critical appraisal of the literature on promising
techniques, ranging from the associated architectures and networking to
applications and designs. We portray a plethora of heterogeneous architectures
relying on cooperative hybrid networks supported by diverse access and
transmission mechanisms. The vulnerabilities of these techniques are also
addressed and carefully considered in order to highlight the most promising
future research directions. Additionally, we list a rich suite of
learning-driven optimization techniques. We conclude by observing the
evolutionary paradigm shift that has taken place from pure single-component
bandwidth-efficiency, power-efficiency, or delay-optimization designs towards
multi-component designs, as exemplified by the twin-component ultra-reliable
low-latency mode of the 5G system. We advocate a further evolutionary step
towards multi-component Pareto optimization, which requires the exploration of
the entire Pareto front of all optimal solutions, where none of the components
of the objective function may be improved without degrading at least one of
the other components.
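The Pareto-front notion invoked in the final sentence is mechanical to compute for a finite set of candidate designs. The sketch below filters the non-dominated points when every component is to be maximized; the trade-off numbers are made up:

```python
# Extract the Pareto front of non-dominated points, all components maximized.
def pareto_front(points):
    def dominates(p, q):
        # p dominates q if p is >= q everywhere and strictly > somewhere
        return (all(a >= b for a, b in zip(p, q))
                and any(a > b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy (throughput, 1/delay) design points: improving one component of a
# front member necessarily degrades the other, exactly as the text states.
designs = [(3.0, 0.2), (2.0, 0.9), (1.0, 1.0), (2.5, 0.5), (1.5, 0.8)]
front = pareto_front(designs)   # (1.5, 0.8) is dominated by (2.0, 0.9)
```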
A Survey on Applications of Cache-Aided NOMA
Contrary to orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA) schemes can serve a pool of users without exploiting the scarce frequency or time domain resources. This is useful for meeting future network requirements (5G and beyond systems), such as low latency, massive connectivity, user fairness, and high spectral efficiency. On the other hand, content caching curbs duplicate data transmission by storing popular contents in advance at the network edge, which reduces data traffic. In this survey, we focus on cache-aided NOMA-based wireless networks, which can reap the benefits of both caching and NOMA; switching from OMA to NOMA enables cache-aided networks to push additional files to content servers in parallel and improve the cache hit probability. Beginning with the fundamentals of cache-aided NOMA technology, we summarize the performance goals of cache-aided NOMA systems, present the associated design challenges, and categorize the recent related literature by application vertical. Concomitant standardization activities and open research challenges are highlighted as well.
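The cache-hit gain this survey alludes to, pushing files in parallel via NOMA, can be illustrated with a toy Zipf popularity model. The catalogue size, Zipf exponent, and cache sizes below are hypothetical, and "two files per slot" is a deliberately crude stand-in for the parallel content pushing described:

```python
# Toy model: requests follow a Zipf popularity law; caching the k most
# popular files yields a hit probability equal to the sum of their
# popularities. NOMA letting the server push two files per slot is modeled
# simply as doubling how many popular files make it into the cache.
def zipf_popularity(n_files, s=0.8):
    weights = [1.0 / (k ** s) for k in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_probability(popularity, cached_ids):
    # probability that a request drawn from the popularity profile is cached
    return sum(popularity[i] for i in cached_ids)

pop = zipf_popularity(100)
oma_hit = hit_probability(pop, range(10))    # one file pushed per slot
noma_hit = hit_probability(pop, range(20))   # two files pushed in parallel
```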
- …