Dynamic Resource Management in Integrated NOMA Terrestrial-Satellite Networks using Multi-Agent Reinforcement Learning
This study introduces a resource allocation framework for integrated
satellite-terrestrial networks to address their delay and energy-efficiency
challenges. The framework leverages locally deployed cache pools and
non-orthogonal multiple access (NOMA) to reduce delay and improve energy
efficiency. Our proposed approach uses a multi-agent deep deterministic
policy gradient (MADDPG) algorithm to optimize user association, cache
design, and transmission power control, thereby enhancing energy efficiency.
The approach comprises two phases: user association and power control, where
the users are treated as agents, and cache optimization, where the satellite
base station (BS) is treated as the
agent. Through extensive simulations, we demonstrate that our approach
surpasses conventional single-agent deep reinforcement learning algorithms in
addressing cache design and resource allocation challenges in integrated
terrestrial-satellite networks. Specifically, our proposed approach achieves
significantly higher energy efficiency and reduced time delays compared to
existing methods.
Comment: 16, 1
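The NOMA ingredient of such a framework can be sketched with a minimal power-domain rate model under successive interference cancellation (SIC), where weaker-channel users receive more power and stronger users cancel them before decoding. The channel gains, powers, bandwidth, noise, and circuit power below are hypothetical illustration values, not figures from the paper.

```python
import numpy as np

def noma_downlink_rates(gains, powers, bandwidth_hz, noise_w):
    """Per-user achievable rates (bit/s) for power-domain NOMA with SIC.

    Users are sorted by increasing channel gain; under SIC, the user at
    sorted position i is interfered only by the power allocated to users
    with better channels (decoded later), i.e. powers[i+1:].
    """
    order = np.argsort(gains)                  # weakest channel first
    g, p = np.asarray(gains, float)[order], np.asarray(powers, float)[order]
    rates = np.empty_like(g)
    for i in range(len(g)):
        interference = g[i] * p[i + 1:].sum()  # uncancellable intra-cluster power
        sinr = g[i] * p[i] / (interference + noise_w)
        rates[i] = bandwidth_hz * np.log2(1.0 + sinr)
    out = np.empty_like(rates)
    out[order] = rates                         # restore original user order
    return out

def energy_efficiency(gains, powers, bandwidth_hz, noise_w, circuit_w=1.0):
    """Sum rate divided by total consumed power (bit/J)."""
    total = noma_downlink_rates(gains, powers, bandwidth_hz, noise_w).sum()
    return total / (np.sum(powers) + circuit_w)
```

A learning agent in the user-association/power-control phase would treat `energy_efficiency` (or a per-user variant of it) as its reward signal.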
Revolutionizing Future Connectivity: A Contemporary Survey on AI-empowered Satellite-based Non-Terrestrial Networks in 6G
Non-Terrestrial Networks (NTN) are expected to be a critical component of 6th
Generation (6G) networks, providing ubiquitous, continuous, and scalable
services. Satellites emerge as the primary enabler for NTN, leveraging their
extensive coverage, stable orbits, scalability, and adherence to international
regulations. However, satellite-based NTN presents unique challenges, including
long propagation delay, high Doppler shift, frequent handovers, spectrum
sharing complexities, and intricate beam and resource allocation, among others.
The integration of NTNs into existing terrestrial networks in 6G introduces a
range of novel challenges, including task offloading, network routing, network
slicing, and many more. To tackle all these obstacles, this paper proposes
Artificial Intelligence (AI) as a promising solution, harnessing its ability to
capture intricate correlations among diverse network parameters. We begin by
providing a comprehensive background on NTN and AI, highlighting the potential
of AI techniques in addressing various NTN challenges. Next, we present an
overview of existing works, emphasizing AI as an enabling tool for
satellite-based NTN, and explore potential research directions. Furthermore, we
discuss ongoing research efforts that aim to enable AI in satellite-based NTN
through software-defined implementations, while also discussing the associated
challenges. Finally, we conclude by providing insights and recommendations for
enabling AI-driven satellite-based NTN in future 6G networks.
Comment: 40 pages, 19 figures, 10 tables, survey
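The scale of two of the impairments listed above, propagation delay and Doppler shift, can be sketched from first principles for a circular LEO orbit. The 600 km altitude and 20 GHz carrier below are illustrative values, not taken from the survey.

```python
import math

C = 299_792_458.0          # speed of light, m/s
MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def leo_one_way_delay_s(altitude_m):
    """One-way propagation delay to a satellite directly overhead."""
    return altitude_m / C

def leo_orbital_speed_ms(altitude_m):
    """Circular orbital speed v = sqrt(mu / r)."""
    return math.sqrt(MU / (R_EARTH + altitude_m))

def max_doppler_shift_hz(altitude_m, carrier_hz):
    """Worst-case Doppler magnitude |f_d| = (v / c) * f_c,
    taking the full orbital speed as the radial velocity bound."""
    return leo_orbital_speed_ms(altitude_m) / C * carrier_hz
```

At 600 km the nadir delay is about 2 ms one way and the worst-case Doppler at a 20 GHz carrier is on the order of 500 kHz, far beyond what terrestrial mobility induces, which is why NTN links need dedicated timing and frequency compensation.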
SpaceRIS: LEO Satellite Coverage Maximization in 6G Sub-THz Networks by MAPPO DRL and Whale Optimization
Satellite systems face a significant challenge in effectively utilizing
limited communication resources to meet the demands of ground network
traffic, which is asymmetrically distributed in space and varies over time.
Moreover, the coverage range and signal transmission distance
of low Earth orbit (LEO) satellites are restricted by notable propagation
attenuation, molecular absorption, and space losses in sub-terahertz (THz)
frequencies. This paper introduces a novel approach to maximize LEO satellite
coverage by leveraging reconfigurable intelligent surfaces (RISs) within 6G
sub-THz networks. The optimization objectives encompass enhancing the
end-to-end data rate, optimizing satellite-remote user equipment (RUE)
associations, data packet routing within satellite constellations, RIS phase
shift, and ground base station (GBS) transmit power (i.e., active beamforming).
The formulated joint optimization problem poses significant challenges owing to
its time-varying environment, non-convex characteristics, and NP-hard
complexity. To address these challenges, we propose a block coordinate descent
(BCD) algorithm that integrates balanced K-means clustering, multi-agent
proximal policy optimization (MAPPO) deep reinforcement learning (DRL), and
whale optimization algorithm (WOA) techniques. The performance of the proposed approach
is demonstrated through comprehensive simulation results, exhibiting its
superiority over existing baseline methods in the literature.
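One ingredient of that hybrid, the whale optimization algorithm, can be sketched on a toy continuous objective. The sphere function, search bounds, and hyperparameters below are illustrative stand-ins for the paper's actual beamforming/phase-shift subproblems, and the per-dimension `A` coefficient is one common implementation variant.

```python
import numpy as np

def whale_optimize(f, dim, bounds, n_whales=30, iters=300, seed=0):
    """Minimize f over [bounds[0], bounds[1]]^dim with the whale optimization
    algorithm: shrinking encirclement, spiral update, random exploration."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_whales, dim))
    fit = np.apply_along_axis(f, 1, X)
    best, best_f = X[fit.argmin()].copy(), fit.min()
    b = 1.0                                    # spiral shape constant
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters              # linearly decreasing coefficient
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, Cc = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):      # exploitation: encircle the best
                    D = np.abs(Cc * best - X[i])
                    X[i] = best - A * D
                else:                          # exploration: follow a random whale
                    Xr = X[rng.integers(n_whales)]
                    D = np.abs(Cc * Xr - X[i])
                    X[i] = Xr - A * D
            else:                              # spiral bubble-net update
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            fi = f(X[i])
            if fi < best_f:
                best_f, best = fi, X[i].copy()
    return best, best_f
```

In the paper's pipeline a metaheuristic like this would handle one block of the BCD loop, with MAPPO agents handling the sequential-decision blocks.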
Self-Evolving Integrated Vertical Heterogeneous Networks
6G and beyond networks are trending towards fully intelligent and adaptive
designs in order to provide better operational agility in maintaining
universal wireless access and supporting a wide range of services and use
cases while handling network complexity efficiently. Such enhanced network agility will require
developing a self-evolving capability in designing both the network
architecture and resource management to intelligently utilize resources, reduce
operational costs, and achieve the coveted quality of service (QoS). To enable
this capability, the necessity of considering an integrated vertical
heterogeneous network (VHetNet) architecture appears to be inevitable due to
its high inherent agility. Moreover, employing an intelligent framework is
another crucial requirement for self-evolving networks to deal with real-time
network optimization problems. Hence, in this work, to provide a better insight
on network architecture design in support of self-evolving networks, we
highlight the merits of integrated VHetNet architecture while proposing an
intelligent framework for self-evolving integrated vertical heterogeneous
networks (SEI-VHetNets). The impact of the challenges associated with the
SEI-VHetNet architecture on network management is also studied considering a
generalized network model. Furthermore, the current literature on network
management of integrated VHetNets along with the recent advancements in
artificial intelligence (AI)/machine learning (ML) solutions are discussed.
Accordingly, the core challenges of integrating AI/ML in SEI-VHetNets are
identified. Finally, the potential future research directions for advancing the
autonomous and self-evolving capabilities of SEI-VHetNets are discussed.
Comment: 25 pages, 5 figures, 2 tables
A Survey on UAV-Aided Maritime Communications: Deployment Considerations, Applications, and Future Challenges
Maritime activities represent a major domain of economic growth with several
emerging maritime Internet of Things use cases, such as smart ports, autonomous
navigation, and ocean monitoring systems. The major enabler for this exciting
ecosystem is the provision of broadband, low-delay, and reliable wireless
coverage to the ever-increasing number of vessels, buoys, platforms, sensors,
and actuators. Towards this end, the integration of unmanned aerial vehicles
(UAVs) in maritime communications introduces an aerial dimension to wireless
connectivity going above and beyond current deployments, which mainly rely
on shore-based base stations with limited coverage and satellite links
with high latency. Considering the potential of UAV-aided wireless
communications, this survey presents the state-of-the-art in UAV-aided maritime
communications, which, in general, are based on both conventional optimization
and machine-learning-aided approaches. More specifically, relevant UAV-based
network architectures are discussed together with the role of their building
blocks. Then, physical-layer, resource management, and cloud/edge computing and
caching UAV-aided solutions in maritime environments are discussed and grouped
based on their performance targets. Moreover, as UAVs are characterized by
flexible deployment with high re-positioning capabilities, studies on UAV
trajectory optimization for maritime applications are thoroughly discussed. In
addition, aiming at shedding light on the current status of real-world
deployments, experimental studies on UAV-aided maritime communications are
presented and implementation details are given. Finally, several important
open issues in the area of UAV-aided maritime communications are outlined,
related to the integration of sixth-generation (6G) advancements.
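The deployment side of such studies can be illustrated with a toy placement step: choosing the 2-D hover position that minimizes the worst-case horizontal link distance to a set of vessels, a min-max coverage criterion. The vessel coordinates, search grid, and step size below are hypothetical.

```python
import itertools
import math

def best_hover_position(vessels, x_range, y_range, step=1.0):
    """Grid-search the UAV hover position minimizing the maximum
    horizontal distance to any vessel in `vessels` (list of (x, y))."""
    xs = [x_range[0] + step * i
          for i in range(int((x_range[1] - x_range[0]) / step) + 1)]
    ys = [y_range[0] + step * i
          for i in range(int((y_range[1] - y_range[0]) / step) + 1)]
    best_pos, best_cost = None, float("inf")
    for x, y in itertools.product(xs, ys):
        cost = max(math.hypot(x - vx, y - vy) for vx, vy in vessels)
        if cost < best_cost:
            best_pos, best_cost = (x, y), cost
    return best_pos, best_cost
```

Real trajectory-optimization studies replace this exhaustive search with convex relaxations or learning-based planners, and trade coverage against energy and backhaul constraints, but the min-max objective is the same starting point.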
Five Facets of 6G: Research Challenges and Opportunities
Whilst the fifth-generation (5G) systems are being rolled out across the
globe, researchers have turned their attention to the exploration of radical
next-generation solutions. At this early evolutionary stage we survey five
main research facets of this field, namely Facet 1: next-generation
architectures, spectrum and services; Facet 2: next-generation networking;
Facet 3: Internet of Things (IoT); Facet 4: wireless positioning and sensing;
as well as Facet 5: applications of deep learning in 6G networks. In this
paper, we have provided a critical appraisal of the literature of promising
techniques ranging from the associated architectures, networking, applications
as well as designs. We have portrayed a plethora of heterogeneous architectures
relying on cooperative hybrid networks supported by diverse access and
transmission mechanisms. The vulnerabilities of these techniques are also
addressed and carefully considered to highlight the most promising
future research directions. Additionally, we have listed a rich suite of
learning-driven optimization techniques. We conclude by observing the
evolutionary paradigm-shift that has taken place from pure single-component
bandwidth-efficiency, power-efficiency or delay-optimization towards
multi-component designs, as exemplified by the twin-component ultra-reliable
low-latency mode of the 5G system. We advocate a further evolutionary step
towards multi-component Pareto optimization, which requires the exploration of
the entire Pareto front of all optimal solutions, where none of the components
of the objective function may be improved without degrading at least one of the
other components.
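The multi-component view described above can be made concrete with a non-dominated filter: given candidate designs scored on several objectives (all to be maximized, e.g. bandwidth efficiency, power efficiency, and negated delay), keep only those that no other design improves in every component. The sample points below are illustrative.

```python
def pareto_front(points):
    """Return the non-dominated subset of `points`, maximizing every
    component: p dominates q if p >= q component-wise and p > q somewhere."""
    def dominates(p, q):
        return (all(a >= b for a, b in zip(p, q))
                and any(a > b for a, b in zip(p, q)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Every surviving point embodies a distinct trade-off; picking among them (e.g. favoring latency over power) is a policy decision outside the optimizer, which is exactly the shift from single-component to Pareto design the authors describe.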