Evolution of High Throughput Satellite Systems: Vision, Requirements, and Key Technologies
High throughput satellites (HTS), with their digital payload technology, are
expected to play a key role as enablers of the upcoming 6G networks. HTS are
mainly designed to provide higher data rates and capacities. Fueled by
technological advancements including beamforming, advanced modulation
techniques, reconfigurable phased array technologies, and electronically
steerable antennas, HTS have emerged as a fundamental component of future
network generations. This paper offers a comprehensive state-of-the-art review of HTS
systems, with a focus on standardization, patents, channel multiple access
techniques, routing, load balancing, and the role of software-defined
networking (SDN). In addition, we provide a vision for next-generation
satellite systems, which we name extremely high throughput satellites (EHTS),
toward autonomous satellites, supported by the main requirements and key
technologies expected for these systems. The EHTS system will be designed to
maximize spectrum reuse and data rates and to flexibly steer capacity to
satisfy user demand. We introduce a novel architecture for future
regenerative payloads and summarize the challenges imposed by this
architecture.
A Survey on UAV-Aided Maritime Communications: Deployment Considerations, Applications, and Future Challenges
Maritime activities represent a major domain of economic growth with several
emerging maritime Internet of Things use cases, such as smart ports, autonomous
navigation, and ocean monitoring systems. The major enabler for this exciting
ecosystem is the provision of broadband, low-delay, and reliable wireless
coverage to the ever-increasing number of vessels, buoys, platforms, sensors,
and actuators. Towards this end, the integration of unmanned aerial vehicles
(UAVs) in maritime communications introduces an aerial dimension to wireless
connectivity, going above and beyond current deployments, which mainly rely
on shore-based base stations with limited coverage and satellite links
with high latency. Considering the potential of UAV-aided wireless
communications, this survey presents the state-of-the-art in UAV-aided maritime
communications, which, in general, are based on both conventional optimization
and machine-learning-aided approaches. More specifically, relevant UAV-based
network architectures are discussed together with the role of their building
blocks. Then, physical-layer, resource management, and cloud/edge computing and
caching UAV-aided solutions in maritime environments are discussed and grouped
based on their performance targets. Moreover, as UAVs are characterized by
flexible deployment with high re-positioning capabilities, studies on UAV
trajectory optimization for maritime applications are thoroughly discussed. In
addition, aiming at shedding light on the current status of real-world
deployments, experimental studies on UAV-aided maritime communications are
presented and implementation details are given. Finally, several important open
issues in the area of UAV-aided maritime communications are discussed, relating to
the integration of sixth-generation (6G) advancements.
Supporting UAVs with Edge Computing: A Review of Opportunities and Challenges
In recent years, Unmanned Aerial Vehicles (UAVs) have seen significant
advancements in sensor capabilities and computational abilities, allowing for
efficient autonomous navigation and visual tracking applications. However, the
demand for computationally complex tasks has increased faster than advances in
battery technology. This opens up possibilities for improvements using edge
computing. In edge computing, edge servers can achieve lower latency responses
compared to traditional cloud servers through strategic geographic deployments.
Furthermore, these servers can maintain superior computational performance
compared to UAVs, as they are not limited by battery constraints. Combining
these technologies by aiding UAVs with edge servers, research has found measurable
improvements in task completion speed, energy efficiency, and reliability
across multiple applications and industries. This systematic literature review
analyzes the current state of research and collects, selects, and extracts
the key areas where UAV activities can be supported and improved through edge
computing.
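To make the offloading tradeoff at the heart of this review concrete, the sketch below models the standard binary decision: offload a task when uplink transfer plus edge execution finishes sooner than onboard execution. All numbers (CPU frequencies, link rate, task size) are illustrative assumptions, not figures from the review.

```python
# Minimal sketch of the binary offloading tradeoff highlighted above. All
# parameters (CPU frequencies, link rate, task sizes) are illustrative
# assumptions, not figures from the review.

def local_latency(cycles: float, f_uav: float) -> float:
    """Seconds to run a task of `cycles` CPU cycles on the UAV at f_uav Hz."""
    return cycles / f_uav

def edge_latency(cycles: float, data_bits: float, rate_bps: float,
                 f_edge: float) -> float:
    """Uplink transfer time plus execution time on the edge server."""
    return data_bits / rate_bps + cycles / f_edge

def should_offload(cycles: float, data_bits: float,
                   f_uav: float = 1e9,       # 1 GHz onboard CPU (assumed)
                   f_edge: float = 10e9,     # 10 GHz edge server (assumed)
                   rate_bps: float = 50e6) -> bool:  # 50 Mbit/s uplink (assumed)
    """Offload iff the edge finishes the task sooner than local execution."""
    return edge_latency(cycles, data_bits, rate_bps, f_edge) < local_latency(cycles, f_uav)

# Example: a 2-gigacycle vision task with a 4 MB input frame.
# Edge: 0.64 s transfer + 0.20 s compute = 0.84 s; local: 2.0 s -> offload.
print(should_offload(cycles=2e9, data_bits=4 * 8e6))
```

The same comparison extends directly to energy by weighting transmit and compute terms with power draws, which is how many of the reviewed works trade task completion speed against battery life.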
On the Road to 6G: Visions, Requirements, Key Technologies and Testbeds
Fifth generation (5G) mobile communication systems have entered the stage of commercial development, providing users with new services and improved user experiences as well as offering a host of novel opportunities to various industries. However, 5G still faces many challenges. To address these challenges, international industrial, academic, and standards organizations have commenced research on sixth generation (6G) wireless communication systems. A series of white papers and survey papers have been published, which aim to define 6G in terms of requirements, application scenarios, key technologies, etc. Although ITU-R has been working on the 6G vision and it is expected to reach a consensus on what 6G will be by mid-2023, the related global discussions are still wide open and the existing literature has identified numerous open issues. This paper first provides a comprehensive portrayal of the 6G vision, technical requirements, and application scenarios, covering the current common understanding of 6G. Then, a critical appraisal of the 6G network architecture and key technologies is presented. Furthermore, existing testbeds and advanced 6G verification platforms are detailed for the first time. In addition, future research directions and open challenges are identified for stimulating the on-going global debate. Finally, lessons learned to date concerning 6G networks are discussed
A Survey on UAV-enabled Edge Computing: Resource Management Perspective
Edge computing facilitates low-latency services at the network's edge by
distributing computation, communication, and storage resources in close
geographic proximity to mobile and Internet-of-Things (IoT) devices. The recent
advancement in Unmanned Aerial Vehicles (UAVs) technologies has opened new
opportunities for edge computing in military operations, disaster response, or
remote areas where traditional terrestrial networks are limited or unavailable.
In such environments, UAVs can be deployed as aerial edge servers or relays to
facilitate edge computing services. This form of computing is also known as
UAV-enabled Edge Computing (UEC), which offers several unique benefits such as
mobility, line-of-sight connectivity, flexibility, computational capability, and
cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices
are typically very limited in the context of UEC. Efficient resource management
is, therefore, a critical research challenge in UEC. In this article, we
present a survey on the existing research in UEC from the resource management
perspective. We identify a conceptual architecture, different types of
collaborations, wireless communication models, research directions, key
techniques and performance indicators for resource management in UEC. We also
present a taxonomy of resource management in UEC. Finally, we identify and
discuss some open research challenges that can stimulate future research
directions for resource management in UEC.
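The wireless communication models this survey refers to commonly build on a probabilistic line-of-sight (LoS) air-to-ground channel, which captures the UAV altitude/coverage tradeoff. The sketch below implements that widely used model; the environment constants, excess losses, and carrier frequency are assumed illustrative urban values, not parameters taken from the survey.

```python
import math

# Sketch of the probabilistic line-of-sight air-to-ground path-loss model
# commonly used in UAV communication studies. The constants below are assumed
# illustrative urban values, not parameters from the survey.

A, B = 9.61, 0.16               # environment parameters (assumed, urban)
ETA_LOS, ETA_NLOS = 1.0, 20.0   # excess losses in dB (assumed)
FREQ_HZ = 2e9                   # carrier frequency (assumed)
C = 3e8                         # speed of light, m/s

def mean_path_loss_db(height_m: float, ground_dist_m: float) -> float:
    """Average path loss between a UAV and a ground device."""
    d = math.hypot(height_m, ground_dist_m)                    # 3-D distance
    theta = math.degrees(math.atan2(height_m, ground_dist_m))  # elevation angle
    p_los = 1.0 / (1.0 + A * math.exp(-B * (theta - A)))       # LoS probability
    fspl = 20 * math.log10(4 * math.pi * FREQ_HZ * d / C)      # free-space loss
    return fspl + p_los * ETA_LOS + (1 - p_los) * ETA_NLOS

# Higher altitude raises the elevation angle and LoS probability but also the
# distance, so the mean loss is non-monotonic in height.
print(round(mean_path_loss_db(height_m=100, ground_dist_m=200), 1))
```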
Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey
The Internet of Underwater Things (IoUT) is an emerging communication
ecosystem developed for connecting underwater objects in maritime and
underwater environments. The IoUT technology is intricately linked with
intelligent boats and ships, smart shores and oceans, automatic marine
transportations, positioning and navigation, underwater exploration, disaster
prediction and prevention, as well as with intelligent monitoring and security.
The IoUT exerts influence at various scales, ranging from small scientific
observatories to midsized harbors to global oceanic trade. The
network architecture of IoUT is intrinsically heterogeneous and should be
sufficiently resilient to operate in harsh environments. This creates major
challenges in terms of underwater communications, whilst relying on limited
energy resources. Additionally, the volume, velocity, and variety of data
produced by sensors, hydrophones, and cameras in IoUT are enormous, giving rise
to the concept of Big Marine Data (BMD), which has its own processing
challenges. Hence, conventional data processing techniques will falter, and
bespoke Machine Learning (ML) solutions have to be employed for automatically
learning the specific BMD behavior and features facilitating knowledge
extraction and decision support. The motivation of this paper is to
comprehensively survey the IoUT, BMD, and their synthesis. It also explores
the nexus of BMD with ML. We set out from underwater data collection
and then discuss the family of IoUT data communication techniques with an
emphasis on the state-of-the-art research challenges. We then review the suite
of ML solutions suitable for BMD handling and analytics. We treat the subject
deductively from an educational perspective, critically appraising the material
surveyed.
Dynamic Resource Management in Integrated NOMA Terrestrial-Satellite Networks using Multi-Agent Reinforcement Learning
This study introduces a resource allocation framework that addresses the
time-delay and energy-efficiency challenges of integrated
satellite-terrestrial networks. The framework
leverages local cache pool deployments and non-orthogonal multiple access
(NOMA) to reduce time delays and improve energy efficiency. Our proposed
approach utilizes a multi-agent deep deterministic policy gradient (MADDPG)
algorithm to optimize user association, cache design, and transmission
power control, resulting in enhanced energy efficiency. The approach comprises
two phases: User Association and Power Control, where users are treated as
agents, and Cache Optimization, where the satellite base station (BS) is considered the
agent. Through extensive simulations, we demonstrate that our approach
surpasses conventional single-agent deep reinforcement learning algorithms in
addressing cache design and resource allocation challenges in integrated
terrestrial-satellite networks. Specifically, our proposed approach achieves
significantly higher energy efficiency and reduced time delays compared to
existing methods.
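As a rough illustration of the multi-agent scheme described here, the following sketch performs one generic MADDPG update for the user agents: decentralized actors paired with centralized critics that observe all agents' states and actions. Dimensions, network sizes, and hyperparameters are assumptions for illustration; the abstract does not specify the authors' exact architecture.

```python
import copy
import torch
import torch.nn as nn

# Hedged sketch of one MADDPG update step (users as agents, as in the paper).
# All sizes and hyperparameters below are illustrative assumptions.

N, OBS, ACT = 4, 8, 2    # 4 user agents; action = (association, power)
GAMMA, TAU = 0.99, 0.01  # discount factor, Polyak averaging rate

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 64), nn.ReLU(), nn.Linear(64, o))

actors  = [mlp(OBS, ACT) for _ in range(N)]            # decentralized actors
critics = [mlp(N * (OBS + ACT), 1) for _ in range(N)]  # centralized critics
t_actors, t_critics = copy.deepcopy(actors), copy.deepcopy(critics)
a_opt = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]
c_opt = [torch.optim.Adam(c.parameters(), lr=1e-3) for c in critics]

def flat(obs, acts):
    """Concatenate all agents' observations and actions per batch element."""
    B = obs.shape[1]
    return torch.cat([obs.transpose(0, 1).reshape(B, -1),
                      acts.transpose(0, 1).reshape(B, -1)], dim=1)

def update(obs, act, rew, nxt):
    """One update from a batch: obs/nxt (N, B, OBS), act (N, B, ACT), rew (N, B, 1)."""
    with torch.no_grad():
        nxt_act = torch.stack([t_actors[i](nxt[i]) for i in range(N)])
    for i in range(N):
        # Critic i: regress its centralized Q toward the one-step TD target.
        with torch.no_grad():
            y = rew[i] + GAMMA * t_critics[i](flat(nxt, nxt_act))
        c_loss = nn.functional.mse_loss(critics[i](flat(obs, act)), y)
        c_opt[i].zero_grad(); c_loss.backward(); c_opt[i].step()

        # Actor i: ascend Q_i while holding the other agents' actions fixed.
        acts_pi = torch.stack([actors[j](obs[j]) if j == i else act[j]
                               for j in range(N)])
        a_loss = -critics[i](flat(obs, acts_pi)).mean()
        a_opt[i].zero_grad(); a_loss.backward(); a_opt[i].step()

    # Slowly track the learned networks with the target networks.
    for net, tgt in zip(actors + critics, t_actors + t_critics):
        for p, tp in zip(net.parameters(), tgt.parameters()):
            tp.data.mul_(1 - TAU).add_(TAU * p.data)

# Example call with a random batch of 32 transitions.
update(torch.randn(N, 32, OBS), torch.randn(N, 32, ACT),
       torch.randn(N, 32, 1), torch.randn(N, 32, OBS))
```

The centralized critics are what distinguish this from running independent single-agent DDPG learners, and they are the plausible source of the gains the paper reports over single-agent baselines.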
Multi-objective resource optimization in space-aerial-ground-sea integrated networks
Space-air-ground-sea integrated (SAGSI) networks are envisioned to connect satellite, aerial, ground,
and sea networks to provide connectivity everywhere and all the time in sixth-generation (6G) networks. However, the success of SAGSI networks is constrained by several challenges including
resource optimization when the users have diverse requirements and applications. We present a
comprehensive review of SAGSI networks from a resource optimization perspective. We discuss
use case scenarios and possible applications of SAGSI networks. The resource optimization discussion considers the challenges associated with SAGSI networks. In our review, we categorize
resource optimization techniques by objective: throughput and capacity maximization, delay minimization, energy consumption, task offloading, task scheduling, resource allocation or utilization,
network operation cost, outage probability, average age of information, joint optimization (data rate difference, storage or caching, CPU cycle frequency), overall network performance and
performance degradation, software-defined networking, and intelligent surveillance
and relay communication. We then formulate a mathematical framework for maximizing energy
efficiency, resource utilization, and user association. We optimize user association while satisfying
the constraints of transmit power, data rate, and user association with priority. The binary decision
variable is used to associate users with system resources. Since the decision variable is binary and
constraints are linear, the formulated problem is a binary linear programming problem. Based on
our formulated framework, we simulate and analyze the performance of three different algorithms
(branch and bound algorithm, interior point method, and barrier simplex algorithm) and compare
the results. Simulation results show that the branch and bound algorithm achieves the best results,
so we adopt it as our benchmark. The complexity of branch and bound increases exponentially
as the number of users and stations in the SAGSI network grows. The interior point method and the
barrier simplex algorithm achieve results comparable to the benchmark at lower
complexity. Finally, we discuss future research directions and challenges of resource optimization
in SAGSI networks.
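A minimal runnable sketch of the kind of binary linear program formulated above: binary variables x[u, s] associate user u with station s to maximize a sum-rate objective under single-association and per-station capacity constraints. The rate matrix and capacities are made-up illustrative data, not the paper's; scipy's milp hands the model to an off-the-shelf branch-and-bound solver (HiGHS), in the spirit of the paper's benchmark.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Illustrative sketch of the user-association binary linear program described
# above. The rate matrix and capacities are made-up numbers, not paper data.

U, S = 5, 3                                  # users, stations (assumed sizes)
rng = np.random.default_rng(0)
rate = rng.uniform(1.0, 10.0, size=(U, S))   # rate of user u at station s
cap = np.array([2, 2, 1])                    # per-station capacity (assumed)

c = -rate.ravel()                            # milp minimizes, so negate rates

# Constraint 1: each user is associated with exactly one station.
one_assoc = np.zeros((U, U * S))
for u in range(U):
    one_assoc[u, u * S:(u + 1) * S] = 1.0

# Constraint 2: each station serves at most cap[s] users.
cap_rows = np.zeros((S, U * S))
for s in range(S):
    cap_rows[s, s::S] = 1.0

res = milp(c,
           constraints=[LinearConstraint(one_assoc, lb=1, ub=1),
                        LinearConstraint(cap_rows, ub=cap)],
           integrality=np.ones(U * S),       # integer variables...
           bounds=Bounds(0, 1))              # ...restricted to {0, 1}

x = res.x.reshape(U, S).round().astype(int)
print("association matrix:\n", x)
print("sum rate:", -res.fun)
```

Because every variable is binary and every constraint is linear, the model is exactly the binary linear programming problem the paper identifies, and its branch-and-bound solution cost grows exponentially with U and S, matching the complexity observation above.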