Applicability and Challenges of Deep Reinforcement Learning for Satellite Frequency Plan Design
The study and benchmarking of Deep Reinforcement Learning (DRL) models has
become a trend in many industries, including aerospace engineering and
communications. Recent studies in these fields propose these kinds of models to
address certain complex real-time decision-making problems in which classic
approaches do not meet time requirements or fail to obtain optimal solutions.
While the good performance of DRL models has been proved for specific use cases
or scenarios, most studies do not discuss the compromises and generalizability
of such models during real operations. In this paper we explore the tradeoffs
of different elements of DRL models and how they might impact the final
performance. To that end, we choose the Frequency Plan Design (FPD) problem in
the context of multibeam satellite constellations as our use case and propose a
DRL model to address it. We identify six core elements that have a major
effect on its performance: the policy, the policy optimizer, the state,
action, and reward representations, and the training environment. We analyze
different alternatives for each of these elements and characterize their
effect. We also use multiple environments to account for different scenarios in
which we vary the dimensionality or make the environment nonstationary. Our
findings show that DRL is a potential method to address the FPD problem in real
operations, especially because of its speed in decision-making. However, no
single DRL model is able to outperform the rest in all scenarios, and the best
approach for each of the six core elements depends on the features of the
operation environment. While we agree on the potential of DRL to solve future
complex problems in the aerospace industry, we also reflect on the importance
of designing appropriate models and training procedures, understanding the
applicability of such models, and reporting the main performance tradeoffs.
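The state, action, and reward elements identified above can be made concrete with a toy example. The sketch below trains a tabular Q-learning agent to assign channels to beams one at a time, penalizing adjacent beams that reuse the same channel as a stand-in for co-channel interference. This is an illustrative miniature, not the paper's DRL model: the beam count, channel count, and reward shape are all assumptions chosen for clarity.

```python
import random

# Toy frequency-plan environment: assign one of K channels to each of N beams
# in sequence; reward penalizes giving adjacent beams the same channel
# (a crude proxy for co-channel interference).
N_BEAMS, K_CHANNELS = 6, 3

def step(state, action):
    """state = tuple of channels assigned so far; action = channel for next beam."""
    reward = -1.0 if state and state[-1] == action else 1.0
    next_state = state + (action,)
    done = len(next_state) == N_BEAMS
    return next_state, reward, done

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = (), False
        while not done:
            if rng.random() < eps:  # epsilon-greedy exploration
                action = rng.randrange(K_CHANNELS)
            else:
                action = max(range(K_CHANNELS), key=lambda a: q.get((state, a), 0.0))
            nxt, r, done = step(state, action)
            best_next = 0.0 if done else max(q.get((nxt, a), 0.0)
                                             for a in range(K_CHANNELS))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (r + gamma * best_next - old)
            state = nxt
    return q

def greedy_plan(q):
    """Roll out the greedy policy to produce a full frequency plan."""
    state = ()
    while len(state) < N_BEAMS:
        action = max(range(K_CHANNELS), key=lambda a: q.get((state, a), 0.0))
        state, _, _ = step(state, action)
    return state

plan = greedy_plan(train())
print(plan)
```

After training, the greedy rollout produces a plan with no adjacent channel reuse, illustrating how the choice of state, action, and reward representation determines what the agent can learn.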
Evolution of High Throughput Satellite Systems: Vision, Requirements, and Key Technologies
High throughput satellites (HTS), with their digital payload technology, are
expected to play a key role as enablers of the upcoming 6G networks. HTS are
mainly designed to provide higher data rates and capacities. Fueled by
technological advancements including beamforming, advanced modulation
techniques, reconfigurable phased array technologies, and electronically
steerable antennas, HTS have emerged as a fundamental component of future
network generations. This paper offers a comprehensive state-of-the-art survey of HTS
systems, with a focus on standardization, patents, channel multiple access
techniques, routing, load balancing, and the role of software-defined
networking (SDN). In addition, we provide a vision for next-generation satellite
systems, which we name extremely-HTS (EHTS), toward autonomous satellites supported by
the main requirements and key technologies expected for these systems. The EHTS
system will be designed such that it maximizes spectrum reuse and data rates,
and flexibly steers the capacity to satisfy user demand. We introduce a novel
architecture for future regenerative payloads while summarizing the challenges
imposed by this architecture.
Revolutionizing Future Connectivity: A Contemporary Survey on AI-empowered Satellite-based Non-Terrestrial Networks in 6G
Non-Terrestrial Networks (NTN) are expected to be a critical component of 6th
Generation (6G) networks, providing ubiquitous, continuous, and scalable
services. Satellites emerge as the primary enabler for NTN, leveraging their
extensive coverage, stable orbits, scalability, and adherence to international
regulations. However, satellite-based NTN presents unique challenges, including
long propagation delay, high Doppler shift, frequent handovers, spectrum
sharing complexities, and intricate beam and resource allocation, among others.
The integration of NTNs into existing terrestrial networks in 6G introduces a
range of novel challenges, including task offloading, network routing, network
slicing, and many more. To tackle all these obstacles, this paper proposes
Artificial Intelligence (AI) as a promising solution, harnessing its ability to
capture intricate correlations among diverse network parameters. We begin by
providing a comprehensive background on NTN and AI, highlighting the potential
of AI techniques in addressing various NTN challenges. Next, we present an
overview of existing works, emphasizing AI as an enabling tool for
satellite-based NTN, and explore potential research directions. Furthermore, we
discuss ongoing research efforts that aim to enable AI in satellite-based NTN
through software-defined implementations, while also discussing the associated
challenges. Finally, we conclude by providing insights and recommendations for
enabling AI-driven satellite-based NTN in future 6G networks.
Comment: 40 pages, 19 figures, 10 tables, Survey
Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing
The latest satellite communication (SatCom) missions are characterized by a
fully reconfigurable on-board software-defined payload, capable of adapting
radio resources to the temporal and spatial variations of the system traffic.
As pure optimization-based solutions have been shown to be computationally tedious
and to lack flexibility, machine learning (ML)-based methods have emerged as
promising alternatives. We investigate the application of energy-efficient
brain-inspired ML models for on-board radio resource management. Apart from
software simulation, we report extensive experimental results leveraging the
recently released Intel Loihi 2 chip. To benchmark the performance of the
proposed model, we implement conventional convolutional neural networks (CNN)
on a Xilinx Versal VCK5000, and provide a detailed comparison of accuracy,
precision, recall, and energy efficiency for different traffic demands. Most
notably, for relevant workloads, spiking neural networks (SNNs) implemented on
Loihi 2 yield higher accuracy, while reducing power consumption by more than
a factor of 100 compared to the CNN-based reference platform. Our findings point
to the significant potential of neuromorphic computing and SNNs in supporting
on-board SatCom operations, paving the way for enhanced efficiency and
sustainability in future SatCom systems.
Comment: currently under review at IEEE Transactions on Machine Learning in
Communications and Networking
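The energy argument behind the SNNs discussed above rests on event-driven computation: a spiking neuron only produces output when its membrane potential crosses a threshold, so energy scales with spike activity rather than with every layer evaluation. A minimal discrete-time leaky integrate-and-fire (LIF) neuron, the basic unit of such networks, makes this concrete. The decay, threshold, and input values are illustrative assumptions, not parameters of the Loihi 2 implementation.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron.
def lif_run(inputs, decay=0.9, threshold=1.0):
    """Return the output spike train for a sequence of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v = decay * v + i          # leaky integration of the input current
        if v >= threshold:         # fire when the potential crosses threshold
            spikes.append(1)
            v = 0.0                # reset the membrane potential after a spike
        else:
            spikes.append(0)
    return spikes

# A constant weak input yields sparse spikes; since energy roughly tracks
# spike count, sparse activity is the intuition behind neuromorphic savings.
spike_train = lif_run([0.3] * 20)
print(sum(spike_train), spike_train)
```

With this input the neuron fires only once every four time steps, while a conventional CNN would perform its full multiply-accumulate workload at every step regardless of activity.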
Machine Learning for Radio Resource Management in Multibeam GEO Satellite Systems
Satellite communications (SatCom) systems are facing a massive increase in traffic demand. However, this increase is not uniform across the service area due to the uneven distribution of users and diurnal changes in traffic demand. This problem is addressed by using flexible payload architectures, which allow payload resources to be flexibly allocated to meet the traffic demand of each beam. While optimization-based radio resource management (RRM) has shown significant performance gains, its intense computational complexity limits its practical implementation in real systems. In this paper, we discuss the architecture, implementation, and applications of Machine Learning (ML) for resource management in multibeam GEO satellite systems. We mainly focus on two systems: one with power, bandwidth, and/or beamwidth flexibility, and a second with time flexibility, i.e., beam hopping. We analyze and compare different ML techniques that have been proposed for these architectures, emphasizing the use of Supervised Learning (SL) and Reinforcement Learning (RL). To this end, we define whether training should be conducted online or offline based on the characteristics and requirements of each proposed ML technique, and discuss the most appropriate system architecture and the advantages and disadvantages of each approach.
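A simple point of reference for the flexible-payload RRM problem described above is demand-proportional allocation: split the available spectrum across beams in proportion to each beam's traffic demand. This is a common non-ML baseline against which both SL and RL allocators can be benchmarked; the sketch below is an assumption-laden miniature (real systems add power, interference, and carrier-granularity constraints).

```python
# Demand-proportional bandwidth allocation across beams: a common non-ML
# baseline for flexible-payload resource management (illustrative sketch).
def allocate_bandwidth(demands_mbps, total_bw_mhz):
    """Split total bandwidth across beams in proportion to traffic demand."""
    total_demand = sum(demands_mbps)
    if total_demand == 0:
        # No demand anywhere: fall back to an even split.
        return [total_bw_mhz / len(demands_mbps)] * len(demands_mbps)
    return [total_bw_mhz * d / total_demand for d in demands_mbps]

# Three beams with uneven demand share 500 MHz of spectrum.
alloc = allocate_bandwidth([100, 300, 600], 500.0)
print(alloc)  # beams with higher demand receive proportionally more spectrum
```

The appeal of ML approaches over such static rules is that a trained model can account for interference coupling between beams and temporal demand patterns that a simple proportional split ignores.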
Cooperative Multi-Agent Deep Reinforcement Learning for Resource Management in Full Flexible VHTS Systems
Very high throughput satellite (VHTS) systems are expected to see a huge increase in traffic demand in the near future. Nevertheless, this increase will not be uniform over the entire service area due to the non-uniform distribution of users and changes in traffic demand during the day. This problem is addressed by using flexible payload architectures, which allow payload resources to be allocated flexibly to meet the traffic demand of each beam, leading to dynamic resource management (DRM) approaches. However, DRM adds significant complexity to VHTS systems, so in this paper we discuss the use of one reinforcement learning (RL) algorithm and two deep reinforcement learning (DRL) algorithms to manage the resources available in flexible payload architectures for DRM. These algorithms are Q-Learning (QL), Deep Q-Learning (DQL), and Double Deep Q-Learning (DDQL), which are compared based on their performance, complexity, and added latency. In addition, this work demonstrates the superiority of a cooperative multi-agent (CMA) decentralized distribution over a single-agent (SA) approach.
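The distinguishing idea of the DDQL variant mentioned above is double Q-learning: maintain two value estimates, let one select the greedy next action and the other evaluate it, which reduces the overestimation bias of plain Q-learning. The tabular sketch below shows just that update rule on a dummy transition; it is an illustration of the mechanism, not the paper's DRM agent (states, actions, and reward are placeholders).

```python
import random

# Tabular double Q-learning update: one table selects the greedy action,
# the other evaluates it, reducing Q-learning's overestimation bias.
def double_q_update(qa, qb, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9, rng=random):
    if rng.random() < 0.5:
        qa, qb = qb, qa  # randomly pick which table to update this step
    # Select the greedy next action with the table being updated...
    best = max(actions, key=lambda a: qa.get((next_state, a), 0.0))
    # ...but evaluate it with the other table.
    target = reward + gamma * qb.get((next_state, best), 0.0)
    old = qa.get((state, action), 0.0)
    qa[(state, action)] = old + alpha * (target - old)

qa, qb = {}, {}
rng = random.Random(1)
# Repeatedly observe the same dummy transition with reward 1.0;
# both tables converge toward the true value of ("s0", action 0).
for _ in range(200):
    double_q_update(qa, qb, "s0", 0, 1.0, "s1", actions=[0, 1], rng=rng)
print(qa[("s0", 0)], qb[("s0", 0)])
```

In DDQL the two tables are replaced by an online network and a target network, but the selection/evaluation split is the same.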
When Virtual Reality Meets Rate Splitting Multiple Access: A Joint Communication and Computation Approach
Rate Splitting Multiple Access (RSMA) has emerged as an effective
interference management scheme for applications that require high data rates.
Although RSMA has shown advantages in rate enhancement and spectral efficiency,
it is not yet ready for latency-sensitive applications such as virtual
reality streaming, which is an essential building block of future 6G networks.
Unlike conventional High-Definition streaming applications, virtual reality
streaming not only imposes stringent latency requirements but also demands
computation capability at the transmitter to respond quickly to dynamic
user requests. Thus, conventional RSMA approaches usually fail to address the
challenges caused by computational demands at the transmitter, let alone the
dynamic nature of the virtual reality streaming applications. To overcome the
aforementioned challenges, we first formulate the virtual reality streaming
problem assisted by RSMA as a joint communication and computation optimization
problem. A novel multicast approach is then proposed to cluster users into
different groups based on a Field-of-View metric and transmit multicast streams
in a hierarchical manner. After that, we propose a deep reinforcement learning
approach to obtain the solution for the optimization problem. Extensive
simulations show that our framework can achieve the millisecond-latency
requirement, with latency much lower than that of other baseline schemes.
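The clustering step described above, grouping users by a Field-of-View metric so that users watching similar content share one multicast stream, can be sketched with a simple greedy rule: a user joins the first group whose viewed-tile set overlaps its own above a threshold. The tile sets, Jaccard metric, and threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Greedy grouping of VR users by Field-of-View (FoV) overlap: users watching
# similar tiles are served by one multicast stream (illustrative sketch).
def fov_overlap(a, b):
    """Jaccard similarity between two sets of viewed tiles."""
    return len(a & b) / len(a | b)

def cluster_users(fovs, threshold=0.5):
    groups = []
    for uid, tiles in fovs.items():
        for group in groups:
            if fov_overlap(tiles, group["tiles"]) >= threshold:
                group["users"].append(uid)
                group["tiles"] |= tiles  # widen the group's multicast FoV
                break
        else:
            groups.append({"users": [uid], "tiles": set(tiles)})
    return groups

fovs = {
    "u1": {1, 2, 3, 4},
    "u2": {2, 3, 4, 5},   # heavy overlap with u1 -> same multicast group
    "u3": {10, 11, 12},   # disjoint view -> its own group
}
groups = cluster_users(fovs)
print([g["users"] for g in groups])
```

Fewer, larger groups reduce the number of distinct streams the transmitter must encode and send, which is exactly the communication/computation trade-off the joint optimization balances.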
Convolutional Neural Networks for Flexible Payload Management in VHTS Systems
Very high throughput satellite (VHTS) systems are expected to see a large increase in traffic demand in the near future. However, this increase will not be uniform throughout the service area due to the nonuniform user distribution and the changing traffic demand during the day. This problem is addressed using flexible payload architectures, which enable the allocation of payload resources in a flexible manner to meet the traffic demand of each beam, leading to dynamic resource management (DRM) approaches. However, DRM adds significant complexity to VHTS systems, which is why in this article we analyze the use of convolutional neural networks (CNNs) to manage the resources available in flexible payload architectures for DRM. The VHTS system model is first outlined to introduce the DRM problem statement and the CNN-based solution. A comparison between different payload architectures is performed in terms of DRM response, and the CNN's performance is compared with three other algorithms previously suggested in the literature, to demonstrate the effectiveness of the suggested approach and to examine the challenges involved.
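The reason CNNs suit this problem is that per-beam traffic demand forms a spatial grid, and a convolution extracts local demand patterns (hotspots, gradients) that drive the resource decision. The pure-Python sketch below applies a single 3x3 averaging convolution to a toy demand grid; the grid values and kernel are illustrative assumptions, not the article's trained network.

```python
# A single 2-D convolution over a beam traffic-demand grid: the basic
# operation a CNN-based DRM approach builds on (pure-Python sketch).
def conv2d(grid, kernel):
    """Valid-mode 2-D convolution (no padding, stride 1)."""
    gh, gw = len(grid), len(grid[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(gh - kh + 1):
        row = []
        for j in range(gw - kw + 1):
            row.append(sum(grid[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

demand = [[0, 0, 0, 0],
          [0, 9, 9, 0],
          [0, 9, 9, 0],
          [0, 0, 0, 0]]                    # a demand hotspot in the central beams
avg3 = [[1 / 9] * 3 for _ in range(3)]     # 3x3 averaging kernel
feature_map = conv2d(demand, avg3)
print(feature_map)                         # smoothed demand around the hotspot
```

A full CNN stacks many such filters with learned weights and nonlinearities, mapping the demand map directly to a resource configuration in a single fast forward pass, which is what makes it attractive against slower optimization-based DRM.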