Spatio-Temporal Motifs for Optimized Vehicle-to-Vehicle (V2V) Communications
Caching popular contents in vehicle-to-vehicle (V2V) communication networks
is expected to play an important role in road traffic management, the
realization of intelligent transportation systems (ITSs), and the delivery of
multimedia content across vehicles. However, for effective caching, the network
must dynamically choose the optimal set of vehicles that will cache popular content
and disseminate it across the entire network. Most of the existing prior
art on V2V caching is restricted to cache placement based solely on
location and user demands, and it does not account for the large-scale
spatio-temporal variations in V2V communication networks. In contrast, in this
paper, a novel spatio-temporal caching strategy is proposed based on the notion
of temporal graph motifs that can capture spatio-temporal communication
patterns in V2V networks. It is shown that, by identifying such V2V motifs, the
network can find sub-optimal content placement strategies for effective content
dissemination across a vehicular network. Simulation results using real traces
from the city of Cologne show that the proposed approach can increase the
average data rate across different network scenarios.
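To make the motif idea concrete, the following minimal Python sketch counts a simple two-edge temporal motif over a V2V contact trace and ranks vehicles by how often they participate in it, so the highest-scoring vehicles become cache candidates. The motif choice, the time window delta, and the (src, dst, t) contact format are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch: rank vehicles as cache candidates by how often they
# appear in a simple temporal motif (A->B followed by B->C within delta),
# assuming V2V contacts are given as (src, dst, t) tuples.
from collections import defaultdict

def motif_scores(contacts, delta=10.0):
    """Count two-edge temporal motifs A->B, B->C with 0 < t2 - t1 <= delta."""
    by_src = defaultdict(list)            # src -> list of (t, dst)
    for src, dst, t in contacts:
        by_src[src].append((t, dst))
    for src in by_src:
        by_src[src].sort()

    score = defaultdict(int)              # vehicle -> motif participation count
    for a, b, t1 in contacts:
        for t2, c in by_src.get(b, []):
            if 0 < t2 - t1 <= delta and c != a:
                for v in (a, b, c):
                    score[v] += 1
    return score

def pick_caches(contacts, k=3, delta=10.0):
    """Return the k vehicles most central in the counted motifs."""
    s = motif_scores(contacts, delta)
    return sorted(s, key=s.get, reverse=True)[:k]

if __name__ == "__main__":
    trace = [("v1", "v2", 0.0), ("v2", "v3", 4.0), ("v2", "v4", 6.0),
             ("v3", "v5", 8.0), ("v4", "v1", 20.0)]
    print(pick_caches(trace, k=2))
```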
Matching Theory for Backhaul Management in Small Cell Networks with mmWave Capabilities
Designing cost-effective and scalable backhaul solutions is one of the main
challenges for emerging wireless small cell networks (SCNs). In this regard,
millimeter wave (mmW) communication technologies have recently emerged as an
attractive solution to realize the vision of a high-speed and reliable wireless
small cell backhaul network (SCBN). In this paper, a novel approach is proposed
for managing the spectral resources of a heterogeneous SCBN that can
simultaneously exploit mmW and conventional frequency bands via carrier aggregation. In
particular, a new SCBN model is proposed in which small cell base stations
(SCBSs) equipped with broadband fiber backhaul allocate their frequency
resources to SCBSs with wireless backhaul, by using aggregated bands. One
unique feature of the studied model is that it jointly accounts for both
wireless channel characteristics and economic factors during resource
allocation. The problem is then formulated as a one-to-many matching game and a
distributed algorithm is proposed to find a stable outcome of the game. The
convergence of the algorithm is proven and the properties of the resulting
matching are studied. Simulation results show that under the constraints of
wireless backhauling, the proposed approach achieves substantial performance
gains compared to a conventional best-effort approach.
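As an illustration of the matching step, the Python sketch below runs a standard one-to-many deferred-acceptance procedure in which wireless-backhaul SCBSs propose to fiber-backhauled SCBSs that have limited quotas. The preference lists and quotas are hypothetical inputs (e.g., precomputed from achievable mmW rate and sub-band price) and stand in for, rather than reproduce, the paper's utilities and distributed algorithm.

```python
# Minimal sketch of a one-to-many deferred-acceptance matching between
# wireless-backhaul SCBSs (proposers) and fiber-backhauled SCBSs (acceptors).
def deferred_acceptance(proposer_prefs, acceptor_prefs, quotas):
    """proposer_prefs: {w: [f1, f2, ...]}, acceptor_prefs: {f: [w1, ...]},
    quotas: {f: max number of wireless SCBSs it can serve}."""
    rank = {f: {w: i for i, w in enumerate(prefs)}
            for f, prefs in acceptor_prefs.items()}
    next_choice = {w: 0 for w in proposer_prefs}
    matched = {f: [] for f in acceptor_prefs}
    free = list(proposer_prefs)

    while free:
        w = free.pop()
        if next_choice[w] >= len(proposer_prefs[w]):
            continue                       # w has exhausted its preference list
        f = proposer_prefs[w][next_choice[w]]
        next_choice[w] += 1
        matched[f].append(w)
        matched[f].sort(key=lambda x: rank[f][x])
        if len(matched[f]) > quotas[f]:
            free.append(matched[f].pop())  # reject the least-preferred proposer
    return matched

if __name__ == "__main__":
    wireless = {"w1": ["f1", "f2"], "w2": ["f1", "f2"], "w3": ["f1", "f2"]}
    fiber = {"f1": ["w2", "w1", "w3"], "f2": ["w1", "w3", "w2"]}
    print(deferred_acceptance(wireless, fiber, {"f1": 1, "f2": 2}))
```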
Matching theory for priority-based cell association in the downlink of wireless small cell networks
The deployment of small cells, overlaid on existing cellular infrastructure,
is seen as a key feature in next-generation cellular systems. In this paper,
the problem of user association in the downlink of small cell networks (SCNs)
is considered. The problem is formulated as a many-to-one matching game in
which the users and small cell base stations (SCBSs) rank one another based on
utility functions that account for both the achievable performance, in terms of
rate, and fairness to cell-edge users, as captured by newly proposed priorities. To solve this game,
a novel distributed algorithm that can reach a stable matching is proposed.
Simulation results show that the proposed approach yields an average utility
gain of up to 65% compared to a common association algorithm that is based on
received signal strength. Compared to the classical deferred acceptance
algorithm, the results also show a 40% utility gain and a fairer utility
distribution among the users.
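As a rough illustration of how such priorities could enter the ranking, the sketch below scores users by an achievable-rate utility with a multiplicative boost for cell-edge (low-SINR) users before the matching stage. The Shannon-rate form, the edge threshold, and the boost factor are assumptions for illustration, not the paper's proposed utility.

```python
import math

def achievable_rate(sinr_db, bandwidth_hz=10e6):
    """Shannon rate in bit/s for a given SINR in dB (illustrative)."""
    sinr = 10 ** (sinr_db / 10)
    return bandwidth_hz * math.log2(1 + sinr)

def association_utility(sinr_db, edge_threshold_db=0.0, edge_priority=2.0):
    """Rate-based utility with a hypothetical boost for cell-edge users."""
    u = achievable_rate(sinr_db)
    if sinr_db <= edge_threshold_db:       # treat low-SINR users as cell edge
        u *= edge_priority
    return u

if __name__ == "__main__":
    # An SCBS ranks candidate users by this utility before matching.
    users = {"u1": 12.0, "u2": -1.5, "u3": 5.0}   # user -> SINR in dB
    ranking = sorted(users, key=lambda u: association_utility(users[u]),
                     reverse=True)
    print(ranking)
```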
Reliability-Optimized User Admission Control for URLLC Traffic: A Neural Contextual Bandit Approach
Ultra-reliable low-latency communication (URLLC) is the cornerstone for a
broad range of emerging services in next-generation wireless networks. URLLC
fundamentally relies on the network's ability to proactively determine whether
sufficient resources are available to support the URLLC traffic, and thus,
prevent so-called cell overloads. Nonetheless, achieving accurate
quality-of-service (QoS) predictions for URLLC user equipment (UEs) and
preventing cell overloads are very challenging tasks. This is due to the dependency
of the QoS metrics (latency and reliability) on traffic and channel statistics,
user mobility, and the interdependent performance across UEs. In this paper, a
new QoS-aware UE admission control approach is developed to proactively
estimate QoS for URLLC UEs, prior to associating them with a cell, and
accordingly, admit only a subset of UEs that do not lead to a cell overload. To
this end, an optimization problem is formulated to find an efficient UE
admission control policy, cognizant of UEs' QoS requirements and cell-level
load dynamics. To solve this problem, a new machine learning based method is
proposed that builds on (deep) neural contextual bandits, a suitable framework
for dealing with nonlinear bandit problems. In fact, the UE admission
controller is treated as a bandit agent that observes a set of network
measurements (context) and makes admission control decisions based on
context-dependent QoS (reward) predictions. The simulation results show that
the proposed scheme can achieve near-optimal performance and yield substantial
gains in terms of cell-level service reliability and efficient resource
utilization.
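A minimal sketch of this idea, assuming a fixed-size context vector of network measurements and a binary admit/reject action, is given below: a small MLP (PyTorch) predicts the QoS reward per action and an epsilon-greedy rule handles exploration. The network size, context features, and reward signal are assumptions; this stands in for, rather than reproduces, the paper's neural contextual bandit design.

```python
import random
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Small MLP that maps a context vector to a predicted reward per action."""
    def __init__(self, context_dim, n_actions=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(context_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, x):
        return self.net(x)

class NeuralBanditController:
    """Epsilon-greedy neural contextual bandit for UE admission control."""
    def __init__(self, context_dim, epsilon=0.1, lr=1e-3):
        self.model = RewardNet(context_dim)
        self.opt = torch.optim.Adam(self.model.parameters(), lr=lr)
        self.epsilon = epsilon

    def act(self, context):
        """Pick reject (0) or admit (1) for the given context vector."""
        if random.random() < self.epsilon:      # explore
            return random.randint(0, 1)
        with torch.no_grad():
            q = self.model(torch.as_tensor(context, dtype=torch.float32))
        return int(q.argmax())                  # exploit predicted QoS reward

    def update(self, context, action, reward):
        """One SGD step on the observed (context, action, reward) sample."""
        pred = self.model(torch.as_tensor(context, dtype=torch.float32))[action]
        loss = (pred - torch.tensor(float(reward))) ** 2
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

if __name__ == "__main__":
    ctrl = NeuralBanditController(context_dim=4)
    context = [0.6, 0.2, 0.1, 0.8]   # e.g. cell load, SINR, arrival rate, latency headroom
    a = ctrl.act(context)
    reward = 1.0 if a == 1 else 0.0  # placeholder reward signal for the sketch
    ctrl.update(context, a, reward)
```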
Evolutionary Deep Reinforcement Learning for Dynamic Slice Management in O-RAN
Next-generation wireless networks are required to satisfy a variety of
services and criteria concurrently. To address these upcoming strict criteria, the
open radio access network (O-RAN), with distinguishing features such as a flexible
design, disaggregated virtual and programmable components, and intelligent
closed-loop control, has been developed. O-RAN slicing is being investigated as a
critical strategy for ensuring network quality of service (QoS) in the face of
changing circumstances. However, distinct network slices must be dynamically
controlled to avoid service level agreement (SLA) variation caused by rapid
changes in the environment. Therefore, this paper introduces a novel framework
able to intelligently manage network slices through provisioned resources.
Because of the diverse, heterogeneous environments, intelligent machine learning
approaches require sufficient exploration to handle the harshest situations in
a wireless network and to accelerate convergence. To solve this problem, a new
solution is proposed based on evolutionary deep reinforcement learning
(EDRL) to accelerate and optimize the slice management learning process in the
radio access network (RAN) intelligent controller (RIC) modules. To this end,
the O-RAN slicing problem is represented as a Markov decision process (MDP), which is
then solved optimally for resource allocation to meet service demands using the
EDRL approach. In terms of meeting service demands, simulation results show
that the proposed approach outperforms the DRL baseline by 62.2%.
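The evolutionary component can be illustrated with a simple evolution-strategies update on the policy parameters, as in the NumPy sketch below: a population of Gaussian-perturbed parameter vectors is scored on a slice-management reward, and the mean parameters move toward the better-scoring perturbations. The toy slice reward, population size, and learning rate are assumptions for illustration and do not reflect the paper's exact EDRL formulation.

```python
import numpy as np

def slice_reward(theta, demand):
    """Toy stand-in for the MDP return: penalize the mismatch between the
    per-slice resource shares implied by the policy parameters and demand."""
    alloc = np.maximum(theta, 0.0)
    alloc = alloc / (alloc.sum() + 1e-9)           # normalized resource shares
    return -np.abs(alloc - demand).sum()            # higher (closer to 0) is better

def es_step(theta, demand, rng, pop_size=50, sigma=0.1, lr=0.05):
    """One evolution-strategies update of the policy parameters theta."""
    noise = rng.standard_normal((pop_size, theta.size))
    rewards = np.array([slice_reward(theta + sigma * n, demand) for n in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-9)
    grad = noise.T @ rewards / (pop_size * sigma)   # ES gradient estimate
    return theta + lr * grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demand = np.array([0.5, 0.3, 0.2])   # hypothetical target share per slice
    theta = np.ones(3)                    # initial policy parameters
    for _ in range(200):
        theta = es_step(theta, demand, rng)
    shares = np.maximum(theta, 0.0)
    print(np.round(shares / (shares.sum() + 1e-9), 3))
```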