Power Saving Techniques in 5G Technology for Multiple-Beam Communications
The evolution of mobile technology and computation systems enables User Equipment (UE) to handle tremendous amounts of data transmission. With current 5G technology, several types of wireless traffic can be transmitted in millimeter-wave (mmWave) bands at high data rates with ultra-reliable and low-latency communication. 5G networks rely on directional beamforming to overcome the propagation and penetration losses of mmWave frequencies. To align the best beam pairs and achieve high data rates, 5G performs beam-search operations, which, combined with multi-beam reception and high-order modulation, drain the UE battery. In the earlier 4G radio system, Discontinuous Reception (DRX) was successfully used to save energy. To reduce the energy consumption and latency of multi-beam 5G radio communications, this paper proposes a DRX Beam Measurement technique (DRX-BM). Based on an analysis of the power-saving factor and the response delay, we model DRX-BM as a semi-Markov process to reduce tracking time. MATLAB simulations assess the effectiveness of the proposed model and show that it avoids unnecessary time spent on beam search; the proposed technique saves 14% of energy with minimal added delay.
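The trade-off the abstract describes, longer sleep saves energy but delays the response, can be illustrated with a deliberately simplified single-cycle DRX model. This is a minimal sketch under a uniform-arrival assumption, not the paper's semi-Markov DRX-BM model; the function name and parameter values are hypothetical.

```python
# Hypothetical single-cycle DRX model; NOT the paper's DRX-BM formulation.
def drx_metrics(t_on, t_sleep):
    """Power-saving factor and mean wake-up delay for one DRX cycle.

    Assumes packets arrive uniformly over the cycle: a packet that lands
    in the sleep period waits, on average, half of that period.
    """
    cycle = t_on + t_sleep
    power_saving = t_sleep / cycle                   # fraction of time radio is off
    mean_delay = (t_sleep / cycle) * (t_sleep / 2.0)  # expected extra latency
    return power_saving, mean_delay

# e.g. 10 ms on, 40 ms sleep
ps, d = drx_metrics(t_on=10.0, t_sleep=40.0)
```

Lengthening `t_sleep` raises both outputs, which is exactly why the paper optimizes the cycle rather than simply sleeping longer.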
Non-stationary service curves : model and estimation method with application to cellular sleep scheduling
In today’s computer networks, short-lived flows are predominant. Consequently, transient start-up effects, such as connection establishment in cellular networks, have a significant impact on performance. Although various solutions have been derived in the fields of queueing theory, available-bandwidth estimation, and network calculus, their focus is, e.g., on mean wake-up times, on available-bandwidth estimates that consist of either a single value or a stationary function, and on steady-state solutions for backlog and delay. In contrast, the analysis of transient phases presents fundamental challenges that have only been partially solved and is therefore understood to a much lesser extent.
To better understand systems with transient characteristics and to explain their behavior, this thesis contributes a concept of non-stationary service curves within the framework of stochastic network calculus. Using it, we derive models of sleep scheduling, including time-variant performance bounds for backlog and delay. We investigate the impact of arrival rates and different wake-up durations, where the metrics of interest are the transient overshoot and the relaxation time. We compare a time-variant and a time-invariant description of the service with an exact solution. To avoid probabilistic and possibly unpredictable effects of random services, we first choose a deterministic description of the service and present results showing that only the time-variant service curve can follow the progression of the exact solution; the time-invariant service curve remains at the worst-case value.
Since the service and sleep-scheduling procedure in real cellular networks is well known to be random, we extend the theory to the stochastic case and derive a model with a non-stationary service curve based on regenerative processes.
Further, estimating a cellular network’s capacity (available bandwidth) from measurements is an important topic that has attracted considerable research, and several works obtain such estimates from measurements. Assuming a system without any knowledge of its internals, we investigate existing measurement methods such as the prevalent rate scanning and the burst-response method. We find fundamental limitations to estimating the service accurately in a time-variant way, which can be explained by the non-convexity of transient services and their super-additive network processes.
To overcome these limitations, we derive a novel two-phase probing technique. In the first phase, the shape of a minimal probe is identified, which we then use to obtain an accurate estimate of the unknown service. To demonstrate the applicability of the minimal-probing method, we perform a comprehensive measurement campaign in cellular networks with sleep scheduling (2G, 3G, and 4G). Sending constant-bit-rate traffic, we observe significant transient backlogs and delay overshoots that persist for long relaxation times, which matches the findings of our theoretical model. The minimal-probing method shows another strength: sending the minimal probe eliminates the transient overshoots and relaxation times.
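The transient overshoot and relaxation time the thesis studies can be illustrated with a toy time-variant server: a rate-R link that needs a wake-up time W before it starts serving. This is a minimal deterministic sketch, assuming a lossless system that is empty at t = 0 and a constant-bit-rate arrival of rate r; it is not the thesis's stochastic regenerative-process model, and all names are illustrative.

```python
# Toy model: CBR flow (rate r) into a rate-R server with wake-up time W,
# i.e. a time-variant service curve S(t) = R * max(0, t - W).
def transient_backlog(r, R, W, t):
    """Backlog at time t, assuming the system is empty at t = 0."""
    served = R * max(0.0, t - W)
    return max(0.0, r * t - served)

# The backlog grows to its peak overshoot r*W at t = W, then drains at
# rate (R - r); it relaxes back to zero at t = R*W / (R - r).
```

With r = 1, R = 2, W = 5, the overshoot is 5 units at t = 5 and the backlog relaxes to zero at t = 10, which mirrors the "transient overshoot / relaxation time" behavior described above; a time-invariant worst-case curve would instead pin the bound at the peak forever.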
Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions
The ever-increasing number of resource-constrained
Machine-Type Communication (MTC) devices is leading to the
critical challenge of fulfilling diverse communication requirements
in dynamic and ultra-dense wireless environments. Among
different application scenarios that the upcoming 5G and beyond
cellular networks are expected to support, such as enhanced Mobile
Broadband (eMBB), massive Machine Type Communications
(mMTC) and Ultra-Reliable and Low Latency Communications
(URLLC), the mMTC brings the unique technical challenge of
supporting a huge number of MTC devices in cellular networks,
which is the main focus of this paper. The related challenges
include Quality of Service (QoS) provisioning, handling highly
dynamic and sporadic MTC traffic, huge signalling overhead and
Radio Access Network (RAN) congestion. In this regard, this
paper aims to identify and analyze the involved technical issues,
to review recent advances, to highlight potential solutions and to
propose new research directions. First, starting with an overview
of mMTC features and QoS provisioning issues, we present
the key enablers for mMTC in cellular networks. Along with
the highlights on the inefficiency of the legacy Random Access
(RA) procedure in the mMTC scenario, we then present the key
features and channel access mechanisms in the emerging cellular
IoT standards, namely, LTE-M and Narrowband IoT (NB-IoT).
Subsequently, we present a framework for the performance
analysis of transmission scheduling with the QoS support along
with the issues involved in short data packet transmission. Next,
we provide a detailed overview of existing and emerging solutions for addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for applying emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Among several ML techniques, we focus on the application of the low-complexity Q-learning approach in the mMTC scenario, along with recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
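A common instance of the low-complexity Q-learning approach in this setting is stateless Q-learning for random-access slot selection: each device keeps one Q-value per RA slot and learns from collision feedback. The sketch below is a generic illustration under assumed parameters (reward +1 for a solo transmission, -1 for a collision), not the specific scheme surveyed in the paper.

```python
import random

def q_learning_ra(n_devices=5, n_slots=5, frames=2000,
                  alpha=0.1, eps=0.1, seed=0):
    """Stateless Q-learning for RA slot selection (illustrative only).

    Each device maintains Q-values over the RA slots of a frame and,
    with probability eps, explores a random slot; otherwise it exploits
    its current best slot. Collisions give reward -1, successes +1.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_slots for _ in range(n_devices)]
    for _ in range(frames):
        choices = []
        for d in range(n_devices):
            if rng.random() < eps:
                s = rng.randrange(n_slots)          # explore
            else:
                s = max(range(n_slots), key=lambda k: Q[d][k])  # exploit
            choices.append(s)
        for d, s in enumerate(choices):
            reward = 1.0 if choices.count(s) == 1 else -1.0
            Q[d][s] += alpha * (reward - Q[d][s])
    # each device's learned (greedy) slot
    return [max(range(n_slots), key=lambda k: Q[d][k]) for d in range(n_devices)]
```

When the number of devices does not exceed the number of slots, the devices tend to settle into distinct slots, which is exactly the congestion-relief effect that motivates Q-learning for mMTC random access.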
Adaptive scheduling in cellular access, wireless mesh and IP networks
Networking scenarios in the future will be complex and will include fixed networks and hybrid Fourth Generation (4G) networks, consisting of both infrastructure-based and infrastructureless, wireless parts. In such scenarios, adaptive provisioning and management of network resources becomes of critical importance. Adaptive mechanisms are desirable since they enable a self-configurable network that is able to adjust itself to varying traffic and channel conditions. The operation of adaptive mechanisms is heavily based on measurements. The aim of this thesis is to investigate how measurement based, adaptive packet scheduling algorithms can be utilized in different networking environments.
The first part of this thesis proposes a new delay-based scheduling algorithm, known as Delay-Bounded Hybrid Proportional Delay (DBHPD), for delay-adaptive provisioning in DiffServ-based fixed IP networks. The DBHPD algorithm is thoroughly evaluated by ns2 simulations and by measurements in a FreeBSD prototype router network. It is shown that DBHPD results in considerably more controllable differentiation than basic static bandwidth-sharing algorithms. The prototype router measurements also show that the DBHPD algorithm can be easily implemented in practice, causing less processing overhead than the well-known CBQ algorithm.
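The proportional-delay part of such schedulers is often realized with a waiting-time-priority rule: serve the backlogged class whose head-of-line delay, normalized by its delay-differentiation parameter, is largest. The sketch below shows only this generic rule under assumed names; it omits the delay-bound handling that distinguishes DBHPD and is not the thesis's exact algorithm.

```python
def pick_queue(hol_delays, deltas):
    """Waiting-time-priority rule for proportional delay differentiation.

    hol_delays: head-of-line waiting time per class (None if queue empty).
    deltas: per-class delay-differentiation parameters (smaller = stricter).
    Returns the index of the class with the largest d_i / delta_i.
    """
    best, best_score = None, -1.0
    for i, (d, delta) in enumerate(zip(hol_delays, deltas)):
        if d is None:          # skip empty queues
            continue
        score = d / delta
        if score > best_score:
            best, best_score = i, score
    return best

# Class 1 has half the delay parameter of class 0, so its 6 ms head-of-line
# wait outweighs class 0's 8 ms wait (6/0.5 = 12 > 8/1 = 8).
assert pick_queue([8.0, 6.0], [1.0, 0.5]) == 1
```

Under sustained load this rule keeps the ratio of class delays close to the ratio of the delta parameters, which is the "controllable differentiation" property evaluated above.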
The second part of this thesis discusses specific scheduling requirements set by hybrid 4G networking scenarios. First, methods for joint scheduling and transmit beamforming in 3.9G or 4G networks are described and quantitatively analyzed using statistical methods. The analysis reveals that the combined gain of channel-adaptive scheduling and transmit beamforming is substantial, and that an On-off strategy can achieve the performance of an ideal Max SNR strategy if the feedback threshold is optimized. Finally, a novel cross-layer energy-adaptive scheduling and queue management framework, EAED (Energy Aware Early Detection), for preserving delay bounds and minimizing energy consumption in WLAN mesh networks, is proposed and evaluated with simulations. The simulations show that our scheme can save considerable amounts of transmission energy without violating application-level QoS requirements when traffic load and distances are reasonable.
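The Max SNR versus On-off comparison can be sketched in a few lines: Max SNR assumes full channel feedback from every user, while the On-off strategy needs only one feedback bit per user (SNR above the threshold or not). This is a generic illustration with hypothetical names, not the thesis's statistical analysis.

```python
import random

def max_snr_user(snrs):
    """Ideal Max-SNR scheduling: full CSI feedback; pick the best user."""
    return max(range(len(snrs)), key=lambda i: snrs[i])

def on_off_user(snrs, threshold, rng=random):
    """On-off strategy: each user feeds back one bit (SNR >= threshold?).

    The scheduler picks uniformly among reporting users; if nobody
    reports, it falls back to a uniform pick over all users.
    """
    reporters = [i for i, s in enumerate(snrs) if s >= threshold]
    pool = reporters if reporters else list(range(len(snrs)))
    return rng.choice(pool)
```

With a well-tuned threshold, usually only the strongest user reports, so the one-bit scheme selects the same user as Max SNR most of the time; this is the intuition behind the result that an optimized On-off threshold approaches ideal Max-SNR performance.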