Enhanced Machine Learning Techniques for Early HARQ Feedback Prediction in 5G
We investigate Early Hybrid Automatic Repeat reQuest (E-HARQ) feedback
schemes enhanced by machine learning techniques as a path towards
ultra-reliable and low-latency communication (URLLC). To this end, we propose
machine learning methods to predict the outcome of the decoding process ahead
of the end of the transmission. We discuss different input features and
classification algorithms ranging from traditional methods to newly developed
supervised autoencoders. These methods are evaluated based on their prospects
of complying with the URLLC requirements of effective block error rates below
at small latency overheads. We provide realistic performance
estimates in a system model incorporating scheduling effects to demonstrate the
feasibility of E-HARQ across different signal-to-noise ratios, subcode lengths,
channel conditions and system loads, and show the benefit over regular HARQ and
existing E-HARQ schemes without machine learning.
Comment: 14 pages, 15 figures; accepted version
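The early-prediction idea described above can be sketched with a toy classifier. The following is an illustrative sketch only, not the paper's actual method or dataset: it trains a plain logistic-regression predictor on synthetic LLR-magnitude features, and the "decoding success" label is fabricated from those features purely so the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-bit log-likelihood ratios (LLRs) observed
# after receiving only a subcode (a prefix of the full transmission).
n_samples, n_llrs = 2000, 32
llrs = rng.normal(0.0, 1.0, size=(n_samples, n_llrs))
features = np.abs(llrs)  # LLR magnitudes act as per-bit confidence

# Fabricated label: decoding "succeeds" when average confidence is high.
# A real E-HARQ predictor would use actual decoder outcomes as labels.
y = (features.mean(axis=1) > 0.8).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression trained by gradient descent: the predictor
# emits an early ACK/NACK estimate before decoding has finished.
w, b = np.zeros(n_llrs), 0.0
lr = 0.05
for _ in range(2000):
    p = sigmoid(features @ w + b)
    w -= lr * features.T @ (p - y) / n_samples
    b -= lr * (p - y).mean()

pred = (sigmoid(features @ w + b) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

In an E-HARQ loop, a confident "failure" prediction would trigger an early NACK, starting the retransmission before the full decoding attempt completes.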
Design, implementation and experimental evaluation of a network-slicing aware mobile protocol stack
International Mention in the doctoral degree.
With the arrival of new generation mobile networks, we currently observe a paradigm
shift, where monolithic network functions running on dedicated hardware are now
implemented as software pieces that can be virtualized on general purpose hardware
platforms. This paradigm shift stands on the softwarization of network functions and
the adoption of virtualization techniques. Network Function Virtualization (NFV)
comprises softwarization of network elements and virtualization of these components.
It brings multiple advantages: (i) flexibility, allowing easy management of virtual
network functions (VNFs) (deploy, start, stop or update); (ii) efficiency, as resources can be
consumed adequately thanks to the increased flexibility of the network infrastructure; and
(iii) reduced costs, due to the ability to share hardware resources. However, multiple
challenges must be addressed to effectively leverage all these benefits.
Network Function Virtualization gave rise to the concept of virtual networks and thereby
to a key enabler of 5G network flexibility: Network Slicing. This new paradigm represents
a new way to operate mobile networks, where the underlying infrastructure is "sliced"
into logically separated networks that can be customized to the specific needs of each
tenant. This approach also makes it possible to instantiate VNFs at different locations
of the infrastructure, choosing their optimal placement based on parameters such as the
requirements of the service traversing the slice or the available resources. This decision
process is called orchestration and involves all the VNFs within the same network slice.
The orchestrator is the entity in charge of managing network slices. Hands-on experiments
on network slicing are essential to understand its benefits and limits, and to validate the
design and deployment choices. While some network slicing prototypes have been built
for Radio Access Networks (RANs), leveraging on the wide availability of radio hardware
and open-source software, there is currently no open-source suite for end-to-end network
slicing available to the research community. Similarly, orchestration mechanisms must
be evaluated as well to properly validate theoretical solutions addressing diverse aspects
such as resource assignment or service composition.
This thesis contributes to the study of the evolution of mobile networks regarding their
softwarization and cloudification. We identify software patterns for network function
virtualization, including the definition of a novel mobile architecture that condenses the virtualization architecture by splitting functionality into atomic functions.
Then, we design, implement and evaluate an open-source network
slicing implementation. Our results show per-slice customization without paying a
price in performance, while also providing a slicing implementation to the research
community.
community. Moreover, we propose a framework to flexibly re-orchestrate a virtualized
network, allowing on-the-fly re-orchestration without disrupting ongoing services. This
framework can greatly improve performance under changing conditions. We evaluate
the resulting performance in a realistic network slicing setup, showing the feasibility and
advantages of flexible re-orchestration.
Lastly, following the re-design of network functions envisioned during
the study of the evolution of mobile networks, we present a novel pipeline architecture
specifically engineered for 4G/5G Physical Layers virtualized over clouds. The proposed
design pursues two objectives: resiliency against unpredictable computing latency, and
parallelization to increase efficiency in multi-core clouds. To this end, we employ techniques such as tight
deadline control, jitter-absorbing buffers, predictive Hybrid Automatic Repeat Request,
and congestion control. Our experimental results show that our cloud-native approach
attains > 95% of the theoretical spectrum efficiency in hostile environments where
state-of-the-art architectures collapse.
This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Chair: Francisco Valera Pintor. Secretary: Vincenzo Sciancalepore. Examiner: Xenofon Fouka
End-to-End Simulation of 5G mmWave Networks
Due to its potential for multi-gigabit and low latency wireless links,
millimeter wave (mmWave) technology is expected to play a central role in 5th
generation cellular systems. While there has been considerable progress in
understanding the mmWave physical layer, innovations will be required at all
layers of the protocol stack, in both the access and the core network.
Discrete-event network simulation is essential for end-to-end, cross-layer
research and development. This paper provides a tutorial on a recently
developed full-stack mmWave module integrated into the widely used open-source
ns-3 simulator. The module includes a number of detailed statistical channel
models as well as the ability to incorporate real measurements or ray-tracing
data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and
highly customizable, making it easy to integrate algorithms or compare
Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example.
The module is interfaced with the core network of the ns-3 Long Term Evolution
(LTE) module for full-stack simulations of end-to-end connectivity, and
advanced architectural features, such as dual-connectivity, are also available.
To facilitate the understanding of the module, and verify its correct
functioning, we provide several examples that show the performance of the
custom mmWave stack as well as custom congestion control algorithms designed
specifically for efficient utilization of the mmWave channel.
Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018)
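The discrete-event model that underlies simulators such as ns-3 can be shown with a minimal event engine. This is an illustrative sketch only, not the ns-3 API: events sit in a time-ordered priority queue and execute one at a time, with the simulated clock jumping directly between event timestamps.

```python
import heapq

class Simulator:
    """Minimal discrete-event engine in the style such simulators use
    internally: events are (time, action) pairs processed in time
    order, so simulated time is decoupled from wall-clock time."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._count = 0  # tie-breaker for events at the same time

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._count, action))
        self._count += 1

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

# Toy cross-layer exchange: a packet "sent" at t=1.0 is "received"
# after a 0.5 time-unit propagation delay.
log = []
sim = Simulator()
sim.schedule(1.0, lambda: (log.append(("tx", sim.now)),
                           sim.schedule(0.5, lambda: log.append(("rx", sim.now)))))
sim.run()
# log is [("tx", 1.0), ("rx", 1.5)]
```

In a full-stack simulator, each protocol layer schedules events on a shared engine like this one, which is what makes end-to-end, cross-layer experiments tractable.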
Separation Framework: An Enabler for Cooperative and D2D Communication for Future 5G Networks
Soaring capacity and coverage demands dictate that future cellular networks
need to soon migrate towards ultra-dense networks. However, network
densification comes with a host of challenges that include compromised energy
efficiency, complex interference management, cumbersome mobility management,
burdensome signaling overheads and higher backhaul costs. Interestingly, most
of the problems that beleaguer network densification stem from one feature common to
legacy networks, i.e., the tight coupling between the control and data
planes regardless of their degree of heterogeneity and cell density.
Consequently, in the wake of 5G, the separation architecture for control and data planes
(SARC) has recently been conceived as a promising paradigm with the potential
to address most of the aforementioned challenges. In this article, we review
various proposals that have been presented in the literature so far to enable SARC.
More specifically, we analyze how and to what degree various SARC proposals
address the four main challenges in network densification, namely energy
efficiency, system-level capacity maximization, interference management and
mobility management. We then focus on two salient features of future cellular
networks that have not yet been adopted in legacy networks at a wide scale and
thus remain a hallmark of 5G, i.e., coordinated multipoint (CoMP), and
device-to-device (D2D) communications. After providing necessary background on
CoMP and D2D, we analyze how SARC can particularly act as a major enabler for
CoMP and D2D in the context of 5G. This article thus serves as both a tutorial and
an up-to-date survey of SARC, CoMP and D2D. Most importantly, the
article provides an extensive outlook on the challenges and opportunities that lie
at the crossroads of these three mutually entangled emerging technologies.
Comment: 28 pages, 11 figures, IEEE Communications Surveys & Tutorials 201
The path towards ultra-reliable low-latency communications via HARQ
Ultra-reliable Low-latency Communications (URLLC) is potentially one of the most disruptive communication paradigms offered by the next generation of wireless networks, 5G. This is easily demonstrated by the diverse set of applications it enables, such as autonomous driving, remote surgery, wireless networked control systems, mission-critical machine-type communication, and many more. In essence, URLLC consists of an almost 100% guarantee of message delivery within a very short time interval. Furthermore, the pressure from climate change, coupled with the massive growth of cellular networks expected in the near future, means that URLLC must also be energy efficient. On its own, achieving low latency with high reliability is already a stringent requirement; coupled with the need for resource efficiency, it becomes even more challenging. That is the motivation behind this thesis: to study URLLC in the context of resource efficiency. Thus, a study of the counterintuitive use of retransmissions, more specifically Hybrid Automatic Repeat Request (HARQ), in the URLLC scenario is proposed and carried out. HARQ is very attractive in terms of resource efficiency, and that is the motivation behind using it even when stringent time constraints are imposed. Four contributions are made by the present work. Firstly, a mathematical problem is presented and solved for optimizing the number of allowed HARQ retransmission rounds in URLLC, considering both energy efficiency and electromagnetic radiation. This formulation relies on a few assumptions in order to be realizable in practical scenarios, namely the possibility of early error detection for sending the feedback signals and the absence of delays introduced by medium access control.
Secondly, we consider one important aspect of wireless systems: they can be greatly optimized if designed with a specific application in mind. Based on this, a study of the use of HARQ specifically tuned for Networked Control Systems is presented, taking into account the particular characteristics of these applications. Results here show that fine-tuning for these characteristics yields better results than using the more application-agnostic results of the previous contribution. These improvements are possible thanks to the exploitation of application-specific characteristics, more specifically the use of a packetized predictive control strategy jointly designed with the communication protocol. Next, the concept of HARQ for URLLC is extended to a larger scale in an effort to relax the aforementioned assumptions. This is studied within the framework of self-organizing networks and leverages machine learning algorithms to overcome the strict assumptions of the first contribution. It is demonstrated by developing a digital-twin simulation of the city of Glasgow and generating a large dataset of users in the cellular network, which constitutes the third contribution of this thesis. Then, machine learning (more specifically, long short-term memory convolutional neural networks) is applied to predict message failures. Lastly, a protocol that exploits such predictions in combination with HARQ to deliver downlink URLLC is proposed, resulting in the fourth contribution. In summary, this thesis presents a latency-aware HARQ technique which is shown to be very efficient. We show that it uses up to 18 times less energy than a frequency-diversity strategy and that it can emit more than 10 times less electromagnetic field radiation energy compared to the same strategy.
We also propose joint design techniques, where communication and control parameters are tuned at the same time, enabling wireless control systems with a three-fold reduction in the bandwidth required to achieve URLLC requirements. Lastly, we present a digital twin of the city of Glasgow which enables us to create an algorithm for predicting channel quality with very high accuracy (root mean square error on the order of 10^-2). This ties into the rest of the contributions, as it can be used to enable early feedback detection, which in turn can be used to ensure the latency-aware protocol can be employed.
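The trade-off behind the first contribution, reliability versus resource use as a function of allowed retransmission rounds, can be illustrated with a toy model. The function below is hypothetical and far simpler than the thesis's formulation: assuming an i.i.d. per-round decoding failure probability, it computes the expected number of transmissions (a rough energy proxy) and the residual error rate for a given HARQ round budget.

```python
def harq_tradeoff(p_fail, max_rounds):
    """Toy model (not the thesis's exact formulation): with an i.i.d.
    per-round decoding failure probability p_fail and at most
    max_rounds transmissions, return the expected number of
    transmissions (an energy proxy) and the residual error rate."""
    # Round k is reached only if all k-1 previous rounds failed.
    expected_tx = sum(p_fail ** (k - 1) for k in range(1, max_rounds + 1))
    residual_error = p_fail ** max_rounds
    return expected_tx, residual_error

# A larger round budget buys orders of magnitude in reliability for a
# modest expected-energy cost when the per-round failure rate is low.
tx1, err1 = harq_tradeoff(p_fail=0.1, max_rounds=1)
tx3, err3 = harq_tradeoff(p_fail=0.1, max_rounds=3)
```

The URLLC tension appears once each extra round also costs latency: the round budget must be chosen so that `residual_error` meets the reliability target while the worst-case round-trip count still fits the delay budget.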