26 research outputs found
Collision Avoidance Resource Allocation for LoRaWAN
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Funding: This research was partially funded by the Andalusian Knowledge Agency (project A-TIC-241-UGR18), the Spanish Ministry of Economy and Competitiveness (project TEC2016-76795-C6-4-R) and the H2020 research and innovation project 5G-CLARITY (Grant No. 871428).
The number of connected IoT devices is increasing significantly and is expected to reach more than two dozen billion IoT connections in the coming years. Low Power Wide Area
Networks (LPWAN) have become very relevant for this new paradigm due to features such as
large coverage and low power consumption. One of the most appealing technologies among these
networks is LoRaWAN. Although it may be considered as one of the most mature LPWAN platforms,
there are still open gaps such as its capacity limitations. For this reason, this work proposes the Collision Avoidance Resource Allocation (CARA) algorithm, whose objective is to significantly increase system capacity. CARA leverages the multichannel structure and the orthogonality of spreading factors in LoRaWAN networks to avoid collisions among devices. Simulation results show that, assuming ideal radio link conditions, our proposal increases the capacity of a standard LoRaWAN network by 95.2%, and by almost 40% assuming a realistic propagation model. In addition, it has been verified that CARA devices can coexist with traditional LoRaWAN devices, thus allowing the simultaneous transmission of both types of devices. Moreover, a proof-of-concept has been implemented using
commercial equipment in order to check the feasibility and the correct operation of our solution.
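The abstract above does not detail CARA's internals, but the mechanism it names (exploiting LoRaWAN's multichannel structure and the quasi-orthogonality of spreading factors) can be illustrated with a toy allocator. The channel count, SF range and round-robin policy below are illustrative assumptions, not the paper's actual algorithm:

```python
from itertools import product

# Assumed EU868-style parameters (illustrative, not from the paper):
CHANNELS = range(8)    # 8 uplink channels
SFS = range(7, 13)     # spreading factors SF7..SF12 (quasi-orthogonal)

def allocate(num_devices):
    """Toy collision-avoidance allocation: cycle devices over the
    8 * 6 = 48 orthogonal (channel, SF) resources so no two devices
    share a resource until all resources are occupied."""
    resources = list(product(CHANNELS, SFS))
    return {dev: resources[dev % len(resources)] for dev in range(num_devices)}

alloc = allocate(10)
# With up to 48 devices, every device holds a unique (channel, SF)
# pair, so their simultaneous transmissions do not collide.
assert len(set(alloc.values())) == 10
```

A real scheme must also respect duty-cycle limits and SF-dependent range, which this sketch ignores.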
Sharing gNB components in RAN slicing: A perspective from 3GPP/NFV standards
To implement the next Generation NodeBs (gNBs)
that are present in every Radio Access Network (RAN) slice
subnet, Network Function Virtualization (NFV) enables the
deployment of some of the gNB components as Virtual Network Functions (VNFs). Deploying individual VNF instances for these
components could guarantee the customization of each RAN slice
subnet. However, due to the multiplicity of VNFs, the required
amount of virtual resources will be greater compared to the case
where a single VNF instance carries the aggregated traffic of all
the RAN slice subnets. Sharing gNB components between RAN
slice subnets could optimize the trade-off between customization,
isolation and resource utilization. In this article, we shed light
on the key aspects in the Third Generation Partnership Project
(3GPP)/NFV standards for sharing gNB components. First, we
identify four possible scenarios for sharing gNB components.
Then, we analyze the impact of sharing on the customization
level of each RAN slice subnet. Later, we determine the main
factors that enable isolation between RAN slice subnets. Finally,
we propose a 3GPP/NFV-based description model to define the
lifecycle management of shared gNB components.
This work is partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (Project TEC2016-76795-C6-4-R), the Spanish Ministry of Education, Culture and Sport (FPU Grant 17/01844), and the Andalusian Knowledge Agency (project A-TIC-241-UGR18).
Analytical Model for the UE Blocking Probability in an OFDMA Cell providing GBR Slices
This work is partially supported by the H2020 research and innovation
project 5G-CLARITY (Grant No. 871428); the Spanish Ministry of Economy
and Competitiveness, the European Regional Development Fund (Project
PID2019-108713RB-C53); and the Spanish Ministry of Education, Culture
and Sport (FPU Grant 17/01844).
When a network operator designs strategies for planning and
operating Guaranteed Bit Rate (GBR) slices, there are inherent issues
such as the under(over)-provisioning of radio resources. To avoid them,
modeling the User Equipment (UE) blocking probability in each cell is
key. This task is challenging because the total required bandwidth depends on the channel quality of each UE and the spatio-temporal
variations in the number of UE sessions. Under this context, we propose
an analytical model to evaluate the UE blocking probability in an
Orthogonal Frequency Division Multiple Access (OFDMA) cell. The main
novelty of our model is the adoption of a multi-dimensional Erlang-B
system which meets the reversibility property. This means our model is
insensitive to the holding time distribution for the UE session. In
addition, this property reduces the computational complexity of our
model because the solution for the state probabilities has a product form. The provided results show that our model exhibits an
estimation error for the UE blocking probability below 3.5%.
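The paper's multi-dimensional Erlang-B model is not reproduced in the abstract; as background, the classical single-class Erlang-B formula, which such models generalize, can be computed with the standard numerically stable recursion. This is a sketch of the textbook formula with illustrative load figures only, not the paper's model:

```python
def erlang_b(offered_load, servers):
    """Blocking probability of an M/M/m/m loss system, via the stable
    Erlang-B recursion: B(0)=1, B(k) = E*B(k-1) / (k + E*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Illustrative example: 20 Erlangs of GBR sessions offered to a cell
# with 25 schedulable resources.
print(round(erlang_b(20.0, 25), 4))
```

Adding more resources for the same offered load lowers the blocking probability, which is the planning trade-off the abstract refers to.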
Analytical Modeling and Experimental Validation of NB-IoT Device Energy Consumption
The recent standardization of 3GPP Narrowband
Internet of Things (NB-IoT) paves the way to support low-power
wide-area (LPWA) use cases in cellular networks. NB-IoT design
goals are extended coverage, low power and low cost devices,
and massive connections. As NB-IoT is a new radio access technology, it is necessary to analyze the possibilities it provides to support different traffic and coverage needs. In this paper, we propose and
validate an NB-IoT energy consumption model. The analytical
model is based on a Markov chain. For the validation, an experimental
setup is used to measure the energy consumption of two
commercial NB-IoT user equipments (UEs) connected to a base
station emulator. The evaluation is done considering three test
cases. The comparison of the model and measurements is done
in terms of the estimated battery lifetime and the latency needed
to finish the control plane procedure. The conducted evaluation shows that the analytical model performs well, with a maximum relative error in the battery lifetime estimation between the model and the measurements of 21% for an assumed interarrival time (IAT) of 6 min.
This work was supported in part by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund under Project TEC2016-76795-C6-4-R, and in part by the H2020 European Project TRIANGLE under Grant 688712.
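The paper estimates battery lifetime from a Markov-chain model; as a hedged back-of-envelope illustration of how the IAT drives lifetime, one can average a short active burst per report over the sleep interval. Every number below is invented for illustration, not taken from the measurements:

```python
# Back-of-envelope NB-IoT battery lifetime estimate (illustrative
# numbers, not from the paper's model or measurements).
BATTERY_MAH = 5000.0              # assumed battery capacity
IAT_S = 6 * 60                    # interarrival time between reports: 6 min
ACTIVE_MA, ACTIVE_S = 100.0, 10.0 # assumed current and duration per report
SLEEP_MA = 0.005                  # assumed deep-sleep current

# Duty-cycled average current over one IAT period.
avg_ma = (ACTIVE_MA * ACTIVE_S + SLEEP_MA * (IAT_S - ACTIVE_S)) / IAT_S
lifetime_years = BATTERY_MAH / avg_ma / 24 / 365
print(f"avg current {avg_ma:.3f} mA, lifetime {lifetime_years:.2f} years")
```

With a 6 min IAT the active bursts dominate the average current, which is why the IAT is the key parameter in the paper's lifetime evaluation.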
Backhaul-Aware Dimensioning and Planning of Millimeter-Wave Small Cell Networks
The massive deployment of Small Cells (SCs) is increasingly being adopted by mobile
operators to face the exponentially growing traffic demand. Using the millimeter-wave (mmWave)
band in the access and backhaul networks will be key to provide the capacity that meets such demand.
However, dimensioning and planning have become complex tasks, because the capacity requirements
for mmWave links can significantly vary with the SC location. In this work, we address the problem
of SC planning considering the backhaul constraints, assuming that a line-of-sight (LOS) between
the nodes is required to reliably support the traffic demand. Such a LOS condition reduces the set
of potential site locations. Simulation results show that, under certain conditions, the proposed
algorithm is effective in finding solutions and highly efficient in computational cost compared to exhaustive-search approaches.
This work is partially supported by the H2020 research and innovation project 5G-CLARITY (Grant No. 871428) and the Spanish Ministry of Science, Innovation and Universities (project PID2019-108713RB-C53).
Dynamic Resource Provisioning of a Scalable E2E Network Slicing Orchestration System
Network slicing allows different applications and
network services to be deployed on virtualized resources running
on a common underlying physical infrastructure. Developing
a scalable system for the orchestration of end-to-end (E2E)
mobile network slices requires careful planning and very reliable
algorithms. In this paper, we propose a novel E2E Network
Slicing Orchestration System (NSOS) and a Dynamic Auto-Scaling Algorithm (DASA) for it. Our NSOS relies on a hierarchical architecture that incorporates dedicated entities per domain to manage every segment of the mobile network, from the access network to the transport and core networks, for a scalable orchestration of federated network slices. The DASA enables the NSOS to autonomously adapt
its resources to changes in the demand for slice orchestration
requests (SORs) while enforcing a given mean overall time taken
by the NSOS to process any SOR. The proposed DASA includes
both proactive and reactive resource provisioning techniques.
The proposed resource dimensioning heuristic algorithm of the
DASA is based on a queuing model for the NSOS, which consists
of an open network of G/G/m queues. Finally, we validate the
proper operation and evaluate the performance of our DASA
solution for the NSOS by means of system-level simulations.
This research work is partially supported by the European Union's Horizon 2020 research and innovation program under the 5G!Pagoda project, the MATILDA project and the Academy of Finland 6Genesis project, with grant agreements No. 723172, No. 761898 and No. 318927, respectively. It was also partially funded by the Academy of Finland Project CSN under Grant Agreement 311654, the Spanish Ministry of Education, Culture and Sport (FPU Grant 13/04833), and the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (TEC2016-76795-C6-4-R).
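The abstract models the NSOS as an open network of G/G/m queues. As background, the mean waiting time at a single G/G/m node is commonly approximated with the Allen-Cunneen formula, which scales the M/M/m (Erlang-C) wait by the squared coefficients of variation of the interarrival and service times. The sketch below is this textbook approximation with illustrative parameters, not the paper's dimensioning heuristic:

```python
from math import factorial

def erlang_c(m, rho):
    """Probability of queueing in an M/M/m system with per-server load rho."""
    a = m * rho
    s = sum(a**k / factorial(k) for k in range(m))
    last = a**m / (factorial(m) * (1 - rho))
    return last / (s + last)

def ggm_wait(lam, mu, m, ca2, cs2):
    """Allen-Cunneen approximation of the mean wait in a G/G/m queue:
    Wq ~= ((ca2 + cs2) / 2) * ErlangC(m, rho) / (m*mu - lam)."""
    rho = lam / (m * mu)
    assert rho < 1, "queue must be stable"
    return (ca2 + cs2) / 2 * erlang_c(m, rho) / (m * mu - lam)

# Illustrative node: 8 SORs/s arriving at 3 workers serving 4 SORs/s
# each, with moderately bursty arrivals (ca2 = 1.5).
print(ggm_wait(lam=8.0, mu=4.0, m=3, ca2=1.5, cs2=1.0))
```

For ca2 = cs2 = 1 the formula reduces to the exact M/M/m result, which is a useful sanity check.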
On the Rollout of Network Slicing in Carrier Networks: A Technology Radar
Network slicing is a powerful paradigm for network operators to support use cases with
widely diverse requirements atop a common infrastructure. As 5G standards are completed, and
commercial solutions mature, operators need to start thinking about how to integrate network slicing
capabilities in their assets, so that customer-facing solutions can be made available in their portfolio.
This integration is, however, not an easy task, due to the heterogeneity of assets that typically exist
in carrier networks. In this regard, 5G commercial networks may consist of a number of domains,
each with a different technological pace, and built out of products from multiple vendors, including
legacy network devices and functions. These multi-technology, multi-vendor and brownfield features
constitute a challenge for the operator, which is required to deploy and operate slices across all these
domains in order to satisfy the end-to-end nature of the services hosted by these slices. In this context,
the only realistic option for operators is to introduce slicing capabilities progressively, following a
phased approach in their roll-out. The purpose of this paper is precisely to help design this kind of plan, by means of a technology radar. The radar identifies a set of solutions enabling network
slicing on the individual domains, and classifies these solutions into four rings, each corresponding
to a different timeline: (i) as-is ring, covering today’s slicing solutions; (ii) deploy ring, corresponding
to solutions available in the short term; (iii) test ring, considering medium-term solutions; and
(iv) explore ring, with solutions expected in the long run. This classification is done based on the
technical availability of the solutions, together with the foreseen market demands. The value of this
radar lies in its ability to provide a complete view of the slicing landscape with one single snapshot,
by linking solutions to information that operators may use for decision making in their individual
go-to-market strategies.
This work is partially supported by the H2020 European Projects 5G-VINNI (grant agreement No. 815279) and 5G-CLARITY (grant agreement No. 871428), and by the Spanish national project TRUE-5G (PID2019-108713RB-C53).
Asynchronous Time-Sensitive Networking for Industrial Networks
Time-Sensitive Networking (TSN) is expected to be a
cornerstone in tomorrow’s industrial networks. That is because of
its ability to provide deterministic quality-of-service in terms of
delay, jitter, and scalability. Moreover, it enables more scalable,
more affordable, and easier to manage and operate networks
compared to current industrial networks, which are based on
Industrial Ethernet. In this article, we evaluate the maximum capacity of asynchronous TSN networks to accommodate
industrial traffic flows. To that end, we formally formulate the
flow allocation problem in the mentioned networks as a convex
mixed-integer non-linear program. To the best of the authors’
knowledge, neither the maximum utilization of the asynchronous
TSN networks nor the formulation of the flow allocation problem
in those networks have been previously addressed in the literature.
The results show that the network topology and the traffic matrix strongly impact the link utilization.
This work has been partially funded by the H2020 research and innovation project 5G-CLARITY (Grant No. 871428) and the national research project TRUE-5G (PID2019-108713RB-C53).
A Survey on 5G Usage Scenarios and Traffic Models
The fifth-generation mobile initiative, 5G, is a
tremendous and collective effort to specify, standardize, design,
manufacture, and deploy the next cellular network generation.
5G networks will support demanding services such as enhanced
Mobile Broadband, Ultra-Reliable and Low Latency Communications and massive Machine-Type Communications, which will
require data rates of tens of Gbps, latencies of a few milliseconds
and connection densities of millions of devices per square kilometer. This survey presents the most significant use cases expected
for 5G including their corresponding scenarios and traffic models.
First, the paper analyzes the characteristics and requirements for
5G communications, considering aspects such as traffic volume,
network deployments, and main performance targets. Secondly,
emphasizing the definition of performance evaluation criteria
for 5G technologies, the paper reviews related proposals from
principal standards development organizations and industry
alliances. Finally, well-defined and significant 5G use cases are
provided. As a result, these guidelines will help and ease the performance evaluation of current and future 5G innovations, as well as the dimensioning of future 5G deployments.
This work is partially funded by the Spanish Ministry of Economy and Competitiveness (project TEC2016-76795-C6-4-R), the H2020 research and innovation project 5G-CLARITY (Grant No. 871428) and the Andalusian Knowledge Agency (project A-TIC-241-UGR18).
5G Infrastructure Network Slicing: E2E Mean Delay Model and Effectiveness Assessment to Reduce Downtimes in Industry 4.0
This work has been partially funded by the H2020 project 5G-CLARITY (Grant No. 871428) and the Spanish national project TRUE-5G (PID2019-108713RB-C53).
Fifth Generation (5G) is expected to meet the stringent network performance requirements of Industry 4.0. Moreover, its built-in network slicing capabilities allow for the support of the
traffic heterogeneity in Industry 4.0 over the same physical network infrastructure. However, 5G
network slicing capabilities might not provide a sufficient degree of isolation for many private 5G network use cases, such as multi-tenancy in Industry 4.0. In this vein, infrastructure network
slicing, which refers to the use of dedicated and well isolated resources for each network slice at every
network domain, fits the necessities of those use cases. In this article, we evaluate the effectiveness of
infrastructure slicing to provide isolation among production lines (PLs) in an industrial private 5G
network. To that end, we develop a queuing theory-based model to estimate the end-to-end (E2E)
mean packet delay of the infrastructure slices. Then, we use this model to compare the E2E mean
delay for two configurations, i.e., dedicated infrastructure slices with segregated resources for each
PL against the use of a single shared infrastructure slice to serve the performance-sensitive traffic
from PLs. We also evaluate the use of Time-Sensitive Networking (TSN) against bare Ethernet to
provide layer 2 connectivity among the 5G system components. We use a complete and realistic
setup based on experimental and simulation data of the scenario considered. Our results support the
effectiveness of infrastructure slicing to provide isolation in performance among the different slices.
Thus, using dedicated slices with segregated resources for each PL might reduce the number of production downtimes and associated costs, as the malfunctioning of a PL will not affect the network
performance perceived by the performance-sensitive traffic from other PLs. Last, our results show
that, besides the improvement in performance, TSN technology truly provides full isolation in the
transport network compared to standard Ethernet thanks to traffic prioritization, traffic regulation,
and bandwidth reservation capabilities.
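The paper's E2E mean-delay model is queueing-theory based; a minimal way to see why dedicated slices isolate PL performance is to approximate the E2E mean delay as a sum of per-hop delays. The M/M/1 assumption and all rates below are illustrative simplifications, not the paper's exact model or data:

```python
def mm1_delay(lam, mu):
    """Mean sojourn time of a stable M/M/1 queue: T = 1 / (mu - lam)."""
    assert lam < mu, "hop must be stable"
    return 1.0 / (mu - lam)

def e2e_mean_delay(lam, service_rates):
    """Approximate the E2E mean packet delay of one flow as the sum of
    independent per-hop M/M/1 delays (a Jackson-style simplification)."""
    return sum(mm1_delay(lam, mu) for mu in service_rates)

# Hypothetical hop service rates in pkt/s (access, transport, core).
HOPS = [2000.0, 1500.0, 2000.0]
# Dedicated slice: only this PL's 500 pkt/s traverses its hops.
dedicated = e2e_mean_delay(500.0, HOPS)
# Shared slice: the 1200 pkt/s aggregate of several PLs shares the hops,
# so any PL's burstiness degrades the delay perceived by all of them.
shared = e2e_mean_delay(1200.0, HOPS)
assert dedicated < shared
```

The gap between the two values illustrates, in miniature, the isolation argument the abstract makes for dedicated infrastructure slices.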