Performance analysis of carrier aggregation for various mobile network implementations scenario based on spectrum allocated
Carrier Aggregation (CA) is one of the Long Term Evolution Advanced (LTE-A)
features that allow mobile network operators (MNO) to combine multiple
component carriers (CCs) across the available spectrum to create a wider
bandwidth channel for increasing the network data throughput and overall
capacity. CA has the potential to enhance data rates and network performance in
the downlink, uplink, or both, and it can support aggregation of frequency
division duplexing (FDD) as well as time division duplexing (TDD). The
technique enables the MNO to exploit fragmented spectrum allocations and can be
utilized to aggregate licensed and unlicensed carrier spectrum as well. This
paper analyzes the performance gains and complexity level that arises from the
aggregation of three inter-band component carriers (3CC) as compared to the
aggregation of 2CC using a Vienna LTE System Level simulator. The results show
a considerable increase in the average cell throughput when 3CC aggregation is
implemented over 2CC aggregation, at the expense of a reduction in the
fairness index. The reduction in the fairness index implies that the scheduler
faces an increased burden in resource allocation due to the added component
carrier. Compensating for such a decrease in the fairness index could result in
additional scheduler design complexity. The proposed scheme can be adopted to
combine various component carriers, increasing the bandwidth and hence the data
rates.
Comment: 13 pages
A Machine Learning based Framework for KPI Maximization in Emerging Networks using Mobility Parameters
The current LTE network is faced with a plethora of Configuration and
Optimization Parameters (COPs), both hard and soft, that are adjusted manually
to manage the network and provide better Quality of Experience (QoE). With 5G
in view, the number of these COPs is expected to reach 2000 per site, making
their manual tuning to find the optimal combination of these parameters an
impossible feat. Alongside these thousands of COPs is the anticipated network
densification in emerging networks which exacerbates the burden of the network
operators in managing and optimizing the network. Hence, we propose a machine
learning-based framework combined with a heuristic technique to discover the
optimal combination of two pertinent COPs used in mobility, Cell Individual
Offset (CIO) and Handover Margin (HOM), that maximizes a specific Key
Performance Indicator (KPI) such as mean Signal to Interference and Noise Ratio
(SINR) of all the connected users. The first part of the framework leverages
the power of machine learning to predict the KPI of interest given several
different combinations of CIO and HOM. The resulting predictions are then fed
into a Genetic Algorithm (GA), which searches for the best combination of the
two parameters that yields the maximum mean SINR for all users.
Performance of the framework is also evaluated using several machine learning
techniques, with CatBoost algorithm yielding the best prediction performance.
Meanwhile, the GA is able to reveal the optimal parameter combination
efficiently, with a convergence time three orders of magnitude faster than that
of a brute-force approach.
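The second stage of the framework can be sketched as a simple genetic algorithm over (CIO, HOM) pairs. The surrogate function below is a hypothetical stand-in for the trained CatBoost predictor, and the parameter ranges and GA settings are illustrative assumptions, not the paper's.

```python
import random

def surrogate_mean_sinr(cio_db, hom_db):
    """Hypothetical stand-in for the ML predictor: mean SINR (dB) that
    peaks at CIO = 2 dB, HOM = 3 dB."""
    return 20.0 - (cio_db - 2.0) ** 2 - 0.5 * (hom_db - 3.0) ** 2

def ga_search(fitness, pop_size=30, generations=40, seed=0):
    """Minimal GA: elitism, midpoint crossover, Gaussian mutation."""
    rng = random.Random(seed)
    # Illustrative parameter ranges: CIO in [-6, 6] dB, HOM in [0, 10] dB
    pop = [(rng.uniform(-6, 6), rng.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        elite = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append(((a[0] + b[0]) / 2 + rng.gauss(0, 0.3),
                             (a[1] + b[1]) / 2 + rng.gauss(0, 0.3)))
        pop = elite + children
    return max(pop, key=lambda p: fitness(*p))

best_cio, best_hom = ga_search(surrogate_mean_sinr)
```

In the paper's pipeline the fitness calls would go to the trained predictor instead of a closed-form surrogate, which is what makes the GA so much cheaper than brute-force enumeration of all (CIO, HOM) combinations.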
Orchestrating Service Migration for Low Power MEC-Enabled IoT Devices
Multi-Access Edge Computing (MEC) is a key enabling technology for Fifth
Generation (5G) mobile networks. MEC facilitates distributed cloud computing
capabilities and information technology service environment for applications
and services at the edges of mobile networks. This architectural modification
serves to reduce congestion, latency, and improve the performance of such edge
colocated applications and devices. In this paper, we demonstrate how reactive
service migration can be orchestrated for low-power MEC-enabled Internet of
Things (IoT) devices. Here, we use open-source Kubernetes as the container
orchestration system. Our demo is based on a traditional client-server setup
running from the user equipment (UE) over Long Term Evolution (LTE) to the MEC
server. As
the use case scenario, we post-process live video received over web real-time
communication (WebRTC). Next, we integrate Kubernetes orchestration with S1
handovers, demonstrating a MEC-based software-defined network (SDN). Edge
applications can then reactively follow the UE within the radio access network
(RAN), enabling low latency. The collected data is used to analyze the
benefits of the low-power MEC-enabled IoT device scheme, in which end-to-end
(E2E) latency and power requirements of the UE are improved. We further discuss
the challenges of implementing such schemes and future research directions
therein.
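The reactive migration decision described above can be sketched as a small policy function triggered by an S1 handover event: migrate the UE's container only when the target eNB is served by a different MEC node. The eNB-to-MEC topology below is a hypothetical example, not the demo's actual deployment.

```python
# Hypothetical topology: which MEC node serves each eNB.
ENB_TO_MEC = {"enb-1": "mec-a", "enb-2": "mec-a", "enb-3": "mec-b"}

def migration_target(current_mec, target_enb, topology=ENB_TO_MEC):
    """Reactive policy: after an S1 handover to target_enb, return the
    MEC node to migrate to, or None if no migration is needed."""
    new_mec = topology[target_enb]
    return None if new_mec == current_mec else new_mec
```

In the demo this decision would be handed to Kubernetes, e.g. by rescheduling the service's container onto the node colocated with the UE's new serving eNB, so the edge application follows the UE through the RAN.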
Self organization of tilts in relay enhanced networks: a distributed solution
Despite years of physical-layer research, the capacity enhancement potential of relays is limited by the additional spectrum required for Base Station (BS)-Relay Station (RS) links. This paper presents a novel distributed solution by exploiting a system-level perspective instead. Building on a realistic system model with impromptu RS deployments, we develop an analytical framework for tilt optimization that can dynamically maximize the spectral efficiency of both the BS-RS and BS-user links in an online manner. To obtain a distributed self-organizing solution, the large-scale system-wide optimization problem is decomposed into small-scale local subproblems by applying the design principles of self-organization in biological systems. The local subproblems are non-convex but, given their very small scale, can be solved via standard nonlinear optimization techniques such as sequential quadratic programming. The performance of the developed solution is evaluated through extensive simulations for an LTE-A type system and compared against a number of benchmarks, including a centralized solution obtained via brute force, which also gives an upper bound to assess the optimality gap. Results show that the proposed solution can enhance average spectral efficiency by up to 50% compared to fixed tilting, with negligible signaling overheads. The key advantage of the proposed solution is its potential for autonomous and distributed implementation.
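A minimal sketch of one local tilt subproblem, assuming a simplified 3GPP-style vertical antenna pattern and a toy SNR model (both are illustrative assumptions, not the paper's system model). Since each local subproblem is very small, a fine grid search stands in here for the sequential quadratic programming used in the paper.

```python
import math

def vertical_gain_db(angle_deg, tilt_deg, theta_3db=10.0, a_max=20.0):
    """Simplified 3GPP-style vertical antenna pattern (attenuation in dB)."""
    return -min(12.0 * ((angle_deg - tilt_deg) / theta_3db) ** 2, a_max)

def sum_spectral_efficiency(tilt_deg, link_angles_deg, snr0_db=10.0):
    """Toy objective: summed log2(1 + SNR) over the BS-RS and BS-user
    links, where each link's SNR follows the tilt-dependent gain."""
    total = 0.0
    for ang in link_angles_deg:
        snr_db = snr0_db + vertical_gain_db(ang, tilt_deg)
        total += math.log2(1.0 + 10.0 ** (snr_db / 10.0))
    return total

def best_tilt(link_angles_deg, lo=0.0, hi=20.0, step=0.1):
    """Local small-scale subproblem: fine grid search stands in for SQP."""
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return max(grid, key=lambda t: sum_spectral_efficiency(t, link_angles_deg))
```

With a relay link at 8° and a user cluster at 12°, the optimizer settles on a tilt between the two, which illustrates how a single BS can trade off BS-RS and BS-user spectral efficiency through its tilt alone.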
Will SDN be part of 5G?
For many, this is no longer a valid question and the case is considered
settled with SDN/NFV (Software Defined Networking/Network Function
Virtualization) providing the inevitable innovation enablers solving many
outstanding management issues regarding 5G. However, given the monumental task
of softwarizing the radio access network (RAN) while 5G is just around the
corner, with some companies having already unveiled their 5G equipment, there
is a realistic concern that we may only see point solutions
involving SDN technology instead of a fully SDN-enabled RAN. This survey paper
identifies all important obstacles in the way and looks at the state of the art
of the relevant solutions. This survey is different from the previous surveys
on SDN-based RAN as it focuses on the salient problems and discusses solutions
proposed within and outside the SDN literature. Our main focus is on fronthaul,
backward compatibility, the supposedly disruptive nature of SDN deployment,
business cases and monetization of SDN-related upgrades, latency of general
purpose processors (GPPs), and the additional security vulnerabilities that
softwarization brings to the RAN. We have also provided a summary of the
architectural developments in SDN-based RAN landscape as not all work can be
covered under the focused issues. This paper provides a comprehensive survey on
the state of the art of SDN-based RAN and clearly points out the gaps in the
technology.
Comment: 33 pages, 10 figures
Resource Allocation Frameworks for Network-coded Layered Multimedia Multicast Services
The explosive growth of content-on-the-move, such as video streaming to
mobile devices, has propelled research on multimedia broadcast and multicast
schemes. Multi-rate transmission strategies have been proposed as a means of
delivering layered services to users experiencing different downlink channel
conditions. In this paper, we consider Point-to-Multipoint layered service
delivery across a generic cellular system and improve it by applying different
random linear network coding approaches. We derive packet error probability
expressions and use them as performance metrics in the formulation of resource
allocation frameworks. The aim of these frameworks is both the optimization of
the transmission scheme and the minimization of the number of broadcast packets
on each downlink channel, while offering service guarantees to a predetermined
fraction of users. As a case study, our proposed frameworks are then adapted
to the LTE-A standard and the eMBMS technology. We focus on the delivery of a
video service based on the H.264/SVC standard and demonstrate the advantages of
layered network coding over multi-rate transmission. Furthermore, we establish
that the choice of both the network coding technique and the resource
allocation method plays a critical role in the network footprint and the
quality of each received video layer.
Comment: IEEE Journal on Selected Areas in Communications - Special Issue on
Fundamental Approaches to Network Coding in Wireless Communication Systems.
To appear.
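For dense random linear network coding over GF(q), the probability that a receiver can decode a k-packet generation from n received coded packets has a standard closed form, P_dec(k, n) = prod_{i=0}^{k-1} (1 - q^(i-n)) for n >= k, which is the kind of building block the paper's packet error probability expressions rest on. A minimal sketch:

```python
def rlnc_decode_prob(k, n, q=2):
    """Probability that n randomly coded packets over GF(q) span the
    k-dimensional source space, i.e. the generation is decodable."""
    if n < k:
        return 0.0
    p = 1.0
    for i in range(k):
        p *= 1.0 - q ** (i - n)
    return p

def packet_error_prob(k, n, q=2):
    """Probability that a user fails to decode the k-packet generation."""
    return 1.0 - rlnc_decode_prob(k, n, q)
```

A resource allocator can invert this relation numerically: given a target error probability for a user class, find the smallest n (broadcast packets per generation) meeting it, which is the minimization the frameworks above perform per downlink channel.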
Towards Data-driven Simulation of End-to-end Network Performance Indicators
Novel vehicular communication methods are mostly analyzed simulatively or
analytically, as real-world performance tests are highly time-consuming and
cost-intensive. Moreover, the high number of uncontrollable effects makes it
practically impossible to reevaluate different approaches under the exact same
conditions. However, as these methods massively simplify the effects of the
radio environment and various cross-layer interdependencies, the results of
end-to-end indicators (e.g., the resulting data rate) often differ
significantly from real world measurements. In this paper, we present a
data-driven approach that exploits a combination of multiple machine learning
methods for modeling the end-to-end behavior of network performance indicators
within vehicular networks. The proposed approach can be exploited for fast and
close to reality evaluation and optimization of new methods in a controllable
environment as it implicitly considers cross-layer dependencies between
measurable features. Within an example case study for opportunistic vehicular
data transfer, the proposed approach is validated against real world
measurements and a classical system-level network simulation setup. Although
the proposed method requires only a fraction of the computation time of the
latter, it achieves a significantly better match with the real-world
evaluations.
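The data-driven idea can be illustrated with a deliberately tiny model: learn the mapping from measurable context features to an end-to-end indicator directly from samples, instead of simulating every protocol layer. The feature set (RSRP, UE speed), the synthetic ground truth, and the choice of a k-nearest-neighbour regressor (rather than the paper's combination of learning methods) are all assumptions for illustration.

```python
import random

def knn_predict(train, features, k=3):
    """Predict an end-to-end indicator (e.g. data rate in Mbit/s) as the
    mean over the k nearest training samples in feature space."""
    ranked = sorted(train, key=lambda s: sum((a - b) ** 2
                                             for a, b in zip(s[0], features)))
    return sum(s[1] for s in ranked[:k]) / k

# Synthetic "measurement drive": (RSRP [dBm], speed [km/h]) -> rate [Mbit/s],
# generated from a hypothetical ground-truth model plus noise.
rng = random.Random(1)
train = []
for _ in range(200):
    rsrp = rng.uniform(-110, -70)
    speed = rng.uniform(0, 120)
    rate = max(0.0, 0.5 * (rsrp + 110) - 0.05 * speed + rng.gauss(0, 1))
    train.append(((rsrp, speed), rate))

pred = knn_predict(train, (-80.0, 30.0))
```

Because the model is trained on measured end-to-end outcomes, cross-layer effects are captured implicitly in the samples, which is why such a surrogate can match reality better than a layer-by-layer simulation while costing far less to evaluate.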
LTE Spectrum Sharing Research Testbed: Integrated Hardware, Software, Network and Data
This paper presents Virginia Tech's wireless testbed supporting research on
long-term evolution (LTE) signaling and radio frequency (RF) spectrum
coexistence. LTE is continuously being refined and new features are released. As the
communications contexts for LTE expand, new research problems arise and include
operation in harsh RF signaling environments and coexistence with other radios.
Our testbed provides an integrated research tool for investigating these and
other research problems; it allows analyzing the severity of the problem,
designing and rapidly prototyping solutions, and assessing them with
standard-compliant equipment and test procedures. The modular testbed
integrates general-purpose software-defined radio hardware, LTE-specific test
equipment, RF components, free open-source and commercial LTE software, a
configurable RF network and recorded radar waveform samples. It supports RF
channel emulated and over-the-air radiated modes. The testbed can be remotely
accessed and configured. An RF switching network allows for designing many
different experiments that can involve a variety of real and virtual radios
with support for multiple-input multiple-output (MIMO) antenna operation. We
present the testbed, the research it has enabled and some valuable lessons that
we learned that may help in designing, developing, and operating future
wireless testbeds.
Comment: In Proceedings of the 10th ACM International Workshop on Wireless
Network Testbeds, Experimental Evaluation & Characterization (WiNTECH),
Snowbird, Utah, October 201