Load balancing using cell range expansion in LTE advanced heterogeneous networks
The use of heterogeneous networks is on the increase, fueled by consumer demand for more data. The main objective of heterogeneous networks is to increase capacity. They offer solutions for efficient use of spectrum, load balancing and improvement of cell edge coverage, amongst others. However, these solutions have inherent challenges such as inter-cell interference and poor mobility management. In heterogeneous networks there is a transmit power disparity between the macro cell and pico cell tiers, which causes load imbalance between the tiers. Under the conventional user-cell association strategy, whereby users associate to the base station with the strongest received signal strength, few users associate to small cells compared to macro cells. To counter the effects of this transmit power disparity, cell range expansion is used instead of the conventional strategy. The focus of our work is on load balancing using cell range expansion (CRE) and network utility optimization techniques to ensure fair sharing of load in a macro and pico cell LTE-Advanced heterogeneous network. The aim is to investigate how to use an adaptive cell range expansion bias to optimize pico cell coverage for load balancing. Reviewed literature points out several approaches to solve the load balancing problem in heterogeneous networks, which include cell range expansion and utility function optimization. We then use cell range expansion and logarithmic utility functions to design a load balancing algorithm. In the algorithm, user and base station associations are optimized by adapting the CRE bias to the pico base station load status. A price update mechanism based on a suboptimal solution of a network utility optimization problem is used to adapt the CRE bias; the price is derived from the load status of each pico base station. The performance of the algorithm was evaluated by means of an LTE MATLAB toolbox. Simulations were conducted according to 3GPP and ITU guidelines for modelling heterogeneous networks and the propagation environment, respectively. Compared to a static CRE configuration, the algorithm achieved more fairness in load distribution and a better trade-off between cell edge and cell centre user throughputs.
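The biased association rule and load-driven bias adaptation described in this abstract can be pictured with a short sketch. This is only an illustrative interpretation, not the thesis algorithm: the function names, the target load, the step size and the 9 dB bias cap are assumptions.

```python
import numpy as np

def associate_users(rsrp_dbm, bias_db):
    """Attach each user to the base station with the highest biased RSRP."""
    # rsrp_dbm: (n_users, n_bs) received signal strength from every BS
    # bias_db:  (n_bs,) CRE bias per BS (0 dB for macro cells)
    return np.argmax(rsrp_dbm + bias_db, axis=1)

def update_bias(bias_db, load, pico_ids, target_load=0.7, step=0.5, max_bias=9.0):
    """Price-like update (assumed form): raise the bias of lightly loaded picos,
    lower it for overloaded ones.  load[b] is the fractional load of BS b."""
    for b in pico_ids:
        bias_db[b] = float(np.clip(bias_db[b] + step * (target_load - load[b]),
                                   0.0, max_bias))
    return bias_db
```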
Resource and power management in next generation networks
The limits of today’s cellular communication systems are constantly being tested by
the exponential increase in mobile data traffic, a trend which is poised to continue
well into the next decade. Densification of cellular networks, by overlaying smaller
cells, i.e., micro, pico and femtocells, over the traditional macrocell, is seen as an
inevitable step in enabling future networks to support the expected increases in data
rate demand. Next generation networks will most certainly be more heterogeneous
as services will be offered via various types of points of access (PoAs). Indeed, besides
the traditional macro base station, it is expected that users will also be able to
access the network through a wide range of other PoAs: WiFi access points, remote
radio-heads (RRHs), small cell (i.e., micro, pico and femto) base stations or even
other users, when device-to-device (D2D) communications are supported, thus
creating a multi-tiered network architecture. This approach is expected to enhance the
capacity of current cellular networks, while patching up potential coverage gaps.
However, since available radio resources will be fully shared, the inter-cell interference
as well as the interference between the different tiers will pose a significant
challenge. To avoid severe degradation of network performance, properly managing
the interference is essential. In particular, techniques that mitigate interference, such
as Inter Cell Interference Coordination (ICIC) and enhanced ICIC (eICIC), have been
proposed in the literature to address the issue. In this thesis, we argue that interference
may be also addressed during radio resource scheduling tasks, by enabling
the network to make interference-aware resource allocation decisions.
Carrier aggregation technology, which allows the simultaneous use of several
component carriers, on the other hand, targets the lack of sufficiently large portions
of frequency spectrum; a problem that severely limits the capacity of wireless networks.
The aggregated carriers may, in general, belong to different frequency bands,
and have different bandwidths, thus they also may have very different signal propagation
characteristics. Integration of carrier aggregation in the network introduces
additional tasks and further complicates interference management, but also opens
up a range of possibilities for improving spectrum efficiency in addition to enhancing
capacity, which we aim to exploit. In this thesis, we first look at the resource allocation problem in dense multi-tiered
networks with support for advanced features such as carrier aggregation and
device-to-device communications. For two-tiered networks with D2D support, we
propose a centralised, near-optimal algorithm, based on dynamic programming principles,
that allows a central scheduler to make interference and traffic-aware scheduling
decisions, while taking into consideration the short-lived nature of D2D links.
As the complexity of the central scheduler increases exponentially with the number
of component carriers, we further propose a distributed heuristic algorithm to tackle
the resource allocation problem in carrier aggregation enabled dense networks. We
show that the solutions we propose perform significantly better than standard solutions
adopted in cellular networks such as eICIC coupled with Proportional Fair
scheduling, in several key metrics such as user throughput, timely delivery of content
and spectrum and energy efficiency, while ensuring fairness for backward compatible
devices.
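For readers unfamiliar with the Proportional Fair baseline that the proposed schedulers are compared against, the following minimal sketch shows the PF metric for a single resource block. The exponential-averaging constant and the scalar-rate model are assumptions; this is the standard baseline, not the interference-aware algorithms proposed in the thesis.

```python
import numpy as np

def pf_schedule(inst_rate, avg_rate, beta=0.05):
    """Serve the user with the largest ratio of instantaneous to average rate,
    then refresh the exponentially averaged rates (numpy arrays, one entry per user)."""
    user = int(np.argmax(inst_rate / np.maximum(avg_rate, 1e-9)))
    served = np.zeros_like(inst_rate, dtype=float)
    served[user] = inst_rate[user]
    new_avg = (1 - beta) * avg_rate + beta * served
    return user, new_avg
```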
Next, we investigate the potential to enhance network performance by enabling
the different nodes of the network to reduce and dynamically adjust the
transmit power of the different carriers to mitigate interference. Considering that
the different carriers may have different coverage areas, we propose to leverage this
diversity, to obtain high-performing network configurations. Thus, we model the
problem of carrier downlink transmit power setting as a competitive game between
teams of PoAs, which enables us to derive distributed dynamic power setting algorithms.
Using these algorithms we reach stable configurations in the network,
known as Nash equilibria, which we show perform significantly better than fixed
power strategies coupled with eICIC.
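The game-theoretic approach outlined above can be illustrated with a generic best-response loop over a finite set of per-carrier power levels. The utility function, the discrete levels and the synchronous update order are assumptions made purely for illustration; the thesis formulates the game between teams of PoAs, which this toy loop does not capture.

```python
def best_response_dynamics(n_players, power_levels, utility, max_rounds=100):
    """Iterate best responses over a finite set of power levels.

    utility(player, profile) -> float, where profile is a tuple with one power
    level per player.  If the iteration settles within max_rounds, the returned
    profile is one from which no player gains by deviating unilaterally
    (a Nash equilibrium of the finite game).
    """
    profile = [power_levels[0]] * n_players
    for _ in range(max_rounds):
        changed = False
        for p in range(n_players):
            best = max(power_levels,
                       key=lambda lvl: utility(p, tuple(profile[:p] + [lvl] + profile[p + 1:])))
            if best != profile[p]:
                profile[p] = best
                changed = True
        if not changed:  # unilateral deviations no longer help
            break
    return tuple(profile)
```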
User Association in 5G Networks: A Survey and an Outlook
26 pages; accepted to appear in IEEE Communications Surveys and Tutorials
Resource and Mobility Management in the Network Layer of 5G Cellular Ultra-Dense Networks
The provision of very high capacity is one of the big challenges of 5G cellular technology. This challenge will not be met using traditional approaches like increasing spectral efficiency and bandwidth, as witnessed in previous technology generations. Cell densification will play a major role thanks to its ability to increase the spatial reuse of the available resources. However, this solution is accompanied by some additional management challenges. In this article, we analyze and present the most promising solutions identified in the METIS project for the most relevant network layer challenges of cell densification: resource, interference and mobility management. This work was performed in the framework of the FP7 project ICT-317669 METIS, which is partly funded by the European Union. The authors would like to acknowledge the contributions of their colleagues in METIS, although the views expressed are those of the authors and do not necessarily represent the project. Calabuig Soler, D.; Barmpounakis, S.; Giménez Colás, S.; Kousaridas, A.; Lakshmana, T. R.; Lorca, J.; Lunden, P. ... (2017). Resource and Mobility Management in the Network Layer of 5G Cellular Ultra-Dense Networks. IEEE Communications Magazine, 55(6), 162-169. https://doi.org/10.1109/MCOM.2017.1600293
Harmonized Cellular and Distributed Massive MIMO: Load Balancing and Scheduling
Multi-tier networks with large-array base stations (BSs) that are able to
operate in the "massive MIMO" regime are envisioned to play a key role in
meeting the exploding wireless traffic demands. Operated over small cells with
reciprocity-based training, massive MIMO promises large spectral efficiencies
per unit area with low overheads. Also, near-optimal user-BS association and
resource allocation are possible in cellular massive MIMO HetNets using simple
admission control mechanisms and rudimentary BS schedulers, since scheduled
user rates can be predicted a priori with massive MIMO.
Reciprocity-based training naturally enables coordinated multi-point
transmission (CoMP), as each uplink pilot inherently trains antenna arrays at
all nearby BSs. In this paper we consider a distributed-MIMO form of CoMP,
which improves cell-edge performance without requiring channel state
information exchanges among cooperating BSs. We present methods for harmonized
operation of distributed and cellular massive MIMO in the downlink that
optimize resource allocation at a coarser time scale across the network. We
also present scheduling policies at the resource block level which target
approaching the optimal allocations. Simulations reveal that the proposed
methods can significantly outperform the network-optimized cellular-only
massive MIMO operation (i.e., operation without CoMP), especially at the cell
edge.
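The claim that a-priori predictable massive MIMO rates allow simple admission control and association can be illustrated with a toy greedy assignment. The per-BS user cap and the serving order are assumptions; this is not the paper's association or scheduling scheme.

```python
import numpy as np

def greedy_association(pred_rate, bs_capacity):
    """Assign users to BSs greedily using rates predicted before scheduling.

    pred_rate:   (n_users, n_bs) predicted per-user rates
    bs_capacity: (n_bs,) maximum number of users each BS admits
    """
    load = np.zeros(pred_rate.shape[1], dtype=int)
    assignment = np.full(pred_rate.shape[0], -1)
    for u in np.argsort(-pred_rate.max(axis=1)):   # users with the best peak rate first
        for b in np.argsort(-pred_rate[u]):        # each user's preferred BSs first
            if load[b] < bs_capacity[b]:
                assignment[u] = b
                load[b] += 1
                break
    return assignment
```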
Separation Framework: An Enabler for Cooperative and D2D Communication for Future 5G Networks
Soaring capacity and coverage demands dictate that future cellular networks
need to soon migrate towards ultra-dense networks. However, network
densification comes with a host of challenges that include compromised energy
efficiency, complex interference management, cumbersome mobility management,
burdensome signaling overheads and higher backhaul costs. Interestingly, most
of the problems that beleaguer network densification stem from one common feature of
legacy networks, i.e., the tight coupling between the control and data planes, regardless
of their degree of heterogeneity and cell density.
Consequently, in the wake of 5G, the control and data planes separation architecture
(SARC) has recently been conceived as a promising paradigm that has the potential
to address most of the aforementioned challenges. In this article, we review
various proposals that have been presented in the literature so far to enable SARC.
More specifically, we analyze how and to what degree various SARC proposals
address the four main challenges in network densification namely: energy
efficiency, system level capacity maximization, interference management and
mobility management. We then focus on two salient features of future cellular
networks that have not yet been adopted in legacy networks at wide scale and
thus remain a hallmark of 5G, i.e., coordinated multipoint (CoMP), and
device-to-device (D2D) communications. After providing necessary background on
CoMP and D2D, we analyze how SARC can particularly act as a major enabler for
CoMP and D2D in context of 5G. This article thus serves as both a tutorial as
well as an up-to-date survey on SARC, CoMP and D2D. Most importantly, the
article provides an extensive outlook of challenges and opportunities that lie
at the crossroads of these three mutually entangled emerging technologies. Comment: 28 pages, 11 figures, IEEE Communications Surveys & Tutorials 201
An Innovative RAN Architecture for Emerging Heterogeneous Networks: The Road to the 5G Era
The global demand for mobile-broadband data services has experienced phenomenal growth over the last few years, driven by the rapid proliferation of smart devices such as smartphones and tablets. This growth is expected to continue unabated, as mobile data traffic is predicted to grow anywhere from 20 to 50 times over the next 5 years. Exacerbating the problem is that such an unprecedented surge in smartphone usage, which is characterized by frequent short on/off connections and mobility, generates heavy signaling traffic load in the network (signaling storms). This consumes a disproportionate amount of network resources, compromising network throughput and efficiency, and in extreme cases can cause Third-Generation (3G) or 4G (Long-Term Evolution (LTE) and LTE-Advanced (LTE-A)) cellular networks to crash.
As the conventional approaches of improving spectral efficiency and/or allocating additional spectrum are fast approaching their theoretical limits, there is a growing consensus that current 3G and 4G (LTE/LTE-A) cellular radio access technologies (RATs) won't be able to meet the anticipated growth in mobile traffic demand. To address these challenges, the wireless industry and standardization bodies have initiated a roadmap for the transition from 4G to 5G cellular technology, with a key objective to increase capacity by 1000× by 2020. Even though the technology hasn't been invented yet, the hype around 5G networks has begun to bubble. The emerging consensus is that 5G is not a single technology, but rather a synergistic collection of interworking technical innovations and solutions that collectively address the challenge of traffic growth.
The core emerging ingredients that are widely considered the key enabling technologies to realize the envisioned 5G era, listed in the order of importance, are: 1) heterogeneous networks (HetNets); 2) flexible backhauling; 3) efficient traffic offload techniques; and 4) Self-Organizing Networks (SONs). The anticipated solutions delivered by efficient interworking/integration of these enabling technologies are not simply about throwing more resources and/or spectrum at the challenge. The envisioned solution, rather, requires radically different cellular RAN and mobile core architectures that efficiently and cost-effectively deploy and manage radio resources as well as offload mobile traffic from the overloaded core network.
The main objective of this thesis is to address the key techno-economic challenges facing the transition from current Fourth-Generation (4G) cellular technology to the 5G era in the context of proposing a novel high-risk revolutionary direction to the design and implementation of the envisioned 5G cellular networks. The ultimate goal is to explore the potential and viability of cost-effectively implementing the 1000x capacity challenge while continuing to provide an adequate mobile broadband experience to users. Specifically, this work proposes and devises a novel PON-based HetNet mobile backhaul RAN architecture that: 1) holistically addresses the key techno-economic hurdles facing the implementation of the envisioned 5G cellular technology, specifically the backhauling and signaling challenges; and 2) enables, for the first time to the best of our knowledge, the support of efficient ground-breaking mobile data and signaling offload techniques, which significantly enhance the performance of both the HetNet-based RAN and LTE-A's core network (Evolved Packet Core (EPC) per the 3GPP standard), ensure that core network equipment is used more productively, and moderate the evolving 5G signaling growth and optimize its impact.
To address the backhauling challenge, we propose a cost-effective fiber-based small cell backhaul infrastructure, which leverages existing fibered and powered facilities associated with a PON-based fiber-to-the-Node/Home (FTTN/FTTH) residential access network. Due to the sharing of existing valuable fiber assets, the proposed PON-based backhaul architecture, in which the small cells are collocated with existing FTTN remote terminals (optical network units (ONUs)), is much more economical than conventional point-to-point (PTP) fiber backhaul designs. A fully distributed ring-based EPON architecture is utilized here as the fiber-based HetNet backhaul. The techno-economic merits of utilizing the proposed PON-based FTTx access HetNet RAN architecture versus that of the traditional 4G LTE-A RAN will be thoroughly examined and quantified. Specifically, we quantify the techno-economic merits of the proposed PON-based HetNet backhaul by comparing its performance versus that of a conventional fiber-based PTP backhaul architecture as a benchmark.
It is shown that the purposely selected ring-based PON architecture, along with the supporting distributed control plane, enables the proposed PON-based FTTx RAN architecture to support several key salient networking features that collectively and significantly enhance the overall performance of both the HetNet-based RAN and the 4G LTE-A core (EPC) compared to that of the typical fiber-based PTP backhaul architecture, in terms of handoff capability, signaling overhead, overall network throughput and latency, and QoS support. It will also be shown that the proposed HetNet-based RAN architecture is not only capable of providing the typical macro-cell offloading gain (RAN gain) but can also provide a ground-breaking EPC offloading gain.
The simulation results indicate that the overall capacity of the proposed HetNet scales with the number of deployed small cells, thanks to LTE-A's advanced interference management techniques. For example, if there are 10 deployed outdoor small cells for every macrocell in the network, then the overall capacity gain will be approximately 10-11x over a macro-only network. To reach the 1000x capacity goal, numerous small cells including 3G, 4G, and WiFi (femtos, picos, metros, relays, remote radio heads, distributed antenna systems) need to be deployed indoors and outdoors, at all possible venues (residences and enterprises).
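The scaling argument above amounts to simple arithmetic: if each small cell contributes roughly one macro cell's worth of capacity, the gain grows linearly with deployment density. The per-cell factor below is an assumption chosen only to reproduce the quoted 10-11x figure, not a result from the thesis.

```python
def capacity_gain(small_cells_per_macro, per_small_cell_factor=1.0):
    """Idealised capacity gain over a macro-only network, ignoring interference losses."""
    return 1 + small_cells_per_macro * per_small_cell_factor

print(capacity_gain(10))  # ~11, in line with the 10-11x figure quoted above
```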