Leveraging intelligence from network CDR data for interference aware energy consumption minimization
Cell densification is being perceived as the panacea for the imminent capacity crunch. However, high aggregated energy consumption and increased inter-cell interference (ICI) caused by densification remain two long-standing problems. We propose a novel network orchestration solution for simultaneously minimizing energy consumption and ICI in ultra-dense 5G networks. The proposed solution builds on a big data analysis of over 10 million CDRs from a real network, which shows that strong spatio-temporal predictability exists in real network traffic patterns. Leveraging this, we develop a novel scheme to pro-actively schedule radio resources and small cell sleep cycles, yielding substantial energy savings and reduced ICI without compromising the users' QoS. This scheme is derived by formulating a joint energy consumption and ICI minimization problem and solving it through a combination of linear binary integer programming and a progressive-analysis-based heuristic algorithm. Evaluations using: 1) a HetNet deployment designed for the city of Milan, where big data analytics are applied to real CDR data from the Telecom Italia network to model traffic patterns, and 2) NS-3 based Monte-Carlo simulations with synthetic Poisson traffic, show that, compared to a full-frequency-reuse, always-on approach, in the best case the proposed scheme can reduce energy consumption in HetNets to one eighth while providing the same or better QoS.
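The core idea of the scheme above, proactively putting small cells to sleep when predicted traffic can be absorbed elsewhere, can be sketched with a simple greedy heuristic. The cell names, capacities, and the greedy rule below are illustrative assumptions, not the paper's exact binary-integer formulation.

```python
# Hypothetical sketch: given per-cell traffic predictions, sleep the
# least-loaded small cells while the macro layer has spare capacity
# to carry their traffic. All numbers are made-up example values.

def schedule_sleep(predicted_load, macro_spare_capacity):
    """Return the set of small cells that must stay awake."""
    awake = set(predicted_load)
    # Consider cells from least to most loaded.
    for cell in sorted(predicted_load, key=predicted_load.get):
        if predicted_load[cell] <= macro_spare_capacity:
            macro_spare_capacity -= predicted_load[cell]
            awake.discard(cell)          # put this small cell to sleep
    return awake

loads = {"sc1": 2.0, "sc2": 8.0, "sc3": 1.0}   # predicted Mb/s per cell
print(schedule_sleep(loads, macro_spare_capacity=5.0))  # → {'sc2'}
```

Sleeping cells this way saves their idle power and, as a side effect, removes them as interferers, which is the same coupling the joint formulation exploits.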
Implementation and evaluation in System Generator of a cooperative system for future 5G systems
With the arrival of 5G, a proliferation of services is expected in fields
such as healthcare, utility applications, industrial automation, and
4K streaming that previous generations of networks cannot provide.
Additionally, the total number of wireless communication devices will grow
to the point that the already scarce available frequency bandwidth will
not suffice to meet the intended objectives. Cisco's Annual Internet Report
from 2018 predicts that by 2023 there will be nearly 30 billion devices
capable of wireless communication. Due to this exponential expansion of
both services and devices, the challenges to network data capacity and
efficient radio resource use will be greater than ever, making the need
for solutions pressing.
Both wireless communication capacity and spectral efficiency are related
to cell size and the users' proximity to the access point; shortening the
distance between transmitter and receiver improves both aspects of the
network. This concept motivates the deployment of heterogeneous networks
(HetNets), composed of many small cells (SCs) overlaid on the coverage
area of a conventional macro cell. Shortening the distance between cell
users and their access-point transceivers grants better coverage and
higher data rates. However, the potential of HetNets does not come without
challenges, as these networks suffer considerably from inter-cell
interference.
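The link between shorter distances and better capacity can be made concrete with a textbook log-distance path-loss model and the Shannon spectral-efficiency formula. The path-loss exponent, transmit power, and noise level below are assumed textbook values, not figures from this thesis.

```python
import math

# Illustrative calculation: halving the link distance raises received
# power under log-distance path loss, and Shannon capacity grows with
# the resulting SNR. All parameter values are assumptions.

def rx_power_dbm(tx_dbm, dist_m, exponent=3.5, pl0_db=40.0):
    """Received power with log-distance path loss (reference loss at 1 m)."""
    return tx_dbm - pl0_db - 10 * exponent * math.log10(dist_m)

def capacity_bps_hz(snr_db):
    """Shannon spectral efficiency for a given SNR in dB."""
    return math.log2(1 + 10 ** (snr_db / 10))

noise_dbm = -100.0
for d in (200, 50):   # typical macro-cell vs small-cell link distance
    snr = rx_power_dbm(30, d) - noise_dbm
    print(f"{d:>4} m: SNR {snr:5.1f} dB -> {capacity_bps_hz(snr):.1f} bit/s/Hz")
```

With these example numbers the 50 m small-cell link yields roughly three times the spectral efficiency of the 200 m macro link, which is exactly the gain HetNet densification chases.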
Although some interference-management algorithms that allow coexistence
between cells have been proposed in recent years, most of them were
evaluated in software simulations rather than implemented on real-time
platforms. This master's thesis therefore aims to take the first step
toward the implementation and evaluation of an interference-mitigation
technique in hardware. Specifically, a downlink scenario is assumed,
composed of a macro-cell base station, a macro-cell primary user, and a
small-cell user, with the aim of implementing an algorithm that eliminates
the downlink interference the base station may cause to the secondary
user. The study was carried out using the System Generator DSP tool,
which generates hardware code from schematics created within it. The tool
also offers a wide range of blocks that aid the creation and, fundamentally,
the simulation and study of the system to be implemented before it is
translated into hardware. The results obtained in this work faithfully
represent the behavior of the implemented system and can be used in a
future FPGA application.
MSc in Electronic Engineering and Telecommunications
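The downlink interference-cancellation idea the thesis studies can be illustrated numerically: if the small-cell receiver knows the macro-cell symbols and an estimate of the cross channel, it can subtract the reconstructed interference from what it receives. The channel gains and signals below are made-up example values, not the thesis's System Generator design.

```python
# Minimal numeric sketch of known-interference cancellation at a
# small-cell user. All values are illustrative assumptions.

def cancel_interference(received, macro_symbols, h_cross_est):
    """Subtract the reconstructed macro-cell interference from the
    received samples."""
    return [r - h_cross_est * s for r, s in zip(received, macro_symbols)]

h_cross = 0.4                     # true macro -> small-cell-user channel
wanted = [1.0, -1.0, 1.0, 1.0]    # small-cell signal at the receiver
macro = [1.0, 1.0, -1.0, 1.0]     # macro symbols (known to the canceller)
rx = [w + h_cross * m for w, m in zip(wanted, macro)]

clean = cancel_interference(rx, macro, h_cross_est=0.4)
print(clean)   # ≈ the wanted signal, interference removed
```

In a hardware implementation the same multiply-and-subtract structure maps naturally onto DSP blocks, which is why it suits a System Generator flow; residual error then depends on how accurately the cross channel is estimated.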
Comparison between Static and Dynamic Modeling Approaches for Heterogeneous Cellular Networks
In order to accommodate growing traffic demands, next generation cellular networks must become highly heterogeneous to achieve capacity gains. Heterogeneous cellular networks composed of macro base stations and low-power base stations of different types are able to improve spectral efficiency per unit area, and to eliminate coverage holes. In such networks, intelligent user association and resource allocation schemes are needed to achieve gains in performance. We focus on heterogeneous cellular networks that consist of macro and pico BSs, and study the interplay between user association and resource allocation using two modeling approaches, namely a static modeling approach and a dynamic modeling approach.
Our first study focuses on modeling heterogeneous cellular networks with a static approach. We propose a unified static framework to study the interplay of user association and resource allocation under a well-defined set of assumptions. This framework allows us to compare the performance of three resource allocation strategies: partially shared deployment, orthogonal deployment, and co-channel deployment, when the user association is optimized. We have formulated joint optimization problems that are non-linear integer programs and NP-hard. We have, therefore, developed techniques to obtain upper bounds on the system's performance. We also propose a simple association rule that performs much better than all existing user association rules. We have used these upper bounds as benchmarks to provide many engineering insights, and to quantify how well different combinations of user association rules and resource allocation schemes perform.
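A representative member of the family of simple association rules compared in such studies is biased received-power association, where a bias offloads users from the macro tier to pico cells. The power values and the 10 dB bias below are made-up example numbers, not this thesis's proposed rule.

```python
# Hypothetical illustration of biased received-power user association
# (cell range expansion). All values are assumed examples.

def associate(users_rx_dbm, bias_db):
    """Return a user -> BS map: each user picks the BS maximizing
    received power plus that BS's tier bias."""
    assoc = {}
    for user, rx in users_rx_dbm.items():
        assoc[user] = max(rx, key=lambda bs: rx[bs] + bias_db.get(bs, 0.0))
    return assoc

rx = {"u1": {"macro": -70.0, "pico": -78.0},
      "u2": {"macro": -75.0, "pico": -80.0}}
bias = {"pico": 10.0}            # range-expansion bias for the pico tier
print(associate(rx, bias))       # → {'u1': 'pico', 'u2': 'pico'}
```

Rules of this form are easy to compute per user, which is precisely why optimized (NP-hard) associations are used as benchmarks to measure how much such heuristics leave on the table.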
Our second study focuses on modeling heterogeneous cellular networks with a dynamic modeling approach. We propose a unified framework to study the interplay of user association, resource allocation, user arrival, and delay. We select three different performance metrics: the highest possible arrival rate, the network average delay, and the delay-constrained maximum throughput, and formulate three different optimal user association problems to optimize our performance metrics. The proposed problems are non-linear integer programs which are hard to solve efficiently. We have developed numerical techniques to compute either the exact solutions or tight lower bounds to these problems. We have used these lower bounds and the exact solutions as benchmarks to provide many engineering insights, and to quantify how well different user association rules and resource allocation schemes perform.
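The dynamic metric "network average delay" can be sketched with the simplest possible queueing abstraction: model each BS as an M/M/1 queue and take the arrival-rate-weighted mean of per-BS delays. The arrival and service rates below are assumed values, and M/M/1 is a deliberate simplification of the thesis's dynamic model.

```python
# Back-of-the-envelope sketch of a network-average-delay metric under
# an assumed M/M/1 model per base station. Rates are example values.

def mm1_delay(arrival_rate, service_rate):
    """Mean sojourn time of an M/M/1 queue (requires lambda < mu)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def network_average_delay(loads):
    """loads: list of (arrival_rate, service_rate) per BS; the network
    delay is the arrival-rate-weighted mean of per-BS delays."""
    total = sum(lam for lam, _ in loads)
    return sum(lam * mm1_delay(lam, mu) for lam, mu in loads) / total

bss = [(4.0, 10.0), (2.0, 5.0)]   # (users/s arriving, users/s served) per BS
print(f"{network_average_delay(bss):.3f} s")
```

Because each user's association changes which queue its arrivals feed, optimizing the association under such a model directly trades off per-BS load against delay, which is the interplay the dynamic study formalizes.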
Finally, using our numerical results, we compare the static and dynamic modeling approaches to study the robustness of our results. Our numerical results show that engineering insights on the resource allocation schemes drawn out of the static study are valid in a dynamic context, and vice versa. However, the engineering insights on user association rules drawn out of the static study are not always consistent with the insights drawn out of the dynamic study.
Fast and Efficient Radio Resource Allocation in Dynamic Ultra-Dense Heterogeneous Networks
The ultra-dense network (UDN) is considered a promising technology in 5G wireless networks. In a UDN, dynamic traffic patterns can lead to high computational complexity and excessive communications overhead with traditional resource allocation schemes. In this paper, a new resource allocation scheme with low computational overhead and a low subband handoff rate in a dynamic ultra-dense heterogeneous network is presented. The scheme first defines a new interference estimation method that constructs a network interference state map, based on which a radio resource allocation scheme is proposed. The resource allocation problem is a MAX-K cut problem and can be solved through a graph-theoretical approach. System-level simulations reveal that the proposed scheme decreases the subband handoff rate by 30% with less than 3.2% network throughput degradation.
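The graph-theoretic idea behind such an allocation can be sketched directly: build an interference graph between cells and split the cells into K subbands so that heavily interfering pairs land in different subbands, i.e. a MAX-K cut. The greedy pass and the weights below are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of subband assignment as a greedy MAX-K cut on an
# interference graph. Cell names and weights are example values.

def greedy_max_k_cut(cells, weight, k):
    """Assign each cell to the subband minimizing interference with
    the cells already placed on that subband."""
    bands = {}
    for c in cells:
        cost = lambda b: sum(weight.get((c, o), 0) + weight.get((o, c), 0)
                             for o, ob in bands.items() if ob == b)
        bands[c] = min(range(k), key=cost)
    return bands

w = {("A", "B"): 5, ("B", "C"): 3, ("A", "C"): 1}   # interference weights
print(greedy_max_k_cut(["A", "B", "C"], w, k=2))
```

Here the strongest interferers A and B end up on different subbands, while the weakly coupled pair A and C share one; a low-overhead greedy pass like this also limits how often cells must switch subbands as traffic changes.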
- …