Blind CSI acquisition for multi-antenna interference mitigation in 5G networks
Future wireless communication networks are required to satisfy ever-increasing demands for traffic and capacity. The upcoming fifth generation (5G) of cellular technology is expected to offer 1000 times the capacity of the current fourth generation (4G). These tight specifications introduce a new set of research challenges, while interference remains the traditional bottleneck in cellular communications. Thus, towards the 5G vision, massive multiple-input multiple-output (mMIMO) and interference alignment (IA) are key transmission technologies for fulfilling the future requirements by controlling the residual interference.
By equipping the base station (BS) with a large number of transmit antennas, e.g., tens to hundreds of antennas, an mMIMO system can theoretically achieve significant capacity with limited interference, serving many user equipments (UEs) simultaneously on the same time and frequency resources. mMIMO offers abundant spatial degrees of freedom (DoFs), which boost the total network capacity without increasing transmission power or bandwidth. However, the majority of recent mMIMO investigations are based on theoretical channels with independent and identically distributed (i.i.d.) Gaussian entries, which facilitates the computation of closed-form rate expressions. Practical channels, however, are spatially correlated: the BS receives different power ratios across different spatial directions between the same transmitting and receiving antennas. It is therefore important to understand the behavior of this new technology under practical channel modeling.
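As a toy illustration of the spatial-correlation point above, the following sketch contrasts an i.i.d. Rayleigh channel with a Kronecker-correlated one; the exponential correlation model and all parameters are illustrative assumptions, not taken from the thesis.

```python
# Sketch: i.i.d. vs. spatially correlated MIMO channels (Kronecker model).
# Antenna counts and correlation coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx = 8, 2

def exp_corr(n, rho):
    """Exponential correlation matrix: R[i, j] = rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# i.i.d. Rayleigh channel: entries are CN(0, 1)
H_iid = (rng.standard_normal((n_rx, n_tx))
         + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# Kronecker model: H = R_rx^(1/2) @ H_iid @ R_tx^(1/2)
R_tx, R_rx = exp_corr(n_tx, 0.9), exp_corr(n_rx, 0.3)
chol = np.linalg.cholesky   # Cholesky factor serves as a matrix square root
H_corr = chol(R_rx) @ H_iid @ chol(R_tx).conj().T

# Correlation concentrates the channel energy into fewer spatial directions,
# which shows up as a wider spread of singular values.
print(np.linalg.svd(H_iid, compute_uv=False))
print(np.linalg.svd(H_corr, compute_uv=False))
```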
Alternatively, IA is known to break the bottleneck between network capacity and overall spectral efficiency (SE), where performance degrades beyond a certain number of connected users due to overwhelming inter-user interference. Theoretically, IA guarantees that the network sum SE grows linearly with the number of users, each achieving half of its interference-free rate, by aligning the interference from all transmitters into one spatial signal subspace and leaving the other subspace free for the desired transmission. However, IA has tight feasibility conditions in practice, including high-precision channel state information at the transmitter (CSIT), which leads to severe feedback overhead.
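The alignment idea can be made concrete with a tiny two-interferer example: precoders are chosen so that both interference streams land in the same one-dimensional subspace at a two-antenna receiver, and a combiner orthogonal to that subspace removes them both at once. All channels and numbers below are toy values, not from the thesis.

```python
# Sketch: interference alignment at one 2-antenna receiver (toy example).
import numpy as np

rng = np.random.default_rng(1)

# Desired link (fixed toy values) and two random invertible interferer channels.
h_des = np.array([1.0 + 0.5j, -0.3 + 1.0j])
H1 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
H2 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Choose a common alignment direction u and precoders v_k with H_k @ v_k = u,
# i.e. v_k = inv(H_k) @ u: both interference streams then occupy span{u}.
u = np.array([1.0, 1.0]) / np.sqrt(2)
v1 = np.linalg.solve(H1, u)
v2 = np.linalg.solve(H2, u)

# A zero-forcing combiner w orthogonal to u cancels all aligned interference.
w = np.array([1.0, -1.0]) / np.sqrt(2)

print(abs(w.conj() @ (H1 @ v1)))   # ~0: interferer 1 suppressed
print(abs(w.conj() @ (H2 @ v2)))   # ~0: interferer 2 suppressed
print(abs(w.conj() @ h_des))       # nonzero: desired signal survives
```

The point of the example: a single combiner dimension removes *both* interferers because the precoders forced them into one shared subspace, leaving the other dimension free for the desired stream.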
In this thesis, high-precision blind CSIT algorithms are developed under different transmission technologies. We first consider the CSIT acquisition problem in MIMO IA systems. The proposed spatial channel estimation for MIMO-IA systems (SCEIA) exploits the offered spatial degrees of freedom to approach the performance of the perfect-CSIT case, without requiring channel quantization or user feedback overhead. In massive MIMO setups, the proposed CSIT strategy offers performance that scales with the number of transmit antennas. The effect of the non-stationary channel characteristics that appear with very large antenna arrays is minimized thanks to the effective scanning precision of the proposed strategy. Finally, we extend the system model to the full-dimensional space, where users are distributed across the two dimensions of the cell space (azimuth/elevation). The proposed directional spatial channel estimation (D-SCE) scans the 3D cell space and effectively attains additional CSIT and beamforming gains. In all cases, comparisons with state-of-the-art schemes from academia and industry are performed to show the performance improvement of the proposed CSIT strategies.
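The directional-scanning idea behind schemes such as D-SCE can be sketched in a simplified one-dimensional (azimuth-only) form: sweep a grid of beams and keep the direction with the largest beamforming gain. The array size, angles, and single-path channel below are illustrative assumptions; the thesis itself operates over the full azimuth/elevation space.

```python
# Sketch: directional scanning over a beam grid to find the dominant channel
# direction (1-D uniform linear array simplification; toy parameters).
import numpy as np

n_ant = 32                     # BS antennas (assumed)
d = 0.5                        # element spacing in wavelengths
true_aod = np.deg2rad(23.0)    # ground-truth angle of departure (toy value)

def steer(theta, n=n_ant):
    """ULA steering vector for angle theta (unit norm)."""
    return np.exp(1j * 2 * np.pi * d * np.arange(n) * np.sin(theta)) / np.sqrt(n)

# Toy single-path channel along the true direction, plus small noise.
rng = np.random.default_rng(2)
h = steer(true_aod) + 0.05 * (rng.standard_normal(n_ant)
                              + 1j * rng.standard_normal(n_ant))

# Scan a grid of beams and keep the direction with maximum beamforming gain.
grid = np.deg2rad(np.linspace(-60, 60, 241))        # 0.5-degree resolution
gains = np.abs(np.array([steer(t).conj() @ h for t in grid]))
est_aod = grid[np.argmax(gains)]

print(np.rad2deg(est_aod))   # within a degree or two of 23
```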
Improving next-generation wireless network performance and reliability with deep learning
A fundamental question is often raised: can machine learning in general, or deep learning in particular, add to the well-established field of wireless communications, which has been evolving for close to a century? While deep learning-based methods are likely to help build intelligent wireless solutions, their use becomes particularly challenging for the lower layers of the wireless communication stack. The introduction of the fifth generation of wireless communications (5G) has triggered the demand for “network intelligence” to support its promises of very high data rates and extremely low latency. Consequently, 5G wireless operators are faced with the challenges of network complexity, diversification of services, and personalized user experience. Industry standards have created enablers (such as the network data analytics function), but these enablers focus on post-mortem analysis at higher stack layers and have a periodicity on the time scale of seconds (or larger). The goal of this dissertation is to present a solution to these challenges and to show how a data-driven approach using deep learning can add to the field of wireless communications. In particular, I propose intelligent predictive and prescriptive abilities to boost reliability and eliminate performance bottlenecks in 5G cellular networks and beyond, present contributions that justify the value of deep learning in wireless communications across several different layers, and offer in-depth analysis and comparisons with baselines and industry standards. First, to improve multi-antenna network reliability against wireless impairments with power control and interference coordination for both packetized voice and beamformed data bearers, I propose a joint beamforming, power control, and interference coordination algorithm based on deep reinforcement learning.
This algorithm uses a string of bits and logic operations to enable simultaneous actions to be performed by the reinforcement learning agent, and a joint reward function is proposed accordingly. I compare the performance of my proposed algorithm with the brute-force approach and show that similar performance is achievable but with a faster run time as the number of transmit antennas increases. Second, to enhance the performance of coordinated multipoint, I propose the use of deep learning-based binary classification to learn a surrogate function that triggers a second transmission stream, instead of depending on the popular signal-to-interference-plus-noise ratio (SINR) measurement. This surrogate function improves the users' sum rate by focusing on the pre-logarithmic terms in the sum-rate formula, which have a larger impact on this rate. Third, the performance of band switching can be improved without the need for full channel estimation. My proposal of using deep learning to classify the quality of two frequency bands prior to granting a band switch leads to a significant improvement in users' throughput, due to the elimination of the industry-standard measurement gap requirement: a period of silence during which no data is sent to the users so that they can measure the frequency bands before switching. In this dissertation, a group of algorithms for downlink wireless network performance and reliability is proposed. My results show that the introduction of user coordinates enhances the accuracy of the predictions made with deep learning. Also, the choice of SINR as the optimization objective may not always be the best way to improve user throughput rates. Further, exploiting the spatial correlation of channels in different frequency bands can improve certain network procedures without the need for perfect knowledge of the per-band channel state information.
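The bit-string action encoding can be illustrated with a minimal sketch: one integer action is split by bit masks into a power-control field and a beamforming-codebook field, so a single discrete choice triggers simultaneous actions. The field widths and step values below are hypothetical, not the dissertation's exact layout.

```python
# Sketch: decoding a joint RL action encoded as a bit string.
# Field widths and power-step values are illustrative assumptions.

N_PC_BITS = 2    # 4 power-control steps (assumed), e.g. {-3, -1, +1, +3} dB
N_BF_BITS = 3    # 8-entry beamforming codebook (assumed)

PC_STEPS_DB = [-3, -1, +1, +3]

def decode_action(a):
    """Split one integer action into (power step in dB, beam index)."""
    pc = a & ((1 << N_PC_BITS) - 1)                  # low bits: power control
    bf = (a >> N_PC_BITS) & ((1 << N_BF_BITS) - 1)   # next bits: beam index
    return PC_STEPS_DB[pc], bf

n_actions = 1 << (N_PC_BITS + N_BF_BITS)   # 32 joint actions in total

# Example: bits 10110 -> pc field 10 (+1 dB), bf field 101 (beam 5).
print(decode_action(0b10110))   # (1, 5)
```

The design benefit is that a standard discrete-action agent (one softmax over 32 outputs) still performs two physical actions per step, without a multi-headed policy.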
Hence, an understanding of these results helps develop novel solutions for enhancing these wireless networks at a much smaller time scale than today's industry standards.
Towards reliable communication in LTE-A connected heterogeneous machine to machine network
Machine-to-machine (M2M) communication is an emerging technology that enables heterogeneous devices to communicate with each other without human intervention, thus forming the so-called Internet of Things (IoT). Wireless cellular networks (WCNs) play a significant role in the successful deployment of M2M communication. In particular, the ongoing massive deployment of Long Term Evolution-Advanced (LTE-A) makes it possible to establish machine-type communication (MTC) in most urban and remote areas, and, by using the LTE-A backhaul network, seamless communication can be established between MTC devices and applications. However, extensive network coverage alone does not ensure a successful implementation of M2M communication in LTE-A, and several challenges therefore remain.
Energy-efficient, reliable transmission is perhaps the most compelling demand of various M2M applications. Among the factors affecting the reliability of M2M communication are high end-to-end delay and high bit error rate. The objective of this thesis is to provide reliable M2M communication in the LTE-A network. To this end, in order to alleviate signalling congestion on the air interface and to achieve efficient data aggregation, we consider a cluster-based architecture in which the MTC devices are grouped into a number of clusters and traffic is forwarded through special nodes called cluster heads (CHs) to the base station (BS) using single- or multi-hop transmissions. In many deployment scenarios, some machines are allowed to move and change their location in the deployment area with very low mobility. In practice, the performance of data transmission often degrades as the distance between neighboring CHs increases, and the CH then needs to be re-selected. However, frequent re-selection of CHs has a counter-effect on routing and on the reconfiguration of resource allocation associated with CH-dependent protocols. In addition, the quality of CH-to-CH and CH-to-BS links is very often affected by dynamic environmental factors such as heat and humidity, obstacles, and RF interference. Since a CH aggregates the traffic from all cluster members, failure of the CH means that the whole cluster fails. Many solutions have been proposed to combat the error-prone wireless channel, such as automatic repeat request (ARQ) and multipath routing. Although these techniques improve communication reliability, they compromise communication efficiency: in the former scheme, the transmitter retransmits the whole packet even though part of the packet has been received correctly, and in the latter, the receiver may receive the same information over multiple paths; thus both techniques are bandwidth- and energy-inefficient. In addition, with retransmission, the overall end-to-end delay may exceed the maximum allowable delay budget.
Based on the aforementioned observations, we identify the CH-to-CH channel as one of the bottlenecks to reliable communication in cluster-based multi-hop M2M networks and present a complete solution built on fountain-coded cooperative communications. Our solution covers many aspects, from relay selection to cooperative formation, to meet the user’s QoS requirements. In the first part of the thesis, we design a rateless-coded incremental relay selection (RCIRS) algorithm based on greedy techniques to guarantee the required data rate at minimum cost. We then develop fountain-coded cooperative communication protocols to facilitate data transmission between two neighboring CHs. In the second part, we propose joint network and fountain coding schemes for reliable communication. By coupling channel coding and network coding in the physical layer, joint network and fountain coding schemes efficiently exploit the redundancy of both codes and effectively combat the detrimental effects of fading in wireless channels. In the proposed scheme, after correctly decoding the information from different sources, a relay node applies network and fountain coding to the received signals and then transmits to the destination in a single transmission. The proposed schemes therefore exploit both diversity and coding gain to improve system performance. In the third part, we focus on reliable uplink transmission between the CHs and the BS, where CHs transmit to the BS either directly or with the help of LTE-A relay nodes (RNs). We investigate both type-I and type-II relay-enhanced LTE-A networks and propose a set of joint network and fountain coding schemes to enhance link robustness.
Finally, the proposed solutions are evaluated through extensive numerical simulations, and the numerical results are presented in comparison with related works from the literature.
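To make the rateless/fountain-coding intuition above concrete, here is a toy LT-style code over integer blocks: each encoded symbol is the XOR of a random subset of source blocks, and a peeling decoder recovers the data from any sufficiently large set of received symbols, with no per-packet retransmission. The uniform degree distribution is a simplification (practical LT codes use a robust soliton distribution), and this sketch is not the thesis's actual scheme.

```python
# Sketch: toy LT-style fountain code with a peeling decoder.
import random

def lt_encode(blocks, seed):
    """One encoded symbol: XOR of a (seed-determined) random subset of blocks."""
    rng = random.Random(seed)
    deg = rng.randint(1, len(blocks))   # uniform degree (simplification)
    idx = set(rng.sample(range(len(blocks)), deg))
    val = 0
    for i in idx:
        val ^= blocks[i]
    return idx, val

def lt_decode(n_blocks, received):
    """Peeling decoder: resolve degree-1 symbols, substitute, repeat."""
    syms = [[set(idx), val] for idx, val in received]   # mutable working copies
    out = [None] * n_blocks
    progress = True
    while progress and any(b is None for b in out):
        progress = False
        for idx, val in syms:            # a degree-1 symbol IS a source block
            if len(idx) == 1:
                i = next(iter(idx))
                if out[i] is None:
                    out[i] = val
                    progress = True
        for s in syms:                   # peel recovered blocks out of the rest
            for i in list(s[0]):
                if out[i] is not None and len(s[0]) > 1:
                    s[0].discard(i)
                    s[1] ^= out[i]
    return out

# Receiver keeps collecting symbols (any symbols) until decoding succeeds:
# no retransmission of a specific lost packet is ever requested.
blocks = [0x12, 0x34, 0x56, 0x78]
rx, decoded = [], []
for seed in range(200):
    rx.append(lt_encode(blocks, seed))
    decoded = lt_decode(len(blocks), rx)
    if all(b is not None for b in decoded):
        break
print(decoded)   # the original blocks, recovered
```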
Inter-cell interference mitigation in LTE-advanced heterogeneous mobile networks
Heterogeneous networks are one of the most effective solutions for enhancing the network performance of mobile systems, by deploying small cells within the coverage of ordinary macro cells. The goals of deploying such networks are to offload data from possibly congested macro cells towards the small cells and to enhance outdoor/indoor coverage in a cost-effective way. Moreover, heterogeneous networks aim to maximise system capacity and to reduce interference by shortening the distance between transmitter and receiver. However, inter-cell interference is a major technical challenge in heterogeneous networks; it directly affects system performance and may cause a significant degradation in network throughput (especially for edge users) in co-channel deployments. To overcome this problem, researchers and telecommunication operators need to develop effective approaches that adapt to different mobile system scenarios. The research presented in this thesis provides a novel interference mitigation scheme, based on power control and time-domain inter-cell interference coordination, to improve cell and user throughputs. In addition, powerful scheduling algorithms have been developed and optimised to adapt the proposed scheme for both macro and small cells; these algorithms are responsible for the optimal resource allocation that keeps inter-cell interference to a minimum. The focus of this work is downlink inter-cell interference in Long Term Evolution-Advanced (LTE-Advanced) mobile networks, as an example of OFDMA (orthogonal frequency division multiple access)-based networks. Particular attention is paid to the pico cell as an important cell type in heterogeneous deployments, due to its direct backhaul to the macro cell, which allows resource allocation to be coordinated among cells tightly and efficiently.
The intensive simulations and result analyses show that the proposed scheme delivers better performance with lower complexity, in terms of user and cell throughputs and spectral efficiency, compared with previously employed schemes.
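The benefit of time-domain coordination for a pico-cell edge user can be sketched with back-of-the-envelope SINR arithmetic: muting the dominant macro interferer during almost-blank subframes (ABS) lifts the edge user's SINR. All power figures below are illustrative toy values, not simulation results from the thesis.

```python
# Sketch: SINR of a pico edge user with and without macro ABS muting.
# Power values are toy numbers in linear scale (arbitrary units).
import math

def db(x):
    """Linear ratio to decibels."""
    return 10 * math.log10(x)

p_pico  = 1.0     # serving pico signal at the edge user
p_macro = 8.0     # dominant macro interference (stronger than the serving cell)
noise   = 0.05

sinr_normal = p_pico / (p_macro + noise)          # macro transmitting normally
sinr_abs    = p_pico / (0.05 * p_macro + noise)   # macro ~muted in ABS
                                                  # (5% residual, e.g. control signals)

print(db(sinr_normal))   # about -9 dB: edge user barely decodable
print(db(sinr_abs))      # about +3.5 dB after muting the macro
```

The roughly 12.5 dB gain here is why edge users are scheduled by the pico cell preferentially during the macro's blank subframes.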