8 research outputs found

    5G Wireless Communication Network Architecture and Its Key Enabling Technologies

    Wireless mobile communication systems have developed from the second generation (2G) through to the current fourth generation (4G), transforming from a simple telephony system into a network transporting rich multimedia content, including video conferencing, 3-D gaming, and in-flight broadband connectivity (IFBC), where airline crew use augmented reality headsets to address passengers personally. However, many challenges remain beyond the capabilities of 4G, as the demand for higher data rates, lower latency, and greater mobility from new wireless applications soars, leading to mixed content-centric communication services. The fifth generation (5G) wireless system has thus been suggested, and research is ongoing for its deployment beyond 2020. In this article, we investigate the various challenges of 4G and propose an indoor/outdoor segregated cellular architecture with a cloud-based Radio Access Network (C-RAN) for 5G. We review some of the key emerging wireless technologies needed to meet the new demands of users, including massive multiple-input multiple-output (mMIMO) systems, Device-to-Device (D2D) communication, Visible Light Communication (VLC), ultra-dense networks, spatial modulation, and millimeter-wave technology. We also show how the benefits of these emerging technologies can be optimized using Software Defined Networking/Network Functions Virtualization (SDN/NFV) as a tool in the C-RAN. We conclude that the new 5G wireless architecture will derive its strength from leveraging the benefits of the emerging hardware technologies, managed by reconfigurable SDN/NFV via the C-RAN. This work will be of immense help to those engaged in further research and to network operators in the search for a smooth evolution of current state-of-the-art networks toward 5G.

    Radio resource allocation for overlay D2D-based vehicular communications in future wireless networks

    Next-generation cellular networks are envisioned to enable widespread Device-to-Device (D2D) communication. For many applications in the D2D domain, deterministic communication latency and high reliability are of exceptionally high importance. The proximity service provided by D2D communication is a promising feature that can fulfil the reliability and latency requirements of emerging vertical applications.
One of the prominent vertical applications is vehicular communication, in which vehicles disseminate safety messages directly through D2D communication, helping to reduce traffic accidents and the resulting fatalities. Radio resource allocation techniques in D2D communication, through which valuable radio resources are allocated more efficiently, have recently gained much attention in industry and academia. In addition to resource allocation, energy sustainability is highly important and is usually considered in conjunction with the resource allocation approach. This dissertation is dedicated to studying different avenues of radio resource allocation and energy efficiency techniques in Long Term Evolution (LTE) and New Radio (NR) Vehicle-to-Everything (V2X) communications. In the following, we briefly describe the core ideas of this study. D2D applications are mostly characterized by relatively small traffic payloads, and in LTE, due to the coarse granularity of the subframe, the radio resources cannot be utilized efficiently. In particular, with semi-persistent scheduling, where a radio resource is scheduled for a longer time in overlay D2D, the radio resources are underutilized for such applications. To address this problem, a hierarchical radio resource management scheme, a so-called sub-granting scheme, is proposed, by which nearby cellular users, so-called beneficiary users, are allowed to reuse the unused radio resources indicated by sub-granting signaling. The proposed scheme is evaluated and compared with a shortened Transmission Time Interval (TTI) scheme in terms of cell throughput. Then, the beneficiary user selection problem is investigated and cast as a maximization problem of uplink cell throughput subject to reliability and latency requirements.
A heuristic centralized algorithm, Dedicated Sub-Granting Radio Resource (DSGRR), is proposed to address the original beneficiary user selection problem. The simulation results and analysis show the superiority of the proposed DSGRR algorithm over random beneficiary user selection in terms of cell throughput in a scenario with stationary users. Further, the beneficiary user selection problem is investigated in a dynamic scenario where all users are moving. We evaluate the sub-granting signaling overhead due to mobility in the DSGRR, and then a distributed heuristic algorithm, Open Sub-Granting Radio Resource (OSGRR), is proposed and compared with both the DSGRR algorithm and the no-sub-granting case. Simulation results show improved cell throughput for the OSGRR compared with the other algorithms. Besides, it is observed that the overhead incurred by the OSGRR is less than that of the DSGRR, while the achieved cell throughput remains close to the maximum achievable uplink cell throughput. Also, joint resource allocation and energy efficiency in autonomous resource selection in NR Mode 2 is examined. The autonomous resource selection is formulated as a ratio of sum-rate to energy consumption. The objective is to minimize the power consumption of battery-operated users subject to reliability and latency requirements. A heuristic algorithm, Density of Traffic-based Resource Allocation (DeTRA), is proposed to solve the problem. The proposed algorithm splits the resource pool based on the traffic density per traffic type. Random selection is then mandated to be performed on the dedicated resource pool upon arrival of aperiodic traffic. The simulation results show that the proposed algorithm achieves the same packet reception ratio (PRR) as the sensing-based algorithm.
In addition, per-user power consumption is reduced and, consequently, energy efficiency is improved by applying the DeTRA algorithm. The research in this study leverages radio resource allocation techniques in LTE-based D2D communications so that radio resources are utilized more efficiently. In addition, the conducted research paves the way for further study of how power-saving users can optimally select radio resources with minimum energy consumption in NR V2X communications.
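
    The core of the DeTRA approach described above — splitting the resource pool in proportion to the per-type traffic density, and restricting random autonomous selection to the dedicated sub-pool — can be sketched as follows. The pool size, traffic-type names, and density values are illustrative assumptions, not taken from the dissertation:

```python
import random

def split_resource_pool(n_resources, traffic_density):
    """Split a pool of radio resources into per-traffic-type sub-pools,
    sized proportionally to each type's relative traffic density
    (a DeTRA-style split; names and densities are illustrative)."""
    total = sum(traffic_density.values())
    pools, start = {}, 0
    items = sorted(traffic_density.items())
    for i, (ttype, density) in enumerate(items):
        # The last sub-pool absorbs the rounding remainder,
        # so every resource is assigned to exactly one sub-pool.
        if i == len(items) - 1:
            size = n_resources - start
        else:
            size = round(n_resources * density / total)
        pools[ttype] = list(range(start, start + size))
        start += size
    return pools

def select_resource(pools, ttype, rng=random):
    """Random autonomous selection, mandated to stay inside the
    sub-pool dedicated to the arriving traffic type."""
    return rng.choice(pools[ttype])

pools = split_resource_pool(100, {"aperiodic": 1.0, "periodic": 3.0})
# Periodic traffic is three times denser, so it gets 75 of 100 resources.
assert len(pools["periodic"]) == 75 and len(pools["aperiodic"]) == 25
assert select_resource(pools, "aperiodic") in pools["aperiodic"]
```

The point of the split is that a burst of aperiodic traffic can only collide with other aperiodic transmissions, which is what lets the scheme match sensing-based PRR while avoiding the power cost of continuous sensing.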

    Resource management in future mobile networks: from millimetre-wave backhauls to airborne access networks

    The next generation of mobile networks will connect vast numbers of devices and support services with diverse requirements. Enabling technologies such as millimetre-wave (mm-wave) backhauling and network slicing allow for increased wireless capacities and logical partitioning of physical deployments, yet introduce a number of challenges. These include, among others, the precise and rapid allocation of network resources among applications, elucidating the interactions between new mobile networking technology and widely used protocols, and the agile control of mobile infrastructure, to provide users with reliable wireless connectivity in extreme scenarios. This thesis presents several original contributions that address these challenges. In particular, I will first describe the design and evaluation of an airtime allocation and scheduling mechanism devised specifically for mm-wave backhauls, explicitly addressing inter-flow fairness and capturing the unique characteristics of mm-wave communications. Simulation results will demonstrate 5x throughput gains and a 5-fold improvement in fairness over recent mm-wave scheduling solutions. Second, I will introduce a utility optimisation framework targeting virtually sliced mm-wave backhauls that are shared by a number of applications with distinct requirements. Based on this framework, I will present a deep learning solution that can be trained within minutes, following which it computes rate allocations that match those obtained with state-of-the-art global optimisation algorithms. The proposed solution outperforms a baseline greedy approach by up to 62% in terms of network utility, while running orders of magnitude faster. Third, the thesis investigates the behaviour of the Transmission Control Protocol (TCP) in Long-Term Evolution (LTE) networks and discusses the implications of employing Radio Link Control (RLC) acknowledgements under different link qualities on the performance of transport protocols.
Fourth, I will introduce a reinforcement learning approach to optimising the performance of airborne cellular networks serving users in emergency settings, demonstrating rapid convergence (approx. 2.5 hours on a desktop machine) and a 5 dB improvement in the median Signal-to-Interference-plus-Noise Ratio (SINR) perceived by users over a heuristic-based benchmark solution. Finally, the thesis discusses promising future research directions that follow from the results obtained throughout this PhD project.
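
    The thesis's utility-optimisation framework and its deep-learning solver are beyond the scope of a snippet, but the underlying objective — dividing a shared backhaul's capacity among slices to maximise a sum of weighted logarithmic utilities — has a simple closed form when there is a single capacity constraint. A minimal sketch of that closed form follows; the slice names, weights, and capacity are illustrative assumptions, not values from the thesis:

```python
def log_utility_allocation(capacity, weights):
    """Maximise sum_i w_i * log(x_i) subject to sum_i x_i <= capacity.
    By the KKT conditions, the optimum is the weighted proportional
    split x_i = capacity * w_i / sum_j w_j (slices/weights illustrative)."""
    total = sum(weights.values())
    return {slice_name: capacity * w / total for slice_name, w in weights.items()}

# Two hypothetical slices sharing a 10 (arbitrary units) backhaul.
alloc = log_utility_allocation(10.0, {"eMBB": 3.0, "URLLC": 1.0})
assert abs(alloc["eMBB"] - 7.5) < 1e-9 and abs(alloc["URLLC"] - 2.5) < 1e-9
```

With multiple link-level constraints (as in a multi-hop mm-wave backhaul), no such closed form exists, which is what motivates the global optimisation and learned approximations studied in the thesis.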

    Power-Domain NOMA for 5G Networks and Beyond

    Thesis in English, 268 p. -- Thesis in Basque, 274 p.
    During the last decade, the amount of data carried over wireless networks has grown exponentially. Several reasons have led to this situation, but the most influential ones are the massive deployment of devices connected to the network and the constant evolution of the services offered. In this context, 5G targets the correct implementation of every application integrated into its use cases. Nevertheless, the biggest challenge in making the ITU-R-defined use cases (eMBB, URLLC and mMTC) a reality is the improvement of spectral efficiency. Therefore, in this thesis, a combination of two mechanisms is proposed to improve spectral efficiency: Non-Orthogonal Multiple Access (NOMA) techniques and Radio Resource Management (RRM) schemes. Specifically, NOMA transmits several layered data flows at once, so that the whole bandwidth is used for the entire time to deliver more than one service simultaneously. RRM schemes then provide efficient management and distribution of radio resources among network users. Although NOMA techniques and RRM schemes can be very advantageous in all use cases, this thesis focuses on making contributions in eMBB and URLLC environments and on proposing solutions for communications expected to be relevant in 6G.
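
    A toy two-user power-domain NOMA downlink illustrates how the superposed layers share the whole band: the far (weak) user decodes its own layer while treating the near user's layer as noise, and the near (strong) user first cancels the far user's layer via successive interference cancellation (SIC) before decoding its own. The transmit power, power-split fraction, and channel gains below are illustrative linear values, not from the thesis:

```python
import math

def noma_rates(p_tx, a_near, g_near, g_far, noise=1.0):
    """Achievable downlink rates (bit/s/Hz) for two-user power-domain NOMA.
    a_near: fraction of p_tx given to the near user; the far user gets the rest.
    All powers/gains are illustrative linear-scale values."""
    a_far = 1.0 - a_near
    # Far user: decodes its own layer, treating the near user's layer as noise.
    r_far = math.log2(1 + a_far * p_tx * g_far / (a_near * p_tx * g_far + noise))
    # Near user: removes the far user's layer via SIC, then decodes interference-free.
    r_near = math.log2(1 + a_near * p_tx * g_near / noise)
    return r_near, r_far

# More power to the far user (a_near = 0.2), as is typical in power-domain NOMA.
r_near, r_far = noma_rates(p_tx=10.0, a_near=0.2, g_near=4.0, g_far=0.5)
assert r_near > r_far  # the stronger channel still yields the higher rate
```

The power split is the RRM knob: allocating most power to the weak user keeps its rate acceptable, while the strong user still benefits from its clean post-SIC channel, which is the spectral-efficiency gain over orthogonal access.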

    Developing and Utilizing Multivariate Stochastic Wireless Channel Models

    Title from PDF of title page, viewed June 20, 2019. Dissertation advisor: Cory Beard. Vita. Includes bibliographical references (pages 84-92). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2019.
    Developing accurate channel models is paramount in designing efficient mobile communication systems. The focus of this dissertation is to understand small-scale fading characteristics, develop mathematical tools that accurately capture these characteristics, and utilize them in three different applications: diversity receivers, scheduling, and packet duplication in dual connectivity scenarios. This dissertation develops multivariate stochastic models for Rayleigh fading channels that incorporate factors such as the velocity of the users, the angle-of-arrival distribution of signals, and the carrier frequency. The developed models are more comprehensive than existing ones: they capture the correlation characteristics of signals more accurately and are applicable to more practical scenarios. The developed models are the only ones that incorporate the spatial correlation structure suggested by 3GPP. The models are used to derive analytical expressions for the output SNR of certain diversity receivers. Owing to our expressions, the output SNR performance of these receivers can now be studied through its moments. The moments provide insight into the nature of these receivers' output SNR distribution, which is very useful in their reliability analysis. Secondly, the models are used to capture the temporal evolution of the received SNR. Temporal correlation characteristics of the SNR are exploited to decrease the number of variables in the downlink scheduling problem. This is achieved by making scheduling decisions less frequently for users with relatively higher coherence time.
The results illustrate that the number of operations it takes to make scheduling decisions can be reduced by 33% with a confidence probability of 0.7 and by 58% with a confidence probability of 0.4. Finally, fade duration and non-fade duration characteristics of a Rayleigh fading channel are used to partially and randomly duplicate some packets when connected to multiple base stations. This is performed based on the small-scale fading statistics rather than the large-scale fading. Duplication based on large time scales can be wasteful and unnecessary, so it is shown, using matrix exponential distributions, how to duplicate with low complexity only when necessary. The results indicate that up to 50% of the resources at the duplicating base station can be liberated whilst meeting the target reliability measure.
Contents: Introduction -- Moments of the Quadrivariate Rayleigh Distribution with applications for diversity receivers -- Reducing computational time of a wireless resource scheduler by exploiting temporal channel characteristics -- Partial packet duplication in 5G: control of fade and non-fade duration outages using matrix exponential distributions -- Conclusion and future work -- Appendix A. Derivation of the PDF of the Quadrivariate Rayleigh Distribution -- Appendix B. Derivation of the CDF of the Quadrivariate Rayleigh Distribution -- Appendix C. Derivation of the MGF of the Quadrivariate Rayleigh Distribution -- Appendix D. Derivation of the bivariate SNR density -- Appendix E. Derivation of the trivariate distribution density of Rayleigh random variables
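
    The idea of re-running the scheduler less often for users whose channels stay coherent longer can be sketched with the standard rule-of-thumb coherence-time estimate T_c ≈ 9/(16·π·f_D), where f_D is the maximum Doppler shift. This is a generic textbook approximation, not the dissertation's multivariate correlation model; velocities, carrier frequency, and slot length are illustrative:

```python
import math

C = 3e8  # speed of light, m/s

def coherence_time(velocity_mps, carrier_hz):
    """Rule-of-thumb 50%-correlation coherence time: T_c ~ 9 / (16*pi*f_D),
    with Doppler shift f_D = v * f_c / c."""
    f_doppler = velocity_mps * carrier_hz / C
    return 9.0 / (16.0 * math.pi * f_doppler)

def scheduling_period_slots(velocity_mps, carrier_hz, slot_s=1e-3):
    """Refresh a user's scheduling decision once per coherence interval,
    but at least once per slot; slower users need far fewer decisions."""
    return max(1, int(coherence_time(velocity_mps, carrier_hz) / slot_s))

# A pedestrian's channel (1.5 m/s at 2 GHz) stays coherent for many slots,
# while a vehicular user (30 m/s) must be rescheduled every slot.
assert scheduling_period_slots(1.5, 2e9) > scheduling_period_slots(30.0, 2e9)
```

Skipping redundant decisions for the slow users is what removes variables from the downlink scheduling problem, at the cost of a confidence probability that the channel really did stay coherent between decisions.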

    Cell-Free Massive MIMO: Challenges and Promising Solutions

    Along with their primary mission of fulfilling the communication needs of humans as well as intelligent machines, fifth generation (5G) and beyond networks will be a fundamental component of all parts of life, society, and industry. These networks will pave the way towards realizing individuals' technological aspirations, including holographic telepresence, e-Health, pervasive connectivity in smart environments, massive robotics, three-dimensional unmanned mobility, augmented reality, virtual reality, and the Internet of Everything. This new era of applications places unprecedented and challenging demands on wireless networks, such as high spectral efficiency, low latency, highly reliable communication, and high energy efficiency. One of the major technological breakthroughs that has recently drawn the attention of researchers from academia and industry to cope with these unprecedented demands is the cell-free (CF) massive multiple-input multiple-output (mMIMO) system. In CF mMIMO, a large number of spatially distributed access points (APs) are connected to a central processing unit (CPU). The CPU operates all APs as a single mMIMO network with no cell boundaries to serve a smaller number of users coherently on the same time-frequency resources. The system has shown substantial gains in network performance from different perspectives, especially for cell-edge users, compared to other candidate technologies for 5G networks, i.e., co-located mMIMO and small-cell (SC) systems. Nevertheless, the full picture of a practical, scalable deployment of the system is not yet clear. In this thesis, we provide in-depth investigations of CF mMIMO performance under various practical system considerations. We also provide promising solutions to fully realize the potential of CF mMIMO in practical scenarios.
In this regard, we focus on three vital practical challenges, namely hardware and channel impairments, malicious attacks, and limited-capacity fronthaul networks. Regarding hardware and channel impairments, we analyze CF mMIMO performance under such practical considerations and compare it with SC systems. In doing so, we consider that both APs and user equipments (UEs) are equipped with non-ideal hardware components. We also consider the Doppler shift effect as a source of channel impairment in dynamic environments with moving users. We then derive novel closed-form expressions for the downlink (DL) spectral efficiency of both systems under hardware distortions and the Doppler shift effect. We reveal that the effect of non-ideal UEs is more prominent than that of non-ideal APs. Also, while increasing the number of deployed non-ideal APs can limit the hardware distortion effect in CF mMIMO systems, it leads to an extra performance loss in SC systems. Besides, we show that the Doppler shift effect is harsher in SC systems. In addition, while SC operation is more suitable for low-velocity users, it is more beneficial to adopt the CF mMIMO system under high-mobility conditions. Capitalizing on the latter, we propose a hybrid CF mMIMO/SC system that can significantly outperform both CF mMIMO and SC systems by simultaneously providing high data rates under different mobility conditions. Towards further improving CF mMIMO performance in high-mobility scenarios, we propose a novel framework to limit the performance degradation due to the Doppler shift effect. To this end, we derive novel expressions for tight lower bounds on the average DL and uplink (UL) data rates. Capitalizing on the derived analytical results, we provide an analytical framework that optimizes the frame length to minimize the Doppler shift effect on the DL and UL data rates according to a given criterion.
Our results reveal that the optimal frame lengths for maximizing the DL and UL data rates are different and depend mainly on the users' velocities. Besides, adapting the frame length to the velocity conditions significantly limits the Doppler shift effect compared to applying a fixed frame length. To empower CF mMIMO systems with secure transmission against malicious attacks, we propose two different approaches that significantly increase the achievable secrecy rates. In the first approach, we introduce a novel secure DL transmission technique that efficiently limits the eavesdropper's (Eve's) capability to decode the signals transmitted to legitimate users. In the second approach, by contrast, we exploit the distinctive features of reconfigurable intelligent surfaces (RISs) to limit the information leakage towards Eve. Regarding the impact of the limited capacity of wired fronthaul links, we derive the achievable DL data rates under two different CF mMIMO system operations, namely distributed and centralized operation, in which the APs and the CPU, respectively, are responsible for carrying out the signal processing functionalities. We show that the impact of limited-capacity fronthaul links is more prominent for centralized operation. In addition, while distributed operation is preferable at low fronthaul capacities, the centralized counterpart attains superior performance at high fronthaul capacities. Furthermore, considering both distributed and centralized operation, and towards a practical and scalable operation of CF mMIMO systems, we propose a wireless fronthaul network for CF mMIMO systems under three different operations, namely microwave-based, mmWave-based, and hybrid mmWave/microwave.
Our results show that integrating the centralized operation with the hybrid fronthaul network provides the highest DL data rates when the APs are equipped with signal decoding capabilities, whereas integrating the distributed operation with the microwave-based fronthaul network performs best when the APs lack decoding capabilities.
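
    The cell-edge gain of cell-free operation can be illustrated with a toy geometry: when all APs serve a user coherently, their signal amplitudes add before squaring, whereas a small-cell user relies on the nearest AP alone. The path-loss exponent, AP positions, transmit power, and noise power below are illustrative assumptions (no shadowing or fast fading), not the thesis's channel model:

```python
import math

def path_gain(d_m, exponent=3.7):
    """Toy distance-based channel gain (illustrative path-loss exponent)."""
    return d_m ** (-exponent)

def cell_free_snr(user, aps, p_ap=1.0, noise=1e-13):
    """Coherent joint transmission from all APs: amplitudes add, then square."""
    amp = sum(math.sqrt(p_ap * path_gain(math.dist(user, ap))) for ap in aps)
    return amp ** 2 / noise

def small_cell_snr(user, aps, p_ap=1.0, noise=1e-13):
    """Small-cell operation: only the closest AP serves the user."""
    nearest = min(math.dist(user, ap) for ap in aps)
    return p_ap * path_gain(nearest) / noise

# Four APs on the corners of a 100 m square; the user sits at the exact
# cell edge, equidistant from all of them.
aps = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
edge_user = (50.0, 50.0)
assert cell_free_snr(edge_user, aps) > small_cell_snr(edge_user, aps)
```

For an equidistant edge user, N coherently combining APs give an N^2-fold SNR gain over the single nearest AP in this toy model, which is the intuition behind the cell-edge improvements the thesis reports.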

    Performance of Massive MIMO with Interference Decoding

    In a massive MIMO system, base stations (BSs) utilize a large number of antennas to simultaneously serve several (single- or multi-antenna) users, where the number of BS antennas is normally assumed to be significantly larger than the number of users. In massive MIMO systems operating in time division duplex (TDD) mode, the channel state information (CSI) is estimated via uplink pilot sequences that are orthogonal within a cell but re-used in other cells. Re-using the pilots, however, contaminates the CSI estimate at the BSs with the channels of the users sharing the same pilot in other cells, causing pilot contamination, which creates coherent interference that, as the number of BS antennas grows, scales at the same rate as the desired signal. Hence, in the asymptotic limit of a large number of antennas, the effects of the non-coherent interference terms and noise disappear, but the pilot contamination interference remains. A common technique in the literature to deal with this interference is to treat it as noise (TIN). When using TIN, the users' throughput converges to a constant, and thus the benefits of using an ever greater number of BS antennas saturate. However, it is known that TIN in interference networks is only preferred in the weak interference regime, and it is sub-optimal in other regimes (e.g., moderate or strong interference). In this thesis, we show that as the number of BS antennas increases, the pilot contamination interference is no longer weak, and it is therefore beneficial to treat it differently (e.g., decode it jointly with the desired signal) to improve the users' throughput. In the first part of the thesis, we study the performance of interference decoding schemes based on simultaneous unique decoding (SD) and simultaneous non-unique decoding (SND), and show that by doing so the rate saturation effect is eliminated as the number of antennas increases; hence, the per-user rates grow unbounded.
We analytically study the performance of two well-known linear combining/precoding methods, namely MRC/MRT and ZF, for spatially correlated/uncorrelated Rayleigh fading channel models, and obtain closed-form lower bounds on the rates using a worst-case uncorrelated noise technique for multi-user channels. We compare the performance of the different interference management schemes, TIN/SD/SND, based on the maximum symmetric rate they can offer to the users. Specifically, we first obtain structural results for a symmetric two-cell setting as well as the high-SINR regime, which provide insights into the benefits of interference decoding schemes in different regimes of the number of BS antennas. We numerically illustrate the performance of the different schemes and show that, with a practical number of antennas, SND strictly outperforms TIN. This gain improves as the number of antennas increases, and ZF performs significantly better than MRC/MRT due to better mitigation of multi-user interference. Furthermore, we study the performance of regularized ZF (RZF) via Monte Carlo simulations and observe that it achieves better rates than ZF only for a moderately small number of antennas. Lastly, we numerically investigate the impact on system performance of increasing the number of cells, the cell radius, the number of users, the correlation of the channel across antennas, and the degree of shadow fading. In the second part of the thesis, we study the performance of partial interference decoding based on rate splitting (RS) and non-unique decoding. Specifically, we propose to partition each user's message into two independent layers and to partially decode the pilot contamination interference, while treating the remaining part as noise based on a power splitting strategy.
In particular, for a two-cell system, we investigate the benefits of an RS scheme based on the celebrated Han-Kobayashi (HK) region, which provides the best known achievable performance for the two-user interference channel (IC). For more than two cells, we propose a generalized RS scheme that non-uniquely decodes each layer of the pilot contamination interference and uses only one power splitting coefficient per IC. In addition, we establish an achievable region for this generalized RS scheme using the non-unique decoding technique. In both the two-cell and multi-cell cases, and for a practical number of antennas, we study the performance of the proposed RS schemes by numerically optimizing the power splitting coefficients, and show that they achieve significantly higher rates than TIN/SD/SND in all scenarios. As in the first part of the thesis, we also numerically examine the impact of the number of cells, the cell radius, the number of users, the channel correlation across antennas, and the degree of shadow fading on the performance of the RS schemes. Lastly, our simulation results reveal that replacing the numerically optimized power splitting coefficients with their pre-computed average values (over a large number of realizations) incurs a negligible performance loss, thus reducing the optimization complexity.
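
    The saturation argument behind the thesis can be sketched numerically: with coherent combining, the desired signal and the pilot-contamination interference both scale as M^2 in power (M = number of BS antennas), while non-coherent interference and noise scale only as M. Treating contamination as noise therefore caps the SINR at beta_d^2 / beta_i^2, whereas removing the contaminating signal after decoding leaves a rate that keeps growing with M. This is a simplified scaling model with illustrative large-scale gains, not the thesis's exact rate expressions:

```python
import math

def tin_rate(m_antennas, beta_d, beta_i, nc_power=1.0):
    """Per-user rate when pilot-contamination interference is treated as noise.
    beta_d / beta_i: large-scale gains of the desired / contaminating channels.
    Coherent terms scale as M^2, non-coherent terms as M, so the SINR
    saturates at beta_d^2 / beta_i^2 as M grows."""
    sinr = (m_antennas**2 * beta_d**2) / (m_antennas**2 * beta_i**2
                                          + m_antennas * nc_power)
    return math.log2(1 + sinr)

def decode_and_cancel_rate(m_antennas, beta_d, nc_power=1.0):
    """Idealised best case once the contaminating signal has been decoded and
    removed (an upper bound on SD/SND-style schemes, which must also satisfy
    the decoding constraints of the interfering message): only non-coherent
    interference and noise remain, so the rate grows without bound."""
    sinr = (m_antennas**2 * beta_d**2) / (m_antennas * nc_power)
    return math.log2(1 + sinr)

# With beta_d = 1 and beta_i = 0.5 the TIN rate saturates near log2(1 + 4),
# while cancelling the decoded interference keeps improving with M.
assert tin_rate(10_000, 1.0, 0.5) < math.log2(1 + 4) + 0.01
assert decode_and_cancel_rate(10_000, 1.0) > tin_rate(10_000, 1.0, 0.5)
```

The crossover in the toy model mirrors the thesis's observation: once M is large, the contamination is no longer "weak" relative to noise, which is precisely the regime where treating interference as noise is known to be sub-optimal.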