19 research outputs found

    Fast and Efficient Radio Resource Allocation in Dynamic Ultra-Dense Heterogeneous Networks

    Ultra-dense networks (UDNs) are considered a promising technology for 5G wireless networks. In a UDN, dynamic traffic patterns can lead to high computational complexity and excessive communication overhead under traditional resource allocation schemes. This paper presents a new resource allocation scheme with low computational overhead and a low subband handoff rate for dynamic ultra-dense heterogeneous networks. The scheme first defines a new interference estimation method that constructs a network interference state map, based on which a radio resource allocation scheme is proposed. The resource allocation problem is formulated as a MAX-K cut problem and solved through a graph-theoretical approach. System-level simulations reveal that the proposed scheme decreases the subband handoff rate by 30% with less than 3.2% network throughput degradation.
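    The abstract does not reproduce the algorithm itself, so the following is only a minimal sketch, under assumed data structures, of how a MAX-K cut formulation of subband assignment could be solved greedily: cells are graph vertices, edge weights are pairwise interference estimates taken from an interference state map, and each cell is placed on the subband that pushes the most interference weight onto other subbands (i.e., into the cut). The function name, the dictionary-based interference map, and the example values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative greedy MAX-K cut for subband allocation (not the authors' algorithm).
# Vertices are cells; edge weights are pairwise interference estimates from an
# interference state map; k is the number of available subbands.

def greedy_max_k_cut(interference, num_cells, k):
    """Assign each cell to one of k subbands, greedily maximizing the total
    interference weight between cells placed on *different* subbands."""
    assignment = {}
    for cell in range(num_cells):
        # Interference this cell would share with cells already on each subband.
        load = [0.0] * k
        for other, subband in assignment.items():
            load[subband] += interference.get((min(cell, other), max(cell, other)), 0.0)
        # Choosing the subband with the least co-channel interference is
        # equivalent to maximizing the weight of edges cut.
        assignment[cell] = min(range(k), key=lambda s: load[s])
    return assignment

# Example: 4 cells, 2 subbands; heavy mutual interference between cells 0-1 and 2-3.
interference_map = {(0, 1): 5.0, (2, 3): 4.0, (0, 2): 0.5, (1, 3): 0.3}
print(greedy_max_k_cut(interference_map, num_cells=4, k=2))
```

    In this toy example the two strongly interfering pairs end up on different subbands, which is the behaviour a MAX-K cut objective rewards.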

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks.
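    The survey itself is narrative, but as a toy illustration of the reinforcement-learning branch it covers, the sketch below runs a stateless (bandit-style) Q-learning loop for channel selection. The environment, idle probabilities and parameters are invented for the example and are not taken from the article.

```python
import random

# Toy Q-learning for channel selection: each step the agent picks one of N
# channels and receives reward 1 if the channel is idle, 0 otherwise.
# Purely illustrative; not drawn from the surveyed literature.

NUM_CHANNELS = 4
IDLE_PROB = [0.9, 0.2, 0.6, 0.4]        # hypothetical idle probabilities
q = [0.0] * NUM_CHANNELS                 # single-state Q-table
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.randrange(NUM_CHANNELS)
    else:
        a = max(range(NUM_CHANNELS), key=lambda c: q[c])
    reward = 1.0 if random.random() < IDLE_PROB[a] else 0.0
    q[a] += alpha * (reward - q[a])      # stateless (bandit-style) update

print("Learned channel values:", [round(v, 2) for v in q])
```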

    Decentralized learning based indoor interference mitigation for 5G-and-beyond systems

    Due to the explosive growth of data traffic and poor indoor coverage, the ultra-dense network (UDN) has been introduced as a fundamental architectural technology for 5G-and-beyond systems. As telecom operators shift to a plug-and-play paradigm in mobile networks, network planning and optimization become difficult and costly, especially for residential small-cell base station (SBS) deployments. Under these circumstances, severe inter-cell interference (ICI) becomes inevitable, so interference mitigation is of vital importance for indoor coverage in mobile communication systems. In this paper, we propose a fully distributed self-learning interference mitigation (SLIM) scheme for autonomous networks under a model-free multi-agent reinforcement learning (MARL) framework. In SLIM, individual SBSs autonomously perceive surrounding interference and determine their downlink transmit power without any signaling interaction between SBSs. To tackle the curse of dimensionality of the joint action space in the MARL model, we employ mean field theory to approximate the action-value function, which greatly decreases the computational complexity. Simulation results based on the 3GPP dual-stripe urban model demonstrate that SLIM outperforms several existing interference coordination schemes in mitigating interference and reducing power consumption while guaranteeing UEs' quality of service in autonomous UDNs.
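    The paper's SLIM algorithm is not reproduced here. As a hedged sketch of the general mean-field MARL idea the abstract mentions, each base station can replace the joint action of all other agents with the mean action of its neighbours, so its Q-function depends only on its own action and that mean. The state abstraction, reward source, power levels and parameters below are assumptions for illustration only.

```python
import random
from collections import defaultdict

# Illustrative mean-field Q-learning update for distributed downlink power
# control (a generic sketch of the technique, not the paper's SLIM scheme).

POWER_LEVELS = [0.1, 0.5, 1.0]   # hypothetical transmit-power actions (W)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

class MeanFieldAgent:
    def __init__(self):
        # Q is indexed by (own action, discretized mean neighbour action),
        # which stands in for the full joint action of all other agents.
        self.q = defaultdict(float)

    def act(self, mean_neighbour_power):
        key = round(mean_neighbour_power, 1)
        if random.random() < EPS:
            return random.choice(POWER_LEVELS)
        return max(POWER_LEVELS, key=lambda a: self.q[(a, key)])

    def update(self, action, mean_neighbour_power, reward, next_mean):
        key, next_key = round(mean_neighbour_power, 1), round(next_mean, 1)
        best_next = max(self.q[(a, next_key)] for a in POWER_LEVELS)
        target = reward + GAMMA * best_next
        self.q[(action, key)] += ALPHA * (target - self.q[(action, key)])

# Minimal usage: the reward would come from local SINR/QoS feedback in practice.
agent = MeanFieldAgent()
a = agent.act(mean_neighbour_power=0.5)
agent.update(a, 0.5, reward=1.0, next_mean=0.5)
```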

    Opportunistic device-to-device communication in cellular networks: from theory to practice

    International Doctorate Mention. Cellular service providers have been struggling with users' demand since the emergence of the mobile Internet. As a result, each generation of cellular networks has prevailed over its predecessors mainly in terms of connection speed. However, the fifth generation (5G) of cellular networks promises to go beyond this trend by revolutionizing the network architecture. Device-to-Device (D2D) communication is one of these revolutionary changes: it enables mobile users to communicate directly without traversing a base station. This feature is being actively studied in 3GPP, with special focus on public safety, as it allows mobiles to operate in ad-hoc mode. Although under the (partial) control of the network, D2D communications open the door to many other use cases. This dissertation studies different aspects of D2D communications and their impact on the key performance indicators of the network. We design an architecture for the collaboration of cellular users by means of timely exploited D2D opportunities. We begin by presenting an analytical study of opportunistic outband D2D communications. The study reveals the great potential of opportunistic outband D2D communications for enhancing the energy efficiency, fairness, and capacity of cellular networks when groups of D2D users can be formed and managed in the cellular network. We then introduce a protocol that is compatible with the latest releases of the IEEE and 3GPP standards and allows our proposal to be implemented in today's cellular networks. To validate our analytical findings, we use our experimental Software Defined Radio (SDR)-based testbed to further study our proposal in a real-world scenario. The experimental results confirm the outstanding potential of opportunistic outband D2D communications. Finally, we investigate the performance merits and disadvantages of different D2D "modes". Our investigation reveals that, despite the common belief, all D2D modes are complementary and their merits are scenario-dependent. This work has been supported by IMDEA Networks Institute. Official Doctoral Programme in Telematic Engineering. President: Douglas Leith. Secretary: Albert Banchs Roca. Examiner: Carla Fabiana Chiasserini.
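    The dissertation's protocol is not reproduced here. Purely as an illustration of the kind of decision an opportunistic outband D2D scheme must make, the sketch below chooses, per content request, between direct cellular delivery and relaying through a nearby cluster member over an outband (e.g. WiFi) link. The rate estimates, names and thresholds are invented assumptions, not the author's design.

```python
# Illustrative mode selection for opportunistic outband D2D (not the
# dissertation's protocol). Rates and example values are assumptions.

def choose_delivery_mode(cellular_rate_mbps, d2d_candidates):
    """Pick direct cellular delivery or relaying via the best D2D neighbour.

    d2d_candidates: list of (neighbour_id, relay_cellular_rate, outband_rate);
    the effective relayed rate is limited by the slower of the two hops.
    """
    best_relay, best_rate = None, cellular_rate_mbps
    for neighbour_id, relay_cellular_rate, outband_rate in d2d_candidates:
        effective = min(relay_cellular_rate, outband_rate)
        if effective > best_rate:
            best_relay, best_rate = neighbour_id, effective
    mode = "cellular" if best_relay is None else f"d2d-relay via {best_relay}"
    return mode, best_rate

# Example: the direct link offers 5 Mbit/s; neighbour "B" has a 20 Mbit/s
# cellular link and a 50 Mbit/s WiFi link to the requester, so relaying wins.
print(choose_delivery_mode(5.0, [("A", 3.0, 100.0), ("B", 20.0, 50.0)]))
```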

    Energy efficiency and interference management in long term evolution-advanced networks.

    Doctoral Degree. University of KwaZulu-Natal, Durban. Cellular networks are continuously undergoing rapid and extraordinary evolution to overcome technological challenges. The fourth generation (4G) or Long Term Evolution-Advanced (LTE-Advanced) networks offer improvements in performance through an increase in network density, while allowing self-organisation and self-healing. The LTE-Advanced architecture is heterogeneous, consisting of different radio access technologies (RATs), such as macrocells, smallcells and cooperative relay nodes (RNs), having various capabilities and coexisting in the same geographical coverage area. These network improvements come with different challenges that affect users' quality of service (QoS) and network performance. These challenges include interference management, high energy consumption and poor coverage of marginal users. Hence, developing mitigation schemes for these identified challenges is the focus of this thesis. The exponential growth of mobile broadband data usage and the poor performance of networks along the cell edges result in a large increase in energy consumption for both base stations (BSs) and users. This is due to improper RN placement or deployment, which creates severe inter-cell and intra-cell interference in the networks. It is therefore necessary to investigate appropriate RN placement techniques that offer efficient coverage extension while reducing energy consumption and mitigating interference in LTE-Advanced femtocell networks. This work proposes an energy-efficient and optimal RN placement (EEORNP) algorithm, based on a greedy algorithm, to assure improved and effective coverage extension. The performance of the proposed algorithm is investigated in terms of coverage percentage and the number of RNs needed to cover marginalised users, and it is found to outperform other RN placement schemes. Transceiver design has gained importance as one of the effective tools of interference management. Centralised transceiver design techniques have been used to improve network performance in LTE-Advanced networks in terms of mean square error (MSE), bit error rate (BER) and sum-rate. However, centralised transceiver design techniques are neither effective nor computationally feasible for the distributed cooperative heterogeneous networks considered in this thesis. This work proposes decentralised transceiver designs based on least-squares (LS) and minimum MSE (MMSE) pilot-aided channel estimation for interference management in uplink LTE-Advanced femtocell networks. The decentralised transceiver algorithms are designed for the femtocells, the macrocell user equipments (MUEs), the RNs and the cell-edge macrocell UEs (CUEs) in half-duplex cooperative relaying systems. The BER performance of the proposed algorithms under the effect of channel estimation is investigated. Finally, EE optimisation is investigated in half-duplex multi-user multiple-input multiple-output (MU-MIMO) relay systems. The EE optimisation is divided into sub-optimal EE problems owing to the distributed architecture of the MU-MIMO relay systems. The decentralised approach is employed to design the transceivers, such as the MUEs, CUEs, RN and femtocells, for the different sub-optimal EE problems. The EE objective functions are formulated as convex optimisation problems subject to QoS and transmit power constraints in the case of perfect channel state information (CSI). The non-convexity of the formulated EE optimisation problems is overcome by introducing the EE-parameter subtractive function into each proposed algorithm. These EE parameters are updated using Dinkelbach's algorithm. The EE optimisation of the proposed algorithms is achieved after finding the optimal transceivers, where the unknown interference terms in the transmit signals are designed under the zero-forcing (ZF) assumption and estimation errors are added to improve the EE performance. With the aid of simulation results, the performance of the proposed decentralised schemes is evaluated in terms of average EE and found to be better than that of existing algorithms.
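    The thesis' transceiver designs are not reproduced here. The snippet below is only a generic illustration of the Dinkelbach iteration the abstract refers to, applied to a scalar toy problem of maximizing rate over total power with an invented rate model; the inner problem is solved by grid search for simplicity.

```python
# Generic Dinkelbach iteration for a fractional (energy-efficiency) objective:
# maximize EE(p) = R(p) / P(p). Toy scalar example with an assumed rate model
# R(p) = log2(1 + g*p) and total power P(p) = p + p_circuit; it is not the
# thesis' algorithm, only an illustration of the technique.
import math

G, P_CIRCUIT, P_MAX = 4.0, 0.2, 2.0

def rate(p):
    return math.log2(1.0 + G * p)

def total_power(p):
    return p + P_CIRCUIT

def dinkelbach(tol=1e-6, max_iter=50):
    lam = 0.0                                        # current EE estimate
    grid = [i * P_MAX / 1000 for i in range(1, 1001)]
    for _ in range(max_iter):
        # Inner subtractive problem: maximize R(p) - lam * P(p).
        p_star = max(grid, key=lambda p: rate(p) - lam * total_power(p))
        f_star = rate(p_star) - lam * total_power(p_star)
        lam = rate(p_star) / total_power(p_star)     # Dinkelbach update
        if abs(f_star) < tol:                        # converged when F(lam) ~ 0
            break
    return p_star, lam

p_opt, ee_opt = dinkelbach()
print(f"optimal power ~ {p_opt:.3f} W, energy efficiency ~ {ee_opt:.3f} (toy units)")
```

    The key point is the alternation: solve the subtractive problem for a fixed EE parameter, then update the parameter to the achieved rate-to-power ratio until the subtractive optimum reaches zero.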

    A comprehensive survey on radio resource management in 5G HetNets: current solutions, future trends and open issues

    The 5G network technologies are intended to accommodate innovative services with a large influx of data traffic, lower energy consumption, and increased quality of service and user quality of experience levels. In order to meet 5G expectations, heterogeneous networks (HetNets) have been introduced. They involve the deployment of additional low-power nodes within the coverage area of conventional high-power nodes, placed closer to users as an underlay to the HetNet. Due to the increased density of small-cell networks and radio access technologies, radio resource management (RRM) for prospective 5G HetNets has emerged as a critical research avenue. It plays a pivotal role in enhancing spectrum utilization, load balancing, and network energy efficiency. In this paper, we summarize the key challenges emerging in 5G HetNets, i.e., cross-tier interference, co-tier interference, and user association, resource allocation and power allocation (UA-RA-PA), and highlight their significance. In addition, we present a comprehensive survey of RRM schemes based on interference management (IM), UA-RA-PA and combined approaches (UA-RA-PA + IM). We introduce a taxonomy for individual (IM, UA-RA-PA) and combined approaches as a framework for systematically studying the existing schemes. These schemes are also qualitatively analyzed and compared with each other. Finally, challenges and opportunities for RRM in 5G are outlined, and design guidelines along with possible solutions for advanced mechanisms are presented.

    A Study about Heterogeneous Network Issues Management based on Enhanced Inter-cell Interference Coordination and Machine Learning Algorithms

    Under the circumstance of fast-growing demand for mobile data, Heterogeneous Networks (HetNets) have been considered one of the key technologies to address the thousand-fold mobile data challenge of the coming decade. Although the unique multi-tier topology of HetNets achieves high spectrum efficiency and enhanced Quality of Service (QoS), it also brings a series of critical issues. In this thesis, we present an investigation into the causes of HetNets' challenges and a study of state-of-the-art techniques for solving three major issues: interference, offloading and handover. The first issue addressed in the thesis is the cross-tier interference of HetNets. We introduce Almost Blank Subframes (ABS) to free small-cell UEs from cross-tier interference, which is the key technique of enhanced Inter-Cell Interference Coordination (eICIC). The Nash Bargaining Solution (NBS) is applied to optimize the ABS ratio and UE partition. Furthermore, we propose a power-based multi-layer NBS algorithm to obtain the optimal parameters of Further enhanced Inter-cell Interference Coordination (FeICIC), which significantly improves macrocell efficiency compared to eICIC. This algorithm not only introduces a dynamic power ratio but also defines an opportunity cost for each layer instead of the conventional zero-cost partial fairness. Simulation results show that the proposed algorithm may achieve up to a 31.4% user throughput gain compared to eICIC and fixed-power-ratio FeICIC. The second issue this thesis focuses on is the offloading problem in HetNets. This includes (1) UE offloading from the macro cell and (2) small-cell backhaul offloading. For the first aspect, we discuss the capability of machine learning algorithms to tackle this challenge and propose the User-Based K-means Algorithm (UBKCA). The proposed algorithm establishes a closed-loop self-organization system for our HetNets scenario that maintains a desired offloading factor of 50%, with a cell-edge user factor of 17.5% and a CRE bias of 8 dB. For the second aspect, we further apply a machine learning clustering method to establish a caching system, which may achieve up to a 70.27% hit ratio and reduce request latency by 60.21% in a YouTube scenario. K-Nearest Neighbours (KNN) is then applied to predict new users' content preferences and demonstrate the caching system's suitability. Beyond that, we also propose a system to predict users' content preferences even when the collected data are incomplete. The third part focuses on the handover phase of offloading within HetNets. It discusses in detail CRE's positive effect on mitigating ping-pong handover during UE offloading, and CRE's negative effect of increasing cross-tier interference. A modified Markov Chain Process (MCP) is then established to map the handover phases for a UE offloading from the macro cell to a small cell and vice versa. The transition probabilities of the MCP take both effects of CRE into account so that the optimal CRE value for the HetNets can be obtained; the result for our scenario is 7 dB. The combination of CRE and Handover Margin is also discussed.
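    The thesis' UBKCA algorithm is not reproduced here. As a generic sketch of the clustering step such user-based offloading relies on, the snippet below runs plain K-means over UE positions to identify groups of users that might be candidates for offloading to a nearby small cell; the positions, cluster count and interpretation are assumptions for illustration only.

```python
# Illustrative K-means clustering of UE positions (a generic sketch, not the
# thesis' UBKCA algorithm): grouping UEs by location is one way to identify
# candidates for offloading from the macro cell to nearby small cells.
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            idx = min(range(k),
                      key=lambda c: (x - centroids[c][0])**2 + (y - centroids[c][1])**2)
            clusters[idx].append((x, y))
        for c, members in enumerate(clusters):
            if members:   # recompute each centroid as the mean of its members
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, clusters

# Hypothetical UE positions (metres) around a macro site at the origin.
ues = [(random.uniform(-250, 250), random.uniform(-250, 250)) for _ in range(40)]
centroids, clusters = kmeans(ues, k=3)
for c, members in enumerate(clusters):
    print(f"cluster {c}: centre ({centroids[c][0]:.0f}, {centroids[c][1]:.0f}), {len(members)} UEs")
```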