
    User relay assisted traffic shifting in LTE-advanced systems

    In order to deal with uneven load distribution, mobility load balancing adjusts the handover region to shift edge users from a hot-spot cell to less-loaded neighbouring cells. However, shifted users suffer reduced signal power from the neighbouring cells, which may degrade link quality. This paper employs a user relaying model and proposes a user relay assisted traffic shifting (URTS) scheme to address this problem. In URTS, a shifted user selects a suitable non-active user as a relay to forward its data, thus enhancing the shifted user's link quality. Since user relaying consumes the relay user's energy, a utility function is designed for relay selection to strike a trade-off between the shifted user's link-quality improvement and the relay user's energy consumption. Simulation results show that the URTS scheme improves the SINR and throughput of shifted users while keeping the relay users' energy consumption at an acceptable level.
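The relay-selection trade-off described above can be sketched as a simple scored search. The utility form, weights, and candidate fields below are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of URTS-style relay selection: weigh the shifted user's
# link-quality gain against the relay user's energy cost.

def utility(sinr_gain_db, relay_energy_j, alpha=1.0, beta=0.5):
    """Higher utility = better relay candidate (alpha/beta are tuning weights)."""
    return alpha * sinr_gain_db - beta * relay_energy_j

def select_relay(candidates):
    """Pick the non-active user with the highest utility; return None when no
    candidate improves on the direct link (utility <= 0)."""
    best = max(candidates, key=lambda c: utility(c["sinr_gain_db"], c["energy_j"]))
    return best["id"] if utility(best["sinr_gain_db"], best["energy_j"]) > 0 else None

candidates = [
    {"id": "ue7", "sinr_gain_db": 6.0, "energy_j": 2.0},   # strong gain, moderate cost
    {"id": "ue3", "sinr_gain_db": 8.0, "energy_j": 15.0},  # best gain but very costly
    {"id": "ue9", "sinr_gain_db": 1.0, "energy_j": 0.5},   # cheap but little gain
]
print(select_relay(candidates))  # ue7 under these example weights
```

The weights alpha and beta play the role of the paper's trade-off parameters: raising beta makes energy-hungry relays like "ue3" even less attractive.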

    Self-Organising Load Balancing for OFDMA Cellular Networks

    In this thesis, self-organising load balancing is investigated to deal with uneven load distribution in OFDMA-based cellular networks. In single-hop cellular networks, a self-organising cluster-based cooperative load balancing (CCLB) scheme is proposed to overcome the ‘virtual partner’ and ‘aggravating load’ problems of conventional mobility load balancing schemes. Theoretical analysis and simulation results show that the proposed scheme effectively reduces the call blocking probability, the handover failure rate, and the hot-spot cell’s load. The CCLB scheme consists of two stages: partner cell selection and traffic shifting. In the partner cell selection stage, a user-vote assisted clustering algorithm is proposed that jointly considers the users’ channel conditions and the surrounding cells’ load. This algorithm selects appropriate neighbouring cells as partners to construct the load balancing cluster, addressing the ‘virtual partner’ problem. In the traffic shifting stage, a relative load response model (RLRM) is designed. RLRM coordinates multiple hot-spot cells shifting traffic towards their public partner, thus mitigating the public partner’s ‘aggravating load’ problem. Moreover, a traffic offloading optimisation algorithm is proposed to balance the hot-spot cell’s load within the load balancing cluster and to minimise its partners’ average call blocking probability. The CCLB scheme is then adapted to multi-hop cellular networks with relays deployed; both fixed relay and mobile user relay scenarios are considered. For fixed relay cellular networks, a relay-level user shifting algorithm is proposed that jointly considers users’ channel conditions and the fixed relays’ spectrum usage, in order to reduce the handover failure rate and address the fixed relays’ ‘aggravating load’ problem.
In the mobile user relay scenario, a user relaying assisted traffic shifting algorithm is proposed to improve the link quality of shifted edge users, which increases their achievable rate and decreases the handover failure rate.
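A user-vote style partner selection can be sketched as follows. The vote weighting (channel quality times spare capacity) and all names are hypothetical, intended only to illustrate the idea of jointly considering channel conditions and neighbour load:

```python
# Illustrative sketch: each edge user votes for the neighbour cells it can
# hear, and loaded neighbours are penalised so they are not chosen as
# 'virtual partners' with no real capacity to spare.

def vote_partners(edge_users, cell_loads, num_partners=2):
    """Aggregate weighted votes; return the best-heard, least-loaded cells."""
    scores = {}
    for user in edge_users:
        for cell, rsrp in user["neighbour_rsrp"].items():
            spare = max(0.0, 1.0 - cell_loads[cell])   # spare capacity in [0, 1]
            scores[cell] = scores.get(cell, 0.0) + rsrp * spare
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:num_partners]

edge_users = [
    {"neighbour_rsrp": {"B": 0.9, "C": 0.4}},   # normalised link quality per heard cell
    {"neighbour_rsrp": {"B": 0.7, "D": 0.8}},
]
cell_loads = {"B": 0.3, "C": 0.9, "D": 0.5}
print(vote_partners(edge_users, cell_loads))  # ['B', 'D']
```

Cell C is heard by a user but its 90% load leaves almost no spare capacity, so it collects few votes: this is the kind of candidate the ‘virtual partner’ problem warns about.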

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the past fifteen years of literature on Machine Learning (ML) algorithms applied to self-organizing cellular networks is presented. For future networks to overcome the limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Network (SON) solutions and provides an overview of the most common ML techniques encountered in cellular networks, classifying each surveyed paper by its learning solution and giving examples. The authors also classify each paper by its self-organizing use-case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work discusses future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networking domain, fully enabling the concept of SON in the near future.

    Resource Allocation for Next Generation Radio Access Networks

    Driven by data-hungry applications, the architecture of mobile networks is moving towards densely deployed cells, where each cell may use a different access technology as well as a different frequency band. Next generation networks (NGNs) are essentially identified by their dramatically increased data rates and their sustainable deployment. Motivated by these requirements, in this thesis we focus on (i) capacity maximisation and (ii) energy-efficient configuration of different classes of radio access networks (RANs). To allocate the available resources fairly, we consider proportional fair rate allocations. We first consider capacity maximisation in co-channel 4G (LTE) networks, then proceed to capacity maximisation in mixed LTE (including licensed LTE small cells) and 802.11 (WiFi) networks, and finally study energy-efficient capacity maximisation of dense 3G/4G co-channel small cell networks. In each chapter we provide a network model and a scalable resource allocation approach that may be implemented in a centralised or distributed manner, depending on the objective and network constraints.
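Proportional fair rate allocation can be illustrated with a minimal single-cell sketch, assuming fixed per-user peak rates and an airtime share per user; this is a deliberate simplification of the thesis' models:

```python
# PF maximises sum(log(s_i * rate_i)) subject to sum(s_i) = 1, where s_i is
# user i's airtime share and rate_i its fixed achievable peak rate.

import math

def pf_utility(shares, rates):
    """Sum of log-rates: the proportional fair objective."""
    return sum(math.log(s * r) for s, r in zip(shares, rates))

def pf_shares(rates):
    """With fixed peak rates, sum(log(s_i * r_i)) = sum(log s_i) + const,
    which is maximised by the equal split s_i = 1/n."""
    n = len(rates)
    return [1.0 / n] * n

rates = [10.0, 40.0, 80.0]       # Mb/s peak rates: cell-edge to cell-centre users
equal = pf_shares(rates)
skewed = [0.6, 0.3, 0.1]         # a split that starves the fastest user
print(pf_utility(equal, rates) > pf_utility(skewed, rates))  # True
```

The equal-airtime result is what makes PF attractive here: it protects cell-edge users from being starved by throughput-greedy allocations, while still letting fast users carry more bits per second of airtime.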

    A Study about Heterogeneous Network Issues Management based on Enhanced Inter-cell Interference Coordination and Machine Learning Algorithms

    Under the circumstance of fast-growing demand for mobile data, Heterogeneous Networks (HetNets) have been considered one of the key technologies to meet the 1000x mobile data challenge of the coming decade. Although the unique multi-tier topology of HetNets achieves high spectrum efficiency and enhanced Quality of Service (QoS), it also brings a series of critical issues. In this thesis, we investigate the causes of these challenges and survey state-of-the-art techniques to solve three major issues: interference, offloading and handover. The first issue addressed in the thesis is the cross-tier interference of HetNets. We introduce Almost Blank Subframes (ABS), the key technique of enhanced Inter-Cell Interference Coordination (eICIC), to free small cell UEs from cross-tier interference. The Nash Bargaining Solution (NBS) is applied to optimise the ABS ratio and the UE partition. Furthermore, we propose a power-based multi-layer NBS algorithm to obtain the optimal parameters of Further enhanced Inter-Cell Interference Coordination (FeICIC), which significantly improves macrocell efficiency compared to eICIC. This algorithm not only introduces a dynamic power ratio but also defines an opportunity cost for each layer, instead of the conventional zero-cost partial fairness. Simulation results show that the proposed algorithm may achieve up to a 31.4% user throughput gain compared to eICIC and fixed-power-ratio FeICIC. The second issue this thesis focuses on is the offloading problem of HetNets, which includes (1) UE offloading from the macro cell and (2) small cell backhaul offloading. For the first aspect, we discuss the capability of machine learning algorithms to tackle this challenge and propose the User-Based K-means Clustering Algorithm (UBKCA). The proposed algorithm establishes a closed-loop self-organisation system in our HetNets scenario to maintain a desired offloading factor of 50%, with a cell edge user factor of 17.5% and a CRE bias of 8 dB.
For the second aspect, we further apply machine learning clustering to establish a cache system, which may achieve up to a 70.27% hit ratio and reduce request latency by 60.21% in a YouTube scenario. K-Nearest Neighbours (KNN) is then applied to predict new users' content preferences and confirm the cache system's suitability. Besides that, we also propose a system to predict users' content preferences even when the collected data are incomplete. The third part focuses on the handover phase within HetNets. It discusses in detail CRE's positive effect of mitigating ping-pong handover during UE offloading, and its negative effect of increasing cross-tier interference. A modified Markov Chain Process (MCP) is then established to map the handover phases for a UE offloading from the macro cell to a small cell and vice versa. The transition probabilities of the MCP consider both effects of CRE so that the optimal CRE value for the HetNets can be obtained; the result for our scenario is 7 dB. The combination of CRE and Handover Margin is also discussed.
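The K-means user partition underlying ideas like UBKCA can be sketched with plain Lloyd's algorithm on user positions. The data and the offloading interpretation are invented for illustration; this is ordinary K-means, not the thesis' exact algorithm:

```python
# Minimal K-means (Lloyd's algorithm) on 2-D user positions with deterministic
# initialisation: users clustering around a small cell site become natural
# candidates for offloading from the macro cell.

def kmeans(points, k, iters=20):
    """Return (centroids, clusters) after `iters` assignment/update rounds."""
    centroids = points[:k]                       # deterministic init for the sketch
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                         # assign to the nearest centroid
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        centroids = [                            # recompute centroids
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Users clumped near the macro site (0, 0) and near a small cell (10, 10):
users = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
cents, groups = kmeans(users, k=2)
# The cluster centred near (10, 10) is the natural offload candidate set.
```

In a closed-loop system like the one described above, the fraction of users falling into the small-cell cluster would then be compared against the target offloading factor to adjust the CRE bias.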

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential for supporting a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. (Comment: 46 pages, 22 figures)

    Next-Generation Self-Organizing Networks through a Machine Learning Approach

    Date of doctoral thesis defence: 17 December 2018. The concept of self-organizing networks (SON) emerged to reduce the management costs of cellular networks, which were growing ever more complex over time: that is, the automation of a cellular network's management tasks to reduce infrastructure (CAPEX) and operating (OPEX) costs. SON tasks fall into three categories: self-configuration, self-optimization and self-healing. The goal of this thesis is to improve SON functions through the development and use of machine learning (ML) tools for network management. On the one hand, self-healing is addressed by proposing a novel tool for automatic diagnosis (root cause analysis, RCA), consisting of the combination of multiple independent RCA systems into an improved composite RCA system. In turn, to increase the accuracy of RCA tools while reducing both CAPEX and OPEX, this thesis proposes and evaluates ML dimensionality-reduction tools in combination with RCA tools. On the other hand, this thesis studies multi-link functionalities within self-optimization and proposes techniques for their automatic management. In the field of enhanced mobile broadband communications, a tool for radio carrier management is proposed that enables the implementation of operator policies, while in the field of low-latency vehicular communications, a multipath mechanism is proposed for redirecting traffic across multiple radio interfaces. Many of the methods proposed in this thesis have been evaluated using data from real cellular networks, demonstrating their validity in realistic environments as well as their readiness for deployment in current and future mobile networks.
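The composite RCA idea (combining several independent diagnosers) can be sketched as a simple majority vote. The fault labels and function names are invented, and the thesis' actual combiner is more sophisticated than this:

```python
# Illustrative sketch: several independent RCA systems each emit a fault label
# for the same degraded cell; the composite diagnosis is the majority label.

from collections import Counter

def composite_rca(diagnoses):
    """Return the most common label; ties resolve to the first label seen
    (Counter.most_common preserves insertion order for equal counts)."""
    return Counter(diagnoses).most_common(1)[0][0]

print(composite_rca(["coverage_hole", "interference", "coverage_hole"]))
# coverage_hole
```

A real combiner would typically weight each diagnoser by its historical accuracy rather than counting votes equally, which is one way a composite system can outperform its best individual member.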

    Self-organization for 5G and beyond mobile networks using reinforcement learning

    The next generations of mobile networks, 5G and beyond, must overcome current networks' limitations as well as improve network performance. Some of the requirements envisioned for future mobile networks are: addressing the massive growth required in coverage, capacity and traffic; providing better quality of service and experience to end users; supporting ultra-high data rates and reliability; and ensuring latency as low as one millisecond, among others. Thus, in order for future networks to meet all of these stringent requirements, a promising concept has emerged: self-organising networks (SONs). SON consists of making mobile networks more adaptive and autonomous, and is divided into three main branches, depending on the use-case: self-configuration, self-optimisation and self-healing. SON is a very promising and broad concept, and to enable it, more intelligence needs to be embedded in the mobile network. One possible solution is the utilisation of machine learning (ML) algorithms. ML has many branches, such as supervised learning, unsupervised learning and Reinforcement Learning (RL), and all can be used in different SON use-cases. The objective of this thesis is to explore different RL techniques in the context of SONs, more specifically in self-optimisation use-cases. First, the use-case of user-cell association in future heterogeneous networks is analysed and optimised. This scenario considers not only Radio Access Network (RAN) constraints but also backhaul constraints. Based on this, a distributed solution utilising RL is proposed and compared with other state-of-the-art methods. Results show that the proposed RL algorithm outperforms current ones and achieves better user satisfaction while minimising the number of users in outage. Another objective of this thesis is the evaluation of Unmanned Aerial Vehicles (UAVs) for optimising cellular networks.
It is envisioned that UAVs can be utilised in different SON use-cases and integrated with RL algorithms to determine their optimal 3D positions in space according to network constraints. As such, two different mobile network scenarios are analysed: an emergency network and a pop-up network. The emergency scenario assumes that a major natural disaster has destroyed most of the ground network infrastructure, and the goal is to provide coverage to as many users as possible using UAVs as access points. The second scenario simulates an event in a city where, because of ground network congestion, network capacity needs to be enhanced by deploying aerial base stations. For both scenarios, different types of RL algorithms are considered and their complexity and convergence are analysed. In both cases it is shown that UAVs coupled with RL are capable of solving network issues efficiently and quickly. Thus, owing to its ability to learn from interaction with an environment and from previous experience, without knowing the dynamics of the environment or relying on previously collected data, RL is considered a promising solution for enabling SON.
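As a toy illustration of the RL approach, the sketch below runs tabular Q-learning for one UAV choosing where to hover along a line of candidate positions. The scenario, rewards and hyperparameters are invented and far simpler than the thesis' 3D formulations:

```python
# Tabular Q-learning: the UAV's state is its position, actions move it or let
# it hover, and the reward is the number of users covered where it ends up.

import random

random.seed(0)
USERS_COVERED = [1, 2, 8, 3, 1]           # users in range at each candidate spot
ACTIONS = [-1, 0, +1]                     # move left, hover, move right
Q = [[0.0] * 3 for _ in USERS_COVERED]    # Q-table: one row per position
alpha, gamma, eps = 0.5, 0.9, 0.2         # learning rate, discount, exploration

state = 0
for _ in range(5000):
    if random.random() < eps:
        a = random.randrange(3)                            # explore
    else:
        a = max(range(3), key=lambda i: Q[state][i])       # exploit
    nxt = min(max(state + ACTIONS[a], 0), len(USERS_COVERED) - 1)
    reward = USERS_COVERED[nxt]
    Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt

# Greedy rollout from the left edge settles on the best hover position.
pos = 0
for _ in range(10):
    a = max(range(3), key=lambda i: Q[pos][i])
    pos = min(max(pos + ACTIONS[a], 0), len(USERS_COVERED) - 1)
print(pos)
```

Note that the agent never sees the USERS_COVERED table directly; it learns where to hover purely from the rewards it experiences, which is the property the abstract highlights: no model of the environment and no previously collected data.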