
    A Multi-Agent Neural Network for Dynamic Frequency Reuse in LTE Networks

    Fractional Frequency Reuse techniques can be employed to address interference in mobile networks, improving throughput for edge users. There is a trade-off between coverage and overall throughput: interference-avoidance techniques reduce a cell's overall throughput, since spectrum efficiency decreases as orthogonal resources are fenced off. In this paper we propose MANN, a dynamic multi-agent frequency reuse scheme in which individual agents in charge of cells control their configurations based on input from neural networks. The agents' decisions are partially influenced by a coordinator agent, which attempts to maximise a global metric of the network (e.g., cell-edge performance). Each agent uses a neural network to estimate the best action (i.e., cell configuration) for its current environment and, in turn, attempts to maximise a local metric, subject to the constraint imposed by the coordinator agent. Results show that our solution improves performance for edge users, increasing the throughput of the bottom 5% of users by 22% while retaining 95% of the network's overall throughput relative to the full frequency reuse case. Furthermore, we show how our method improves on static fractional frequency reuse schemes.
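    The coordinator-constrained decision loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-cell neural network is replaced by a toy scoring function, and all names (`estimate_local_metric`, `coordinator_filter`, the budget constraint) are assumptions for illustration only.

```python
# Hypothetical sketch of coordinated action selection: a coordinator
# constrains the set of allowed cell configurations, and each cell agent
# maximises its own (estimated) local metric within that set.

def estimate_local_metric(config, env):
    # Stand-in for the per-cell neural network: score how well a candidate
    # configuration matches the cell's current load.
    return -abs(config - env["load"])

def coordinator_filter(configs, global_budget):
    # The coordinator keeps only configurations compatible with a global
    # constraint (e.g., protecting cell-edge performance network-wide).
    return [c for c in configs if c <= global_budget]

def choose_action(env, candidate_configs, global_budget):
    allowed = coordinator_filter(candidate_configs, global_budget)
    # The agent maximises its local metric over the allowed set only.
    return max(allowed, key=lambda c: estimate_local_metric(c, env))

print(choose_action({"load": 5}, [1, 3, 5, 7], global_budget=4))  # → 3
```

    Note how the globally optimal-looking configuration (5, matching the load exactly) is excluded by the coordinator, so the agent settles for the best allowed one.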

    A survey of self organisation in future cellular networks

    This article surveys the literature of the last decade on the emerging field of self organisation as applied to wireless cellular communication networks. Self organisation has been extensively studied and applied in ad hoc networks, wireless sensor networks and autonomic computer networks; in the context of wireless cellular networks, however, this is the first attempt to put the various efforts in perspective in the form of a tutorial/survey. We provide a comprehensive survey of the existing literature, projects and standards in self organising cellular networks. Additionally, we aim to present a clear understanding of this active research area, identifying a clear taxonomy and guidelines for the design of self organising mechanisms. We compare the strengths and weaknesses of existing solutions and highlight the key areas for further research. This paper serves as a guide and a starting point for anyone willing to delve into research on self organisation in wireless cellular communication networks.

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, we survey the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self organizing cellular networks. For future networks to overcome current limitations and address the issues of today's cellular systems, it is clear that more intelligence needs to be deployed so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each paper in terms of its learning solution, together with examples. We also classify each paper in terms of its self-organizing use-case and discuss how each proposed solution performed. In addition, we compare the most commonly found ML algorithms in terms of certain SON metrics and propose general guidelines on when to choose each ML algorithm for each SON function. Lastly, this work also discusses future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.

    Applications of Soft Computing in Mobile and Wireless Communications

    Soft computing is a synergistic combination of artificial intelligence methodologies used to model and solve real-world problems that are either impossible or too difficult to model mathematically. Conventional modeling techniques demand rigor, precision and certainty, which carry computational cost. Soft computing, on the other hand, uses computation, reasoning and inference to reduce computational cost by exploiting tolerance for imprecision, uncertainty, partial truth and approximation. Beyond these savings, soft computing is an excellent platform for autonomic computing, owing to its roots in artificial intelligence. Wireless communication networks involve much uncertainty and imprecision due to a number of stochastic processes, such as the escalating number of access points, constantly changing propagation channels, sudden variations in network load and the random mobility of users. This reality has fuelled numerous applications of soft computing techniques in mobile and wireless communications. This paper reviews various applications of the core soft computing methodologies in mobile and wireless communications.

    Dynamic spectrum allocation following machine learning-based traffic predictions in 5G

    © 2021 IEEE. The popularity of mobile broadband connectivity continues to grow, and future wireless networks are therefore expected to serve very large numbers of users demanding huge capacity. Employing larger spectral bandwidth and installing more access points is not enough to meet this challenge, due to the associated costs and interference issues. Frequency resources are thus becoming one of the most valuable assets, requiring proper utilization and fair distribution. Traditional frequency resource management strategies are often static and agnostic to the instantaneous demand of the network; such approaches tend to cause congestion in a few cells while wasting precious resources in others, and are therefore not efficient enough to deal with the capacity challenge of future networks. In this paper we present a dynamic access-aware bandwidth allocation approach that follows the dynamic traffic requirements of each cell and allocates the required bandwidth accordingly from a common spectrum pool gathering the entire system bandwidth. We evaluate our proposal by means of real network traffic traces. The results show the performance gain of the proposed dynamic access-aware approach over two traditional approaches in terms of utilization and served traffic. Moreover, to acquire knowledge about access network requirements, we present a machine learning-based approach that predicts the state of the network and is used to manage the available spectrum accordingly. Our comparative results show that, in terms of spectrum allocation accuracy and utilization efficiency, a well-designed machine learning-based bandwidth allocation mechanism not only outperforms common static approaches, but even achieves the performance (with a relative error close to 0.04) of an ideal dynamic system with perfect knowledge of future traffic requirements. This work was supported in part by the EU Horizon 2020 Research and Innovation Program (5GAuRA) under Grant 675806, and in part by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement from the Generalitat de Catalunya under Grant 2017 SGR 376. Peer reviewed. Postprint (published version).
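    The pooled, demand-driven allocation idea can be sketched in a few lines. This is an illustrative stand-in, not the paper's mechanism: predicted per-cell demands (which the paper obtains from an ML traffic predictor) are taken as given, and the function, names and numbers are assumptions.

```python
# Hypothetical sketch: allocate bandwidth to cells from a common spectrum
# pool according to predicted demand. If total demand exceeds the pool,
# each cell receives a proportional share instead of a static split.

def allocate_from_pool(predicted_demand_mhz, pool_mhz):
    """Map each cell to a bandwidth allocation drawn from a shared pool."""
    total = sum(predicted_demand_mhz.values())
    if total <= pool_mhz:
        # The pool can satisfy every cell's predicted demand outright.
        return dict(predicted_demand_mhz)
    # Otherwise, scale each cell's share in proportion to its demand.
    scale = pool_mhz / total
    return {cell: d * scale for cell, d in predicted_demand_mhz.items()}

alloc = allocate_from_pool({"cell_a": 40, "cell_b": 10, "cell_c": 30},
                           pool_mhz=60)
print(alloc)  # {'cell_a': 30.0, 'cell_b': 7.5, 'cell_c': 22.5}
```

    Unlike a static equal split (20 MHz each here), the pooled allocation follows the predicted load, which is the property the paper's evaluation measures in terms of utilization and served traffic.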

    Machine learning algorithms for inter-cell interference coordination

    Current LTE and LTE-A deployments require greater effort in radio resource management, due to the increase in users and the constantly growing demand for services. For this reason, automatic optimization is key to avoiding issues such as inter-cell interference. This paper compiles several proposals for machine-learning algorithms focused on this automatic optimization problem. These works seek to enable cellular systems to self-optimize, a key concept within Self-Organized Networks (SON), whose main objective is for networks to respond automatically to the needs of dynamic network traffic scenarios.

    Internet of robotic things: converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT platforms

    The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT presents new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things, enabling their cooperation, coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating intelligent "devices", collaborative robots (COBOTs), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance as new "cognitive devices" become active participants in IoT applications. This chapter aims to provide an overview of the IoRT concept, technologies, architectures and applications, and to give comprehensive coverage of future challenges, developments and applications.

    Reinforcement Learning Scheduler for Vehicle-to-Vehicle Communications Outside Coverage

    Radio resources in vehicle-to-vehicle (V2V) communication can be scheduled either by a centralized scheduler residing in the network (e.g., a base station in the case of cellular systems) or by a distributed scheduler, where the resources are autonomously selected by the vehicles. The former approach yields considerably higher resource utilization when network coverage is uninterrupted. However, when coverage is intermittent or absent, vehicles receive no input from the centralized scheduler and need to revert to distributed scheduling. Motivated by recent advances in reinforcement learning (RL), we investigate whether a centralized learning scheduler can be taught to efficiently pre-assign resources to vehicles for out-of-coverage V2V communication. Specifically, we use the actor-critic RL algorithm to train the centralized scheduler to provide non-interfering resources to vehicles before they enter the out-of-coverage area. Our initial results show that an RL-based scheduler can achieve performance as good as or better than the state-of-the-art distributed scheduler, often outperforming it. Furthermore, the learning process completes within a reasonable time (ranging from a few hundred to a few thousand epochs), making the RL-based scheduler a promising solution for V2V communications with intermittent network coverage. Comment: Article published in IEEE VNC 201
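    The scheduler's target behaviour, pre-assigning resources so that nearby vehicles never share one, can be illustrated with a simple greedy stand-in. This is not the paper's actor-critic method, only a sketch of the non-interference objective the RL agent is trained toward; the function name, positions and interference threshold are all assumptions.

```python
# Illustrative stand-in for the scheduling goal: give each vehicle a
# resource index while avoiding reuse among vehicles within interference
# range of each other (1-D positions for simplicity).

def preassign_resources(positions, num_resources, interference_range):
    """Assign each vehicle a resource index, avoiding nearby reuse."""
    assignment = {}
    for v, pos in positions.items():
        # Resources already taken by vehicles within interference range.
        busy = {assignment[u] for u, p in positions.items()
                if u in assignment and abs(p - pos) < interference_range}
        free = [r for r in range(num_resources) if r not in busy]
        # Fall back to resource 0 if the pool is exhausted (a collision).
        assignment[v] = free[0] if free else 0
    return assignment

print(preassign_resources({"v1": 0.0, "v2": 50.0, "v3": 400.0},
                          num_resources=2, interference_range=100.0))
```

    Here v1 and v2 are within interference range and get different resources, while v3 is far enough away to reuse v1's resource; the RL scheduler in the paper learns such assignments ahead of time, before vehicles leave coverage.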