
    A Survey of Deep Learning for Data Caching in Edge Network

    Edge caching in emerging 5G and beyond mobile networks is a promising way both to relieve traffic congestion in the core network and to reduce the latency of accessing popular content. In that respect, end-user demand for popular content can be satisfied by proactively caching it at the network edge, i.e., in close proximity to the users. In addition to model-based caching schemes, learning-based edge caching optimization has recently attracted significant attention, and the aim here is to capture these recent advances in both model-based and data-driven techniques for proactive caching. This paper summarizes the use of deep learning for data caching in edge networks. We first outline the typical research topics in content caching and formulate a taxonomy based on the network's hierarchical structure. Then, the key types of deep learning algorithms are presented, ranging from supervised and unsupervised learning to reinforcement learning. Furthermore, a comparison of the state-of-the-art literature is provided in terms of caching topics and deep learning methods. Finally, we discuss research challenges and future directions for applying deep learning to caching.
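
    As a minimal, illustrative sketch of the proactive caching idea discussed in this survey, the Python snippet below ranks content by a smoothed popularity estimate (standing in for a learned popularity predictor) and places the top items in the edge cache. The class name, window-based update, and capacity are assumptions made for this example, not the survey's method.

    from collections import defaultdict

    class ProactiveEdgeCache:
        """Toy proactive cache: an exponentially smoothed popularity score
        stands in for the learned popularity predictor discussed in the survey."""

        def __init__(self, capacity=3, alpha=0.3):
            self.capacity = capacity          # number of items the edge node can hold
            self.alpha = alpha                # smoothing factor of the popularity estimate
            self.score = defaultdict(float)   # content_id -> predicted popularity
            self.cache = set()

        def observe(self, requests):
            """Update popularity estimates from one window of user requests."""
            counts = defaultdict(int)
            for content_id in requests:
                counts[content_id] += 1
            for content_id in set(list(self.score) + list(counts)):
                self.score[content_id] = (
                    (1 - self.alpha) * self.score[content_id]
                    + self.alpha * counts[content_id]
                )

        def refresh(self):
            """Proactively place the predicted-most-popular items at the edge."""
            ranked = sorted(self.score, key=self.score.get, reverse=True)
            self.cache = set(ranked[: self.capacity])
            return self.cache

    # Example: three request windows, then a proactive cache refresh.
    cache = ProactiveEdgeCache(capacity=2)
    for window in (["a", "a", "b"], ["a", "c", "c"], ["c", "c", "d"]):
        cache.observe(window)
    print(cache.refresh())  # e.g. {'c', 'a'} -- the items predicted to stay popular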

    A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems

    In recent years, the number of embedded computing devices connected to the Internet has increased exponentially. At the same time, new applications are becoming more complex and computationally demanding, which can be a problem for devices, especially when they are battery powered. In this context, the concepts of computation offloading and edge computing, which allow applications to be fully or partially offloaded and executed on servers close to the devices in the network, have arisen and received increasing attention. The design of algorithms to decide which applications or tasks should be offloaded, and where to execute them, is therefore crucial. One option that has been gaining momentum lately is the use of Reinforcement Learning (RL) and, in particular, Deep Reinforcement Learning (DRL), which enables learning optimal or near-optimal offloading policies adapted to each particular scenario. Although the use of RL techniques to solve the computation offloading problem in edge systems has been covered by some surveys, it has been done in a limited way. For example, some surveys have analysed the use of RL to solve various networking problems, with computation offloading being one of them, but not the primary focus. Other surveys have instead reviewed techniques to solve the computation offloading problem, with RL being just one of the approaches considered. To the best of our knowledge, this is the first survey that specifically focuses on the use of RL and DRL techniques for computation offloading in edge computing systems. We present a comprehensive and detailed survey in which we analyse and classify the research papers in terms of use cases, network and edge computing architectures, objectives, RL algorithms, decision-making approaches, and the time-varying characteristics considered in the analysed scenarios. In particular, we include a series of tables to help researchers identify relevant papers based on specific features, and we analyse which scenarios and techniques are most frequently considered in the literature. Finally, this survey identifies a number of research challenges, future directions, and areas for further study. Funding: Consejería de Educación de la Junta de Castilla y León and FEDER (VA231P20); Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (projects PID2020-112675RB-C42, PID2021-124463OBI00, and RED2018-102585-T, funded by MCIN/AEI/10.13039/501100011033).
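
    For a concrete, toy illustration of the RL-based offloading decision this survey covers, the sketch below trains a tabular agent that chooses between local execution and offloading for each task. The state features, cost model, and channel behaviour are invented for the example and do not come from any surveyed paper.

    import random

    # Toy tabular Q-learning sketch of the offloading decision: the agent picks
    # LOCAL or OFFLOAD per task; states and costs below are illustrative only.
    ACTIONS = ("LOCAL", "OFFLOAD")

    def step_cost(task_size, channel_good, action):
        """Latency-like cost: local execution scales with task size,
        offloading is cheap on a good channel and expensive on a bad one."""
        if action == "LOCAL":
            return task_size * 1.0
        return task_size * (0.2 if channel_good else 1.5) + 0.5  # transmission + server time

    def train(episodes=5000, alpha=0.1, eps=0.1):
        q = {}  # (task_size, channel_good, action) -> estimated value
        for _ in range(episodes):
            task_size = random.choice((1, 2, 3))
            channel_good = random.random() < 0.5
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((task_size, channel_good, a), 0.0))
            reward = -step_cost(task_size, channel_good, action)
            key = (task_size, channel_good, action)
            # One-step (bandit-like) value update; a full MDP formulation would
            # additionally bootstrap on the value of the next state.
            q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))
        return q

    q = train()
    policy = {(s, c): max(ACTIONS, key=lambda a: q.get((s, c, a), 0.0))
              for s in (1, 2, 3) for c in (True, False)}
    print(policy)  # offload larger tasks when the channel is good, run small ones locally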

    Technologies for urban and rural internet of things

    Nowadays, application domains such as smart cities, agriculture, and intelligent transportation require communication technologies that combine long transmission range with energy efficiency in order to satisfy the capabilities and constraints these applications rely on. In addition, in recent years, interest in Unmanned Aerial Vehicles (UAVs) providing wireless connectivity in such scenarios has substantially increased thanks to their flexible deployment. The first chapters of this thesis deal with LoRaWAN and Narrowband-IoT (NB-IoT), which recent trends identify as the most promising Low Power Wide Area Network technologies. While LoRaWAN is an open protocol that has gained a lot of interest thanks to its simplicity and energy efficiency, NB-IoT has been introduced by 3GPP as a radio access technology for massive machine-type communications, inheriting legacy LTE characteristics. This thesis offers an overview of the two, comparing them in terms of selected performance indicators. In particular, LoRaWAN is assessed both via simulations and experiments, considering different network architectures and solutions to improve its performance (e.g., a new Adaptive Data Rate algorithm); NB-IoT is then introduced to identify which technology is more suitable depending on the application considered. The second part of the thesis introduces the use of UAVs as flying base stations, denoted as Unmanned Aerial Base Stations (UABSs), which are considered one of the key pillars of 6G for offering service to a number of applications. To this end, the performance of an NB-IoT network is assessed considering a UABS following predefined trajectories. Then, machine learning algorithms based on reinforcement learning and meta-learning are considered to optimize the trajectory, as well as the radio resource management techniques the UABS may rely on, in order to provide service to both static (IoT sensors) and dynamic (vehicles) users. Finally, some experimental projects based on the technologies mentioned so far are presented.
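
    To make the Adaptive Data Rate idea mentioned above concrete, the sketch below implements a simplified ADR-style adjustment in which the network server uses the recent uplink SNR margin to lower a device's spreading factor and then its transmit power. The thresholds, step sizes, and required-SNR table are illustrative assumptions, not the thesis's algorithm.

    # Simplified LoRaWAN ADR-style decision: values below are illustrative.
    REQUIRED_SNR_DB = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

    def adr_adjust(snr_history_db, sf, tx_power_dbm, device_margin_db=10.0):
        """Return a new (spreading_factor, tx_power_dbm) for an end device."""
        margin = max(snr_history_db) - REQUIRED_SNR_DB[sf] - device_margin_db
        steps = int(margin // 3)                 # one step per 3 dB of spare margin
        while steps > 0 and sf > 7:              # first move to a faster data rate
            sf -= 1
            steps -= 1
        while steps > 0 and tx_power_dbm > 2:    # then save energy by reducing power
            tx_power_dbm -= 2
            steps -= 1
        return sf, tx_power_dbm

    # Example: a device at SF12 with a strong link is moved to a faster data rate.
    print(adr_adjust([-2.0, -4.5, -3.0], sf=12, tx_power_dbm=14))  # -> (10, 14)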

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks. Comment: 46 pages, 22 figures
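
    As a small, self-contained instance of the reinforcement-learning decision making discussed above, the sketch below applies a one-step value update to channel selection for a cognitive radio secondary user. The two-channel environment and its idle probabilities are invented purely for illustration.

    import random

    # Channel selection for a cognitive radio via a one-step value update.
    IDLE_PROB = {0: 0.8, 1: 0.3}   # assumed chance each channel is free of the primary user
    q = {0: 0.0, 1: 0.0}           # estimated value of transmitting on each channel
    alpha, eps = 0.1, 0.1

    for _ in range(10000):
        ch = random.choice((0, 1)) if random.random() < eps else max(q, key=q.get)
        reward = 1.0 if random.random() < IDLE_PROB[ch] else -1.0   # success vs collision
        q[ch] += alpha * (reward - q[ch])                           # incremental value update

    print(max(q, key=q.get))  # the secondary user learns to prefer the mostly-idle channel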

    Self-Evolving Integrated Vertical Heterogeneous Networks

    6G and beyond networks tend towards fully intelligent and adaptive designs in order to provide better operational agility in maintaining universal wireless access and supporting a wide range of services and use cases while dealing with network complexity efficiently. Such enhanced network agility will require developing a self-evolving capability in the design of both the network architecture and the resource management, so as to intelligently utilize resources, reduce operational costs, and achieve the desired quality of service (QoS). To enable this capability, considering an integrated vertical heterogeneous network (VHetNet) architecture appears inevitable due to its high inherent agility. Moreover, employing an intelligent framework is another crucial requirement for self-evolving networks to deal with real-time network optimization problems. Hence, in this work, to provide better insight into network architecture design in support of self-evolving networks, we highlight the merits of the integrated VHetNet architecture while proposing an intelligent framework for self-evolving integrated vertical heterogeneous networks (SEI-VHetNets). The impact of the challenges associated with the SEI-VHetNet architecture on network management is also studied, considering a generalized network model. Furthermore, the current literature on network management of integrated VHetNets, along with recent advancements in artificial intelligence (AI)/machine learning (ML) solutions, is discussed. Accordingly, the core challenges of integrating AI/ML in SEI-VHetNets are identified. Finally, potential future research directions for advancing the autonomous and self-evolving capabilities of SEI-VHetNets are discussed. Comment: 25 pages, 5 figures, 2 tables