45 research outputs found

    A survey of multi-access edge computing in 5G and beyond: fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), the emerging 5G network is foreseen to face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large data before sending it to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. MEC therefore enables a wide variety of applications in which real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works and discuss challenges and potential future directions for MEC research.

    Energy-Efficient Resource Allocation in Cloud and Fog Radio Access Networks

    PhD Thesis. With the development of cloud computing, radio access networks (RANs) are migrating to fully or partially centralised architectures, such as Cloud RAN (C-RAN) or Fog RAN (F-RAN). The novel architectures are able to support new applications with higher throughput, higher energy efficiency and better spectral efficiency. However, the more complex energy consumption features brought by these new architectures are challenging. In addition, the usage of Energy Harvesting (EH) technology and computation offloading in the novel architectures requires novel resource allocation designs. This thesis focuses on energy efficient resource allocation for Cloud and Fog RAN networks. Firstly, a joint user association (UA) and power allocation scheme is proposed for Heterogeneous Cloud Radio Access Networks with hybrid energy sources where Energy Harvesting technology is utilised. The optimisation problem is designed to maximise the utilisation of the renewable energy source. Through solving the proposed optimisation problem, the user association and power allocation policies are derived together to minimise the grid power consumption. Compared to the conventional UAs adopted in RANs, green power harvested by the renewable energy source can be better utilised, so the grid power consumption can be greatly reduced with the proposed scheme. Secondly, a delay-aware energy efficient computation offloading scheme is proposed for EH-enabled F-RANs, where fog access points (F-APs) are supported by renewable energy sources. The uneven distribution of the harvested energy brings dynamics into the offloading design and affects the delay experienced by users. The grid power minimisation problem is formulated and, based on the solutions derived, an energy efficient offloading decision algorithm is designed. Compared to an SINR-based offloading scheme, the total grid power consumption of all F-APs can be reduced significantly with the proposed offloading decision algorithm while meeting the latency constraint. Thirdly, energy-efficient computation offloading for mobile applications with shared data is investigated in a multi-user fog computing network. Taking advantage of the shared-data property of latency-critical applications such as virtual reality (VR) and augmented reality (AR), the energy minimisation problem is formulated. Then the optimal computation offloading and communication resource allocation policy is proposed, which is able to minimise the overall energy consumption of mobile users and the cloudlet server. Performance analysis indicates that the proposed policy outperforms other offloading schemes in terms of energy efficiency. The research works conducted in this thesis and the thorough performance analysis have revealed insights on energy efficient resource allocation design in Cloud and Fog RANs.
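    As a concrete illustration of the first contribution's intuition, the sketch below implements a greedy user-association rule that serves each demand from harvested energy first and draws from the grid only for the shortfall. It is a simplification under assumed power figures, not the thesis's optimisation-based scheme; the names `users`, `stations` and all values are hypothetical.

```python
# Illustrative sketch (not the thesis's method): greedy user association
# that prefers stations with spare harvested (renewable) energy, so grid
# power is drawn only when the green energy budget is exhausted.

def associate_users(users, stations):
    """Assign each user to the station that minimises added grid power.

    `users` maps user id -> required transmit power (W, hypothetical).
    `stations` maps station id -> remaining harvested energy budget (W).
    Returns (assignment dict, total grid power consumed).
    """
    assignment, grid_power = {}, 0.0
    # Serve the largest demands first so they get the green budget.
    for uid, demand in sorted(users.items(), key=lambda kv: -kv[1]):
        # Grid cost at each station: whatever the harvested budget
        # cannot cover must be drawn from the grid.
        best = min(stations, key=lambda s: max(0.0, demand - stations[s]))
        green_used = min(demand, stations[best])
        stations[best] -= green_used
        grid_power += demand - green_used
        assignment[uid] = best
    return assignment, grid_power

if __name__ == "__main__":
    users = {"u1": 1.2, "u2": 0.8, "u3": 2.0}   # demands (W, assumed)
    stations = {"bs1": 2.5, "bs2": 1.0}         # harvested budgets (W, assumed)
    print(associate_users(users, stations))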

    Traffic control for energy harvesting virtual small cells via reinforcement learning

    Due to the rapid growth of mobile data traffic, future mobile networks are expected to support at least 1000 times more capacity than 4G systems. This trend leads to an increasing energy demand from mobile networks, which raises both economic and environmental concerns. Energy costs are becoming an important part of the OPEX of Mobile Network Operators (MNOs). As a result, the shift towards energy-oriented design and operation of 5G and beyond systems has been emphasized by academia, industry, and standards bodies. In particular, the Radio Access Network (RAN) is the major energy-consuming part of cellular networks. To increase RAN efficiency, the Cloud Radio Access Network (CRAN) has been proposed to enable centralized cloud processing of baseband functions while Base Stations (BSs) are reduced to simple Remote Radio Heads (RRHs). The connection between the RRHs and the central cloud is provided by a high-capacity, very low latency fronthaul. Flexible functional splits between local BS sites and a central cloud have then been proposed to relax the CRAN fronthaul requirements via partial processing of baseband functions at the local BS sites. Moreover, Network Function Virtualization (NFV) and Software Defined Networking (SDN) enable flexibility in the placement and control of network functions. Relying on SDN/NFV with flexible functional splits, the network functions of small BSs can be virtualized and placed at different sites of the network. These small BSs are known as virtual Small Cells (vSCs). More recently, Multi-access Edge Computing (MEC) has been introduced, whereby BSs can leverage cloud computing capabilities and offer computational resources on an on-demand basis. On the other hand, Energy Harvesting (EH) is a promising technology ensuring both cost effectiveness and carbon footprint reduction. However, EH comes with challenges, mainly due to intermittent and unreliable energy sources. In EH Base Stations (EHBSs), it is important to intelligently manage the harvested energy as well as to ensure energy storage provision. Consequently, MEC-enabled EHBSs can open a new frontier in energy-aware processing and the sharing of processing units according to flexible functional split options. The goal of this PhD thesis is to propose energy-aware control algorithms for EH-powered vSCs for efficient utilization of the harvested energy and for lowering the grid energy consumption of the RAN, the most power-consuming part of the network. We leverage virtualization and MEC technologies for the dynamic provision of computational resources according to the functional split options employed by the vSCs. After describing the state of the art, the first part of the thesis focuses on offline optimization for efficient harvested energy utilization via dynamic functional split control in vSCs powered by EH. For this purpose, dynamic programming is applied to determine the performance bound, and a comparison is drawn against static configurations. The second part of the thesis focuses on online control methods, where reinforcement learning based controllers are designed and evaluated. In particular, focus is given to the design of multi-agent reinforcement learning to overcome the limitations of centralized approaches in terms of complexity and scalability. Both tabular and deep reinforcement learning algorithms are tailored to a distributed architecture with emphasis on enabling coordination among the agents.
    Policy comparison among the online controllers and against the offline bound, as well as the energy and cost saving benefits, are also analyzed.
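    To make the online control concrete, here is a minimal tabular Q-learning sketch, not the thesis's actual controller: it assumes a toy environment in which the state is a discretised battery level, the action is a functional split option, and the reward is the negative grid energy drawn in a slot. The harvesting and load figures are illustrative stand-ins.

```python
# Toy Q-learning sketch (assumptions, not the thesis's controller).
import random

N_BATTERY, SPLITS = 10, [0, 1, 2]   # battery levels, functional split options
LOCAL_LOAD = {0: 3, 1: 2, 2: 1}     # local energy each split needs per slot (assumed)
Q = {(b, a): 0.0 for b in range(N_BATTERY) for a in SPLITS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(battery, split):
    """One time slot: draw toy harvested energy, pay the split's local load."""
    harvest = random.randint(0, 3)                 # toy EH arrival
    need = LOCAL_LOAD[split]
    grid = max(0, need - battery)                  # shortfall bought from the grid
    battery = min(N_BATTERY - 1, battery - min(need, battery) + harvest)
    return battery, -grid                          # reward = negative grid energy

battery = N_BATTERY // 2
for _ in range(20000):
    a = (random.choice(SPLITS) if random.random() < eps
         else max(SPLITS, key=lambda s: Q[(battery, s)]))
    nxt, r = step(battery, a)
    best_next = max(Q[(nxt, s)] for s in SPLITS)
    Q[(battery, a)] += alpha * (r + gamma * best_next - Q[(battery, a)])
    battery = nxt

# Greedy policy learned per battery level.
print({b: max(SPLITS, key=lambda s: Q[(b, s)]) for b in range(N_BATTERY)})
```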

    Resource management with adaptive capacity in C-RAN

    This work was supported in part by the Spanish Ministry of Science through the project RTI2018-099880-B-C32, with ERDF funds, and by the FPI-UPC grant provided by the UPC. It has been done under the COST CA15104 IRACON EU project. Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it has to account simultaneously for throughput, latency, power efficiency, and optimization trade-offs. This work proposes the use of a modified and improved version of the realistic Vienna scenario defined in COST Action IC1004 to test two C-RAN deployments of different scales. First, a large-scale analysis with 628 macro-cells (Mcells) and 221 small-cells (Scells) is used to test different algorithms oriented to optimize the network deployment by minimizing delays, balancing the load among the Base Band Unit (BBU) pools, or clustering the Remote Radio Heads (RRHs) efficiently to maximize the multiplexing gain. After planning, real-time resource allocation strategies with Quality of Service (QoS) constraints should be optimized as well. To do so, a realistic small-scale scenario for the metropolitan area is defined by modeling the individual time-variant traffic patterns of 7000 users (UEs) connected to different services. The distribution of resources among UEs and BBUs is optimized by algorithms, based on a realistic calculation of the UEs' Signal to Interference and Noise Ratios (SINRs), that account for the required computational capacity per cell, the QoS constraints and the service priorities. However, the assumption of a fixed computational capacity at the BBU pools may result in underutilized or oversubscribed resources, thus affecting the overall QoS. As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC). For this reason, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average amount of unused resources by 96%, but there is still QoS degradation when the RCC is higher than the predicted computational capacity (PCC). For this reason, two new strategies are proposed and tested: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), reducing the average amount of unsatisfied resources by 99.9% and 98% compared to DRM-AC, respectively.
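    The error-shifting idea (DRM-AC-ES) can be sketched independently of the predictor: shift the predicted computational capacity (PCC) upward by the recent worst-case underestimation so the required computational capacity (RCC) rarely exceeds it. The moving-average predictor and the synthetic RCC trace below are assumptions standing in for the paper's SVM/TDNN/LSTM models.

```python
# Hedged sketch of error shifting over an arbitrary capacity predictor.
import numpy as np

def predict_pcc(history, window=8):
    """Toy predictor: moving average of recent required capacity (RCC)."""
    return float(np.mean(history[-window:]))

def shifted_pcc(history, errors, window=8):
    """Error-shifted prediction: add the recent worst underestimation margin."""
    margin = max(errors[-window:], default=0.0)
    return predict_pcc(history, window) + margin

# Synthetic RCC trace (illustrative, not from the paper).
rcc_trace = np.abs(np.random.default_rng(0).normal(100, 15, 200)).tolist()
errors, unsatisfied = [], 0
for t in range(8, len(rcc_trace)):
    pcc = shifted_pcc(rcc_trace[:t], errors)
    rcc = rcc_trace[t]
    # Track how far the *unshifted* predictor fell short of the real need.
    errors.append(max(0.0, rcc - predict_pcc(rcc_trace[:t])))
    unsatisfied += rcc > pcc
print(f"slots with unsatisfied resources: {unsatisfied}/{len(rcc_trace) - 8}")
```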

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in emerging 5G network design, dimensioning and optimization. The design, dimensioning and optimization of communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit rate and service time, as well as required quality of service and quality of experience parameters. Such a communication infrastructure presents many important challenges, including the study of the necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.

    DRL-based Energy-Efficient Baseband Function Deployments for Service-Oriented Open RAN

    Open Radio Access Network (Open RAN) has gained tremendous attention from industry and academia, with decentralized baseband functions spread across multiple processing units located at different places. However, the ever-expanding scope of RANs, along with fluctuations in resource utilization across different locations and timeframes, necessitates robust function management policies to minimize network energy consumption. Most recently developed strategies neglect the activation time and the energy required by the server activation process, although this process can offset the potential energy savings gained from server hibernation. Furthermore, user plane functions, which can be deployed on edge computing servers to provide low-latency services, have not been sufficiently considered. In this paper, a multi-agent deep reinforcement learning (DRL) based function deployment algorithm, coupled with a heuristic method, is developed to minimize energy consumption while fulfilling multiple requests and adhering to latency and resource constraints. In an 8-MEC network, the DRL-based solution approaches the performance of the benchmark while offering up to 51% energy savings compared to existing approaches. In a larger 14-MEC network, it maintains a 38% energy-saving advantage and ensures real-time response capabilities. Furthermore, this paper prototypes an Open RAN testbed to verify the feasibility of the proposed solution.
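    The activation-cost trade-off highlighted above reduces to a simple break-even calculation: hibernating a server saves energy only if the expected idle period exceeds the activation energy divided by the idle-versus-hibernation power gap. The sketch below illustrates this with assumed power and latency figures, not values from the paper.

```python
# Back-of-the-envelope sketch of the server hibernation trade-off.
# All figures are illustrative assumptions.

IDLE_POWER_W = 90.0            # server draw while idle but awake
HIBERNATE_POWER_W = 10.0       # server draw while hibernated
ACTIVATION_ENERGY_J = 8000.0   # extra energy spent waking the server
ACTIVATION_TIME_S = 30.0       # wake-up latency (matters for real-time UPFs)

def breakeven_idle_seconds():
    """Idle duration above which hibernation saves energy overall."""
    return ACTIVATION_ENERGY_J / (IDLE_POWER_W - HIBERNATE_POWER_W)

def should_hibernate(expected_idle_s, latency_budget_s):
    # Never hibernate if waking up would blow the service latency budget.
    if ACTIVATION_TIME_S > latency_budget_s:
        return False
    return expected_idle_s > breakeven_idle_seconds()

print(breakeven_idle_seconds())   # 100 s with the numbers above
print(should_hibernate(300, 60))  # True: long idle, tolerant latency budget
```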

    A Survey of Deep Learning for Data Caching in Edge Network

    The concept of edge caching in emerging 5G and beyond mobile networks is a promising method to deal with the traffic congestion problem in the core network as well as to reduce the latency of accessing popular content. In that respect, end-user demand for popular content can be satisfied by proactively caching it at the network edge, i.e., in close proximity to the users. In addition to model-based caching schemes, learning-based edge caching optimization has recently attracted significant attention, and the aim hereafter is to capture these recent advances for both model-based and data-driven techniques in the area of proactive caching. This paper summarizes the utilization of deep learning for data caching in edge networks. We first outline the typical research topics in content caching and formulate a taxonomy based on the network's hierarchical structure. Then, a number of key types of deep learning algorithms are presented, ranging from supervised learning to unsupervised learning as well as reinforcement learning. Furthermore, a comparison of the state-of-the-art literature is provided from the aspects of caching topics and deep learning methods. Finally, we discuss research challenges and future directions for applying deep learning to caching.
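    As a minimal illustration of proactive caching, the sketch below scores contents by an exponentially decayed request count and keeps the top-k in cache; in a deep learning pipeline, a learned popularity model would replace this hand-crafted estimator. The class, trace and parameters are hypothetical.

```python
# Illustrative proactive-caching sketch (not from the survey).
from collections import defaultdict

class ProactiveCache:
    def __init__(self, capacity, decay=0.9):
        self.capacity, self.decay = capacity, decay
        self.score = defaultdict(float)   # decayed popularity per content
        self.cached = set()

    def request(self, item):
        hit = item in self.cached
        # Decay old popularity, then credit the new request.
        for k in self.score:
            self.score[k] *= self.decay
        self.score[item] += 1.0
        # Proactively refresh the cache with the current top-k contents.
        top = sorted(self.score, key=self.score.get, reverse=True)
        self.cached = set(top[: self.capacity])
        return hit

cache = ProactiveCache(capacity=2)
trace = ["a", "b", "a", "c", "a", "b", "a", "a", "b", "c"]
hits = sum(cache.request(x) for x in trace)
print(f"hit rate: {hits / len(trace):.2f}")
```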

    Edge computing infrastructure for 5G networks: a placement optimization solution

    This thesis focuses on how to optimize the placement of the Edge Computing infrastructure for upcoming 5G networks. To this aim, the core contributions of this research are twofold: 1) a novel heuristic called Hybrid Simulated Annealing to tackle the NP-hard nature of the problem and 2) a framework called EdgeON providing a practical tool for real-life deployment optimization. In more detail, Edge Computing has grown into a key solution to 5G latency, reliability and scalability requirements. By bringing computing, storage and networking resources to the edge of the network, delay-sensitive applications, location-aware systems and upcoming real-time services leverage the benefits of a reduced physical and logical path between the end user and the data or service host. Nevertheless, the edge node placement problem raises critical concerns regarding deployment and operational expenditures (mainly due to the number of nodes to be deployed), current backhaul network capabilities and non-technical placement limitations. Common approaches to the placement of edge nodes are based on Mobile Edge Computing (MEC), where the processing capabilities are deployed at the Radio Access Network nodes, or on Facility Location Problem variations, where a simplistic cost function is used to determine where to optimally place the infrastructure. However, these methods typically lack the flexibility to be used for edge node placement under the strict technical requirements identified for 5G networks; they fail to place resources at the network edge for 5G ultra-dense networking environments in a network-aware manner. This doctoral thesis rigorously defines the Edge Node Placement Problem (ENPP) for 5G use cases and proposes a novel framework called EdgeON that aims to reduce the overall expenses of deploying and operating an Edge Computing network, taking into account the usage and characteristics of the in-place backhaul network and the strict requirements of a 5G-EC ecosystem. The developed framework implements several placement and optimization strategies, thoroughly assessing their suitability for solving the network-aware ENPP. The core of the framework is an in-house heuristic called Hybrid Simulated Annealing (HSA), which seeks to address the high complexity of the ENPP while avoiding the non-convergent behavior other traditional heuristics exhibit when applied to similar problems. The findings of this work validate our approach to solving the network-aware ENPP, the effectiveness of the proposed heuristic and the overall applicability of EdgeON. Thorough performance evaluations were conducted on the core placement solutions implemented, revealing the superiority of HSA when compared to widely used heuristics and common edge placement approaches (e.g., a MEC-based strategy). Furthermore, the practicality of EdgeON was tested through two main case studies placing services and virtual network functions over the previously optimally placed edge nodes. Overall, our proposal is an easy-to-use, effective and fully extensible tool that can be used by operators seeking to optimize the placement of computing, storage and networking infrastructure in the users' vicinity. Therefore, our main contributions not only set strong foundations towards a cost-effective deployment and operation of an Edge Computing network, but also directly impact the feasibility of upcoming 5G services and use cases and the extensive existing research regarding the placement of services and even network service chains at the edge.
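    For intuition about the heuristic core, below is a plain simulated-annealing sketch for a toy edge-node placement instance, not the thesis's Hybrid Simulated Annealing: the cost trades deployment cost against a nearest-node latency proxy, and the neighbourhood move toggles one candidate site. Coordinates, costs and the cooling schedule are illustrative assumptions.

```python
# Generic simulated annealing on a toy edge-node placement instance.
import math, random

random.seed(1)
USERS = [(random.random(), random.random()) for _ in range(50)]
SITES = [(random.random(), random.random()) for _ in range(12)]
NODE_COST = 0.5   # deployment cost per edge node (arbitrary units)

def cost(active):
    """Deployment cost plus a latency proxy: each user reaches its nearest active site."""
    if not active:
        return float("inf")
    latency = sum(min(math.dist(u, SITES[i]) for i in active) for u in USERS)
    return NODE_COST * len(active) + latency

def anneal(steps=5000, t0=1.0, cooling=0.999):
    state = {random.randrange(len(SITES))}
    best, best_cost, temp = set(state), cost(state), t0
    for _ in range(steps):
        cand = set(state)
        cand.symmetric_difference_update({random.randrange(len(SITES))})  # toggle a site
        delta = cost(cand) - cost(state)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = cand
            if cost(state) < best_cost:
                best, best_cost = set(state), cost(state)
        temp *= cooling
    return best, best_cost

print(anneal())
```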