
    Resource Calendaring for Mobile Edge Computing in 5G Networks

    Mobile Edge Computing (MEC) is a key technology for the deployment of next-generation (5G and beyond) mobile networks, specifically for reducing the latency experienced by mobile users that require ultra-low latency, high bandwidth, and real-time access to the radio network. In this paper, we propose an optimization framework that considers several key aspects of the resource allocation problem for MEC, carefully modeling and optimizing the allocation of network resources, including the computation and storage capacity available on network nodes as well as link capacity. Specifically, both an exact optimization model and an effective heuristic are provided, jointly optimizing (1) the connection admission decision, (2) their scheduling, also called calendaring, (3) their routing, (4) the decision of which nodes will serve such connections, and (5) the amount of processing and storage capacity reserved on the chosen nodes. Numerical experiments conducted in several real-size network scenarios demonstrate that the heuristic performs close to the optimum in all the considered scenarios while exhibiting low computing time.
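
    As a concrete toy illustration of these joint decisions (not the paper's exact model or heuristic), the Python sketch below greedily performs admission, calendaring and node selection: each request asks for CPU and link capacity in one slot within its allowed time window and is admitted on the first node/slot pair with enough residual capacity, otherwise rejected. The node names, horizon length and numeric demands are invented for illustration.

```python
# Toy greedy sketch of joint admission, calendaring and node selection.
# All names and capacities below are illustrative assumptions.

HORIZON = 6  # number of scheduling slots in the planning horizon

nodes = {  # residual CPU and link capacity per slot (toy values)
    "mec-a": {"cpu": [8] * HORIZON, "link": [10] * HORIZON},
    "mec-b": {"cpu": [4] * HORIZON, "link": [20] * HORIZON},
}

requests = [  # (id, cpu demand, link demand, earliest slot, latest slot)
    ("r1", 5, 6, 0, 2),
    ("r2", 4, 8, 0, 5),
    ("r3", 6, 4, 1, 3),
]

admitted, rejected = {}, []
for rid, cpu, bw, t_min, t_max in requests:
    placed = False
    for t in range(t_min, t_max + 1):          # calendaring: pick a slot
        for name, cap in nodes.items():        # node selection
            if cap["cpu"][t] >= cpu and cap["link"][t] >= bw:
                cap["cpu"][t] -= cpu           # reserve processing capacity
                cap["link"][t] -= bw           # reserve link capacity
                admitted[rid] = (name, t)
                placed = True
                break
        if placed:
            break
    if not placed:
        rejected.append(rid)                   # admission: reject the request

print("admitted:", admitted)
print("rejected:", rejected)
```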

    Deep Reinforcement Learning for Performance-Aware Adaptive Resource Allocation in Mobile Edge Computing

    © 2020 Binbin Huang et al. Mobile edge computing (MEC) provides relatively rich computing resources in close proximity to mobile users, enabling resource-limited mobile devices to offload workloads to nearby edge servers and thereby greatly reducing the processing delay of various mobile applications and the energy consumption of mobile devices. Despite these advantages, when a large number of mobile users simultaneously offload their computation tasks to an edge server, the computation and communication resources of the edge server are limited, and inefficient resource allocation fails to make full use of them, wasting resources and resulting in low system performance (the weighted sum of the number of processed tasks, the number of punished tasks, and the number of dropped tasks). Effectively allocating the computing and communication resources to multiple mobile users is therefore a challenging problem. To cope with this problem, we propose a performance-aware resource allocation (PARA) scheme whose goal is to maximize the long-term system performance. More specifically, we first build the multiuser resource allocation architecture for computing workloads and transmitting result data to mobile devices. Then, we formulate the multiuser resource allocation problem as a Markov Decision Process (MDP). To solve this problem, the PARA scheme adopts a deep deterministic policy gradient (DDPG) to derive the optimal resource allocation policy. Finally, extensive simulation experiments demonstrate the effectiveness of the PARA scheme.
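
    As a rough sketch of the MDP formulation described above (an assumed structure, not the authors' exact model), the state below tracks per-user queue backlogs, the action splits the edge server's CPU and bandwidth among users, and the reward is the weighted sum of processed, punished and dropped tasks; a DDPG actor would replace the random placeholder policy.

```python
import random

# Toy MDP sketch for multiuser edge resource allocation; all numeric values
# and the queue dynamics are illustrative assumptions, not the PARA model.

N_USERS = 3
CPU_TOTAL, BW_TOTAL = 10.0, 10.0          # edge server capacity (toy units)
W_PROC, W_PUN, W_DROP = 1.0, -0.5, -1.0   # assumed reward weights
QUEUE_LIMIT = 5                           # per-user task buffer size

def step(queues, action):
    """One transition: action = (cpu_shares, bw_shares), each summing to 1."""
    cpu_shares, bw_shares = action
    processed = punished = dropped = 0
    for i in range(N_USERS):
        # A task needs CPU to be computed and bandwidth to return its result.
        served = int(min(cpu_shares[i] * CPU_TOTAL, bw_shares[i] * BW_TOTAL))
        done = min(served, queues[i])
        processed += done
        queues[i] -= done
        punished += queues[i]                 # leftover backlog misses its deadline
        arrivals = random.randint(0, 3)       # newly offloaded tasks
        space = QUEUE_LIMIT - queues[i]
        queues[i] += min(arrivals, space)
        dropped += max(arrivals - space, 0)   # overflowing tasks are dropped
    reward = W_PROC * processed + W_PUN * punished + W_DROP * dropped
    return queues, reward

def random_policy():
    """Placeholder for the DDPG actor: random normalized resource shares."""
    raw = [random.random() for _ in range(N_USERS)]
    shares = [r / sum(raw) for r in raw]
    return shares, shares[:]                  # same split for CPU and bandwidth

queues, ret = [0] * N_USERS, 0.0
for _ in range(20):
    queues, r = step(queues, random_policy())
    ret += r
print("episode return:", round(ret, 2))
```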

    Integration of Clouds to Industrial Communication Networks

    Cloud computing, owing to its ubiquitousness, scalability and on-demand access, has transformed many traditional sectors, such as telecommunications and manufacturing. As the Fifth Generation Wireless Specifications (5G) emerge, the demand for ubiquitous and re-configurable computing resources to handle the tremendous traffic from omnipresent mobile devices has been put forward, and therein lies the adoption of the cloud-native model in the service delivery of telecommunication networks. However, it takes phased approaches to successfully transform the traditional telco infrastructure into a softwarized model, especially for Radio Access Networks (RANs), which, as of now, mostly rely on purpose-built Digital Signal Processors (DSPs) for computing and processing tasks. On the other hand, Industry 4.0 is leading the digital transformation of the manufacturing sectors, wherein industrial networks are evolving towards wireless connectivity and automation process management is shifting to clouds. However, such integration may introduce unwanted disturbances to critical industrial automation processes, which makes it challenging to guarantee the performance of critical applications when these different systems are integrated. In the work presented in this thesis, we explore the feasibility of integrating wireless communication, industrial networks and cloud computing. We mainly investigate the delay-related challenges and the performance impacts of using cloud-native models for critical applications, and we design a solution aimed at diminishing the performance degradation caused by the integration of cloud computing.

    Orchestration and Scheduling of Resources in Softwarized Networks

    The Fifth Generation (5G) era is touted as the next generation of mobile networks that will unleash new services and network capabilities, opening up a whole new line of businesses characterized by top-notch Quality of Service (QoS) and Quality of Experience (QoE), empowered by many recent advancements in network softwarization, and providing innovative on-demand service provisioning on a shared underlying network infrastructure. 5G networks will support the immense explosion of the Internet of Things (IoT), with an expected growth to billions of connected IoT devices by 2020, providing a wide range of services spanning from low-cost sensor-based metering services to low-latency communication services touching the health, education and automotive sectors, among others. Mobile operators are striving to find a cost-effective network solution that will enable them to continuously and automatically upgrade their networks based on their ever-growing customers' demands, in the quest of fulfilling the new rising opportunities of offering novel services empowered by the many emerging IoT devices. Thus, departing from the shortfalls of legacy hardware (i.e., high cost, difficult management and updates, etc.) and learning from the advantages of virtualization technologies, which enabled the sharing of computing resources in a cloud environment, mobile operators have started to leverage the idea of network softwarization through several emerging technologies. Network Function Virtualization (NFV) promises an ultimate reduction of Capital Expenditures (CAPEX) and high flexibility in resource provisioning and service delivery by replacing hardware equipment with software. Software Defined Networking (SDN) offers network and mobile operators programmable traffic management and delivery. These technologies will enable the launch of the Multi-Access Edge Computing (MEC) paradigm, which promises to fulfill the 5G requirement of providing low-latency services by bringing the computing resources to the edge of the network, in close vicinity to the users, hence assisting their resource-limited IoT devices in delivering the needed services. By leveraging network softwarization, these technologies will initiate a tremendous re-design of current networks, which will be transformed into self-managed, software-based networks exploiting multiple benefits ranging from flexibility and programmability to automation and elasticity, among others. This dissertation elaborates on and addresses key challenges related to enabling this re-design of current networks to support a smooth integration of the NFV and MEC technologies. It provides a profound understanding of, and novel contributions to, resource and service provisioning and scheduling, towards enabling efficient resource and network utilization of the underlying infrastructure by leveraging several optimization and game-theoretic techniques. In particular, we first investigate the interplay between network function mapping, traffic routing and Network Service (NS) scheduling in NFV-based networks, and present a Column Generation (CG) decomposition method that solves the problem with considerable runtime improvement over mathematical-based formulations.
Given the increasing interest in providing low-latency services and the correlation between this objective and the goal of network operators to maximize their network admissibility through efficient utilization of their network resources, we revisit the latter problem and tackle it under different assumptions and objectives. Given its complexity, we present a novel game-theoretic approach that is able to provide a bounded solution to the problem. Further, we extend our work to the network edge, where we promote network elasticity and leverage virtualization technologies by addressing the problem of task offloading and scheduling along with the IoT application resource allocation problem. Given the complexity of the problem, we propose a Logic-Based Benders Decomposition (LBBD) method to efficiently solve it to optimality.
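
    For intuition on how a logic-based Benders decomposition of the task offloading and scheduling problem can be organized, the toy sketch below lets a master problem (brute force here, a MILP in practice) assign tasks to edge nodes, checks per-node deadline feasibility with an earliest-deadline-first subproblem, and adds no-good cuts for infeasible node assignments until the master optimum is schedulable. The task set, node speeds and costs are invented for illustration and are not taken from the dissertation.

```python
from itertools import product

# Toy logic-based-Benders-style loop; instance data is purely illustrative.
tasks = {  # processing time and deadline (same time units)
    "t1": {"proc": 4, "deadline": 8},
    "t2": {"proc": 3, "deadline": 5},
    "t3": {"proc": 2, "deadline": 4},
}
nodes = {"edge1": {"speed": 1.0}, "edge2": {"speed": 0.5}}  # relative CPU speed
cost = {("t1", "edge1"): 3, ("t1", "edge2"): 1,
        ("t2", "edge1"): 2, ("t2", "edge2"): 1,
        ("t3", "edge1"): 2, ("t3", "edge2"): 1}
task_ids = list(tasks)

def edf_feasible(assigned, node):
    """Subproblem: earliest-deadline-first schedulability on a single node."""
    clock = 0.0
    for t in sorted(assigned, key=lambda t: tasks[t]["deadline"]):
        clock += tasks[t]["proc"] / nodes[node]["speed"]
        if clock > tasks[t]["deadline"]:
            return False
    return True

cuts = []  # each cut forbids a task set proven unschedulable on a given node

def violates_cuts(assign):
    return any(all(assign[t] == n for t, n in cut) for cut in cuts)

best = None
while True:
    # Master problem: cheapest assignment that respects all Benders cuts.
    candidates = [dict(zip(task_ids, combo))
                  for combo in product(nodes, repeat=len(task_ids))]
    candidates = [a for a in candidates if not violates_cuts(a)]
    if not candidates:
        break                                  # instance is infeasible
    assign = min(candidates, key=lambda a: sum(cost[(t, a[t])] for t in task_ids))
    bad = [n for n in nodes
           if not edf_feasible([t for t in task_ids if assign[t] == n], n)]
    if not bad:
        best = assign                          # master optimum is schedulable
        break
    for n in bad:                              # add a no-good cut per bad node
        cuts.append([(t, n) for t in task_ids if assign[t] == n])

print("assignment:", best)
```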

    Resource Allocation in Networking and Computing Systems: A Security and Dependability Perspective

    In recent years, there has been a trend to integrate networking and computing systems, whose management is getting increasingly complex. Resource allocation is one of the crucial aspects of managing such systems and is affected by this increased complexity. Resource allocation strategies aim to effectively maximize performance, system utilization, and profit by considering virtualization technologies, heterogeneous resources, context awareness, and other features. In such a complex scenario, security and dependability are vital concerns that need to be considered in future computing and networking systems in order to provide future advanced services, such as mission-critical applications. This paper provides a comprehensive survey of the existing literature that considers security and dependability for resource allocation in computing and networking systems. The current research works are categorized by the type of allocated resources for different technologies, scenarios, issues, attributes, and solutions. The paper presents the research works on resource allocation that consider security and dependability, both singularly and jointly. Future research directions on resource allocation are also discussed. The paper shows that only a few works, even singularly, consider security and dependability in resource allocation in future computing and networking systems, and it highlights the importance of jointly considering security and dependability and the need for intelligent, adaptive and robust solutions. This paper aims to help researchers effectively consider security and dependability in future networking and computing systems.

    From geographically dispersed data centers towards hierarchical edge computing

    Internet-scale data centers are generally dispersed in different geographical regions. While the main goal of deploying geographically dispersed data centers is to provide redundancy, scalability and high availability, the geographic dispersity provides another opportunity for the efficient employment of global resources, e.g., utilizing price diversity in electricity markets or locational diversity in renewable power generation. In other words, an efficient approach for geographical load balancing (GLB) across geo-dispersed data centers can both maximize the utilization of green energy and minimize the cost of electricity. However, due to the different costs and disparate environmental impacts of renewable energy and brown energy, such a GLB approach should tap into the merits of separating the green energy utilization maximization problem from the brown energy cost minimization problem. To this end, the notions of green workload and green service rate, as opposed to brown workload and brown service rate, are proposed to facilitate this separation. In particular, a new optimization framework is developed to maximize the profit of running geographically dispersed data centers, based on the accuracy of the G/D/1 queueing model and taking into consideration multiple classes of service with an individual service level agreement deadline for each type of service. A new information-flow-graph-based model for geo-dispersed data centers is also developed, and based on this model, the achievable tradeoff between total and brown power consumption is characterized. Recently, the paradigm of edge computing has been introduced to push the computing resources away from the data centers to the edge of the network, thereby reducing the communication bandwidth requirement between the sources of data and the data centers. However, it is still desirable to investigate how and where at the edge of the network the computation resources should be provisioned. To this end, a hierarchical Mobile Edge Computing (MEC) architecture in accordance with the principles of the LTE-Advanced backhaul network is proposed, and an auction-based profit maximization approach that effectively facilitates the resource allocation to the subscribers of the MEC network is designed. A hierarchical capacity provisioning framework for MEC that optimally budgets computing capacities at the different hierarchical edge computing levels is also designed. The proposed scheme can efficiently handle the peak loads at the access point locations while coping with the resource poverty at the edge. Moreover, the code partitioning problem is extended to a scheduling problem over time and over the hierarchical mobile edge network, and accordingly, a new technique is proposed that leads to the optimal code partitioning in a reasonable time even for large call trees. Finally, a novel NOMA-augmented edge computing model is proposed that captures the gains of uplink NOMA in MEC users' energy consumption.
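
    The separation of green energy utilization maximization from brown energy cost minimization can be illustrated with a small two-phase allocator (toy numbers, not the thesis model): workload is first absorbed by each data center's renewable service rate, and only the residual is placed where brown electricity is cheapest.

```python
# Toy two-phase geographical load balancing sketch; all figures are invented.
datacenters = {            # green capacity, total capacity, brown price ($/unit)
    "us-west":  {"green": 40, "total": 100, "price": 0.12},
    "eu-north": {"green": 70, "total": 90,  "price": 0.08},
    "ap-east":  {"green": 20, "total": 120, "price": 0.15},
}
workload = 220             # total requests to place in this period

# Phase 1: green energy utilization maximization.
placement = {dc: 0 for dc in datacenters}
for dc, info in datacenters.items():
    take = min(info["green"], workload)
    placement[dc] += take
    workload -= take

# Phase 2: brown energy cost minimization for the residual workload.
brown_cost = 0.0
for dc, info in sorted(datacenters.items(), key=lambda kv: kv[1]["price"]):
    take = min(info["total"] - placement[dc], workload)
    placement[dc] += take
    brown_cost += take * info["price"]
    workload -= take

print("placement:", placement)
print("unserved workload:", workload, "brown energy cost:", round(brown_cost, 2))
```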