
    Intent-based zero-touch service chaining layer for software-defined edge cloud networks

    Edge Computing, together with Software Defined Networking (SDN) and Network Function Virtualization, is turning network infrastructures into distributed clouds extended to the edge, with services provided as dynamically established sequences of virtualized functions (i.e., dynamic service chains), thereby elastically addressing the different processing requirements of application data flows. However, service operators and application developers are reluctant to deal with descriptive configuration directives to establish and operate services, especially in the case of service chains. Intent-based Networking is emerging as a novel approach that simplifies network management and automates the implementation of the network operations required by applications. This paper presents an intent-based zero-touch service chaining layer that provides programmable provisioning of service chain paths in edge cloud networks. In addition to the dynamic and elastic deployment of data delivery services, the intent-based layer offers automated adaptation of service chain paths, according to the application goals expressed in the intent, to recover from sudden congestion events in the SDN network. Experiments carried out in an emulated network environment show the feasibility of the approach and evaluate the performance of the intent layer in terms of network resource usage and adaptation overhead.
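The core idea of the abstract above, mapping a declarative intent to a service chain path and re-solving the path on congestion, can be illustrated with a minimal sketch. All names, latencies, and the intent structure here are hypothetical; the paper's actual layer operates on an SDN controller, not a toy graph.

```python
import heapq

# Hypothetical illustration: an "intent" names the required chain and a
# latency goal; the layer maps it to a path through nodes hosting the
# functions, and recomputes the path when a link is reported congested.

def shortest_path(links, src, dst):
    """Dijkstra over {(u, v): latency_ms} directed links; returns (cost, path)."""
    adj = {}
    for (u, v), w in links.items():
        adj.setdefault(u, []).append((v, w))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj.get(node, []):
            heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Illustrative intent and topology (fw* host a firewall, tc1 a transcoder).
intent = {"chain": ["firewall", "transcoder"], "max_latency_ms": 30}
links = {("user", "fw1"): 5, ("fw1", "tc1"): 10, ("tc1", "app"): 5,
         ("user", "fw2"): 8, ("fw2", "tc1"): 10}

cost, path = shortest_path(links, "user", "app")      # via fw1, 20 ms

# Sudden congestion on (fw1, tc1): the layer adapts by re-solving.
congested = dict(links)
congested[("fw1", "tc1")] = 100
cost2, path2 = shortest_path(congested, "user", "app")  # via fw2, 23 ms
```

Both the initial and the adapted path still satisfy the intent's 30 ms goal, which is the "zero-touch" adaptation the abstract describes: the application restates nothing, only the layer re-plans.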

    Mobile Edge Cloud Network Design Optimization

    Major interest is currently given to integrating clusters of virtualization servers, also referred to as 'cloudlets' or 'edge clouds', into the access network to allow higher performance and reliability in the access to mobile edge computing services. We tackle the edge cloud network design problem for mobile access networks. In our model, virtual machines (VMs) are associated with mobile users and allocated to cloudlets. Designing an edge cloud network implies first determining where to install cloudlet facilities among the available sites, then assigning sets of access points, such as base stations, to cloudlets, while supporting VM orchestration and considering partial user mobility information as well as the satisfaction of service-level agreements (SLAs). We present link-path formulations supported by heuristics to compute solutions in reasonable time. We quantify the advantage of considering mobility for both users and VMs: up to 20% fewer users see their SLA unsatisfied, at the cost of a small increase in the number of opened facilities. We compare two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, and determine that a high preference should be given to live migration, while bulk migration seems a feasible alternative only for delay-stringent, tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.
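The two-step design decision described above, first choosing which candidate sites to open and then assigning base stations to the opened cloudlets, can be sketched with a simple greedy heuristic. This is only an illustrative stand-in: the paper uses link-path ILP formulations with dedicated heuristics, and all names and latency values below are invented.

```python
# Hypothetical greedy sketch of cloudlet placement: repeatedly open the
# candidate site that most reduces total base-station-to-cloudlet latency,
# then assign each base station to its closest opened cloudlet.

def greedy_placement(latency, sites, budget):
    """latency[bs][site] in ms; open up to `budget` sites greedily."""
    opened = []
    stations = list(latency)

    def total_cost(open_set):
        return sum(min(latency[bs][s] for s in open_set) for bs in stations)

    for _ in range(budget):
        best = min((s for s in sites if s not in opened),
                   key=lambda s: total_cost(opened + [s]))
        opened.append(best)
    # Assignment step: closest opened cloudlet per base station.
    assign = {bs: min(opened, key=lambda s: latency[bs][s]) for bs in stations}
    return opened, assign

# Illustrative latencies from three base stations to two candidate sites.
latency = {"bs1": {"A": 2, "B": 9},
           "bs2": {"A": 8, "B": 3},
           "bs3": {"A": 4, "B": 6}}
opened, assign = greedy_placement(latency, ["A", "B"], budget=1)
```

With a budget of one facility, site A wins (total latency 14 ms vs 18 ms for B), and all base stations attach to it; a larger budget would split the assignment. SLA and mobility constraints, central to the paper, are deliberately omitted here.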

    Computation offloading in mobile edge computing: an optimal stopping theory approach

    In recent years, new mobile devices and applications with different functionalities and uses, such as drones, Autonomous Vehicles (AVs) and highly advanced smartphones, have emerged. Such devices are now able to launch applications such as augmented and virtual reality, intensive contextual data processing, intelligent vehicle control, traffic management, data mining and interactive applications. Although these mobile nodes have the computing and communication capabilities to run such applications, they remain unable to handle them efficiently, mainly due to the significant processing required over relatively short timescales. Additionally, they consume a considerable amount of battery power. Such limitations have motivated the idea of computation offloading, where computing tasks are sent to the Cloud instead of being executed locally at the mobile node. The technical concept behind this idea is referred to as Mobile Cloud Computing (MCC). However, using the Cloud for computational task offloading of mobile applications introduces significant latency and adds load to the radio and backhaul of the mobile networks. To cope with these challenges, the Cloud's resources are being deployed near the users at the edge of the network, in places such as mobile networks at the Base Station (BS) or indoor locations such as Wi-Fi and 3G/4G access points. This architecture is referred to as Mobile Edge Computing or Multi-access Edge Computing (MEC). Computation offloading in such a setting faces the challenge of deciding when, and to which server, to offload computational tasks. This dissertation aims at designing time-optimised task-offloading decision-making algorithms for MEC environments, so as to find the optimal time for task offloading. The random variables that can influence the expected processing time at the MEC server are investigated using various probability distributions and representations.
    In the context being assessed, while the mobile node is sequentially roaming through (connecting to) a set of MEC servers, it has to decide locally and autonomously which server should be used for offloading the computing task. To deal with this sequential problem, the offloading decision is modelled as an optimal stopping time problem, adopting the principles of Optimal Stopping Theory (OST). Three assessment approaches, namely simulation, real data sets and an actual implementation on real devices, are used to evaluate the performance of the models. The results indicate that OST-based offloading strategies can play an important role in optimising the task offloading decision. In particular, in the simulation approach, the average processing time achieved by the proposed models is within only 10% of the Optimal. On the real data sets, the models remain near-optimal, within 25% of the Optimal, while in the real implementation the models select the Optimal node for processing the task most of the time. Furthermore, the presented algorithms are lightweight and local, and can hence be implemented on mobile nodes (for instance, vehicles or smartphones).
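The sequential decision described above, committing irrevocably to one of a stream of MEC servers seen while roaming, can be illustrated with the classic 1/e ("secretary") stopping rule, a textbook instance of Optimal Stopping Theory. This is only a sketch of the flavour of such rules: the dissertation's actual models are richer and distribution-aware, and the numbers below are invented.

```python
import math

# Hypothetical OST-style rule: observe the first n/e servers without
# committing, then offload to the first subsequent server whose expected
# processing time beats every one observed so far.

def choose_server(expected_times):
    """Pick the index of the server to offload to; servers are seen one
    by one and the decision at each step is irrevocable."""
    n = len(expected_times)
    cutoff = max(1, int(n / math.e))        # observation phase: never commit
    best_seen = min(expected_times[:cutoff])
    for i in range(cutoff, n):
        if expected_times[i] < best_seen:
            return i                        # first server beating the sample
    return n - 1                            # forced to take the last one

# Expected processing times (ms) of servers passed while roaming.
print(choose_server([30, 25, 40, 10, 35]))  # stops at index 1 (25 ms)
print(choose_server([10, 20, 30, 40]))      # never improves: last server, 3
```

The first call stops early at 25 ms even though a 10 ms server appears later, which is exactly the trade-off OST quantifies: the rule maximises the probability of a good choice without seeing the future, matching the near-optimal (within 10-25%) results the abstract reports.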

    Linking Virtual Machine Mobility to User Mobility

    Cloud applications heavily rely on the network communication infrastructure, whose stability and latency directly affect the quality of experience. As mobile devices need to rapidly retrieve data from the cloud, delivering the lowest possible access latency at the best reliability becomes an extremely important goal. In this paper, we specify a cloud access overlay protocol architecture to improve cloud access performance in distributed data-center (DC) cloud fabrics. We explore how linking virtual machine (VM) mobility and routing to user mobility can compensate for the performance decrease due to increased user-cloud network distance, by building an online cloud scheduling solution that optimally switches VM routing locators and relocates VMs across DC sites as a function of user-DC overlay network states. We evaluate our solution: 1) on a real distributed DC testbed spanning all of France, showing that we can grant a very high transfer-time gain, and 2) by emulating the situation of Internet service providers (ISPs) and over-the-top (OTT) cloud providers, exploiting thousands of real France-wide user displacement traces, finding a median throughput gain from 30% for OTT scenarios to 40% for ISP scenarios, the large majority of this gain being granted by adaptive VM mobility.
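The VM relocation decision at the heart of the abstract above, following the user only when the latency gain justifies the disruption of moving the VM, can be sketched as a simple amortised-cost test. The threshold, costs, and access counts are illustrative assumptions, not the paper's actual scheduling model.

```python
# Hypothetical sketch of the relocation decision: migrate the VM to the
# DC site closest to the user's new attachment point only when the total
# latency saved over the remaining accesses outweighs a one-off migration
# penalty (standing in for bulk- or live-migration cost).

def should_migrate(cur_latency_ms, best_latency_ms,
                   migration_cost_ms=200, remaining_accesses=100):
    """True if relocating pays off over the remaining cloud accesses."""
    saving = (cur_latency_ms - best_latency_ms) * remaining_accesses
    return saving > migration_cost_ms

# User moved: VM in a distant DC (40 ms) vs a nearby DC (12 ms).
assert should_migrate(40, 12)                              # 2800 > 200: move
assert not should_migrate(14, 12, remaining_accesses=10)   # 20 < 200: stay
```

Keeping the VM put when the gain is marginal is what lets adaptive VM mobility deliver most of the 30-40% throughput gain the paper reports without constant churn.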