
    Characterizing Docker Overhead in Mobile Edge Computing Scenarios

    Mobile Edge Computing (MEC) is an emerging network paradigm that provides cloud and IT services at the point of access of the network. Such proximity to the end user translates into ultra-low latency and high bandwidth, while, at the same time, it alleviates traffic congestion in the network core. Due to the need to run servers on edge nodes (e.g., an LTE-A macro eNodeB), a key element of MEC architectures is to ensure server portability and low overhead. A possible tool that can be used for this purpose is Docker, a framework that allows easy, fast deployment of Linux containers. This paper addresses the suitability of Docker in MEC scenarios by quantifying the CPU consumed by Docker when running two different containerized services: multiplayer gaming and video streaming. Our tests, run with varying numbers of clients and servers, yield different results for the two case studies: for the gaming service, the overhead logged by Docker increases only with the number of servers; conversely, for the video streaming case, the overhead is not affected by the number of either clients or servers.
    Comment: 6 pages, 9 images, 2 tables
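
The paper's measurement harness is not reproduced in this abstract, so the following is only a minimal sketch of one way to sample per-container CPU usage, using the real `docker stats` CLI and its `--format` placeholders; the sampling window, interval, and averaging are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's harness): sample per-container CPU
# usage with `docker stats` and average it over a measurement window.
import subprocess
import time

def sample_cpu_percent():
    """Return {container_name: cpu_percent} from one `docker stats` snapshot."""
    out = subprocess.check_output(
        ["docker", "stats", "--no-stream", "--format", "{{.Name}} {{.CPUPerc}}"],
        text=True,
    )
    samples = {}
    for line in out.splitlines():
        name, cpu = line.rsplit(" ", 1)   # e.g. "game-server 0.15%"
        samples[name] = float(cpu.rstrip("%"))
    return samples

def measure(window_s=60, interval_s=5):
    """Average CPU% per container over a window, e.g. while clients run."""
    totals, count = {}, 0
    end = time.time() + window_s
    while time.time() < end:
        for name, cpu in sample_cpu_percent().items():
            totals[name] = totals.get(name, 0.0) + cpu
        count += 1
        time.sleep(interval_s)
    return {name: total / count for name, total in totals.items()}

if __name__ == "__main__":
    print(measure(window_s=30, interval_s=5))
```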

    A HyperNet Architecture

    Network virtualization is becoming a fundamental building block of future Internet architectures. By adding networking resources into the “cloud”, it is possible for users to rent virtual routers from the underlying network infrastructure, connect them with virtual channels to form a virtual network, and tailor the virtual network (e.g., load application-specific networking protocols, libraries, and software stacks onto the virtual routers) to carry out a specific task. In addition, network virtualization technology allows such special-purpose virtual networks to co-exist on the same network infrastructure without interfering with each other. Although the underlying network resources needed to support virtualized networks are rapidly becoming available, constructing a virtual network from the ground up and using it is a challenging and labor-intensive task, one best left to experts. To tackle this problem, we introduce the concept of a HyperNet, a pre-built, pre-configured network package that a user can easily deploy to create, or join to access, a virtual network for a specific task (e.g., multicast video conferencing). HyperNets package together the network topology configuration, software, and network services needed to create and deploy a custom virtual network. Users download HyperNets from HyperNet repositories and then “run” them on virtualized network infrastructure, much like users download and run virtual appliances on a virtual machine. To support the HyperNet abstraction, we created a Network Hypervisor service that provides a set of APIs that can be called to create a virtual network with certain characteristics. To evaluate the HyperNet architecture, we implemented several example HyperNets and ran them on our prototype implementation of the Network Hypervisor. Our experiments show that the Hypervisor API can be used to compose almost any special-purpose network: networks capable of carrying out functions that the current Internet does not provide. Moreover, the design of our HyperNet architecture is highly extensible, enabling developers to write high-level libraries (using the Network Hypervisor APIs) to achieve complicated tasks.
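
The abstract names the Network Hypervisor API but does not list its calls, so the sketch below only illustrates the usage pattern it describes (create virtual routers, connect them with virtual channels, load application-specific software, deploy). Every class and method name here (HyperNet, add_router, connect, load) is hypothetical, not the thesis's actual API.

```python
# Hypothetical in-memory model of the HyperNet composition workflow;
# a real Network Hypervisor would back these calls with infrastructure APIs.
from dataclasses import dataclass, field

@dataclass
class VirtualRouter:
    name: str
    software: list = field(default_factory=list)

    def load(self, package: str):
        """Load an application-specific protocol stack onto this router."""
        self.software.append(package)

@dataclass
class HyperNet:
    routers: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add_router(self, name: str) -> VirtualRouter:
        self.routers[name] = VirtualRouter(name)
        return self.routers[name]

    def connect(self, a: str, b: str):
        """Join two routers with a virtual channel."""
        self.links.append((a, b))

# Compose a small multicast-style topology, then "deploy" it.
net = HyperNet()
net.add_router("source")
for leaf in ("edge-1", "edge-2", "edge-3"):
    r = net.add_router(leaf)
    r.load("multicast-routing-stack")   # application-specific software
    net.connect("source", leaf)
print(f"HyperNet with {len(net.routers)} routers and {len(net.links)} links")
```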

    Mitigating the effects of vendor lock-in in edge cloud environments with open-source technologies

    Cloud computing has recently been at the center of attention, and its popularity has increased significantly. More and more companies decide to run their applications in a cloud. However, this introduces certain problems, such as vendor lock-in: without a widely used standard, systems become incompatible with each other. This thesis introduces a way to reduce the risk of vendor lock-in, built on open-source technologies in order to make it available to as many people as possible. The explored solution is easier to use and more lightweight than existing alternatives. Furthermore, the thesis recommends certain technologies over others to further reduce the risk of being locked in to a single cloud provider.

    Live Service Migration in Mobile Edge Clouds

    Mobile edge clouds (MECs) bring the benefits of the cloud closer to the user by installing small cloud infrastructures at the network edge. This enables a new breed of real-time applications, such as instantaneous object recognition and safety assistance in intelligent transportation systems, that require very low latency. One key issue that comes with proximity is how to ensure that users always receive good performance as they move across different locations. Migrating services between MECs is seen as the means to achieve this. This article presents a layered framework for migrating active service applications that are encapsulated either in virtual machines (VMs) or containers. This layering approach allows a substantial reduction in service downtime. The framework is easy to implement using readily available technologies, and one of its key advantages is that it supports containers, a promising emerging technology that offers tangible benefits over VMs. The migration performance of various real applications is evaluated by experiments under the presented framework, and insights drawn from the experimental results are discussed.
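
The article's layered framework is not detailed in this abstract; the sketch below shows one plausible building block for the container case, assuming the large image layers are already present on the target host so that only the small runtime-state layer has to move. It relies on Docker's experimental checkpoint/restore support (backed by CRIU); the container name, checkpoint directory, and target host are placeholders.

```python
# Minimal sketch of one container-migration step, in the spirit of a
# layered approach: ship only the checkpointed runtime state, because the
# image is assumed to be pre-pulled on the target. Requires Docker with
# experimental features enabled and CRIU installed on both hosts.
import subprocess

CONTAINER = "game-server"        # placeholder container name
CHECKPOINT = "mig1"
CKPT_DIR = "/tmp/checkpoints"    # placeholder checkpoint export directory

def run(*cmd):
    subprocess.run(cmd, check=True)

# 1. Freeze the running service and dump its in-memory state.
run("docker", "checkpoint", "create",
    "--checkpoint-dir", CKPT_DIR, CONTAINER, CHECKPOINT)

# 2. Ship only the small state layer to the target host.
run("rsync", "-a", f"{CKPT_DIR}/", f"edge-target:{CKPT_DIR}/")

# 3. On the target host: create the container from the same (pre-pulled)
#    image, then resume from the checkpoint instead of a cold start:
#      docker create --name game-server <image>
#      docker start --checkpoint-dir /tmp/checkpoints \
#                   --checkpoint mig1 game-server
```

Because step 2 moves only megabytes of state rather than the full image, the service-downtime window is limited to the dump, transfer, and restore of that top layer.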

    Efficient GPU Cloud architectures for outsourcing high-performance processing to the Cloud

    The world is becoming increasingly dependent on compute-intensive applications. The appearance of new paradigms, such as the Internet of Things (IoT), and advances in technologies such as Computer Vision (CV) and Artificial Intelligence (AI) are creating a demand for high-performance applications. In this regard, Graphics Processing Units (GPUs) can provide better performance by allowing a high degree of data parallelism. These devices are also beneficial in specialized fields of the manufacturing industry, such as CAD/CAM. For all these applications, there is a recent tendency to offload the computation to the Cloud through a computation-offloading architecture. However, the use of GPUs in the Cloud presents some inefficiencies: GPU virtualization is still not fully resolved, as our survey of the GPU Cloud instances currently offered by the main Cloud providers shows. To address these problems, this paper first reviews current GPU technologies and programming techniques that increase concurrency, and then proposes a Cloud computing outsourcing architecture to make more efficient use of these devices in the Cloud.
    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work was supported by the Spanish Research Agency (AEI) under project HPC4Industry PID2020-120213RB-I00.
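
As an example of the concurrency-increasing programming techniques the paper reviews, the sketch below overlaps independent GPU work using CUDA streams via CuPy; the toy workload (per-stream matrix multiplications) and the sizes are arbitrary choices for illustration, not taken from the paper.

```python
# Illustrative use of CUDA streams: independent work queued on different
# streams may execute concurrently on the GPU instead of serializing.
import cupy as cp

n_streams, size = 4, 2048
streams = [cp.cuda.Stream(non_blocking=True) for _ in range(n_streams)]
results = []

for s in streams:
    with s:  # operations in this block are enqueued on stream `s`
        a = cp.random.random((size, size), dtype=cp.float32)
        results.append(a @ a)  # independent matmul per stream

for s in streams:
    s.synchronize()  # wait for all streams before reading results
print("done:", [float(r[0, 0]) for r in results])
```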