7 research outputs found

    On the Fly Orchestration of Unikernels: Tuning and Performance Evaluation of Virtual Infrastructure Managers

    Network operators face significant challenges in meeting the demand for more bandwidth, agile infrastructures, and innovative services, all while keeping costs low. Network Functions Virtualization (NFV) and Cloud Computing are emerging as key trends of 5G network architectures, providing flexibility, fast instantiation times, support for Commercial Off-The-Shelf hardware, and significant cost savings. NFV leverages Cloud Computing principles to move data-plane network functions from expensive, closed, proprietary hardware to so-called Virtual Network Functions (VNFs). In this paper, we address the management of virtual computing resources (Unikernels) for the execution of VNFs. This functionality is performed by the Virtual Infrastructure Manager (VIM) in the NFV MANagement and Orchestration (MANO) reference architecture. We discuss the instantiation process of virtual resources and propose a generic reference model, starting from an analysis of three open-source VIMs: OpenStack, Nomad and OpenVIM. We extend these VIMs with support for special-purpose Unikernels, aiming to reduce the duration of the instantiation process. We evaluate several performance aspects of the VIMs, considering both stock and tuned versions. The VIM extensions and performance evaluation tools are available under a liberal open-source licence.
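The generic instantiation model the paper proposes can be illustrated with a toy sketch (phase names and durations below are purely illustrative, not taken from the paper): a VIM walks a request through roughly the same pipeline regardless of backend, and tuning mainly shrinks the phases that dominate for tiny special-purpose Unikernels, such as image handling and boot.

```python
import time

# Illustrative phases of a generic VIM instantiation pipeline; the real
# phase names differ per VIM (OpenStack, Nomad, OpenVIM).
PHASES = ["api_request", "scheduling", "image_retrieval", "network_plug", "boot"]

def instantiate(durations):
    """Walk the pipeline, recording cumulative elapsed time after each phase."""
    timeline = {}
    start = time.monotonic()
    for phase in PHASES:
        time.sleep(durations.get(phase, 0.0))  # stand-in for real work
        timeline[phase] = time.monotonic() - start
    return timeline

# "Stock" vs "tuned" VIM: the tuned run cuts image retrieval and boot time,
# which dominate when the image is a small Unikernel.
stock = instantiate({"image_retrieval": 0.05, "boot": 0.03})
tuned = instantiate({"image_retrieval": 0.01, "boot": 0.005})
# The last phase's cumulative time is the total instantiation duration.
print(f"stock: {stock['boot']:.3f}s  tuned: {tuned['boot']:.3f}s")
```

The point of the sketch is only that the same pipeline structure lets stock and tuned versions be compared phase by phase, which mirrors how the paper evaluates both variants.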

    Edge Computing Resource Management System: a Critical Building Block! Initiating the debate via OpenStack

    While it is clear that edge infrastructures are required for emerging use cases related to IoT, VR or NFV, there is currently no resource management system able to deliver, for the edge, all the features that made cloud computing successful (e.g., an OpenStack for the edge). Since building a system from scratch is seen by many as impractical, this paper offers reflections on how existing solutions can be leveraged. To that end, we provide a list of the features required to operate and use edge computing resources, and investigate how an existing IaaS manager (i.e., OpenStack) satisfies these requirements. Finally, we identify from this study two approaches to designing an edge infrastructure manager that fulfils our requirements, and discuss their pros and cons.

    A Holistic Monitoring Service for Fog/Edge Infrastructures: a Foresight Study

    Although academic and industry experts now advocate moving from large, centralized Cloud Computing infrastructures to smaller ones massively distributed at the edge of the network, management systems to operate and use such infrastructures are still missing. In this paper, we focus on the monitoring service, a key element of any management system in charge of operating a distributed infrastructure. Several solutions have been proposed in the past for cluster, grid and cloud systems; however, none is well suited to the Fog/Edge context. Our goal in this study is to pave the way towards a holistic monitoring service for a Fog/Edge infrastructure hosting next-generation digital services. The contributions of our work are: (i) the problem statement, (ii) a classification and qualitative analysis of major existing solutions, and (iii) a preliminary discussion of the impact of the deployment strategy of the functions composing the monitoring service.

    Kubernetes and Edge Computing?

    Cloud Computing infrastructures have highlighted the importance of container orchestration software to manage the life cycle of distributed applications. With the advent of the Edge Computing era, DevOps engineers expect to find, at the edge as well, the features that made the success of containerized applications in the cloud. However, orchestration systems have not been designed to deal with resource geo-distribution aspects such as latency, intermittent networks, locality awareness, etc. In other words, it is unclear whether they can be used directly on top of such massively distributed infrastructures or whether they must be revised. In this paper, we provide reflections regarding Kubernetes, the well-known container orchestration platform. More precisely, we make two contributions. First, we discuss results obtained during an experimental campaign analyzing the impact of WAN links on vanilla Kubernetes. Second, we analyze ongoing initiatives that propose to revise part of the Kubernetes design to better address geo-distribution. While these approaches may be appropriate for some use cases, they are unfortunately incomplete for others, and new approaches therefore need to be proposed.
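Why WAN links hurt an orchestrator designed for datacenter latencies can be illustrated with a back-of-the-envelope model (the numbers and the function below are illustrative, not measurements from the paper): a controller's reconciliation pass issues several API-server round trips per object, so time to convergence grows roughly linearly with the RTT between an edge site and a central control plane.

```python
# Toy model: sequential reconciliation, a fixed number of API-server
# round trips per object, convergence time = objects * round_trips * RTT.
def convergence_time(objects, round_trips_per_object, rtt_s):
    """Estimated wall time for one sequential reconciliation pass."""
    return objects * round_trips_per_object * rtt_s

lan = convergence_time(100, 4, 0.0005)  # ~0.5 ms datacenter RTT
wan = convergence_time(100, 4, 0.100)   # ~100 ms edge/WAN RTT
print(f"LAN: {lan:.2f}s  WAN: {wan:.2f}s  slowdown: x{wan / lan:.0f}")
```

Even this crude model shows a two-orders-of-magnitude slowdown, which is why the initiatives surveyed in the paper revisit where the control plane lives rather than just tuning timeouts.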

    Toward a Holistic Framework for Conducting Scientific Evaluations of OpenStack

    By massively adopting OpenStack to operate small to large private and public clouds, the industry has made it one of the largest active software projects, outgrowing even the Linux kernel. However, with success comes increased complexity; facing technical and scientific challenges, developers struggle to test the impact of individual changes on the performance of such a large codebase, which is likely to slow down the evolution of OpenStack. We therefore claim it is now time for the scientific community to join the effort and get involved in the development of OpenStack, as was once done for Linux. In this spirit, we developed Enos, an integrated framework that relies on container technologies to deploy and evaluate OpenStack on any testbed. Enos allows researchers to easily express different configurations, enabling fine-grained investigations of OpenStack services. Enos collects performance metrics at runtime and stores them for post-mortem analysis and sharing. The relevance of the Enos approach to reproducible research is illustrated by evaluating different OpenStack scenarios on the Grid'5000 testbed.
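The workflow the abstract describes, expressing a configuration as data, deploying it, and keeping metrics for post-mortem analysis, can be sketched as follows. This is a hypothetical illustration, not the real Enos API: the function name, configuration fields, and the toy latency model are all invented for the example.

```python
import json

def run_scenario(topology, scenario):
    """Stand-in for deploy + benchmark: real Enos deploys OpenStack with
    container technologies and pulls metrics from its monitoring stack."""
    # Toy model (not a measurement): API tail latency grows with the
    # number of compute nodes being managed.
    p99_ms = 50 + 5 * topology["compute"]
    return {"topology": topology, "scenario": scenario,
            "metrics": {"api_p99_ms": p99_ms}}

# Configurations are plain data, so sweeping them is a list comprehension.
configs = [{"control": 1, "compute": 2}, {"control": 3, "compute": 10}]
results = [run_scenario(c, "boot-and-list-500-vms") for c in configs]

# Results are serializable, which is what enables post-mortem sharing.
print(json.dumps(results, indent=2))
```

The design point mirrored here is that treating both the topology and the collected metrics as plain data is what makes fine-grained, repeatable comparisons across OpenStack configurations practical.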