
    GNFC: Towards Network Function Cloudification

    Enterprises show an increasing demand to host and dynamically manage middlebox services in public clouds, in order to leverage the same benefits that network functions provide in traditional, in-house deployments. However, today's public clouds give tenants only a limited view and limited programmability, which challenges the flexible deployment of transparent, software-defined network functions. Moreover, current virtual network functions cannot take full advantage of a virtualized cloud environment, limiting scalability and fault tolerance. In this paper we review and evaluate the infrastructural limitations imposed by current public cloud providers and present the design and implementation of GNFC, a cloud-based Network Function Virtualization (NFV) framework that gives tenants the ability to transparently attach stateless, container-based network functions to their services hosted in public clouds. We evaluate the proposed system on three public cloud providers (Amazon EC2, Microsoft Azure and Google Compute Engine) and show the effects on end-to-end latency and throughput using various instance types as NFV hosts.
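
    A minimal sketch of the kind of deployment the abstract describes: launching a stateless, container-based network function that a tenant's traffic is steered through before reaching the service. The relay image, port numbers and the `service.internal` hostname are illustrative assumptions, not GNFC's actual components.

```python
# Hypothetical sketch: attach a stateless, container-based NF in front of a
# tenant service hosted in a public cloud. Image name, ports and the backend
# hostname are assumptions for illustration only.
import docker

client = docker.from_env()

# Run a transparent TCP relay as the "network function"; traffic that would
# normally reach the service on port 8080 is first steered through this NF.
nf = client.containers.run(
    "alpine/socat",                       # assumed relay image
    "TCP-LISTEN:8080,fork,reuseaddr TCP:service.internal:8080",
    detach=True,
    ports={"8080/tcp": 8080},
    restart_policy={"Name": "on-failure"},
)
print("NF container started:", nf.short_id)
```

    Because the NF keeps no per-flow state, additional replicas can be started behind a cloud load balancer to scale out or to replace a failed instance, which is the property the paper relies on for scalability and fault tolerance.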

    Container-based network function virtualization for software-defined networks

    Today's enterprise networks almost ubiquitously deploy middlebox services to improve in-network security and performance. Although virtualization of middleboxes has attracted significant attention, studies show that such implementations remain proprietary and are deployed statically at the boundaries of organisations, hindering open innovation. In this paper, we present an open framework to create, deploy and manage virtual network functions (NFs) in OpenFlow-enabled networks. We exploit container-based NFs to achieve the low performance overhead, fast deployment and high reusability missing from today's NFV deployments. Through an SDN northbound API, NFs can be instantiated, traffic can be steered through the desired policy chain, and applications can raise notifications. We demonstrate the system's operation by developing exemplar NFs from common operating-system utility binaries, and we show that container-based NFV improves function instantiation time by up to 68% over existing hypervisor-based alternatives and scales to one hundred co-located NFs while incurring sub-millisecond latency.
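
    The workflow the abstract outlines can be illustrated in two steps: wrap an ordinary OS utility binary as a container-based NF, then ask the SDN controller, via a northbound API, to steer matching flows through it. The utility image, the REST endpoint path and the JSON policy schema below are assumptions made for the sketch, not the framework's actual API.

```python
# Hypothetical sketch of NF instantiation plus policy-chain steering.
import docker
import requests

client = docker.from_env()

# 1. Instantiate the NF: tcpdump from a general-purpose utility image, sharing
#    the host network namespace so it observes the steered traffic.
nf = client.containers.run(
    "nicolaka/netshoot",                  # assumed utility image containing tcpdump
    "tcpdump -i eth0 -w /tmp/capture.pcap",
    detach=True,
    network_mode="host",
    cap_add=["NET_ADMIN", "NET_RAW"],
)

# 2. Steer matching traffic through the NF via a (hypothetical) northbound API.
policy = {
    "match": {"ipv4_dst": "10.0.0.42", "tcp_dst": 80},
    "chain": [nf.short_id],               # single-NF policy chain
}
requests.post("http://controller:8080/nfv/policies", json=policy, timeout=5)
```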

    VIoLET: A Large-scale Virtual Environment for Internet of Things

    IoT deployments have been growing manifold, encompassing sensors, networks, edge, fog and cloud resources. Despite the intense interest from researchers and practitioners, most do not have access to large-scale IoT testbeds for validation. Simulation environments that allow analytical modeling are a poor substitute for evaluating software platforms or application workloads in realistic computing environments. Here, we propose VIoLET, a virtual environment for defining and launching large-scale IoT deployments within cloud VMs. It offers a declarative model to specify container-based compute resources that match the performance of native edge, fog and cloud devices using Docker. These can be inter-connected by complex topologies on which private/public networks, and bandwidth and latency rules, are enforced. Users can also configure synthetic sensors for data generation on these devices. We validate VIoLET for deployments with > 400 devices and > 1500 device-cores, and show that the virtual IoT environment closely matches the expected compute and network performance at modest costs. This fills an important gap between IoT simulators and real deployments. Comment: To appear in the Proceedings of the 24th International European Conference on Parallel and Distributed Computing (Euro-Par), August 27-31, 2018, Turin, Italy, europar2018.org. Selected as a Distinguished Paper for presentation at the Plenary Session of the conference.
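
    The sketch below illustrates the general idea of a declarative device/topology model realised as rate- and latency-limited Docker containers. The field names and the `iot-device:latest` image are placeholders, not VIoLET's actual schema; traffic shaping is shown with standard `tc netem` rules applied inside each container.

```python
# Illustrative sketch: declare emulated devices, then realise them as Docker
# containers whose CPU, memory and link properties match the declaration.
import docker

topology = {
    "devices": [
        {"name": "edge-1", "cpus": 1.0, "mem": "512m", "delay": "20ms", "rate": "10mbit"},
        {"name": "fog-1",  "cpus": 4.0, "mem": "4g",   "delay": "5ms",  "rate": "100mbit"},
    ]
}

client = docker.from_env()
for dev in topology["devices"]:
    c = client.containers.run(
        "iot-device:latest",               # assumed base image
        detach=True,
        name=dev["name"],
        nano_cpus=int(dev["cpus"] * 1e9),  # cap compute to match the emulated device
        mem_limit=dev["mem"],
        cap_add=["NET_ADMIN"],             # required to run tc inside the container
    )
    # Enforce per-link latency and bandwidth with netem inside the container.
    c.exec_run(f"tc qdisc add dev eth0 root netem delay {dev['delay']} rate {dev['rate']}")
```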

    A study on performance measures for auto-scaling CPU-intensive containerized applications

    Autoscaling of containers can leverage performance measures from different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure for activating auto-scaling actions that aim to guarantee QoS constraints. First, we analyze, under different workload scenarios, the correlation between absolute and relative usage measures and how they can influence a resource allocation decision. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share each container has of the resources it uses. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
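
    To make the absolute-versus-relative distinction concrete, the sketch below applies the shape of Kubernetes' Horizontal Pod Autoscaler rule (desired replicas proportional to the ratio of current to target usage) first with a relative, per-container measure and then with an absolute, host-level measure. The numeric inputs are illustrative values only, not results from the paper.

```python
# HPA-style scaling rule fed with relative vs. absolute usage measures.
import math

def desired_replicas(current_replicas: int, current_usage: float, target_usage: float) -> int:
    """Scale proportionally to how far the observed usage is from the target."""
    return max(1, math.ceil(current_replicas * current_usage / target_usage))

# Relative measure: each container reports 75% of its own CPU quota used,
# against a 60% per-container target.
print(desired_replicas(current_replicas=4, current_usage=0.75, target_usage=0.60))  # -> 5

# Absolute measure: the containers together use 80% of the host's cores,
# against a 50% host-level target; the same workload yields a different decision.
print(desired_replicas(current_replicas=4, current_usage=0.80, target_usage=0.50))  # -> 7
```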

    Server Structure Proposal and Automatic Verification Technology on IaaS Cloud of Plural Type Servers

    In this paper, we propose a server structure proposal and automatic performance verification technology that proposes and verifies an appropriate server structure on an Infrastructure as a Service (IaaS) cloud offering bare-metal servers, container-based virtual servers and virtual machines. Cloud services have progressed, and providers now offer not only virtual machines but also bare-metal servers and container-based virtual servers. However, users need to design an appropriate server structure for their requirements based on the quantitative performance of these three server types, and they need considerable technical knowledge to optimize their system performance. Therefore, we study a technology that satisfies users' performance requirements on an IaaS cloud with these three server types. Firstly, we measure the performance of a bare-metal server, Docker containers and KVM (Kernel-based Virtual Machine) virtual machines on OpenStack while varying the number of virtual servers. Secondly, we propose a server structure proposal technology based on the measured quantitative data: it receives an abstract template of OpenStack Heat together with function and performance requirements, and creates a concrete template with server specification information. Thirdly, we propose an automatic performance verification technology that executes the necessary performance tests automatically on provisioned user environments according to the template. Comment: Evaluations of the server structure proposal were insufficient in section
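
    One way to picture the proposal step is as a selection rule that maps each resource of an abstract Heat template to a concrete server type using measured performance data. The thresholds, measured numbers and template schema below are made-up placeholders for illustration; they are not the paper's data or its actual algorithm.

```python
# Illustrative sketch: choose bare-metal, container or VM per resource based on
# requirements and (placeholder) measured relative performance.
MEASURED_RELATIVE_PERF = {"baremetal": 1.00, "container": 0.95, "vm": 0.75}

def choose_server_type(needs_dedicated_hw: bool, min_relative_perf: float) -> str:
    if needs_dedicated_hw:
        return "baremetal"
    # Prefer the lightest virtualisation that still meets the performance bar.
    for server_type in ("container", "vm"):
        if MEASURED_RELATIVE_PERF[server_type] >= min_relative_perf:
            return server_type
    return "baremetal"

# Concretise one resource of a (simplified) abstract OpenStack Heat template.
abstract_resource = {"name": "db-server",
                     "requirements": {"dedicated_hw": False, "min_perf": 0.9}}
concrete_type = choose_server_type(
    abstract_resource["requirements"]["dedicated_hw"],
    abstract_resource["requirements"]["min_perf"],
)
print(abstract_resource["name"], "->", concrete_type)   # db-server -> container
```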

    User-plane software containerization in a cloud radio access network

    The number of devices connected through mobile networks has been growing rapidly. This growth will create a demand for network capacity that cannot be met with traditional methods. The problem could be solved by implementing a cloud radio access network (cloud RAN), a new concept that adapts cloud computing technologies, such as software containers, from the software industry to RANs. This adaptation also creates a need to modify working practices so that they better suit these new cloud computing technologies. While cloud RAN has recently received much research attention, actual software implementations have not been widely discussed in the literature. Therefore, this thesis evaluates the feasibility of using software containers in the user-plane applications of cloud RAN in terms of networking and inter-container communications (ICC). This is accomplished by identifying potential approaches for ICC and for container networking, and by measuring the performance of these approaches. Two approaches are proposed for ICC and container networking, and they are evaluated in terms of throughput and latency. Both were found to be suitable for use in cloud RAN user-plane applications. However, since the measurements were performed in a simplified environment, implementing the approaches in a cloud RAN component will require further work.
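
    The kind of measurement the thesis performs can be sketched as a round-trip-time micro-benchmark over two candidate ICC paths, for example a UNIX domain socket shared between containers versus TCP over a container bridge network. Which mechanisms the thesis actually compared is not restated here; the socket path and the `nf-peer` hostname are assumptions, and an echo server is assumed to be listening on the other end.

```python
# Simplified ICC latency micro-benchmark: UNIX domain socket vs. TCP.
import socket
import time

def round_trip(sock: socket.socket, payload: bytes, iterations: int = 1000) -> float:
    """Average round-trip time for a small echoed payload."""
    start = time.perf_counter()
    for _ in range(iterations):
        sock.sendall(payload)
        sock.recv(len(payload))   # payload is small, so a single recv is assumed sufficient
    return (time.perf_counter() - start) / iterations

payload = b"x" * 64

uds = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uds.connect("/shared/icc.sock")                     # socket file on a shared volume (assumed)
print("UDS round-trip:", round_trip(uds, payload))

tcp = socket.create_connection(("nf-peer", 9000))   # peer container on the bridge network
print("TCP round-trip:", round_trip(tcp, payload))
```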

    Evaluating Performance of Serverless Virtualization

    Abstract. Serverless computing has posed new challenges for cloud vendors that are difficult to solve with existing virtualization technologies. Maintaining security, resource isolation, backwards compatibility and scalability is extremely difficult when the platform must also deliver native performance. This paper contains a literature review of recently published results on the performance of virtualization technologies such as KVM and Docker, and further reports a DESMET benchmarking evaluation of KVM and Docker, as well as Firecracker and gVisor, which are used by Amazon Web Services and Google Cloud in their cloud services. The context for this research comes from education, where students submit their programming assignments to a source code repository system that in turn triggers automated tests and potentially other tasks against the submitted code. The environment used consists of several software components, such as a web server, a database and a job executor, and thus represents a common architecture for web-based applications. The results show that Docker is still the most performant of the selected virtualization technologies. Additionally, Firecracker and gVisor perform better than KVM in some areas and are thus viable options for single-tenant environments. Lastly, applications that run untrusted code or otherwise have very high security requirements could benefit from using either Firecracker or gVisor.
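
    As a minimal sketch of one benchmark dimension from such an evaluation, the snippet below times the cold start of a trivial container under Docker's default runtime and under gVisor (selected with `--runtime=runsc`, which requires runsc to be installed and registered with the Docker daemon). It is an illustration of the measurement method, not the paper's benchmark suite.

```python
# Cold-start timing sketch: default runc runtime vs. gVisor's runsc runtime.
import statistics
import subprocess
import time

def cold_start(runtime_args: list[str], runs: int = 10) -> float:
    """Median wall-clock time to run a throwaway container that exits immediately."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(
            ["docker", "run", "--rm", *runtime_args, "alpine", "true"],
            check=True, capture_output=True,
        )
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

print("default runtime (runc):", cold_start([]))
print("gVisor (runsc):        ", cold_start(["--runtime=runsc"]))  # runsc must be configured
```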