KuberneTSN: a Deterministic Overlay Network for Time-Sensitive Containerized Environments
The emerging paradigm of resource disaggregation enables the deployment of
cloud-like services across a pool of physical and virtualized resources,
interconnected using a network fabric. This design brings several benefits in
terms of resource efficiency, cost-effectiveness, service elasticity, and
adaptability. Application domains benefiting from this trend include
cyber-physical systems (CPS), the tactile internet, 5G networks and beyond, and
mixed reality applications, all generally exhibiting heterogeneous Quality of
Service (QoS) requirements. In this context, a key enabling factor to fully
support those mixed-criticality scenarios will be the network and the
system-level support for time-sensitive communication. Although a lot of work
has been conducted on devising efficient orchestration and CPU scheduling
strategies, the networking aspects of performance-critical components remain
largely unstudied. Bridging this gap, we propose KuberneTSN, an original
solution built on the Kubernetes platform, providing support for time-sensitive
traffic to unmodified application binaries. We define an architecture for an
accelerated and deterministic overlay network, which includes kernel-bypassing
networking features as well as a novel userspace packet scheduler compliant
with the Time-Sensitive Networking (TSN) standard. The solution is implemented
as tsn-cni, a Kubernetes network plugin that can coexist alongside popular
alternatives. To assess the validity of the approach, we conduct an
experimental analysis on a real distributed testbed, demonstrating that
KuberneTSN enables applications to easily meet deterministic deadlines,
provides the same guarantees as bare-metal deployments, and outperforms overlay
networks built using the Flannel plugin.
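The TSN packet scheduling the abstract refers to (IEEE 802.1Qbv) is built around a cyclic gate control list that opens and closes transmission windows per traffic class. The following is a minimal illustrative sketch of that mechanism, not KuberneTSN's actual scheduler; all names and timings are made up for the example.

```python
# Minimal sketch of an IEEE 802.1Qbv-style time-aware shaper, the kind of
# mechanism a userspace TSN packet scheduler implements. Illustrative only.

class GateControlList:
    """Cyclic schedule: each entry is (duration_ns, set of open traffic classes)."""

    def __init__(self, entries):
        self.entries = entries
        self.cycle_ns = sum(duration for duration, _ in entries)

    def open_gates(self, t_ns):
        """Return the set of traffic classes allowed to transmit at time t_ns."""
        offset = t_ns % self.cycle_ns
        for duration, gates in self.entries:
            if offset < duration:
                return gates
            offset -= duration
        return set()  # unreachable for a well-formed list

# Example: a 1 ms cycle where the first 250 us are reserved exclusively for
# the time-critical class 7, and the rest is shared by best-effort classes.
gcl = GateControlList([(250_000, {7}), (750_000, {0, 1, 2})])
```

A userspace scheduler would consult such a table before each transmission and hold back frames whose class gate is closed, which is how deterministic deadlines can be met without hardware support.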
Big data deployment in containerized infrastructures through the interconnection of network namespaces
Big Data applications tackle the challenge of fast handling of large streams of data. Their performance depends not only on the data frameworks' implementation and the underlying hardware but also on the deployment scheme and its potential for fast scaling. Consequently, several efforts have focused on easing the deployment of Big Data applications, notably through containerization. This technology indeed arose to bring multitenancy and multiprocessing out of clusters, providing high deployment flexibility through lightweight container images. Recent studies have focused mostly on Docker containers. This article is instead interested in the more recent Singularity containers, as they provide more security and support high-performance computing (HPC) environments, and can thereby let Big Data applications benefit from the specialized hardware of HPC. Singularity 2.x, however, does not isolate network resources as required by most Big Data components. Singularity 3.x allows allocating each container its own isolated network resources, but interconnecting them requires a nontrivial amount of configuration effort. In this context, this article makes a functional contribution in the form of a deployment scheme based on the interconnection of network namespaces, through underlay and overlay networking approaches, to make Big Data applications easily deployable inside Singularity containers. We provide a detailed account of our deployment scheme for both interconnection approaches in the form of a "how-to-do-it" report, and we evaluate it by comparing three Hadoop-based Big Data applications running on a bare-metal infrastructure and in scenarios involving Singularity and Docker instances.
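At its core, interconnecting two network namespaces on one host comes down to a handful of `ip` commands wiring a veth pair between them. The sketch below only generates those commands (namespace names and addresses are illustrative, and actually running them requires root); it is a simplified model of the underlay approach, not the article's exact recipe.

```python
# Sketch of the link-layer plumbing needed to interconnect two network
# namespaces on the same host with a veth pair. The function only builds
# the `ip` command strings; names and addresses are illustrative.

def veth_interconnect(ns_a, ns_b, addr_a, addr_b):
    return [
        f"ip netns add {ns_a}",
        f"ip netns add {ns_b}",
        # Create the veth pair in the host namespace...
        f"ip link add veth-{ns_a} type veth peer name veth-{ns_b}",
        # ...then move one end into each namespace.
        f"ip link set veth-{ns_a} netns {ns_a}",
        f"ip link set veth-{ns_b} netns {ns_b}",
        f"ip -n {ns_a} addr add {addr_a} dev veth-{ns_a}",
        f"ip -n {ns_b} addr add {addr_b} dev veth-{ns_b}",
        f"ip -n {ns_a} link set veth-{ns_a} up",
        f"ip -n {ns_b} link set veth-{ns_b} up",
    ]

cmds = veth_interconnect("bd1", "bd2", "10.0.0.1/24", "10.0.0.2/24")
```

An overlay approach replaces the direct veth wiring with a tunnel interface (e.g. VXLAN) so the same pattern extends across hosts.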
Demystifying container networking
A cluster of containerized workloads is a complex system where stacked layers of plugins and interfaces can quickly hide what's actually going on under the hood. This can result in incorrect assumptions, security incidents, and other disasters. Taking a networking viewpoint, this paper dives into the Linux networking subsystem to demystify how container networks are built on Linux systems. This knowledge of "how" then makes it possible to understand the different networking features of Kubernetes, Docker, or any other containerization solution developed in the future.
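One concrete piece of that Linux foundation: a container's network stack is just a network namespace, which the kernel identifies by an inode visible under `/proc/<pid>/ns/net`. Two processes share a network stack exactly when those links resolve to the same identifier, which is, for instance, how containers in one Kubernetes pod share `localhost`. A minimal Linux-only illustration:

```python
# Inspect which network namespace a process lives in. On Linux,
# /proc/<pid>/ns/net is a symlink like 'net:[4026531840]'; equal values
# mean a shared network stack (e.g. containers in the same pod).

import os

def net_namespace_id(pid="self"):
    """Return the network namespace identifier for a process."""
    return os.readlink(f"/proc/{pid}/ns/net")

print(net_namespace_id())  # e.g. net:[...]
# A process queried via "self" and via its own PID is in the same namespace:
assert net_namespace_id() == net_namespace_id(os.getpid())
```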
Design and analysis of fully virtualized cellular networks based on open-source frameworks
A Link-Layer Virtual Networking Solution for Cloud-Native Network Function Virtualisation Ecosystems: L2S-M
Microservices have become promising candidates for the deployment of network and vertical functions in the fifth generation of mobile networks. However, microservice platforms like Kubernetes use a flat networking approach towards the connectivity of virtualised workloads, which prevents the deployment of network functions on isolated network segments (for example, the components of an IP Telephony system or a content distribution network). This paper presents L2S-M, a solution that enables the connectivity of Kubernetes microservices over isolated link-layer virtual networks, regardless of the compute nodes where workloads are actually deployed. L2S-M uses
software-defined networking (SDN) to fulfil this purpose. Furthermore, the L2S-M design is flexible enough to support the connectivity of Kubernetes workloads across different Kubernetes clusters. We validate the functional behaviour of our solution in a moderately complex Smart Campus scenario, where L2S-M is used to deploy a content distribution network, showing its potential for the deployment of network services in distributed and heterogeneous environments. This article has partially been supported by the H2020 FISHY Project (Grant agreement ID: 952644) and by the TRUE5G project (PID2019-108713RB681) funded by the Spanish National Research Agency (MCIN/AEI/10.13039/5011000110).
Tapping network traffic in Kubernetes
The rapid increase in cloud usage among organizations has led to a shift in the cybersecurity
industry. Whereas before, organizations wanted traditional security monitoring using statically placed IDS sensors within their data centers and networks, they now want dynamic
security monitoring of their cloud solutions. As more and more organizations move their
infrastructure and applications to the cloud, the need for cybersecurity solutions that can
adapt and transform to meet this new demand is increasing. Although many cloud providers
offer integrated security solutions, these depend on correct configuration by the customers,
who may prefer to pay a security firm instead. Telenor Security Operation Center is a
long-standing player in the traditional cybersecurity space and is looking to move into IDS
monitoring of cloud solutions, more specifically providing network IDS monitoring of traffic
within managed Kubernetes clusters at cloud providers such as Amazon Web Services Elastic
Kubernetes Service. This is to be accomplished by giving each of the desired pods within a
cluster its own sidecar container, which acts as a network sniffer that sends the recorded
traffic through VXLAN to an external sensor also operating in the cloud. By doing this,
traditional IDS monitoring becomes available in the cloud, covering a part that is often
neglected in cloud environments: monitoring the internal Kubernetes cluster traffic.
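The transport step described above is VXLAN (RFC 7348): each captured Ethernet frame is prefixed with an 8-byte VXLAN header and carried over UDP (port 4789 by default) to the sensor. The sketch below shows that encapsulation step only; it is an assumed simplification of the sidecar's behaviour, and the capture itself would additionally need a raw socket and root privileges.

```python
# Minimal sketch of VXLAN encapsulation (RFC 7348) as used to ship captured
# pod traffic to an external sensor. Illustrative, not the thesis's code.

import socket
import struct

VXLAN_PORT = 4789  # IANA-assigned default VXLAN UDP port

def vxlan_encap(frame: bytes, vni: int) -> bytes:
    """Prefix a captured L2 frame with a VXLAN header for the given VNI."""
    # Flags byte 0x08 marks the VNI field as valid; the 24-bit VNI sits in
    # the upper bits of the second 32-bit word, followed by a reserved byte.
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + frame

def send_to_sensor(frame: bytes, vni: int, sensor_ip: str) -> None:
    """Carry the encapsulated frame to the sensor over plain UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(vxlan_encap(frame, vni), (sensor_ip, VXLAN_PORT))
```

On the receiving side, a sensor can terminate this with an ordinary Linux `vxlan` interface and feed the inner frames to the IDS unchanged.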
AWS EKS was used as a testing ground for a simulated Kubernetes cluster running sample
applications monitored by the sidecar container, which is essentially a Python script
sniffing the localhost traffic of the shared network namespace of a Kubernetes pod. This
infrastructure is generated by a set of Terraform files for automated setup and
reproducibility, making use of the GitOps tool FluxCD for syncing Kubernetes manifests.
The solution is also monitored by a complete monitoring solution in the form of
kube-prometheus-stack, which provides full insight into performance metrics down to the
container level through Prometheus and Grafana. Finally, a series of performance tests is
conducted, using k6s and iperf, automated by Ansible, to measure the performance impact
of the sidecar container.
A series of iperf and k6s tests was conducted against the sidecar container. The k6s test
was run at a data rate of 3 Mb/s and showed that the data rate needed to be higher to
gather useful performance metrics. iperf therefore took over and tested the sidecar
container at data rates of 50, 100, 250, and 500 Mb/s, using a server at the University of
Agder as the endpoint. These initial raw performance results showed a maximum CPU usage of
11.8% of the Kubernetes node's 2 vCPUs. Together with a maximum memory usage of 14 MB, this
showed that the sidecar container does not consume a vast amount of resources and has
potential as a scalable and efficient network tapping method in Kubernetes. However, some
anomalies were discovered during the performance testing that revealed open issues with the
method. One of these was a mismatch between the number of packets seen at the sensor and
the number of packets observed by the iperf server at the University of Agder. Due to the
many layers involved in the networking stack for this method, additional research needs to
be conducted into how these anomalies arise, while also considering alternative transport
methods to VXLAN.
Orchestration Mechanism Impact on Virtual Network Function Throughput
Virtual Network Functions (VNFs) have gained importance in the IT industry, especially in the telecommunication industry, because a VNF runs network services on commodity hardware instead of dedicated hardware, thereby increasing scalability and agility. Container technology is a useful tool for VNFs because it is lightweight, portable, and scalable; it shortens the product development cycle by easing service deployment and maintenance. The telecommunication industry uses service uptime as an important gauge of whether a service is of carrier grade, and keeping services up and running generates most of the maintenance costs. These costs can be reduced by container orchestration tools such as Kubernetes. Kubernetes handles the automation of deployment, scaling, and management for applications with the help of orchestration mechanisms, such as the scheduler and load-balancers. As a result of those mechanisms, the VNFs running in a Kubernetes cluster can reach high availability and flexibility. However, the impact of these mechanisms on VNF throughput has not been studied in detail.
The objective of this thesis is to evaluate the influence of Kubernetes orchestration mechanisms on VNF throughput and Quality of Service (QoS). This objective is achieved by means of measurements run with a packet-forwarding service in a Kubernetes cluster.
Based on the evaluations, it is concluded that VNF throughput depends on six parameters: the CPU type, CPU isolation, the number of Pods, the location of Pods, the location of load-balancer controllers, and the load-balancing technique.
Network isolation for Kubernetes hard multi-tenancy
Over the past decade, containerization has become increasingly popular due to its performance advantages over virtualization. The rise in the use of containers has led to the emergence of container orchestration tools, and Kubernetes is one of the most widely used tools serving this purpose. One critical point in the design of this tool is that one cluster can only serve one tenant. As the number of Kubernetes users continuously increases, this model generates considerable management overhead and resource fragmentation in the cluster. As a result, multi-tenancy was introduced as an alternative model. However, the major problem of this approach is the isolation between tenants. This thesis aims to tackle this isolation issue. While many cluster resources need to be isolated, we concentrate on one crucial feature of Kubernetes hard multi-tenancy: network isolation. Our solution is intended to work regardless of how flexibly the Kubernetes network is implemented. It also passes most of our security tests; the remaining issues are not significant, and one of them is solvable. Our performance experiments recorded that the solution introduces delays in cluster activities, but in most cases this delay, while noticeable, is acceptable. The proposed method can potentially be part of real Kubernetes multi-tenant systems where network isolation is one of the essential requirements.
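For context, Kubernetes' built-in primitive for this kind of isolation is the NetworkPolicy; a per-namespace default policy allowing only intra-namespace traffic is a common baseline. Note that NetworkPolicy enforcement depends on the CNI plugin in use, which is exactly the implementation flexibility the thesis aims to be independent of. A hedged sketch of such a manifest, built as a Python dict (names are illustrative):

```python
# Sketch of a baseline tenant-isolation NetworkPolicy: pods in the tenant's
# namespace may only talk to pods in the same namespace. This illustrates
# the stock Kubernetes mechanism, not the thesis's own solution.

def intra_namespace_only_policy(namespace: str) -> dict:
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "intra-namespace-only", "namespace": namespace},
        "spec": {
            "podSelector": {},                      # select all pods in the namespace
            "policyTypes": ["Ingress", "Egress"],
            # An empty podSelector inside a rule matches peers in the same
            # namespace only, so cross-tenant traffic is dropped.
            "ingress": [{"from": [{"podSelector": {}}]}],
            "egress": [{"to": [{"podSelector": {}}]}],
        },
    }
```

Serialized to YAML and applied per tenant namespace, this gives the default-deny posture that hard multi-tenancy schemes typically start from.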
Container description ontology for CaaS
Besides the classical three cloud service models (IaaS, PaaS, and SaaS), container as a service (CaaS) has gained significant acceptance: it offers deployable applications without the performance overhead of traditional hypervisors. As the adoption of containers becomes increasingly widespread, tools to manage them across the infrastructure become a vital necessity. In this paper, we propose a conceptualisation of a domain ontology for container description, called CDO. CDO presents, in a detailed and uniform manner, the functional and non-functional capabilities of containers, Docker, and container orchestration systems. In addition, we provide a framework that aims at simplifying container management not only for users but also for cloud providers. In fact, this framework serves to populate CDO, helps users deploy their applications on a container orchestration system, and enhances interoperability between cloud providers by providing a migration service for deploying applications among different host platforms. Finally, the effectiveness of CDO is demonstrated on a real case study of the deployment of a micro-service application over a containerised environment under a set of functional and non-functional requirements. K. Boukadi, M. A. Rekik, J. Bernal Bernabe, and J. Lloret (2020). Container description ontology for CaaS. International Journal of Web and Grid Services, 16(4):341-363. https://doi.org/10.1504/IJWGS.2020.110944