In Datacenter Performance, The Only Constant Is Change
All computing infrastructure suffers from performance variability, be it
bare-metal or virtualized. This phenomenon originates from many sources: some
transient, such as noisy neighbors, and others more permanent but sudden, such
as changes or wear in hardware, changes in the underlying hypervisor stack, or
even undocumented interactions between the policies of the computing resource
provider and the active workloads. Thus, performance measurements obtained on
clouds, HPC facilities, and, more generally, datacenter environments are almost
guaranteed to exhibit performance regimes that evolve over time, which leads to
undesirable nonstationarities in application performance. In this paper, we
present our analysis of the performance of the bare-metal hardware available on
the CloudLab testbed, focusing on quantifying the evolving performance regimes
using changepoint detection. We describe our findings, backed by a dataset with
nearly 6.9M benchmark results collected from over 1600 machines over a period
of 2 years and 9 months. These findings yield a comprehensive characterization
of real-world performance variability patterns in one computing facility, a
methodology for studying such patterns on other infrastructures, and contribute
to a better understanding of performance variability in general.

Comment: To be presented at the 20th IEEE/ACM International Symposium on
Cluster, Cloud and Internet Computing (CCGrid,
http://cloudbus.org/ccgrid2020/) on May 11-14, 2020 in Melbourne, Victoria,
Australia.
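The evolving performance regimes described above are quantified with changepoint detection; as a minimal illustrative sketch of the general idea (not the paper's actual pipeline or dataset), a single mean-shift changepoint in a benchmark time series can be located by minimizing within-segment squared error:

```python
def sse(xs):
    """Sum of squared deviations from the segment mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def single_changepoint(series):
    """Return the split index k that minimizes total within-segment
    squared error; a sharp error drop at k signals a regime change."""
    return min(range(1, len(series)),
               key=lambda k: sse(series[:k]) + sse(series[k:]))

# Synthetic benchmark timings (illustrative): a regime shift after index 10,
# e.g. caused by a hardware or hypervisor-stack change.
timings = [1.00, 1.02, 0.99, 1.01, 1.00, 0.98, 1.03, 1.00, 0.99, 1.01,
           1.52, 1.49, 1.50, 1.51, 1.48, 1.50, 1.52, 1.49, 1.51, 1.50]
print(single_changepoint(timings))  # index of the detected regime boundary
```

Production analyses over millions of results would use an efficient multi-changepoint method (e.g. PELT or binary segmentation), but the cost function idea is the same.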
Container Resource Allocation versus Performance of Data-intensive Applications on Different Cloud Servers
In recent years, data-intensive applications have been increasingly deployed
on cloud systems. Such applications utilize significant compute, memory, and
I/O resources to process large volumes of data. Optimizing the performance and
cost-efficiency for such applications is a non-trivial problem. The problem
becomes even more challenging with the increasing use of containers, which are
popular due to their lower operational overheads and faster boot speed at the
cost of weaker resource assurances for the hosted applications. In this paper,
two containerized data-intensive applications with very different performance
objectives and resource needs were studied on cloud servers with Docker
containers running on Intel Xeon E5 and AMD EPYC Rome multi-core processors
with a range of CPU, memory, and I/O configurations. Primary findings from our
experiments include: 1) Allocating multiple cores to a compute-intensive
application can improve performance, but only if the cores do not contend for
the same caches, and the optimal core counts depend on the specific workload;
2) allocating more memory to a memory-intensive application than its
deterministic data workload requires does not further improve performance;
however, 3) having multiple such memory-intensive containers on the same server
can cause cache and memory-bus contention, leading to significant and volatile
performance degradation. The comparative observations on Intel and AMD servers
provided insights into trade-offs between larger numbers of distributed
chiplets interconnected with higher-speed buses (AMD) and larger numbers of
centrally integrated cores and caches with slower buses (Intel). For the two
types of applications studied, the more distributed caches and faster data
buses benefited the deployment of larger numbers of containers.
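The core- and memory-allocation experiments above rely on standard container resource controls; as a hedged configuration sketch (the image names, core IDs, and limits below are illustrative assumptions, not the paper's setup), Docker exposes these controls via cgroup-backed flags:

```shell
# Pin a compute-intensive container to cores that do not share a cache slice
# (the right core IDs depend on the processor topology; check with `lscpu`).
docker run -d --name compute-job \
    --cpuset-cpus="0,2,4,6" \
    my-compute-image        # hypothetical image name

# Cap a memory-intensive container at its working-set size; per the findings
# above, granting more memory than the deterministic data workload needs
# yields no further speedup. Setting --memory-swap equal to --memory
# additionally disables swapping for the container.
docker run -d --name memory-job \
    --memory="8g" --memory-swap="8g" \
    my-memory-image         # hypothetical image name
```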
Live migration of virtual machine and container based mobile core network components: A comprehensive study
With the increasing demand for openness, flexibility, and monetization, Network Function Virtualization (NFV) of mobile network functions has been embraced by most mobile network operators. Early reported field deployments of the virtualized Evolved Packet Core (EPC) - the core network (CN) component of 4G LTE and 5G non-standalone mobile networks - reflect this growing trend. To meet the requirements of power management, load balancing, and fault tolerance in the cloud environment, live migration of these virtualized components cannot be avoided. Virtualization platforms of interest include both Virtual Machines (VMs) and Containers, with the latter option offering more lightweight characteristics. This paper's first contribution is a framework that enables migration of containerized virtual EPC components using an open-source migration solution that does not yet fully support the mobile network protocol stack. The second contribution is a comprehensive experimental analysis of live migration in two virtualization technologies - VM and Container - with additional scrutiny of the container migration approach. The presented experimental comparison accounts for several system parameters and configurations: flavor (image) size, network characteristics, processor hardware architecture model, and the CPU load of the backhaul network components. The comparison reveals that the live migration completion time and the end-user service interruption time of the virtualized EPC components are reduced by approximately 70% on the container platform when using the proposed framework.

This work was supported in part by the NSF under Grant CNS-1405405, Grant CNS-1409849, Grant ACI-1541461, and Grant CNS-1531039T; and in part by the EU Commission through the 5GROWTH Project under Grant 856709.
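Container live migration of the kind studied above is commonly built on checkpoint/restore; as a generic hedged sketch of that mechanism (the container name, paths, and hosts are assumptions, and this omits the paper's EPC-specific framework), Docker's experimental CRIU-backed checkpoint support works roughly as follows:

```shell
# On the source host: checkpoint the running container's process state to disk
# (requires CRIU installed and Docker's experimental mode enabled).
docker checkpoint create --checkpoint-dir=/tmp/ckpt epc-vnf ckpt1  # hypothetical container name

# Transfer the checkpoint images to the destination host.
rsync -a /tmp/ckpt/ dst-host:/tmp/ckpt/

# On the destination host: restore the container from the checkpoint; the
# gap between checkpoint and restore dominates service interruption time.
docker start --checkpoint-dir=/tmp/ckpt --checkpoint=ckpt1 epc-vnf
```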
AN EVALUATION OF SDN AND NFV SUPPORT FOR PARALLEL, ALTERNATIVE PROTOCOL STACK OPERATIONS IN FUTURE INTERNETS
Virtualization on top of high-performance servers has enabled the virtualization of network functions like caching, deep packet inspection, etc. Such Network Function Virtualization (NFV) is used to dynamically adapt to changes in network traffic and application popularity. We demonstrate how the combination of Software Defined Networking (SDN) and NFV can support the parallel operation of different Internet architectures on top of the same physical hardware. We introduce our architecture for this approach in an actual test setup, using CloudLab resources. We start our evaluation in a small setup, where we evaluate the feasibility of the SDN and NFV architecture, and incrementally increase the complexity of the setup to run a live video streaming application. We use two vastly different protocol stacks, namely TCP/IP and NDN (Named Data Networking), to demonstrate the capability of our approach. The evaluation shows that our approach introduces a new level of flexibility in operating different Internet architectures on top of the same physical network, and with this flexibility provides the ability to switch between the two protocol stacks depending on the application.
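The SDN-based stack selection described above can be realized with simple flow rules; as an illustrative sketch (the bridge name, port numbers, and the placement of the NDN forwarder are assumptions, not the paper's actual configuration), Open vSwitch can steer IPv4 traffic to the TCP/IP path and everything else to an NDN forwarder VNF:

```shell
# Steer IPv4 frames (EtherType 0x0800) arriving on port 1 to the TCP/IP
# stack attached on port 2; the higher priority value wins.
ovs-ofctl add-flow br0 "priority=100,in_port=1,dl_type=0x0800,actions=output:2"

# Default rule: send the remaining traffic to the NDN forwarder VNF on port 3.
ovs-ofctl add-flow br0 "priority=10,in_port=1,actions=output:3"

# Switching an application between stacks at runtime amounts to rewriting
# the matching rule.
ovs-ofctl mod-flows br0 "in_port=1,dl_type=0x0800,actions=output:3"
```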