Shimmy: Accelerating Inter-Container Communication Through Shared Memory

Abstract

Cloud-native technologies consisting of containers, microservices, and service meshes bring the traditional advantages of Cloud Computing, such as scalability, composability, and rapid deployability, to the IoT Edge. An application built on the microservices architecture relies on a collection of individual containerized components offering modular services via REST or gRPC interfaces over the network. Compared to a monolithic application, the level of data and control exchange between the components of a microservices application is several orders of magnitude higher. Studies have shown that the overheads caused by such inter-container communication are a significant hurdle in achieving the sub-50 ms latencies required at 5G-enabled network Edges, which consist of much smaller compute clusters than the Cloud. In this research, we present Shimmy - a shared-memory-based communication interface for containers within a node that is cleanly integrated into the Kubernetes orchestration architecture while offering significant acceleration for microservices. Our results show a consistent 3-4x latency improvement over UDP and TCP and a 20x latency improvement over RabbitMQ, while significantly reducing resource utilization.
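
To illustrate the general idea behind shared-memory communication between co-located containers (not Shimmy's actual implementation, which is described in the full paper), the following minimal C sketch shows a producer and a consumer exchanging a message through a POSIX shared-memory segment instead of the kernel network stack. The segment name "/shimmy_demo", the fixed one-page size, and the producer/consumer roles are illustrative assumptions; the containers are assumed to be on the same node with access to a shared IPC namespace or a shared /dev/shm mount.

```c
/* Illustrative sketch only: shared-memory message exchange between two
 * co-located processes/containers. Names and sizes are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/shimmy_demo"   /* hypothetical segment name */
#define SHM_SIZE 4096             /* one page for the demo payload */

int main(int argc, char **argv) {
    /* Run with "produce" in one container, no argument in the other. */
    int producer = (argc > 1 && strcmp(argv[1], "produce") == 0);

    int fd = shm_open(SHM_NAME, producer ? (O_CREAT | O_RDWR) : O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (producer && ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *buf = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    if (producer) {
        /* The message is written directly into shared memory: no copies
         * through the network stack and no serialization over localhost. */
        snprintf(buf, SHM_SIZE, "hello from the producer container");
    } else {
        printf("consumer read: %s\n", buf);
    }

    munmap(buf, SHM_SIZE);
    close(fd);
    return 0;
}
```

Compile with `gcc demo.c -o demo -lrt` on older glibc versions; a real system would additionally need synchronization (e.g., semaphores or futexes) and an orchestration-aware way to discover and mount the shared segment, which is the integration problem the paper addresses within Kubernetes.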
