Deploying MPI applications in a cluster of Xen VMs: A Networking Perspective

Abstract

Nowadays, seeking optimized data paths that can increase I/O throughput in virtualized environments is an intriguing task, especially in a high-performance computing context. We address this issue by evaluating methods for optimized network device access using scientific applications and microbenchmarks. We examine the network performance bottlenecks that appear in a cluster of Xen VMs using both generic and intelligent network adapters, and we study the network behavior of MPI applications. Our goal is to: (a) explore the implications of alternative data paths (direct or indirect) between applications and network hardware and (b) specify optimized solutions for scientific applications that put pressure on network devices. Preliminary results show that a combination of these techniques is essential for scientific applications to achieve near-native performance in VM environments.

Network Configuration (BRIDGED)

Figure 1: BRIDGED Configuration

This is the default configuration provided by Xen. All guest VMs share a common bridge, set up by the privileged guest. Data flows from applications to the privileged guest via copying or page flipping, and the software bridge thus becomes a bottleneck in an HPC context.
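To make the bridged setup concrete, the following is a minimal sketch of a classic Xen guest configuration that attaches a VM's virtual interface to the shared software bridge in the privileged guest; the bridge name `xenbr0`, the MAC address, and the script names are conventional defaults assumed for illustration, not values taken from the paper.

```
# Hypothetical dom0 settings (xend-config.sxp) enabling the default
# bridged networking; each guest vif is plugged into a shared bridge:
#   (network-script network-bridge)
#   (vif-script vif-bridge)

# Hypothetical guest configuration file: the VM's virtual NIC (vif)
# is attached to the bridge xenbr0 managed by the privileged guest,
# so all guest traffic traverses dom0's software bridge.
vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0' ]
```

Under this configuration, every packet an MPI process sends crosses the guest/privileged-guest boundary (by copying or page flipping) before reaching the physical NIC, which is the indirect data path the abstract contrasts with direct device access.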
