
    Continuous and Concurrent Network Connection for Hardware Virtualization

    Get PDF
    This project addresses network connectivity in virtualization for cloud computing. Each Virtual Machine (VM) is able to access the network concurrently and obtains continuous internet connectivity without disruption. The project proposes a new method of sharing the Network Interface Card (NIC) among the Virtual Machines, with each of them having full access to it at near-native bandwidth. With this, cloud computing can perform resource allocation more effectively. This is essential for migrating each Operating System (Virtual Machine) residing on one physical machine to another without disrupting its internet or network connection.
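    One common way to give every VM direct, near-native access to a shared adapter is to hot-plug an SR-IOV virtual function into each guest. The sketch below (Python with the libvirt bindings) is only an illustration of that general technique, not the project's own method; the domain name and PCI address of the virtual function are placeholders.

        # Hypothetical sketch: hot-plug one virtual function (VF) of a shared
        # NIC into a running VM via a libvirt hostdev attachment.
        # The VM name and the VF's PCI address below are placeholders.
        import libvirt

        VF_HOSTDEV_XML = """
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x10' function='0x1'/>
          </source>
        </hostdev>
        """

        conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
        dom = conn.lookupByName("guest-vm-1")        # placeholder VM name
        dom.attachDeviceFlags(VF_HOSTDEV_XML,
                              libvirt.VIR_DOMAIN_AFFECT_LIVE)  # attach while the VM runs
        conn.close()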

    Resource Allocation Policy for Virtualized Network Interfaces

    Get PDF
    Over the last decade, virtualization has gained widespread importance. Virtual Machines (VMs) can now share network access in hardware, in software, or in a hybrid fashion. Software-based Input/Output (IO) virtualization technologies rely on emulation, which requires a Virtualization Manager and introduces significant central processing overhead; in addition, each IO operation carries further overhead, and the advanced capabilities of the physical hardware are not properly utilized. Some direct-assignment IO virtualization technologies suffer from scalability limitations. Quality of Service (QoS) support may be offered within the software layers, at the Virtualization Manager or Guest Operating System level, which interact with the shared IO device. Starting with a preliminary investigation of the functionality of the RiceNIC (an open platform for research and education into concurrent network interface design), a study of the various network interface technologies supporting IO device virtualization was carried out to understand IO-virtualized network interfaces precisely. The project describes a resource allocation policy for the on-device memory of the shared IO device, taking as an instance a complex IO device, i.e., a Network Interface Controller (NIC) supporting a reconfigurable virtualized network interface architecture in which multiple reconfigurable virtualized network interfaces work independently using a reconfigurable partitioned memory. This enhances the scalability of the IO device.
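    A toy model of the kind of policy described, partitioning a fixed pool of on-device NIC memory among a variable number of virtual interfaces and re-partitioning as interfaces are added or removed, might look like the Python sketch below; the pool size, interface names, and equal-share rule are illustrative assumptions, not taken from the project.

        # Toy sketch of a reconfigurable partitioned-memory policy for a
        # virtualized NIC: the on-device memory pool is split evenly among the
        # active virtual interfaces and re-partitioned on every change.
        # Pool size and interface names are illustrative only.

        class NicMemoryPartitioner:
            def __init__(self, pool_bytes):
                self.pool_bytes = pool_bytes
                self.interfaces = []        # active virtual interfaces
                self.partitions = {}        # interface -> bytes assigned

            def _repartition(self):
                if not self.interfaces:
                    self.partitions = {}
                    return
                share = self.pool_bytes // len(self.interfaces)
                self.partitions = {vif: share for vif in self.interfaces}

            def add_interface(self, vif):
                self.interfaces.append(vif)
                self._repartition()         # reconfigure the partitioned memory

            def remove_interface(self, vif):
                self.interfaces.remove(vif)
                self._repartition()

        # Example: 2 MiB of on-NIC memory shared by three virtual interfaces.
        p = NicMemoryPartitioner(2 * 1024 * 1024)
        for name in ("vif0", "vif1", "vif2"):
            p.add_interface(name)
        print(p.partitions)                 # each VIF gets an equal slice of the pool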

    Virtual InfiniBand Clusters for HPC Clouds

    Get PDF
    High Performance Computing (HPC) employs fast interconnect technologies to provide low communication and synchronization latencies for tightly coupled parallel compute jobs. Contemporary HPC clusters have a fixed capacity and static runtime environments; they cannot elastically adapt to dynamic workloads, and they provide a limited selection of applications, libraries, and system software. In contrast, a cloud model for HPC clusters promises more flexibility, as it provides elastic virtual clusters on demand; this is not possible with physically owned clusters. In this paper, we present an approach that makes it possible to use InfiniBand clusters for HPC cloud computing. We propose a performance-driven design of an HPC IaaS layer for InfiniBand, which provides throughput- and latency-aware virtualization of nodes, networks, and network topologies, as well as an approach to an HPC-aware, multi-tenant cloud management system for elastic virtualized HPC compute clusters.
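    As one simple illustration of topology-aware placement for virtual clusters (not the paper's actual scheduler), the Python sketch below greedily packs the requested virtual nodes onto hosts hanging off the same leaf switch so that latency-sensitive traffic stays local; the host and switch names and capacities are invented.

        # Hypothetical topology-aware placement: fill hosts under the switch
        # with the most free capacity first, so a virtual cluster stays as
        # local as possible.  Host/switch names and free-slot counts are invented.
        from collections import defaultdict

        hosts = {
            "node01": {"switch": "leaf1", "free_vcpus": 16},
            "node02": {"switch": "leaf1", "free_vcpus": 8},
            "node03": {"switch": "leaf2", "free_vcpus": 16},
        }

        def place_virtual_cluster(requested_nodes, vcpus_per_node):
            by_switch = defaultdict(list)
            for name, info in hosts.items():
                by_switch[info["switch"]].append(name)

            placement = []
            # Prefer the switch with the most free capacity under it.
            for switch in sorted(
                    by_switch,
                    key=lambda s: -sum(hosts[h]["free_vcpus"] for h in by_switch[s])):
                for host in by_switch[switch]:
                    while (hosts[host]["free_vcpus"] >= vcpus_per_node
                           and len(placement) < requested_nodes):
                        hosts[host]["free_vcpus"] -= vcpus_per_node
                        placement.append(host)
                if len(placement) == requested_nodes:
                    break
            return placement

        print(place_virtual_cluster(requested_nodes=3, vcpus_per_node=8))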

    LibrettOS: A Dynamically Adaptable Multiserver-Library OS

    Full text link
    We present LibrettOS, an OS design that fuses two paradigms to simultaneously address issues of isolation, performance, compatibility, failure recoverability, and run-time upgrades. LibrettOS acts as a microkernel OS that runs servers in an isolated manner. LibrettOS can also act as a library OS when, for better performance, selected applications are granted exclusive access to virtual hardware resources such as storage and networking. Furthermore, applications can switch between the two OS modes with no interruption at run-time. LibrettOS has a uniquely distinguishing advantage in that the two paradigms seamlessly coexist in the same OS, enabling users to simultaneously exploit their respective strengths (i.e., greater isolation, high performance). Systems code, such as device drivers, network stacks, and file systems, remains identical in the two modes, enabling dynamic mode switching and reducing development and maintenance costs. To illustrate these design principles, we implemented a prototype of LibrettOS using rump kernels, allowing us to reuse existing, hardened NetBSD device drivers and a large ecosystem of POSIX/BSD-compatible applications. We use hardware (VM) virtualization to strongly isolate different rump kernel instances from each other. Because the original rumprun unikernel targeted a much simpler model for uniprocessor systems, we redesigned it to support multicore systems. Unlike kernel-bypass libraries such as DPDK, applications need not be modified to benefit from direct hardware access. LibrettOS also supports indirect access through a network server that we have developed. Applications remain uninterrupted even when network components fail or need to be upgraded. Finally, to efficiently use hardware resources, applications can dynamically switch between the indirect and direct modes based on their I/O load at run-time. [full abstract is in the paper] Comment: 16th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '20), March 17, 2020, Lausanne, Switzerland
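    The load-driven switching between indirect and direct I/O could be approximated by a simple user-level policy loop like the Python sketch below; the mode names, threshold, sampling interval, and switch_mode() hook are invented for illustration and are not the LibrettOS interface.

        # Hypothetical policy loop: request exclusive (direct) hardware access
        # when sustained I/O load crosses a threshold, and fall back to the
        # shared network server (indirect mode) otherwise.
        import time

        DIRECT_THRESHOLD_MBPS = 500     # assumed break-even point for direct access
        SAMPLE_INTERVAL_S = 1.0

        def measure_throughput_mbps():
            # Placeholder: a real system would read NIC or socket counters here.
            return 0.0

        def switch_mode(mode):
            # Placeholder for the OS-specific call that changes the I/O path.
            print(f"switching to {mode} mode")

        current_mode = "indirect"
        while True:
            load = measure_throughput_mbps()
            wanted = "direct" if load >= DIRECT_THRESHOLD_MBPS else "indirect"
            if wanted != current_mode:
                switch_mode(wanted)
                current_mode = wanted
            time.sleep(SAMPLE_INTERVAL_S)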

    SR-IOV in High Performance Computing

    Get PDF
    No abstract available.

    Fast & Scalable I/O for Emulated HPUX

    Get PDF
    HPE has positioned a containerized solution called c-UX (code-named Kiran), which runs HPUX in emulated mode (Itanium hardware emulation on x86), as a futuristic solution for the margin-rich UNIX business. The value of containerized HPUX is that it allows customers using legacy HPUX applications to continue running on x86 hardware. Significant effort has been expended to increase the effectiveness of hardware resource utilization on c-UX. The next step in fully optimizing I/O in the c-UX environment is to provide truly scalable high performance by enabling a single I/O device to provide DMA for multiple VMs. This scalability challenge can be solved using Single Root I/O Virtualization (SR-IOV) technology, delivering near-native I/O performance for multiple c-UX instances while also providing memory and traffic isolation for security and high availability, accelerating live migrations, and reducing the cost and complexity of I/O solutions. Network and storage adapters from various vendors can be used to realize SR-IOV on c-UX, which otherwise was not possible on native HPUX due to hardware and firmware limitations. This paper describes an innovative mechanism to enable SR-IOV on the emulated HPUX OS using the Virtual Function I/O (VFIO) framework available in Linux. Disclosed is an approach for achieving highly scalable performance in c-UX applications by allowing the guest OS direct access to parts of the I/O subsystem of the host and handling various aspects of the communication, such as DMA and interrupts. It also highlights the network I/O performance gains achieved using this method.
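    For context, the generic host-side recipe for SR-IOV with VFIO on a Linux host looks roughly like the Python sketch below; the interface name and PCI address are placeholders, and the c-UX specific plumbing described in the paper is not shown.

        # Rough sketch of the standard Linux steps: 1) create virtual functions
        # on the physical NIC, 2) detach a VF from its default driver,
        # 3) bind it to vfio-pci so a guest can own its DMA and interrupts.
        # The netdev name and PCI address below are placeholders.
        from pathlib import Path

        PF_NETDEV = "eth2"               # physical function's netdev (placeholder)
        NUM_VFS = 4
        VF_PCI_ADDR = "0000:04:10.0"     # one resulting VF (placeholder)

        def write(path, value):
            Path(path).write_text(value)

        # 1) Ask the PF driver to spawn virtual functions.
        write(f"/sys/class/net/{PF_NETDEV}/device/sriov_numvfs", str(NUM_VFS))

        # 2) Unbind the VF from whatever driver claimed it.
        unbind = f"/sys/bus/pci/devices/{VF_PCI_ADDR}/driver/unbind"
        if Path(unbind).exists():
            write(unbind, VF_PCI_ADDR)

        # 3) Route the VF to vfio-pci and re-probe it.
        write(f"/sys/bus/pci/devices/{VF_PCI_ADDR}/driver_override", "vfio-pci")
        write("/sys/bus/pci/drivers_probe", VF_PCI_ADDR)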