Accelerating virtualization of accelerators
The use of specialized accelerators is among the most promising paths to better energy efficiency for computationally heavy workloads. However, current software and system support for accelerators is limited: no production-ready solutions yet allow accelerators to be efficiently accessed or shared in domains such as cloud infrastructure and kernel space. Complex hardware and proprietary software stacks inhibit efficient accelerator virtualization. We observe that practical virtualization must interpose at either the topmost (user API) or bottom-most (hardware) interface: virtualization based on interposing intermediate stack layers is impractical.
Based on these observations, this thesis first presents AvA (Accelerated Virtualization of Accelerators) which exposes practical virtual accelerators in the cloud with strong virtualization properties such as isolation, compatibility, and consolidation. AvA is the first system to show general techniques for API remoting that retain both hypervisor interposition and close-to-native performance, and is the first system for automatic construction of virtual accelerator stacks with hypervisor mediation for arbitrary accelerators. We used AvA to virtualize nine accelerators and eleven framework APIs, with orders-of-magnitude lower programming effort than required to construct hand-built virtualization support. These accelerators include seven for which no virtualization support has been previously explored.
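To make the API-remoting idea concrete, the following is a minimal C sketch of a guest-side forwarding stub. Everything in it is an illustrative assumption: the acc_launch call, the wire format, and the socket transport are invented here, and AvA generates such stubs automatically rather than having developers hand-write them.

/* Hypothetical guest-side API-remoting stub; error handling is elided. */
#include <stdint.h>
#include <unistd.h>

static int ava_sock; /* connected to the API server at library init (not shown) */

struct rpc_header {
    uint32_t call_id;     /* which API function to invoke remotely */
    uint32_t payload_len; /* bytes of marshalled arguments that follow */
};

/* Same signature the application expects from the native library, but the
 * body forwards the call instead of touching hardware. */
int acc_launch(int kernel_id, const void *buf, uint32_t len)
{
    struct rpc_header hdr = { .call_id = 1 /* hypothetical ACC_LAUNCH id */,
                              .payload_len = sizeof(kernel_id) + len };
    int32_t ret = -1;

    write(ava_sock, &hdr, sizeof(hdr));             /* send the header */
    write(ava_sock, &kernel_id, sizeof(kernel_id)); /* marshal scalar argument */
    write(ava_sock, buf, len);                      /* marshal the data buffer */
    read(ava_sock, &ret, sizeof(ret));              /* block for the return value */
    return ret;
}

In AvA's hypervisor-interposed design, the transport between the guest stub and the API server is routed so the hypervisor can enforce policy on each call; a plain socket appears above only to keep the sketch self-contained.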
Building on AvA, this thesis presents Akatha (Accelerating Kernel Access to Hardware Acceleration), which uses automation to reduce the developer effort of building efficient kernel-level access to accelerators (e.g., for FS encryption or packet processing). Akatha constructs API-remoting-based kernel accelerator stacks with code generation, leveraging kernel knowledge unavailable in user space to improve performance and resource management. This includes transparently modifying virtual memory mappings to avoid data transfers between kernel and user space, and providing a framework and mechanisms to manage contention for accelerator devices between user space and the kernel. We evaluated Akatha on a range of workloads, showing promising opportunities for OS acceleration.
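As a rough illustration of the zero-copy idea, here is a hypothetical Linux kernel fragment that maps the pages of a kernel buffer into a user-space mapping (such as one owned by a user-level API server) instead of copying the data out. The function name and surrounding module plumbing are assumptions for illustration, not Akatha's actual code.

/* Hypothetical sketch: expose a kernel buffer to a user-space accelerator
 * stack by remapping its pages rather than copying them. This would sit in
 * a kernel module's mmap handler; locking and error paths are elided, the
 * pages are assumed to come from alloc_page(), and the VMA is assumed to
 * span npages * PAGE_SIZE bytes. */
#include <linux/mm.h>

static int map_kernel_buf(struct vm_area_struct *vma,
                          struct page **pages, unsigned long npages)
{
    unsigned long addr = vma->vm_start;
    unsigned long i;

    for (i = 0; i < npages; i++) {
        int ret = vm_insert_page(vma, addr, pages[i]); /* share, don't copy */
        if (ret)
            return ret;
        addr += PAGE_SIZE;
    }
    return 0;
}

Sharing mappings this way lets a user-space accelerator stack operate on the same physical pages a kernel client filled, which is where the data-transfer savings described above would come from.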
Artifacts of AvA (Accelerated Virtualization of Accelerators) in ASPLOS'20
These are the artifacts of the paper "AvA: Accelerated Virtualization of Accelerators", which will appear in ASPLOS'20.
Abstract:
Applications are migrating en masse to the cloud, while accelerators such as GPUs, TPUs, and FPGAs proliferate in the wake of Moore's Law. These trends are in conflict: cloud applications run on virtual platforms, but existing virtualization techniques have not provided production-ready solutions for accelerators. As a result, cloud providers expose accelerators by dedicating physical devices to individual guests. Multi-tenancy and consolidation are lost as a consequence.
We propose automatic generation of virtual accelerator stacks to address the fundamental limitations of existing virtualization techniques. AvA provides automated construction of support for hypervisor-mediated accelerator sharing among mutually distrustful VMs. AvA combines a DSL for describing accelerator APIs and sharing policies, a device-agnostic runtime, and tools to generate and deploy accelerator-specific stack components such as guest libraries and API servers. AvA uses a novel technique called Hypervisor Interposed Remote Acceleration (HIRA) that retains hypervisor interposition for efficient policy enforcement.
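To give a flavor of what such an API description might look like, here is a hypothetical, C-style annotated declaration of the same acc_launch call sketched earlier. Every annotation name below is invented for illustration and does not reproduce AvA's actual DSL syntax; the empty macro definitions merely let the sketch parse as plain C.

/* Hypothetical annotated API description. From a declaration like this,
 * a generator can emit the guest stub, the server-side dispatcher, and
 * the hypervisor hooks needed to enforce sharing policies. */
#include <stdint.h>

#define ava_in          /* argument is copied from guest to API server */
#define ava_buffer(len) /* argument is a buffer of `len` bytes */

int acc_launch(int kernel_id,
               ava_in ava_buffer(len) const void *buf,
               uint32_t len);

Describing marshalling and policy metadata once, at the API boundary, is what lets the tools generate both sides of the remoting stack instead of requiring hand-written virtualization code for each accelerator.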
We used AvA to virtualize ten accelerators and framework APIs, including six for which no virtualization support has been previously explored. Our evaluation shows that AvA can provide near-native performance and enforce resource sharing policies that are not possible with current techniques such as SR-IOV and user-level API remoting, all with orders of magnitude lower programming effort than required to construct hand-built virtualization support. The project is actively maintained at https://github.com/utcs-scea/ava.