SpotServe: Serving Generative Large Language Models on Preemptible Instances
The high computational and memory requirements of generative large language
models (LLMs) make them expensive to serve. This paper aims to reduce the
monetary cost of serving LLMs by leveraging preemptible GPU instances on
modern clouds, which offer access to spare GPUs at a much lower price than
regular instances but may be preempted by the cloud at any time. Serving LLMs
on preemptible instances requires addressing the challenges induced by
frequent preemptions and the need to migrate instances to handle them.
This paper presents SpotServe, the first distributed LLM serving system on
preemptible instances. Several key techniques in SpotServe realize fast and
reliable serving of generative LLMs on cheap preemptible instances. First,
SpotServe dynamically adapts the LLM parallelization configuration to changing
instance availability and fluctuating workloads, balancing the trade-off
among overall throughput, inference latency, and monetary cost. Second, to
minimize the cost of migrating instances during dynamic reparallelization,
SpotServe formulates instance migration as a bipartite graph matching problem
and uses the Kuhn-Munkres algorithm to find an optimal migration plan that
minimizes communication. Finally, to take advantage of the grace
period offered by modern clouds, we introduce stateful inference recovery, a
new inference mechanism that commits inference progress at a much finer
granularity and allows SpotServe to cheaply resume inference upon preemption.
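The bipartite-matching formulation above can be sketched in a few lines. Everything here is a hypothetical illustration, not SpotServe's code: the cost matrix and function name are assumptions, and the brute-force search over permutations is a stdlib stand-in for the polynomial-time Kuhn-Munkres algorithm, viable only for tiny instance counts.

```python
# Hypothetical sketch of migration planning as bipartite matching.
# The real system would use the Kuhn-Munkres (Hungarian) algorithm;
# this sketch brute-forces the optimal assignment with itertools.
from itertools import permutations

def optimal_migration_plan(cost):
    """cost[i][j] = bytes to move if old pipeline stage i lands on new GPU j.
    Returns (assignment, total_cost) minimizing total communication."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm), best_cost

# Example: 3 model-parallel stages, 3 surviving GPUs; zero cost means the
# stage's state (parameters, KV cache) already resides on that GPU.
cost = [
    [0, 8, 6],
    [5, 0, 7],
    [9, 4, 0],
]
plan, total = optimal_migration_plan(cost)
```

Here the matcher keeps every stage on the GPU that already holds its state, so no data moves at all; a preemption that removes a GPU would change the cost matrix and force a nonzero-cost plan.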
We evaluate SpotServe on real spot-instance preemption traces and several
popular LLMs, and show that it reduces P99 tail latency by 2.4-9.1x compared
with the best existing LLM serving systems. We also show that SpotServe can
leverage the price advantage of preemptible instances, saving 54% in monetary
cost compared with using only on-demand instances.
Comment: ASPLOS 202
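The stateful-recovery idea from this abstract can be illustrated with a small sketch. Everything below is hypothetical: the JSON checkpoint file, the fake `decode_step`, and the `preempt_after` knob stand in for SpotServe's fine-grained commit of real inference state (which would also include the KV cache, not just token ids).

```python
# Hypothetical sketch of stateful inference recovery: commit decoding
# progress token-by-token so that, after a preemption, generation resumes
# from the last committed token instead of restarting the request.
import json
import os
import tempfile

def decode_step(prompt, prev_tokens):
    # Stand-in for one LLM decoding step; deterministic so the sketch is
    # testable. A real system would run the model here.
    return len(prompt) + len(prev_tokens)

def generate(prompt, n_tokens, state_path, preempt_after=None):
    # Resume from the last committed token if a checkpoint exists.
    tokens = []
    if os.path.exists(state_path):
        with open(state_path) as f:
            tokens = json.load(f)
    steps = 0
    while len(tokens) < n_tokens:
        if preempt_after is not None and steps == preempt_after:
            return None  # simulated preemption mid-request
        tokens.append(decode_step(prompt, tokens))
        with open(state_path, "w") as f:
            json.dump(tokens, f)  # commit progress at token granularity
        steps += 1
    return tokens

path = os.path.join(tempfile.mkdtemp(), "progress.json")
first = generate("hi", 5, path, preempt_after=2)  # preempted after 2 tokens
resumed = generate("hi", 5, path)                 # picks up where it left off
```

The second call finds the committed tokens and produces only the remaining three, which is the sense in which recovery is "cheap": work done before the preemption is never repeated.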
Towards Fast, Adaptive, and Hardware-Assisted User-Space Scheduling
Modern datacenter applications are prone to high tail latencies since their
requests typically follow highly-dispersive distributions. Delivering fast
interrupts is essential to reducing tail latency. Prior work has proposed both
OS- and system-level solutions to reduce tail latencies for microsecond-scale
workloads through better scheduling. Unfortunately, existing approaches, such
as customized dataplane OSes, require significant OS changes, suffer from
scalability limitations, or fail to realize the full performance the hardware
offers.
The emergence of new hardware features like UINTR exposed new opportunities
to rethink the design paradigms and abstractions of traditional scheduling
systems. We propose LibPreemptible, a preemptive user-level threading library
that is flexible, lightweight, and adaptive. LibPreemptible combines a set of
optimizations: LibUtimer for scalability, a deadline-oriented API for flexible
policies, and a time-quantum controller for adaptiveness. Compared to
the prior state-of-the-art scheduling system Shinjuku, our system achieves
significant tail latency and throughput improvements for various workloads
without modifying the kernel. We also demonstrate the flexibility of
LibPreemptible across scheduling policies for real applications experiencing
varying load levels and characteristics.
Comment: Accepted by HPCA202
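A deadline-oriented scheduling API of the kind described above might look like the following sketch. The `Request` type, `DeadlineQueue` class, and earliest-deadline-first (EDF) policy are illustrative assumptions, not LibPreemptible's actual interface.

```python
# Hypothetical sketch of a deadline-oriented scheduling API: requests
# carry absolute deadlines, and the dispatcher always runs the request
# with the earliest deadline next (EDF), which bounds tail latency for
# short requests even when long ones are queued.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    deadline_us: int              # absolute deadline in microseconds
    name: str = field(compare=False)

class DeadlineQueue:
    def __init__(self):
        self._heap = []
    def submit(self, req):
        heapq.heappush(self._heap, req)
    def next_request(self):
        # Pop the request whose deadline is nearest.
        return heapq.heappop(self._heap) if self._heap else None

q = DeadlineQueue()
q.submit(Request(500, "long-rpc"))
q.submit(Request(120, "short-rpc"))
q.submit(Request(300, "mid-rpc"))
order = []
while (r := q.next_request()) is not None:
    order.append(r.name)
```

Dispatch order follows deadlines rather than arrival order, so the short request with the 120 µs deadline runs first even though it was submitted second.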
Cloud Cost Optimization: A Comprehensive Review of Strategies and Case Studies
Cloud computing has revolutionized the way organizations manage their IT
infrastructure, but it has also introduced new challenges, such as managing
cloud costs. This paper explores various techniques for cloud cost
optimization, including cloud pricing analysis and strategies for resource
allocation. Real-world case studies of these techniques are presented, along
with a discussion of their effectiveness and key takeaways. The analysis
conducted in this paper reveals that organizations can achieve significant cost
savings by adopting cloud cost optimization techniques. Additionally, future
research directions are proposed to advance the state of the art in this
important field.
Digital libraries on an iPod: Beyond the client-server model
This paper describes an experimental system that enhanced an iPod with digital library capabilities. Using the open source digital library software Greenstone as a base, this paper maps out the technical steps necessary to achieve this, along with an account of our subsequent experimentation. This included command-line usage of Greenstone's basic runtime system on the device, augmenting the iPod's main interactive menu-driven application to include searching and hierarchical browsing of digital library collections stored locally, and a selection of "launcher" applications for target documents such as text files, images, and audio. Media-rich applications for digital stories and collaging were also developed. We also configured the iPod to run as a web server providing digital library content to others over a network, effectively turning the traditional mobile client-server model upside down.