Chameleon: a heterogeneous and disaggregated accelerator system for retrieval-augmented language models
A Retrieval-Augmented Language Model (RALM) augments a generative language
model by retrieving context-specific knowledge from an external database. This
strategy facilitates impressive text generation quality even with smaller
models, thus reducing computational demands by orders of magnitude. However,
RALMs introduce unique system design challenges due to (a) the diverse workload
characteristics between LM inference and retrieval and (b) the various system
requirements and bottlenecks for different RALM configurations such as model
sizes, database sizes, and retrieval frequencies. We propose Chameleon, a
heterogeneous accelerator system that integrates both LM and retrieval
accelerators in a disaggregated architecture. The heterogeneity ensures
efficient acceleration of both LM inference and retrieval, while the
accelerator disaggregation enables the system to independently scale both types
of accelerators to fulfill diverse RALM requirements. Our Chameleon prototype
implements retrieval accelerators on FPGAs and assigns LM inference to GPUs,
with a CPU server orchestrating these accelerators over the network. Compared
to CPU-based and CPU-GPU vector search systems, Chameleon achieves up to 23.72x
speedup and 26.2x energy efficiency improvement. Evaluated on various RALMs, Chameleon
exhibits up to a 2.16x reduction in latency and a 3.18x speedup in throughput
compared to the hybrid CPU-GPU architecture. These promising results pave the
way for bringing accelerator heterogeneity and disaggregation into future RALM
systems.
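
As a rough illustration of the retrieve-then-generate loop that a disaggregated RALM system must orchestrate, the sketch below uses in-memory stand-ins: brute-force NumPy search in place of the FPGA retrieval accelerator and a stub in place of the GPU LM. All class and function names are hypothetical and not part of the Chameleon codebase.

```python
# Hypothetical sketch of disaggregated RALM orchestration (not Chameleon's API).
# The "accelerators" here are local stand-ins; Chameleon places them on FPGAs
# and GPUs and coordinates them from a CPU server over the network.
import numpy as np

class RetrievalAccelerator:
    """Stand-in for an FPGA-based vector search engine."""
    def __init__(self, database: np.ndarray):
        self.database = database  # (num_passages, dim) passage embeddings

    def top_k(self, query: np.ndarray, k: int) -> np.ndarray:
        # Brute-force inner-product search; a real accelerator would use
        # an ANN index (e.g., IVF-PQ) implemented in hardware.
        scores = self.database @ query
        return np.argsort(-scores)[:k]

class LMAccelerator:
    """Stand-in for GPU-based language model inference."""
    def generate(self, prompt: str, context_ids: np.ndarray) -> str:
        # A real system would prepend the retrieved passages and decode tokens.
        return f"<continuation of '{prompt}' conditioned on passages {context_ids.tolist()}>"

def ralm_step(query_text: str, query_vec: np.ndarray,
              retriever: RetrievalAccelerator, lm: LMAccelerator, k: int = 4) -> str:
    # CPU orchestrator: dispatch retrieval, then hand the context to the LM.
    # Because the two accelerator pools are disaggregated, each can be scaled
    # independently to match the RALM's retrieval frequency and model size.
    context_ids = retriever.top_k(query_vec, k)
    return lm.generate(query_text, context_ids)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = rng.standard_normal((1000, 128)).astype(np.float32)
    query = rng.standard_normal(128).astype(np.float32)
    print(ralm_step("What is a RALM?", query, RetrievalAccelerator(db), LMAccelerator()))
```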
Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
Methods for carefully selecting or generating a small set of training data to
learn from, i.e., data pruning, coreset selection, and data distillation, have
been shown to be effective in reducing the ever-increasing cost of training
neural networks. Behind this success are rigorously designed strategies for
identifying informative training examples out of large datasets. However, these
strategies come with additional computational costs associated with subset
selection or data distillation before training begins, and, furthermore, many
have even been shown to underperform random sampling in high data compression
regimes. As such, many data pruning, coreset selection, or distillation methods
may not reduce 'time-to-accuracy', which has become a critical efficiency
measure of training deep neural networks over large datasets. In this work, we
revisit a powerful yet overlooked random sampling strategy to address these
challenges and introduce an approach called Repeated Sampling of Random Subsets
(RSRS or RS2), where we sample a new random subset of the training data for each
epoch of model training. We test RS2 against thirty state-of-the-art data
pruning and data distillation methods across four datasets including ImageNet.
Our results demonstrate that RS2 significantly reduces time-to-accuracy
compared to existing techniques. For example, when training on ImageNet in the
high-compression regime (using less than 10% of the dataset each epoch), RS2
yields accuracy improvements of up to 29% over competing pruning methods
while reducing runtime by 7x. Beyond this meta-study, we
provide a convergence analysis for RS2 and discuss its generalization
capability. The primary goal of our work is to establish RS2 as a competitive
baseline for future data selection or distillation techniques aimed at
efficient training.
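
A minimal sketch of the RS2 idea described above: draw a fresh random subset of the training set at the start of every epoch and train only on it. The PyTorch wiring below (dataset, model, optimizer, and hyperparameters) is illustrative and not the authors' implementation.

```python
# Minimal PyTorch sketch of Repeated Sampling of Random Subsets (RS2):
# each epoch trains on a freshly drawn random subset of the full dataset.
# Model, dataset, and hyperparameter choices are placeholders for illustration.
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

def train_rs2(model, dataset, epochs=10, subset_fraction=0.1, batch_size=128, lr=0.1):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    subset_size = max(1, int(subset_fraction * len(dataset)))
    for epoch in range(epochs):
        # Re-sample the subset at every epoch (without replacement within an epoch),
        # so training sees different examples over time despite the small per-epoch set.
        indices = torch.randperm(len(dataset))[:subset_size]
        loader = DataLoader(Subset(dataset, indices.tolist()),
                            batch_size=batch_size, shuffle=True)
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model

if __name__ == "__main__":
    # Tiny synthetic example so the sketch runs end to end.
    x = torch.randn(1000, 20)
    y = torch.randint(0, 5, (1000,))
    train_rs2(torch.nn.Linear(20, 5), TensorDataset(x, y), epochs=3)
```

Because the subset changes every epoch, there is no upfront selection or distillation cost, which is what allows the method to target time-to-accuracy directly.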