On Optimal Caching and Model Multiplexing for Large Model Inference
Large Language Models (LLMs) and other large foundation models have achieved
noteworthy success, but their size exacerbates existing resource consumption
and latency challenges. In particular, the large-scale deployment of these
models is hindered by the significant resource requirements during inference.
In this paper, we study two approaches for mitigating these challenges:
employing a cache to store previous queries and learning a model multiplexer to
choose from an ensemble of models for query processing.
Theoretically, we provide an optimal algorithm for jointly optimizing both
approaches to reduce the inference cost in both offline and online tabular
settings. By combining a caching algorithm, namely Greedy Dual Size with
Frequency (GDSF) or Least Expected Cost (LEC), with a model multiplexer, we
achieve optimal rates in both offline and online settings. Empirically,
simulations show that the combination of our caching and model multiplexing
algorithms greatly improves over the baselines, with up to
improvement over the baseline when the ratio between the maximum cost and
minimum cost is . Experiments on real datasets show a
improvement in FLOPs over the baseline when the ratio for FLOPs is , and a
improvement in latency when the ratio for average latency is .
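
The abstract names two concrete mechanisms: a cost-aware cache (GDSF or LEC eviction) and a learned model multiplexer that routes each query to a cheap or expensive model. As a rough illustration only, the Python sketch below shows one way such a combination could be wired together; it is not the paper's algorithm, and all identifiers (LECCache, multiplex, the quality_threshold heuristic, the toy query stream) are hypothetical.

```python
# Minimal sketch (assumed, not the paper's implementation) of an LEC-style
# cache combined with a simple cost-based model multiplexer.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class CacheEntry:
    response: str
    frequency: int = 1   # observed hit count for this query
    cost: float = 1.0    # estimated cost to recompute the response


class LECCache:
    """Least Expected Cost eviction: when full, drop the entry whose
    expected recomputation saving (frequency * cost) is smallest."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: dict[str, CacheEntry] = {}

    def get(self, query: str) -> str | None:
        entry = self.entries.get(query)
        if entry is not None:
            entry.frequency += 1
            return entry.response
        return None

    def put(self, query: str, response: str, cost: float) -> None:
        if query not in self.entries and len(self.entries) >= self.capacity:
            victim = min(self.entries,
                         key=lambda q: self.entries[q].frequency * self.entries[q].cost)
            del self.entries[victim]
        self.entries[query] = CacheEntry(response=response, cost=cost)


def multiplex(query: str, small_cost: float, large_cost: float,
              predicted_small_quality: float, quality_threshold: float = 0.8):
    """Toy multiplexer: use the cheap model when its predicted quality is
    acceptable, otherwise fall back to the expensive model."""
    if predicted_small_quality >= quality_threshold:
        return "small", small_cost
    return "large", large_cost


# Usage: serve a stream of (query, predicted quality of the small model).
cache = LECCache(capacity=2)
stream = [("what is 2+2?", 0.9), ("summarize this paper", 0.3), ("what is 2+2?", 0.9)]
for query, predicted_quality in stream:
    cached = cache.get(query)
    if cached is not None:
        print(f"cache hit: {query!r}")
        continue
    model, cost = multiplex(query, small_cost=1.0, large_cost=10.0,
                            predicted_small_quality=predicted_quality)
    cache.put(query, f"<answer from {model} model>", cost)
    print(f"cache miss: {query!r} routed to {model} model (cost {cost})")
```

In this toy setup the cache keeps the entries whose recomputation would be most expensive in expectation, while the multiplexer trades quality against cost per query; the paper studies how to make these two decisions jointly and optimally.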