Generating text with a large language model (LLM) consumes massive amounts
of memory. Apart from the already-large model parameters, the key/value (KV)
cache that holds information about previous tokens in a sequence can grow to be
even larger than the model itself. This problem is exacerbated by current
LLM serving frameworks, which reserve memory for the maximum sequence length
for the KV cache to guarantee generating a complete sequence, since they do
not know the output sequence length in advance. This restricts us to a smaller
batch size, leading to lower GPU utilization and, above all, lower throughput. We argue
that designing a system with a priori knowledge of the output sequence can
mitigate this problem. To this end, we propose S3, which predicts the
output sequence length, schedules generation queries based on the prediction to
increase device resource utilization and throughput, and handles mispredictions.
Our proposed method achieves 6.49× the throughput of systems that
assume the worst case for the output sequence length.