In modern machine learning (ML) applications on cloud or edge computing
platforms, batching is an important technique for providing efficient and
economical services at scale. In particular, parallel computing resources on
the platforms, such as graphics processing units (GPUs), have higher
computational and energy efficiency with larger batch sizes. However, larger
batch sizes may also result in longer response times, and thus require
judicious design. This paper aims to provide a dynamic batching policy that
strikes a balance between efficiency and latency. The GPU-based inference
service is modeled as a batch service queue with batch-size dependent
processing time. The design of dynamic batching is then a continuous-time
average-cost problem, formulated as a semi-Markov decision process (SMDP)
with the objective of minimizing the weighted sum of average response time
and average power consumption. The optimal policy is obtained by solving
an associated discrete-time Markov decision process (MDP) problem with finite
state approximation and "discretization". By introducing an abstract cost to
reflect the impact of "tail" states, the space complexity and the time
complexity of the procedure can be reduced by 63.5% and 98%, respectively. Our
results show that the optimal policies potentially possess a control limit
structure. Numerical results also show that SMDP-based batching policies can
adapt to different traffic intensities and outperform other benchmark policies.
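To illustrate what a control-limit structure means in this setting, the sketch below shows a threshold-type batching rule. It is a minimal illustration, not the paper's exact optimal policy; the function and parameter names (control_limit_batch, threshold, max_batch) are our own assumptions.

```python
# A minimal sketch of a control-limit (threshold) batching rule, not the
# paper's exact optimal policy; names like `threshold` and `max_batch`
# are illustrative assumptions.

def control_limit_batch(queue_len: int, threshold: int, max_batch: int) -> int:
    """Return the batch size to dispatch when the GPU becomes idle.

    Under a control-limit structure, the server waits until the queue
    length reaches `threshold`, then serves as many queued requests as
    possible, capped by the largest supported batch size `max_batch`.
    """
    if queue_len < threshold:
        return 0  # keep accumulating requests
    return min(queue_len, max_batch)


if __name__ == "__main__":
    # With threshold 4 and a maximum batch of 8: 2 requests wait,
    # 6 are served together, and 11 are capped at a batch of 8.
    for q in (2, 6, 11):
        print(q, "->", control_limit_batch(q, threshold=4, max_batch=8))
```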
Furthermore, the proposed solution has notable flexibility in balancing power
consumption and latency.
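For concreteness, the weighted-sum objective mentioned above can be written as the following long-run average-cost criterion; the weights $w_1, w_2$ and the symbols $\bar{R}_{\pi}$, $\bar{P}_{\pi}$ are our own illustrative notation rather than the paper's.

```latex
% Sketch of the weighted-sum average-cost objective (illustrative notation).
\min_{\pi}\; w_{1}\,\bar{R}_{\pi} + w_{2}\,\bar{P}_{\pi},
\qquad w_{1}, w_{2} \ge 0,
```

where $\bar{R}_{\pi}$ denotes the average response time and $\bar{P}_{\pi}$ the average power consumption under batching policy $\pi$. Varying the weights traces out the latency/power trade-off referred to above.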
Comment: Accepted by 2023 IEEE International Conference on Communications (ICC).