Graph-based diffusion models have shown promising results in generating
high-quality solutions to NP-complete (NPC) combinatorial optimization (CO)
problems. However, these models are often inefficient at inference time,
because the denoising diffusion process requires many iterative network
evaluations.
process. This paper proposes to use progressive distillation to speed up the
inference by taking fewer steps (e.g., forecasting two steps ahead within a
single step) during the denoising process. Our experimental results show that
the progressively distilled model can perform inference 16 times faster with
only 0.019% degradation in performance on the TSP-50 dataset
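To make the distillation idea concrete, the following is a minimal sketch of one progressive-distillation round in PyTorch: a student copy of the denoiser is trained so that one student step matches two consecutive teacher steps. The `TinyDenoiser` network, the simplified deterministic `denoise_step` update, and all hyperparameters are illustrative assumptions, not the paper's actual architecture or training setup.

```python
# Hypothetical sketch of one progressive-distillation round (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Toy stand-in for a graph-based denoising network."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x_t, t):
        # Condition on the (normalized) timestep by simple concatenation.
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, t_feat], dim=-1))

def denoise_step(model, x_t, t, step):
    """One deterministic denoising update (simplified, DDIM-like)."""
    pred = model(x_t, t)
    frac = step / t.clamp(min=1).float().unsqueeze(-1)
    return x_t + (pred - x_t) * frac

def distill_round(teacher, student, optimizer, num_steps=1000, batch=16, dim=32, iters=100):
    """Train the student so one student step matches two teacher steps."""
    teacher.eval()
    for _ in range(iters):
        x_t = torch.randn(batch, dim)                       # noisy input sample
        t = torch.randint(2, num_steps, (batch,))           # random timestep >= 2
        with torch.no_grad():
            x_mid = denoise_step(teacher, x_t, t, 1)        # teacher: t -> t-1
            x_tgt = denoise_step(teacher, x_mid, t - 1, 1)  # teacher: t-1 -> t-2
        x_pred = denoise_step(student, x_t, t, 2)           # student: t -> t-2 in one step
        loss = F.mse_loss(x_pred, x_tgt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Usage sketch: initialize the student from the teacher, distill, then reuse the
# student as the next round's teacher; each round halves the step count, so four
# rounds give the 16x speedup mentioned above.
teacher = TinyDenoiser()
student = TinyDenoiser()
student.load_state_dict(teacher.state_dict())
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
distill_round(teacher, student, optimizer)
```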