Dissecting Arbitrary-scale Super-resolution Capability from Pre-trained Diffusion Generative Models
Diffusion-based Generative Models (DGMs) have achieved unparalleled
performance in synthesizing high-quality visual content, opening up the
opportunity to improve image super-resolution (SR) tasks. Recent solutions for
these tasks often train architecture-specific DGMs from scratch, or require
iterative fine-tuning and distillation on pre-trained DGMs, both of which
demand considerable time and hardware investment. More seriously, since the
DGMs are established with a discrete, pre-defined upsampling scale, they
cannot readily meet the emerging requirements of arbitrary-scale
super-resolution (ASSR), where a unified model adapts to arbitrary upsampling
scales instead of a series of distinct models prepared for each scale. These
limitations raise an intriguing question: can we identify the ASSR capability of existing
pre-trained DGMs without the need for distillation or fine-tuning? In this
paper, we take a step toward resolving this question by proposing Diff-SR,
the first ASSR attempt based solely on pre-trained DGMs, with no additional
training effort. It is motivated by the finding that a simple methodology,
which injects a specific amount of noise into the low-resolution images
before invoking a DGM's backward diffusion process, outperforms current
leading solutions. The key insight is determining a suitable amount of noise
to inject: too little leads to poor low-level fidelity, while too much
degrades the high-level signature. Through a fine-grained theoretical
analysis, we propose the Perceptual Recoverable Field (PRF), a metric that
captures the optimal trade-off between these two
factors. Extensive experiments verify the effectiveness, flexibility, and
adaptability of Diff-SR, demonstrating superior performance over
state-of-the-art solutions under diverse ASSR environments.
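
The mechanism the abstract describes, partially noising the (upsampled) low-resolution input and then running only the tail of the backward diffusion process, can be illustrated with a minimal sketch. The snippet below assumes a DDPM-style linear noise schedule and a hypothetical pre-trained noise-prediction network `eps_model`; the names `diffusion_sr_sketch`, `eps_model`, and `t_inject` are illustrative, and the PRF-based rule for choosing `t_inject` is defined in the paper, not here.

```python
import torch

def diffusion_sr_sketch(lr_image, eps_model, t_inject, num_steps=1000):
    """Minimal sketch of noise-injection super-resolution with a
    pre-trained diffusion model (illustrative, not the authors' code).

    lr_image : (B, C, H, W) low-resolution image, already upsampled
               (e.g., bicubically) to the target resolution, in [-1, 1].
    eps_model: hypothetical pre-trained network predicting the noise
               eps from (x_t, t).
    t_inject : intermediate timestep controlling how much noise is
               injected; in the paper this choice is governed by the
               PRF metric, which balances low-level fidelity against
               high-level structure.
    """
    # Standard linear DDPM schedule (an assumption; other schedules work).
    betas = torch.linspace(1e-4, 2e-2, num_steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    # Forward process: jump straight to timestep t_inject in closed form,
    #   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
    noise = torch.randn_like(lr_image)
    x = (alpha_bar[t_inject].sqrt() * lr_image
         + (1.0 - alpha_bar[t_inject]).sqrt() * noise)

    # Backward process: denoise from t_inject down to 0 with the frozen
    # pre-trained model only; no fine-tuning or distillation is involved.
    for t in range(t_inject, -1, -1):
        eps = eps_model(x, t)
        mean = (x - betas[t] / (1.0 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = mean + betas[t].sqrt() * torch.randn_like(x)  # ancestral sampling step
        else:
            x = mean
    return x  # super-resolved estimate at the target scale
```

The trade-off the abstract names shows up directly in `t_inject`: a small value leaves the reverse process too few steps to restore fine detail (poor low-level fidelity), while a value near `num_steps` drowns out the content of `lr_image` before denoising begins (a degraded high-level signature); PRF is the paper's criterion for picking the point between the two.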