Rendering high-resolution images in real-time applications (e.g., video
games, virtual reality) is time-consuming, so super-resolution has become
increasingly important in real-time rendering. However, it remains
challenging to preserve sharp texture details, maintain temporal stability,
and avoid ghosting artifacts in real-time rendering super-resolution. To
this end, we introduce radiance demodulation into real-time rendering
super-resolution, separating the rendered image (radiance) into a lighting
component and a material component: the lighting component tends to be
smoother than the rendered image, while the high-resolution material
component, which carries the detailed textures, can be easily obtained. We
therefore perform super-resolution only on the lighting component and
re-modulate it with the high-resolution material component to obtain the
final super-resolution image. In this way, the texture details are
preserved much better.
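The demodulation pipeline reduces to a per-pixel divide and multiply. Below is a minimal sketch, assuming PyTorch tensors standing in for the engine's rendered and material buffers; a bilinear upsampler is used as an illustrative placeholder for the learned super-resolution network, and all function names are assumptions rather than the paper's API.

```python
import torch
import torch.nn.functional as F

def demodulate(radiance, material, eps=1e-4):
    """Lighting = radiance / material, per pixel and channel.
    eps guards against division by zero on dark albedo texels."""
    return radiance / (material + eps)

def remodulate(lighting, material):
    """Final image = lighting * material."""
    return lighting * material

# Toy stand-ins (assumptions): random tensors in place of engine buffers.
B, C, H, W, scale = 1, 3, 64, 64, 4
rendered_lr = torch.rand(B, C, H, W)                  # low-res rendered frame
material_lr = torch.rand(B, C, H, W)                  # low-res material buffer
material_hr = torch.rand(B, C, H * scale, W * scale)  # high-res material buffer

lighting_lr = demodulate(rendered_lr, material_lr)
# Placeholder for the learned SR network applied to the smooth lighting:
lighting_hr = F.interpolate(lighting_lr, scale_factor=scale,
                            mode="bilinear", align_corners=False)
output_hr = remodulate(lighting_hr, material_hr)  # texture details restored
```

Because the high-resolution material component is multiplied back in at full resolution, its texture detail survives regardless of how aggressively the lighting was downsampled.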
We then propose a reliable warping module that explicitly marks unreliable
occluded regions with a motion mask, removing ghosting artifacts.
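Conceptually, reliable warping amounts to backward-warping the previous frame with motion vectors and zeroing out pixels whose history is invalid. A minimal sketch, assuming PyTorch-style tensors, normalized grid offsets, and an externally computed motion mask (1 = reliable, 0 = occluded):

```python
import torch
import torch.nn.functional as F

def warp(prev_frame, motion_vectors):
    """Backward-warp prev_frame (B,C,H,W) by per-pixel motion vectors
    given as normalized [-1, 1] grid offsets of shape (B,H,W,2)."""
    B, _, H, W = prev_frame.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)
    base_grid = base_grid.expand(B, -1, -1, -1).to(prev_frame)
    return F.grid_sample(prev_frame, base_grid + motion_vectors,
                         mode="bilinear", padding_mode="border",
                         align_corners=True)

def reliable_warp(prev_frame, motion_vectors, motion_mask):
    """Mask out occluded/disoccluded pixels so stale history is never
    reused there -- the source of ghosting. motion_mask: (B,1,H,W)."""
    return warp(prev_frame, motion_vectors) * motion_mask
```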
We further enhance temporal stability by designing a frame-recurrent neural
network that aggregates the previous and current frames, better capturing
the spatial-temporal correlation between reconstructed frames.
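A frame-recurrent aggregator can be sketched as a small network that fuses the current low-resolution lighting with the reliably warped hidden state from the previous frame; the layer sizes below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FrameRecurrentSR(nn.Module):
    """Minimal frame-recurrent aggregation sketch: fuse the current
    low-res lighting with the warped previous hidden state, and emit
    an updated hidden state for the next frame."""
    def __init__(self, ch=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3 + ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.to_rgb = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, lighting_lr, warped_hidden):
        hidden = self.fuse(torch.cat([lighting_lr, warped_hidden], dim=1))
        return self.to_rgb(hidden), hidden  # output + state for next frame
```

At each frame, the returned hidden state would be warped by the reliable warping module before being fed back, so temporal information accumulates across frames without propagating stale history through occlusions.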
As a result, our method produces temporally stable results with
high-quality details in real-time rendering, even in the highly challenging
4 × 4 super-resolution setting.