Robust temporal depth enhancement method for dynamic virtual view synthesis
Depth-image-based rendering (DIBR) is a view synthesis technique that generates virtual views by warping from
the reference images based on depth maps. The quality of synthesized views highly depends on the accuracy of
depth maps. However, in dynamic scenarios, depth sequences obtained by applying stereo matching frame by frame can be temporally inconsistent, especially in static regions, which leads to distracting flickering artifacts in synthesized videos. This problem can be mitigated by depth enhancement methods that perform temporal filtering to suppress depth inconsistency, yet such filtering may also spread depth errors: although these algorithms increase the temporal consistency of synthesized videos, they risk reducing the quality of the rendered videos. Since conventional methods may not achieve both properties, in this paper,
we present a robust temporal depth enhancement (RTDE) method for static regions, which propagates reliable depth values into succeeding frames to improve not only the accuracy but also the temporal consistency of the depth estimates. This technique benefits the quality of synthesized videos. In addition, we propose a novel evaluation metric to quantitatively compare the temporal consistency of our method against state-of-the-art methods.
Experimental results demonstrate the robustness of our method for dynamic virtual view synthesis: both the temporal consistency and the quality of synthesized videos in static regions are improved.
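The DIBR warping step mentioned in the abstract can be illustrated with a minimal depth-based forward warp. The sketch below assumes a rectified, horizontal-baseline camera pair, so each pixel shifts by a disparity proportional to inverse depth, and uses a z-buffer to resolve occlusions; the function name, parameters, and camera model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dibr_forward_warp(ref_img, depth, focal, baseline):
    """Illustrative DIBR forward warp for a rectified horizontal baseline.

    Each reference pixel (y, x) is shifted left by
    disparity = focal * baseline / depth, and a z-buffer keeps the
    nearest surface where several pixels land on the same target.
    Disocclusions remain as zero-valued holes.
    """
    h, w = depth.shape
    out = np.zeros_like(ref_img)
    zbuf = np.full((h, w), np.inf)
    # Per-pixel disparity; guard against zero depth.
    disp = focal * baseline / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - disp[y, x]))  # target column in virtual view
            if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:
                zbuf[y, xt] = depth[y, x]    # keep the closer surface
                out[y, xt] = ref_img[y, x]
    return out
```

Because the warp is driven entirely by the depth map, a temporally inconsistent depth value in a static region moves the same pixel to different target columns in successive frames, which is the flickering artifact the RTDE method is designed to suppress.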