Video Frame Interpolation via Adaptive Separable Convolution
Standard video frame interpolation methods first estimate optical flow
between input frames and then synthesize an intermediate frame guided by
motion. Recent approaches merge these two steps into a single convolution
process by convolving input frames with spatially adaptive kernels that account
for motion and re-sampling simultaneously. These methods require large kernels
to handle large motion, which limits the number of pixels whose kernels can be
estimated at once due to the large memory demand. To address this problem, this
paper formulates frame interpolation as local separable convolution over input
frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D
kernels require significantly fewer parameters to be estimated. Our method
develops a deep fully convolutional neural network that takes two input frames
and estimates pairs of 1D kernels for all pixels simultaneously. Since our
method estimates the kernels and synthesizes the whole video frame at
once, it allows a perceptual loss to be incorporated to train the neural
network to produce visually pleasing frames. This deep neural network is
trained end-to-end using widely available video data without any human
annotation. Both qualitative and quantitative experiments show that our method
provides a practical solution to high-quality video frame interpolation.
Comment: ICCV 2017, http://graphics.cs.pdx.edu/project/sepconv
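The parameter saving behind the separable formulation can be illustrated with a minimal numpy sketch. This is not the paper's network, only the per-pixel synthesis step it describes: each output pixel is produced by two pairs of 1D kernels whose outer products act as implicit 2D kernels, so an n-tap pair needs 4n estimated values instead of the 2n^2 that two full 2D kernels would require. The function name and the tiny 3x3 patch size here are illustrative choices, not from the paper.

```python
import numpy as np

def sepconv_pixel(patch1, patch2, kv1, kh1, kv2, kh2):
    """Synthesize one output pixel from co-located patches of the two
    input frames, using a (vertical, horizontal) 1D kernel pair per frame.

    The outer product of each 1D pair forms an implicit 2D kernel, so
    only 4n values are estimated per pixel instead of 2n^2."""
    k1 = np.outer(kv1, kh1)  # implicit n x n kernel for frame 1
    k2 = np.outer(kv2, kh2)  # implicit n x n kernel for frame 2
    return float(np.sum(k1 * patch1) + np.sum(k2 * patch2))

# One-hot kernels on frame 1 simply reproduce its centre pixel:
patch1 = np.arange(9.0).reshape(3, 3)   # centre value is 4.0
patch2 = np.zeros((3, 3))
one_hot = np.array([0.0, 1.0, 0.0])
zero = np.zeros(3)
out = sepconv_pixel(patch1, patch2, one_hot, one_hot, zero, zero)  # -> 4.0
```

In the actual method the 1D kernels are predicted by a fully convolutional network for every pixel at once; non-trivial kernels blend and resample both frames to account for motion.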
Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time
Natural videos captured by consumer cameras often suffer from low frame
rates and motion blur due to the combination of dynamic scene complexity,
lens and sensor imperfections, and less-than-ideal exposure settings. As a
result, computational methods that jointly perform video frame interpolation
and deblurring have begun to emerge, albeit under the unrealistic assumption
that the exposure time is known and fixed. In this work, we aim for a more realistic
and challenging task - joint video multi-frame interpolation and deblurring
under unknown exposure time. Toward this goal, we first adopt a variant of
supervised contrastive learning to construct an exposure-aware representation
from input blurred frames. We then train two U-Nets for intra-motion and
inter-motion analysis, respectively, adapting to the learned exposure
representation via gain tuning. We finally build our video reconstruction
network upon the exposure and motion representation by progressive
exposure-adaptive convolution and motion refinement. Extensive experiments on
both simulated and real-world datasets show that our optimized method achieves
notable performance gains over the state-of-the-art on the joint video x8
interpolation and deblurring task. Moreover, on the seemingly implausible x16
interpolation task, our method outperforms existing methods by more than 1.5 dB
in terms of PSNR.
Comment: Accepted by CVPR 2023, available at
https://github.com/shangwei5/VIDU
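The PSNR figures above can be made concrete. PSNR is defined as 10 log10(peak^2 / MSE), so a 1.5 dB gain corresponds to roughly a 29% reduction in mean squared error (10^(-0.15) ≈ 0.71). A minimal sketch of the metric, assuming images scaled to [0, peak]:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and an
    estimate, both scaled to [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB at peak = 1.0:
value = psnr(np.zeros((4, 4)), np.full((4, 4), 0.1))  # -> 20.0
```

Reported gains in papers like this one are typically averaged over all interpolated frames of a test set, so a 1.5 dB margin reflects a consistent error reduction rather than a single-frame outlier.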