E-VFIA : Event-Based Video Frame Interpolation with Attention
Video frame interpolation (VFI) is a fundamental vision task that aims to
synthesize several frames between two consecutive original video images. Most
algorithms aim to accomplish VFI by using only keyframes, which is an ill-posed
problem, since the keyframes usually do not provide accurate information about
the trajectories of the objects in the scene. On the other hand, event-based
cameras provide more precise information between the keyframes of a video. Some
recent state-of-the-art event-based methods approach this problem by utilizing
event data for better optical flow estimation and then interpolating frames by
warping. Nonetheless, those methods suffer heavily from ghosting artifacts. In
contrast, some kernel-based VFI methods that use only frames as input
have shown that deformable convolutions, when combined with transformers, can
be a reliable way of dealing with long-range dependencies. We propose
event-based video frame interpolation with attention (E-VFIA), a lightweight
kernel-based method. E-VFIA fuses event information with standard video frames
through deformable convolutions to generate high-quality interpolated frames. The
proposed method represents events with high temporal resolution and uses a
multi-head self-attention mechanism to better encode event-based information,
while being less vulnerable to blurring and ghosting artifacts, thus
generating crisper frames. Simulation results show that the proposed
technique outperforms current state-of-the-art methods (both frame- and
event-based) with a significantly smaller model size.

Comment: Submitted to the 2023 IEEE International Conference on Robotics and Automation (ICRA 2023).
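
As an illustration of the kind of event encoding the abstract describes, below is a minimal PyTorch sketch of a multi-head self-attention encoder over an event voxel grid. The module name EventSelfAttentionEncoder, the voxel-bin count, embedding dimension, and head count are illustrative assumptions, not the published E-VFIA architecture.

# Illustrative sketch only: a multi-head self-attention encoder over event
# voxel-grid embeddings. Shapes and hyperparameters are assumptions, not the
# authors' actual E-VFIA design.
import torch
import torch.nn as nn


class EventSelfAttentionEncoder(nn.Module):
    """Hypothetical encoder: self-attention over spatial event-voxel tokens."""

    def __init__(self, voxel_bins: int = 8, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        # Project each spatial location's temporal bins to an embedding vector.
        self.proj = nn.Linear(voxel_bins, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (batch, bins, height, width) -> tokens: (batch, H*W, bins)
        tokens = voxels.flatten(2).transpose(1, 2)
        x = self.proj(tokens)                 # (batch, H*W, embed_dim)
        attn_out, _ = self.attn(x, x, x)      # self-attention over spatial tokens
        return self.norm(x + attn_out)        # residual connection + layer norm


if __name__ == "__main__":
    encoder = EventSelfAttentionEncoder()
    dummy_voxels = torch.randn(1, 8, 32, 32)  # synthetic event voxel grid
    features = encoder(dummy_voxels)
    print(features.shape)                     # torch.Size([1, 1024, 64])

In a full pipeline, the resulting per-location event features would then be fused with frame features, for example to predict the offsets and weights of the deformable convolution kernels mentioned in the abstract; that fusion step is not shown here.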