Video Outpainting using Conditional Generative Adversarial Networks

Abstract

Recent advancements in machine learning and neural networks have pushed the boundaries of what computers can achieve. Generative adversarial networks (GANs) are a class of neural network that has proved remarkably successful at content-generation tasks. With this success, filling in missing sections of images and videos became a research topic of interest. Research on video inpainting, which fills missing content in the interior of a frame, has made steady progress over the years, while research on video outpainting, which fills missing regions at the edges of a frame, has not. This thesis focuses on outpainting by using conditional generative adversarial networks (cGANs), which apply a condition, such as an input image, to a GAN, in order to reformat traditional 4:3 video into the modern 16:9 format. This is accomplished by adapting a cGAN typically used for image-to-image translation to generate the missing content in video frames. Although the generated frames do not accurately reconstruct the original missing content, the process produces context-aware video that often blends seamlessly with the original frame. The results of this research offer a glimpse of the potential of conditional generative adversarial networks for video outpainting.
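The core preprocessing step implied by this setup is to place each 4:3 frame on a 16:9 canvas and mark the side regions the generator must fill. The abstract does not specify the exact conditioning format, so the sketch below is only an illustrative assumption: zero-padding the frame to 16:9 and producing a binary mask of the unknown regions, as is common in inpainting/outpainting pipelines.

```python
import numpy as np

def pad_4_3_to_16_9(frame):
    """Pad a 4:3 frame (H x W x C) to a 16:9 canvas with zero-filled
    side regions, and return a binary mask marking the pixels the
    generator must synthesize. Illustrative only -- the thesis's
    actual conditioning format may differ."""
    h, w, c = frame.shape
    target_w = (16 * h) // 9          # width of the 16:9 canvas
    pad = (target_w - w) // 2         # pixels added on each side
    canvas = np.zeros((h, target_w, c), dtype=frame.dtype)
    mask = np.ones((h, target_w), dtype=np.uint8)
    canvas[:, pad:pad + w] = frame    # original content in the center
    mask[:, pad:pad + w] = 0          # 0 = known pixels, 1 = to generate
    return canvas, mask
```

For example, a 480x640 (4:3) frame becomes a 480x853 canvas with roughly 106 unknown columns on each side; the canvas and mask together form the conditional input to the generator.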
