Deep Video Color Propagation
Traditional approaches for color propagation in videos rely on some form of
matching between consecutive video frames. Using appearance descriptors, colors
are then propagated both spatially and temporally. These methods, however, are
computationally expensive and do not take advantage of semantic information of
the scene. In this work we propose a deep learning framework for color
propagation that combines a local strategy, which propagates colors
frame by frame to ensure temporal stability, with a global strategy,
which uses semantics to propagate colors over a longer range. Our
evaluation shows the superiority of our approach over existing video and
image color propagation methods as well as neural photo-realistic style
transfer approaches.
Comment: BMVC 2018
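To make the two-strategy design concrete, below is a minimal toy sketch of fusing a frame-by-frame local estimate with a long-range global estimate. The function names and the simple intensity-based matching are illustrative assumptions; the paper's actual framework learns deep features for both stages.

```python
# Minimal sketch of a local/global color-propagation combination.
# Intensity-based matching is a stand-in for learned feature matching.
import numpy as np

def local_estimate(prev_color, prev_gray, cur_gray):
    """Frame-by-frame propagation: keep the previous frame's color where
    the grayscale content barely changed."""
    changed = np.abs(cur_gray - prev_gray) > 0.05
    est = prev_color.copy()
    est[changed] = cur_gray[changed, None]        # fall back to gray
    return est

def global_estimate(ref_color, ref_gray, cur_gray):
    """Long-range propagation: match each pixel to the reference frame
    by intensity (a stand-in for semantic matching)."""
    order = np.argsort(ref_gray.ravel())
    idx = np.searchsorted(ref_gray.ravel()[order], cur_gray.ravel())
    idx = np.clip(idx, 0, order.size - 1)
    return ref_color.reshape(-1, 3)[order[idx]].reshape(cur_gray.shape + (3,))

def colorize_video(gray_frames, ref_color, alpha=0.5):
    """Propagate the first frame's colors through the whole clip by
    fusing the local and global estimates."""
    ref_gray = gray_frames[0]
    out = [ref_color]
    for t in range(1, len(gray_frames)):
        loc = local_estimate(out[-1], gray_frames[t - 1], gray_frames[t])
        glo = global_estimate(ref_color, ref_gray, gray_frames[t])
        out.append(alpha * loc + (1 - alpha) * glo)
    return out
```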
Object Detection in Videos with Tubelet Proposal Networks
Object detection in videos has drawn increasing attention recently with the
introduction of the large-scale ImageNet VID dataset. Different from object
detection in static images, temporal information in videos is vital for object
detection. To fully utilize temporal information, state-of-the-art methods are
based on spatiotemporal tubelets, which are essentially sequences of associated
bounding boxes across time. Existing methods, however, have major
limitations in tubelet quality and generation efficiency. Motion-based
methods can obtain dense tubelets efficiently, but the tubelets
generally span only a few frames, which is suboptimal for incorporating
long-term temporal information. Appearance-based methods, usually built
on generic object tracking, can generate long tubelets but tend to be
computationally expensive. In this work, we propose a framework for
object detection in videos, which consists of a novel tubelet proposal network
to efficiently generate spatiotemporal proposals, and a Long Short-term Memory
(LSTM) network that incorporates temporal information from tubelet proposals
for achieving high object detection accuracy in videos. Experiments on the
large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed
framework for object detection in videos.
Comment: CVPR 2017
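As a rough illustration of the second stage, the sketch below shows an LSTM aggregating per-frame features along a tubelet before per-frame classification. The feature dimension, hidden size, and class count (assuming the 30 ImageNet VID classes plus background) are illustrative choices, and the tubelet proposal network itself is not modeled here.

```python
# Sketch: an LSTM aggregating per-frame tubelet features for detection.
import torch
import torch.nn as nn

class TubeletClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_classes=31):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, num_classes)

    def forward(self, tubelet_feats):
        # tubelet_feats: (batch, T, feat_dim) -- one ROI feature per
        # frame of the tubelet, e.g. pooled from a detection backbone.
        out, _ = self.lstm(tubelet_feats)
        return self.cls(out)          # per-frame class scores (batch, T, C)

# Usage: classify a batch of 8 tubelets spanning 20 frames each.
feats = torch.randn(8, 20, 512)
scores = TubeletClassifier()(feats)   # -> (8, 20, 31)
```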
Two Decades of Colorization and Decolorization for Images and Videos
Colorization is a computer-aided process that aims to give color to a
grayscale image or video. It can be used to enhance black-and-white
material, including black-and-white photos, old films, and scientific
imaging results. Conversely, decolorization converts a color image or
video into a grayscale one. A grayscale image or video contains only
brightness information without color information. It is the basis of some
downstream image processing applications such as pattern recognition, image
segmentation, and image enhancement. Different from image decolorization, video
decolorization should not only consider the image contrast preservation in each
video frame, but also respect the temporal and spatial consistency between
video frames. Researchers have devoted considerable effort to developing
decolorization methods that balance spatial-temporal consistency and
algorithmic efficiency. With the prevalence of digital cameras and
mobile phones, image and video colorization and decolorization have
attracted increasing attention. This paper gives an overview of the
progress of image and video colorization and decolorization methods in
the last two decades.
Comment: 12 pages, 19 figures
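As a toy illustration of the trade-off this survey highlights for video decolorization, the sketch below picks per-frame RGB-to-gray weights that preserve contrast, then smooths the weights over time for temporal consistency. The variance-based contrast score and exponential smoothing are illustrative assumptions, not any specific surveyed method.

```python
# Toy video decolorization: contrast-preserving weights, smoothed in time.
import numpy as np
from itertools import product

# Candidate weights: all (wr, wg, wb) >= 0 summing to 1, in steps of 0.1.
CANDIDATES = [(r / 10, g / 10, (10 - r - g) / 10)
              for r, g in product(range(11), repeat=2) if r + g <= 10]

def frame_weights(frame):
    """Choose the candidate maximizing grayscale variance (a simple
    stand-in for contrast preservation)."""
    flat = frame.reshape(-1, 3)
    return max(CANDIDATES, key=lambda w: np.var(flat @ np.array(w)))

def decolorize_video(frames, momentum=0.8):
    w = np.array(frame_weights(frames[0]))
    grays = []
    for f in frames:
        # Exponential smoothing keeps weights (hence brightness) stable
        # between frames -- the temporal consistency requirement.
        w = momentum * w + (1 - momentum) * np.array(frame_weights(f))
        grays.append(f @ w)
    return grays
```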
Multi-Cue Structure Preserving MRF for Unconstrained Video Segmentation
Video segmentation is a stepping stone to understanding video context. Video
segmentation enables one to represent a video by decomposing it into coherent
regions which comprise whole objects or their parts. The task is
challenging, however, because most video segmentation algorithms are
based on unsupervised learning, owing to the high cost of pixel-wise
video annotation and the intra-class variability within similar
unconstrained video classes. We propose a Markov Random Field (MRF)
model for unconstrained video
segmentation that relies on tight integration of multiple cues: vertices are
defined from contour based superpixels, unary potentials from temporal smooth
label likelihood and pairwise potentials from global structure of a video.
This multi-cue structure is key to extracting coherent object regions
from unconstrained videos in the absence of supervision. Our experiments
on the VSB100
dataset show that the proposed model significantly outperforms competing
state-of-the-art algorithms. Qualitative analysis illustrates that video
segmentation result of the proposed model is consistent with human perception
of objects.
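For readers unfamiliar with the energy formulation behind such models, here is a generic sketch of an MRF energy with per-vertex unary terms and Potts-style pairwise terms over graph edges. The toy potentials are illustrative placeholders; in the paper, the unary terms come from temporal smooth label likelihoods and the pairwise weights from the global structure of the video.

```python
# Generic MRF energy over a superpixel graph (Potts pairwise model).
import numpy as np

def mrf_energy(labels, unary, edges, pairwise_w):
    """E(x) = sum_i U[i, x_i] + sum_(i,j) w_ij * [x_i != x_j]"""
    e = unary[np.arange(len(labels)), labels].sum()
    for (i, j), w in zip(edges, pairwise_w):
        e += w * (labels[i] != labels[j])   # penalize label disagreement
    return e

# Toy graph: 4 superpixels, 2 labels, a chain of edges.
unary = np.array([[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.9, 0.1]])
edges = [(0, 1), (1, 2), (2, 3)]
weights = [0.5, 0.5, 0.5]
labels = np.array([0, 0, 1, 1])
print(mrf_energy(labels, unary, edges, weights))  # 0.1+0.2+0.3+0.1+0.5 = 1.2
```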