Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications
Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences compared to conventional two-dimensional (2D) TV. However, its adoption has been constrained by a shortage of essential content, i.e., stereoscopic videos. To alleviate this shortage, an economical and practical solution is to reuse the vast media resources available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues such as focus blur, motion and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed, which features i) optical-flow based occlusion reasoning to determine depth ordering, ii) object segmentation using improved region growing from masks of the determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (within a small library of true stereo image pairs) and depth-ordinal based regularization. Comprehensive experiments have validated the effectiveness of our proposed 2D-to-3D conversion method in generating stereoscopic videos with consistent depth measurements for 3D-TV applications.
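The occlusion reasoning in step i) yields pairwise front/behind relations between objects, from which a consistent depth ordinal can be derived by a topological sort. A minimal sketch, assuming the occlusion relations are already available as (occluder, occluded) pairs; the function and variable names are illustrative, not from the paper:

```python
from collections import defaultdict, deque

def depth_order(occlusions, objects):
    """Derive a front-to-back depth ordinal from pairwise occlusion
    relations (a, b) meaning 'a occludes b', via topological sort."""
    graph = defaultdict(list)
    indegree = {obj: 0 for obj in objects}
    for occluder, occluded in occlusions:
        graph[occluder].append(occluded)
        indegree[occluded] += 1
    # Objects occluded by nothing are nearest to the camera.
    queue = deque(obj for obj in objects if indegree[obj] == 0)
    order = []
    while queue:
        obj = queue.popleft()
        order.append(obj)
        for behind in graph[obj]:
            indegree[behind] -= 1
            if indegree[behind] == 0:
                queue.append(behind)
    return order  # nearest first
```

In the actual method the ordering would then regularize the depth values matched from the stereo-pair library, so that per-object depths never contradict the observed occlusions.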
MHP-VOS: Multiple Hypotheses Propagation for Video Object Segmentation
We address the problem of semi-supervised video object segmentation (VOS),
where the masks of objects of interest are given in the first frame of an
input video. To deal with challenging cases where objects are occluded or
missing, previous work relies on greedy data association strategies that make
decisions for each frame individually. In this paper, we propose a novel
approach to defer the decision making for a target object in each frame, until
a global view can be established with the entire video being taken into
consideration. Our approach is in the same spirit as Multiple Hypotheses
Tracking (MHT) methods, making several critical adaptations for the VOS
problem. We employ the bounding box (bbox) hypothesis for tracking tree
formation, and the multiple hypotheses are spawned by propagating the preceding
bbox into the detected bbox proposals within a gated region starting from the
initial object mask in the first frame. The gated region is determined by a
gating scheme which takes into account a more comprehensive motion model rather
than the simple Kalman filtering model in traditional MHT. To further design
more customized algorithms tailored for VOS, we develop a novel mask
propagation score instead of the appearance similarity score that could be
brittle due to large deformations. The mask propagation score, together with
the motion score, determines the affinity between the hypotheses during tree
pruning. Finally, a novel mask merging strategy is employed to handle mask
conflicts between objects. Extensive experiments on challenging datasets
demonstrate the effectiveness of the proposed method, especially in cases where
objects go missing.
Comment: accepted to CVPR 2019 as oral presentation
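The gating scheme described above admits a detected box proposal into the hypothesis tree only if its motion relative to the preceding box is plausible. A hedged sketch of such a geometric gate, assuming (x, y, w, h) box coordinates and hypothetical threshold parameters rather than the paper's full motion model:

```python
import math

def gate_proposals(prev_box, proposals, max_center_shift, max_scale_change):
    """Keep only bbox proposals whose motion from prev_box is plausible.

    Boxes are (x, y, w, h); thresholds are illustrative parameters.
    """
    px, py, pw, ph = prev_box
    prev_cx, prev_cy = px + pw / 2, py + ph / 2
    kept = []
    for box in proposals:
        x, y, w, h = box
        cx, cy = x + w / 2, y + h / 2
        # Centre displacement between consecutive frames.
        shift = math.hypot(cx - prev_cx, cy - prev_cy)
        # Worst-case relative scale change in either dimension.
        scale = max(w / pw, pw / w, h / ph, ph / h)
        if shift <= max_center_shift and scale <= max_scale_change:
            kept.append(box)
    return kept
```

Each surviving proposal would spawn a new hypothesis branch; in MHP-VOS the mask propagation and motion scores then decide which branches survive tree pruning.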
A Comprehensive Performance Evaluation of Deformable Face Tracking "In-the-Wild"
Recently, technologies such as face detection, facial landmark localisation
and face recognition and verification have matured enough to provide effective
and efficient solutions for imagery captured under arbitrary conditions
(referred to as "in-the-wild"). This is partially attributed to the fact that
comprehensive "in-the-wild" benchmarks have been developed for face detection,
landmark localisation and recognition/verification. A very important technology
that has not been thoroughly evaluated yet is deformable face tracking
"in-the-wild". Until now, the performance has mainly been assessed
qualitatively, by visually inspecting the output of a deformable face tracking
technology on short videos. In this paper, we perform the first, to the best of
our knowledge, thorough evaluation of state-of-the-art deformable face tracking
pipelines using the recently introduced 300VW benchmark. We evaluate many
different architectures focusing mainly on the task of on-line deformable face
tracking. In particular, we compare the following general strategies: (a)
generic face detection plus generic facial landmark localisation, (b) generic
model free tracking plus generic facial landmark localisation, as well as (c)
hybrid approaches using state-of-the-art face detection, model free tracking
and facial landmark localisation technologies. Our evaluation reveals future
avenues for further research on the topic.
Comment: E. Antonakos and P. Snape contributed equally and have joint second
authorship
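Hybrid strategy (c) above can be sketched as a per-frame loop that prefers face detection but falls back to model-free tracking from the previous box before fitting landmarks. The detect/track/localise callables stand in for real components (e.g. a face detector, a correlation tracker, a landmark localiser) and are purely illustrative:

```python
def hybrid_track(frames, detect, track, localise):
    """Hybrid pipeline sketch: detect the face in each frame; if detection
    fails, fall back to model-free tracking from the previous box, then
    run facial landmark localisation on whichever box was obtained."""
    prev_box = None
    results = []
    for frame in frames:
        box = detect(frame)
        if box is None and prev_box is not None:
            # Detection failed: propagate the previous box with the tracker.
            box = track(frame, prev_box)
        landmarks = localise(frame, box) if box is not None else None
        results.append((box, landmarks))
        if box is not None:
            prev_box = box
    return results
```

The design choice this illustrates is that the detector re-anchors the pipeline whenever it fires, preventing the drift that pure model-free tracking accumulates over long "in-the-wild" videos.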