Light Field Salient Object Detection: A Review and Benchmark
Salient object detection (SOD) is a long-standing research topic in computer
vision and has drawn an increasing amount of research interest in the past
decade. This paper provides the first comprehensive review and benchmark for
light field SOD, which has long been lacking in the saliency community.
Firstly, we introduce preliminary knowledge on light fields, including theory
and data forms, and then review existing studies on light field SOD, covering
ten traditional models, seven deep learning-based models, one comparative
study, and one brief review. Existing datasets for light field SOD are also
summarized with detailed information and statistical analyses. Secondly, we
benchmark nine representative light field SOD models together with several
cutting-edge RGB-D SOD models on four widely used light field datasets, from
which we draw insightful discussions and analyses, including a comparison
between light field SOD and RGB-D SOD models. In addition, due to the inconsistency
of datasets in their current forms, we further generate complete data and
supplement focal stacks, depth maps and multi-view images for the inconsistent
datasets, making them consistent and unified. Our supplemental data makes a
universal benchmark possible. Lastly, because light field SOD is a rather
special problem, owing to its diverse data representations and heavy
dependence on acquisition hardware, which set it apart from other saliency
detection tasks, we offer nine insights into its challenges and future
directions, and outline several open issues. We hope our review and
benchmarking could help advance research in this field. All the materials
including collected models, datasets, benchmarking results, and supplemented
light field datasets will be publicly available on our project site
https://github.com/kerenfu/LFSOD-Survey
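Benchmarking papers like this one typically score models with standard saliency metrics such as mean absolute error (MAE) and the F-measure. As a rough illustration of how these common metrics are computed (the function names and the fixed threshold below are our own choices, not taken from the paper):

```python
import numpy as np

def mae(saliency, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both scaled to [0, 1]."""
    return np.abs(saliency.astype(np.float64) - gt.astype(np.float64)).mean()

def f_measure(saliency, gt, beta2=0.3, thresh=0.5):
    """F-measure at a fixed binarization threshold; beta^2 = 0.3 is the
    value conventionally used in the SOD literature to weight precision
    over recall."""
    pred = saliency >= thresh
    gt = gt >= 0.5
    tp = np.logical_and(pred, gt).sum()
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / pred.sum()
    recall = tp / gt.sum()
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

In practice benchmarks also report adaptive- and max-F variants (sweeping the threshold) plus structural measures, but the two functions above capture the core of most SOD leaderboards.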
RGB-D Salient Object Detection: A Survey
Salient object detection (SOD), which simulates the human visual perception
system to locate the most attractive object(s) in a scene, has been widely
applied to various computer vision tasks. With the advent of depth sensors,
depth maps rich in spatial information, which can help boost SOD performance,
can now easily be captured. Although various RGB-D
based SOD models with promising performance have been proposed over the past
several years, an in-depth understanding of these models and challenges in this
topic remains lacking. In this paper, we provide a comprehensive survey of
RGB-D based SOD models from various perspectives, and review related benchmark
datasets in detail. Further, considering that the light field can also provide
depth maps, we review SOD models and popular benchmark datasets from this
domain as well. Moreover, to investigate the SOD ability of existing models, we
carry out a comprehensive evaluation, as well as attribute-based evaluation of
several representative RGB-D based SOD models. Finally, we discuss several
challenges and open directions of RGB-D based SOD for future research. All
collected models, benchmark datasets, source code links, datasets constructed
for attribute-based evaluation, and codes for evaluation will be made publicly
available at https://github.com/taozh2017/RGBDSODsurvey
Comment: 24 pages, 12 figures. Accepted by Computational Visual Media
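The simplest way depth maps are combined with RGB in the models such surveys cover is input-level ("early") fusion, where depth is appended as a fourth channel before any learning happens. A minimal sketch of that idea (the function name is ours, not from the survey):

```python
import numpy as np

def early_fusion(rgb, depth):
    """Stack a depth map onto an RGB image as a fourth channel -- the
    simplest input-level fusion scheme for RGB-D SOD. `rgb` is an
    HxWx3 array and `depth` an HxW array, both scaled to [0, 1]."""
    if rgb.shape[:2] != depth.shape:
        raise ValueError("rgb and depth must share spatial dimensions")
    return np.concatenate([rgb, depth[..., None]], axis=-1)
```

More recent models instead fuse RGB and depth features at intermediate layers (middle/late fusion), which is one of the main design axes such surveys use to categorize the literature.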
Rethinking RGB-D Salient Object Detection: Models, Data Sets, and Large-Scale Benchmarks
The use of RGB-D information for salient object detection has been
extensively explored in recent years. However, relatively few efforts have been
put towards modeling salient object detection in real-world human activity
scenes with RGB-D data. In this work, we fill the gap by making the following
contributions to RGB-D salient object detection. (1) We carefully collect a new
SIP (salient person) dataset, which consists of ~1K high-resolution images that
cover diverse real-world scenes from various viewpoints, poses, occlusions,
illuminations, and backgrounds. (2) We conduct a large-scale (and, so far, the
most comprehensive) benchmark comparing contemporary methods, which has long
been missing in the field and can serve as a baseline for future research. We
systematically summarize 32 popular models and evaluate 18 of the 32 models
on seven datasets containing a total of about 97K images. (3) We propose a
simple general architecture, called Deep Depth-Depurator Network (D3Net). It
consists of a depth depurator unit (DDU) and a three-stream feature learning
module (FLM), which performs low-quality depth map filtering and cross-modal
feature learning respectively. These components form a nested structure and are
elaborately designed to be learned jointly. D3Net exceeds the performance of
any prior contenders across all five metrics under consideration, thus serving
as a strong model to advance research in this field. We also demonstrate that
D3Net can be used to efficiently extract salient object masks from real scenes,
enabling an effective background-changing application at a speed of 65 fps on a
single GPU. All the saliency maps, our new SIP dataset, the D3Net model, and
the evaluation tools are publicly available at
https://github.com/DengPingFan/D3NetBenchmark
Comment: Accepted by TNNLS 2020. 15 pages, 12 figures. Code:
https://github.com/DengPingFan/D3NetBenchmark
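The depth depurator unit described above gates out low-quality depth maps so they cannot mislead cross-modal fusion. In D3Net this gate is learned end-to-end; as a hedged, hand-crafted stand-in for the same idea, one could score a depth map by its histogram entropy (a nearly flat, uninformative depth map scores near zero) and drop it below a threshold. All names and the threshold here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def depth_histogram_entropy(depth, bins=16):
    """Shannon entropy (bits) of the depth histogram over [0, 1].
    A nearly constant depth map concentrates in one bin and scores ~0;
    a well-spread depth map scores close to log2(bins)."""
    hist, _ = np.histogram(depth, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def gate_depth(depth, threshold=1.0):
    """Return the depth map if it passes the quality gate, else None,
    signalling the downstream model to fall back to RGB-only streams.
    (The real DDU learns this decision jointly with the network.)"""
    return depth if depth_histogram_entropy(depth) >= threshold else None
```

The point of the sketch is the control flow, not the score: any depth-quality estimate can play the role of the entropy here.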
RGBD Salient Object Detection via Deep Fusion
Numerous efforts have been made to design various low-level saliency cues
for RGB-D saliency detection, such as color or depth contrast features, and
background and color compactness priors. However, how these saliency cues
interact with each other, and how to incorporate them effectively to generate
a master saliency map, remains a challenging problem. In this
this paper, we design a new convolutional neural network (CNN) to fuse
different low level saliency cues into hierarchical features for automatically
detecting salient objects in RGBD images. In contrast to the existing works
that directly feed raw image pixels to the CNN, the proposed method takes
advantage of the knowledge in traditional saliency detection by adopting
various meaningful and well-designed saliency feature vectors as input. This
can guide the training of the CNN towards detecting salient objects more effectively
due to the reduced learning ambiguity. We then integrate a Laplacian
propagation framework with the learned CNN to extract a spatially consistent
saliency map by exploiting the intrinsic structure of the input image.
Extensive quantitative and qualitative experimental evaluations on three
datasets demonstrate that the proposed method consistently outperforms
state-of-the-art methods.
Comment: This paper has been submitted to IEEE Transactions on Image
Processing
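The Laplacian propagation step mentioned above refines a raw CNN saliency map by smoothing it along the image's intrinsic structure. A generic formulation (not the authors' exact framework) solves (I + λL)s = s₀, where L is the graph Laplacian of a 4-connected pixel grid weighted by color similarity; the dense solve below is only practical for tiny images and serves purely to show the mechanics:

```python
import numpy as np

def laplacian_propagation(s0, img, lam=0.1, sigma=0.1):
    """Smooth an initial saliency map s0 (HxW, values in [0, 1]) by
    solving (I + lam * L) s = s0, where L = D - W is the Laplacian of
    a 4-connected pixel grid whose edge weights come from the color
    similarity of `img` (HxWxC). Similar pixels are pulled toward the
    same saliency value; dissimilar ones are left alone."""
    h, w = s0.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    W = np.zeros((n, n))
    # Right and down neighbors, weighted by a Gaussian on color difference.
    for di, dj in [(0, 1), (1, 0)]:
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        diff = (img[:h - di, :w - dj] - img[di:, dj:]).reshape(-1, img.shape[-1])
        wgt = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
        W[a, b] = wgt
        W[b, a] = wgt
    L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian
    s = np.linalg.solve(np.eye(n) + lam * L, s0.ravel())
    return s.reshape(h, w)
```

Because the rows of L sum to zero, the total saliency mass is preserved while isolated responses are diffused to structurally similar neighbors; real implementations use sparse solvers to scale past toy image sizes.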