RGB-D Salient Object Detection: A Survey
Salient object detection (SOD), which simulates the human visual perception
system to locate the most attractive object(s) in a scene, has been widely
applied to various computer vision tasks. With the advent of depth sensors,
depth maps carrying rich spatial information that can help boost SOD
performance are now easy to capture. Although various RGB-D based SOD models
with promising performance have been proposed over the past several years, an
in-depth understanding of these models and of the remaining challenges in this
topic is still lacking. In this paper, we provide a comprehensive survey of
RGB-D based SOD models from various perspectives, and review related benchmark
datasets in detail. Further, considering that the light field can also provide
depth maps, we review SOD models and popular benchmark datasets from this
domain as well. Moreover, to investigate the SOD ability of existing models, we
carry out a comprehensive evaluation, as well as attribute-based evaluation of
several representative RGB-D based SOD models. Finally, we discuss several
challenges and open directions of RGB-D based SOD for future research. All
collected models, benchmark datasets, source code links, datasets constructed
for attribute-based evaluation, and codes for evaluation will be made publicly
available at https://github.com/taozh2017/RGBDSODsurvey
Comment: 24 pages, 12 figures. Has been accepted by Computational Visual Media.
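As a minimal illustration of the kind of quantitative evaluation the survey
performs, the widely used mean absolute error (MAE) metric compares a
predicted saliency map against the binary ground-truth mask. The NumPy sketch
below is illustrative only; the function name and normalization handling are
assumptions, not the survey's released evaluation code:

import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and a
    binary ground-truth mask, both compared on a [0, 1] scale."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    # Normalize 8-bit maps to [0, 1] if needed (assumed convention).
    if pred.max() > 1.0:
        pred /= 255.0
    if gt.max() > 1.0:
        gt /= 255.0
    return np.abs(pred - gt).mean()

Lower MAE is better; pixel-level metrics like this are what the survey pairs
with its attribute-based evaluation.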
RXFOOD: Plug-in RGB-X Fusion for Object of Interest Detection
The emergence of different sensors (near-infrared, depth, etc.) is a remedy
for the limited application scenarios of traditional RGB cameras. RGB-X
tasks, which rely on an RGB input plus another type of data input to resolve
specific problems, have become a popular research topic in multimedia. A
crucial question in two-branch RGB-X deep neural networks is how to fuse
information across modalities. Despite the rich information inside RGB-X
networks, previous works typically apply naive fusion (e.g., average or max
fusion) or only fuse features at the same scale(s). In this
paper, we propose a novel method called RXFOOD for the fusion of features
across different scales within the same modality branch and from different
modality branches simultaneously in a unified attention mechanism. An Energy
Exchange Module is designed for the interaction of each feature map's energy
matrix, which reflects the inter-relationships of different positions and
different channels inside a feature map. The RXFOOD method can be easily
incorporated into any dual-branch encoder-decoder network as a plug-in module,
and help the original backbone network better focus on important positions and
channels for object of interest detection. Experimental results on RGB-NIR
salient object detection, RGB-D salient object detection, and RGB-frequency
image manipulation detection demonstrate the clear effectiveness of the
proposed RXFOOD.
Comment: 10 pages.
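The paper defines the Energy Exchange Module precisely; as a rough,
hypothetical sketch of the general idea of exchanging attention ("energy")
between two modality branches, one could write something like the following
PyTorch module. The class name, layer shapes, and squeeze-and-excitation-style
weighting are assumptions, not the published design:

import torch
import torch.nn as nn

class CrossBranchAttentionFusion(nn.Module):
    """Hypothetical sketch: each branch is re-weighted by a channel
    'energy' descriptor computed from the OTHER modality branch."""
    def __init__(self, channels):
        super().__init__()
        self.fc_rgb = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())
        self.fc_x = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())

    def forward(self, f_rgb, f_x):
        b, c, _, _ = f_rgb.shape
        # "Energy" descriptors: global average over spatial positions.
        e_rgb = f_rgb.mean(dim=(2, 3))           # (B, C)
        e_x = f_x.mean(dim=(2, 3))               # (B, C)
        # Exchange: cross-branch weighting instead of self-attention.
        w_rgb = self.fc_x(e_x).view(b, c, 1, 1)
        w_x = self.fc_rgb(e_rgb).view(b, c, 1, 1)
        return f_rgb * w_rgb, f_x * w_x

A plug-in module of this shape can be dropped between the encoder stages of
any dual-branch network, which matches the paper's stated usage.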
Densely Deformable Efficient Salient Object Detection Network
Salient object detection (SOD) using RGB-D data has lately emerged, with
some current models producing adequately precise results. However, these
models have limited generalization ability and high computational complexity.
In this paper, inspired by the strong background/foreground separation ability
of deformable convolutions, we employ them in our Densely Deformable Network
(DDNet) to achieve efficient SOD. The salient regions from densely deformable
convolutions are further refined using transposed convolutions to optimally
generate the saliency maps. Quantitative and qualitative evaluations on a
recent SOD dataset against 22 competing techniques show our method's efficiency
and effectiveness. We also evaluate on our newly created cross-dataset,
surveillance-SOD (S-SOD), to check the trained models' applicability in
diverse scenarios. The results indicate that current models have limited
generalization potential, demanding further research in this direction. Our
code and new dataset will be publicly available
at https://github.com/tanveer-hussain/EfficientSO
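As a hedged sketch of the building blocks the abstract names, deformable
convolutions followed by a transposed convolution for upsampling, one
plausible head looks like this in PyTorch with torchvision's DeformConv2d.
The module name, channel widths, and single-step upsampling are illustrative,
not DDNet's actual architecture:

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableSaliencyHead(nn.Module):
    """Illustrative only: a deformable conv block whose output is
    upsampled by a transposed conv to produce a saliency map."""
    def __init__(self, in_ch, mid_ch=64):
        super().__init__()
        # A plain conv predicts the sampling offsets (2 per 3x3 kernel tap).
        self.offset = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, mid_ch, kernel_size=3, padding=1)
        # Transposed conv doubles the spatial resolution.
        self.up = nn.ConvTranspose2d(mid_ch, 1, kernel_size=2, stride=2)

    def forward(self, feats):
        offsets = self.offset(feats)
        x = torch.relu(self.deform(feats, offsets))
        return torch.sigmoid(self.up(x))   # saliency map in [0, 1]

The learned offsets let the kernel sample off-grid positions, which is what
gives deformable convolutions their background/foreground separation ability.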
Explicit Attention-Enhanced Fusion for RGB-Thermal Perception Tasks
Recently, RGB-Thermal based perception has shown significant advances.
Thermal information provides useful clues when visual cameras suffer from poor
lighting conditions, such as low light and fog. However, how to effectively
fuse RGB images and thermal data remains an open challenge. Previous works
involve naive fusion strategies such as merging them at the input,
concatenating multi-modality features inside models, or applying attention to
each data modality. These fusion strategies are straightforward yet
insufficient. In this paper, we propose a novel fusion method named Explicit
Attention-Enhanced Fusion (EAEF) that fully takes advantage of each type of
data. Specifically, we consider the following cases: i) both RGB and thermal
data generate discriminative features, ii) only one modality does, and iii)
neither does. EAEF uses one branch to enhance feature extraction for
i) and iii) and the other branch to remedy insufficient representations for
ii). The outputs of the two branches are fused to form complementary features.
As a result, the proposed method outperforms the state of the art by 1.6\% in
mIoU on semantic segmentation, 3.1\% in MAE on salient object detection, 2.3\%
in mAP on object detection, and 8.1\% in MAE on crowd counting. The code is
available at https://github.com/FreeformRobotics/EAEFNet
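As an illustrative sketch only, the case analysis above might map onto code
roughly as follows. This is a PyTorch guess at the general structure; the
attention form, gating, and merge layer are assumptions, not the published
EAEF formulation:

import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Hypothetical two-branch RGB-thermal fusion in the spirit of EAEF."""
    def __init__(self, channels):
        super().__init__()
        self.att_rgb = nn.Sequential(nn.Conv2d(channels, channels, 1),
                                     nn.Sigmoid())
        self.att_t = nn.Sequential(nn.Conv2d(channels, channels, 1),
                                   nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels,
                               kernel_size=3, padding=1)

    def forward(self, f_rgb, f_t):
        a_rgb, a_t = self.att_rgb(f_rgb), self.att_t(f_t)
        joint = a_rgb * a_t                  # high where both modalities agree
        # Branch 1: enhance features where both modalities respond (case i);
        # the same gate suppresses regions where neither does (case iii).
        enhanced = (f_rgb + f_t) * joint
        # Branch 2: each modality compensates where the other is weak (case ii).
        compensated = f_rgb * (1 - a_t) + f_t * (1 - a_rgb)
        return self.merge(torch.cat([enhanced, compensated], dim=1))

The two branches are complementary by construction: one sharpens jointly
discriminative features, the other fills in single-modality gaps, mirroring
the case split described in the abstract.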