RGB-T salient object detection via fusing multi-level CNN features
RGB-induced salient object detection has recently witnessed substantial progress, which is attributed to the superior feature learning capability of deep convolutional neural networks (CNNs). However, such detectors struggle in challenging scenarios characterized by cluttered backgrounds, low-light conditions and variations in illumination. Instead of improving RGB-based saliency detection alone, this paper exploits the complementary benefits of RGB and thermal infrared images. Specifically, we propose a novel end-to-end network for multi-modal salient object detection, which turns the challenge of RGB-T saliency detection into a CNN feature fusion problem. To this end, a backbone network (e.g., VGG-16) is first adopted to extract coarse features from each RGB or thermal infrared image individually, and then several adjacent-depth feature combination (ADFC) modules are designed to extract multi-level refined features for each single-modal input image, considering that features captured at different depths differ in semantic information and visual detail. Subsequently, a multi-branch group fusion (MGF) module is employed to capture cross-modal features by fusing the features from the ADFC modules of an RGB-T image pair at each level. Finally, a joint attention guided bi-directional message passing (JABMP) module undertakes the task of saliency prediction by integrating the multi-level fused features from the MGF modules. Experimental results on several public RGB-T salient object detection datasets demonstrate the superiority of our proposed algorithm over state-of-the-art approaches, especially under challenging conditions such as poor illumination, complex backgrounds and low contrast.
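The per-level cross-modal fusion described above can be illustrated with a minimal numpy sketch. The abstract does not specify the internals of the MGF module, so this is only an illustrative stand-in: RGB and thermal feature maps at one level are concatenated along the channel axis and mixed by a learned 1x1 projection (here a random matrix). The function name and shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_level(rgb_feat, thermal_feat, weight):
    """Hypothetical cross-modal fusion for one feature level.

    rgb_feat, thermal_feat: arrays of shape (C, H, W)
    weight: (C, 2C) mixing matrix, standing in for a learned 1x1 conv
    """
    # Concatenate modalities along the channel axis -> (2C, H, W)
    stacked = np.concatenate([rgb_feat, thermal_feat], axis=0)
    c2, h, w = stacked.shape
    # A 1x1 convolution is just a channel-wise linear map at every pixel
    flat = stacked.reshape(c2, h * w)   # (2C, H*W)
    fused = weight @ flat               # (C, H*W)
    return fused.reshape(-1, h, w)      # back to (C, H, W)

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
rgb = rng.standard_normal((C, H, W))
thermal = rng.standard_normal((C, H, W))
weight = rng.standard_normal((C, 2 * C)) / np.sqrt(2 * C)

fused = fuse_level(rgb, thermal, weight)
print(fused.shape)  # (4, 8, 8)
```

In the actual network this mixing would be a trained convolution applied at every level, with one fused map per level passed on to the JABMP module.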
Global Context-Aware Progressive Aggregation Network for Salient Object Detection
Deep convolutional neural networks have achieved competitive performance in salient object detection, where learning effective and comprehensive features plays a critical role. Most previous works adopted multi-level feature integration yet ignored the gap between features at different levels. Besides, high-level features are progressively diluted as they are passed along the top-down pathway. To remedy these issues, we propose a novel network named GCPANet to effectively integrate low-level appearance features, high-level semantic features, and global context features through progressive context-aware Feature Interweaved Aggregation (FIA) modules, and generate the saliency map in a supervised way. Moreover, a Head Attention (HA) module is used to reduce information redundancy and enhance the top-layer features by leveraging spatial and channel-wise attention, and a Self Refinement (SR) module is utilized to further refine and strengthen the input features. Furthermore, we design the Global Context Flow (GCF) module to generate global context information at different stages, which aims to learn the relationships among different salient regions and alleviate the dilution of high-level features. Experimental results on six benchmark datasets demonstrate that the proposed approach outperforms state-of-the-art methods both quantitatively and qualitatively.
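The top-down aggregation with injected global context that the abstract describes can be sketched in numpy. The FIA and GCF internals are not given, so this is an assumed simplification: a global context vector (standing in for the GCF output) is broadcast and added at every stage as the deepest feature map is upsampled and merged with shallower ones, which is one way to counteract the dilution of high-level features.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling along both spatial axes
    return x.repeat(2, axis=1).repeat(2, axis=2)

def top_down_aggregate(features, global_ctx):
    """Hypothetical progressive aggregation with global context.

    features: list of (C, H_i, W_i) maps ordered deepest (smallest)
              to shallowest, with spatial size doubling at each step
    global_ctx: (C,) vector standing in for the GCF module's output
    """
    ctx = global_ctx[:, None, None]          # broadcast over H, W
    out = features[0] + ctx                  # start from the deepest level
    maps = [out]
    for feat in features[1:]:
        # Upsample the running map, merge the skip feature,
        # and re-inject global context at every stage
        out = upsample2x(out) + feat + ctx
        maps.append(out)
    return maps

rng = np.random.default_rng(1)
C = 4
features = [rng.standard_normal((C, s, s)) for s in (4, 8, 16)]
global_ctx = rng.standard_normal(C)

maps = top_down_aggregate(features, global_ctx)
print([m.shape for m in maps])  # [(4, 4, 4), (4, 8, 8), (4, 16, 16)]
```

Re-adding the context vector at each stage, rather than only at the top, is what keeps the global signal from fading along the pathway; in GCPANet this role is played by the trained GCF module rather than a fixed vector.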