
    Context aware saliency map generation using semantic segmentation

    Saliency map detection, as a method for detecting important regions of an image, is used in many applications such as image classification and recognition. We propose that context detection can play an essential role in image saliency detection, which requires the extraction of high-level features. In this paper, a saliency map generation method is proposed that is based on image context detection, using semantic segmentation as a high-level feature. The saliency map derived from semantic information is fused with color- and contrast-based saliency maps to produce the final saliency map. Simulation results on the Pascal-voc11 image dataset show 99% accuracy in context detection, and the final saliency map produced by the proposed method yields acceptable results in detecting salient points. Comment: 5 pages, 7 figures, 2 tables
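The fusion step described above can be sketched in a few lines of numpy. This is a minimal illustration only: the equal-ish weights and the per-map min-max normalization are assumptions for the sketch, not the paper's actual fusion rule.

```python
import numpy as np

def normalize(m):
    """Scale a saliency map to [0, 1] (no-op range guard for flat maps)."""
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency_maps(semantic, color, contrast, weights=(0.5, 0.25, 0.25)):
    """Weighted fusion of per-pixel saliency maps.

    The weights here are illustrative; the paper does not specify them.
    """
    maps = [normalize(m) for m in (semantic, color, contrast)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize(fused)

# Toy 2x2 maps standing in for semantic, color, and contrast saliency.
sem = np.array([[0.9, 0.1], [0.2, 0.8]])
col = np.array([[0.5, 0.4], [0.1, 0.9]])
con = np.array([[0.7, 0.2], [0.3, 0.6]])
fused = fuse_saliency_maps(sem, col, con)
```

Because each map is normalized before weighting, no single cue can dominate purely through its dynamic range.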

    RGB-T salient object detection via fusing multi-level CNN features

    RGB-induced salient object detection has recently witnessed substantial progress, attributed to the superior feature-learning capability of deep convolutional neural networks (CNNs). However, such detectors struggle in challenging scenarios characterized by cluttered backgrounds, low-light conditions, and variations in illumination. Instead of improving RGB-based saliency detection alone, this paper exploits the complementary benefits of RGB and thermal infrared images. Specifically, we propose a novel end-to-end network for multi-modal salient object detection that turns the challenge of RGB-T saliency detection into a CNN feature-fusion problem. To this end, a backbone network (e.g., VGG-16) is first adopted to extract coarse features from each RGB or thermal infrared image individually; then several adjacent-depth feature combination (ADFC) modules are designed to extract multi-level refined features for each single-modal input image, considering that features captured at different depths differ in semantic information and visual detail. Subsequently, a multi-branch group fusion (MGF) module captures cross-modal features by fusing the ADFC outputs of an RGB-T image pair at each level. Finally, a joint attention guided bi-directional message passing (JABMP) module performs saliency prediction by integrating the multi-level fused features from the MGF modules. Experimental results on several public RGB-T salient object detection datasets demonstrate the superiority of the proposed algorithm over state-of-the-art approaches, especially under challenging conditions such as poor illumination, complex backgrounds, and low contrast.
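The data flow described above (per-modality multi-level features, adjacent-depth refinement, then per-level cross-modal fusion) can be sketched with plain numpy arrays. The operations below are stand-ins chosen for the sketch: ADFC is approximated by averaging a level with its neighbours, and MGF by an element-wise max across modalities; the actual modules are learned CNN blocks, and real backbone levels would differ in spatial size.

```python
import numpy as np

def adfc(levels):
    """Adjacent-depth feature combination (sketch): blend each level with
    its neighbouring levels. Assumes all levels share one (C, H, W) shape,
    which is a simplification for this illustration."""
    refined = []
    for i in range(len(levels)):
        neigh = [levels[j] for j in (i - 1, i, i + 1) if 0 <= j < len(levels)]
        refined.append(sum(neigh) / len(neigh))
    return refined

def mgf(rgb_levels, t_levels):
    """Multi-branch group fusion (sketch): fuse the two modalities
    level by level with an element-wise max."""
    return [np.maximum(r, t) for r, t in zip(rgb_levels, t_levels)]

# Three backbone levels per modality, each a (C, H, W) feature map.
rng = np.random.default_rng(0)
rgb = [rng.random((4, 8, 8)) for _ in range(3)]
thermal = [rng.random((4, 8, 8)) for _ in range(3)]
fused_levels = mgf(adfc(rgb), adfc(thermal))
```

A prediction head (the JABMP module in the paper) would then integrate `fused_levels` across depths into a single saliency map.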

    Salient Object Detection With Importance Degree

    In this article, we introduce salient object detection with importance degree (SOD-ID), a generalization of salient object detection (SOD), and propose an SOD-ID method. We define SOD-ID as a technique that detects salient objects and estimates their importance-degree values, which makes it more effective than SOD for some image applications, as shown via examples. The definition, evaluation procedure, and data collection for SOD-ID are introduced and discussed; we also propose an evaluation metric and data-preparation scheme, whose validity is discussed with the simulation results. The proposed SOD-ID method consists of three technical blocks: instance segmentation, saliency detection, and importance-degree estimation. The saliency detection block is built on a convolutional neural network that uses the results of the instance segmentation block, and the importance-degree estimation block uses the results of the other two blocks. The proposed method accurately suppresses inaccurate saliencies and estimates the importance degree for multi-object images. In the simulations, the proposed method outperformed state-of-the-art methods with respect to the F-measure for SOD, and with respect to Spearman's and Kendall's rank correlation coefficients and the proposed metric for SOD-ID.
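Evaluating importance-degree estimation with Spearman's rank correlation, as the abstract above mentions, amounts to comparing the ranking of predicted importance values against the ground-truth ranking. A minimal pure-Python sketch (tie-aware average ranks, then Pearson correlation of the ranks):

```python
def rankdata(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied run, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rank correlation coefficient of two equal-length sequences."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A perfectly preserved importance ordering gives 1.0 and a fully reversed one gives -1.0, regardless of the raw importance values.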