
    Detection of Salient Objects in Images Using Frequency Domain and Deep Convolutional Features

    In image processing and computer vision tasks such as segmentation of objects of interest, adaptive image compression, object-based image retrieval, seam carving, and medical imaging, the cost of information storage and computational complexity is generally a great concern. Therefore, for these and other applications, identifying and focusing only on the parts of the image that are visually most informative is highly desirable. These most informative parts or regions, which also have more contrast with the rest of the image, are called the salient regions of the image, and the process of identifying them is referred to as salient object detection. The main challenges in devising a salient object detection scheme are extracting the image features that correctly differentiate the salient objects from the non-salient ones, and then utilizing them to detect the salient objects accurately. Several salient object detection methods have been developed in the literature using spatial domain image features. However, these methods generally cannot detect the salient objects uniformly or with clear boundaries between the salient and non-salient regions. This is because, in these methods, unnecessary frequency content of the image gets retained or useful content from the original image gets suppressed. Frequency domain features can address these limitations by providing a better representation of the image. Some salient object detection schemes have been developed based on features extracted using the Fourier or Fourier-like transforms. While these methods are more successful in detecting the entire salient object in images with small salient regions, in images with large salient regions they tend to highlight the boundaries of the salient region rather than the entire region. This is because, in the Fourier transform of an image, the global contrast is more dominant than the local contrast.
Moreover, it is known that the Fourier transform cannot provide simultaneous spatial and frequency localization. Multi-resolution feature extraction techniques can provide more accurate features for different image processing tasks, since features that are not extracted at one resolution may be detected at another. However, not much work has been done to employ multi-resolution feature extraction techniques for salient object detection. In view of this, the objective of this thesis is to develop schemes for image salient object detection using multi-resolution feature extraction techniques in both the frequency domain and the spatial domain. The first part of this thesis is concerned with developing salient object detection methods using multi-resolution frequency domain features. The wavelet transform has the ability to perform multi-resolution, simultaneously spatially and frequency localized analysis, which makes it a better feature extraction tool than the Fourier or other Fourier-like transforms. In this part of the thesis, a salient object detection scheme is first developed by extracting features from the high-pass coefficients of the wavelet decompositions of the three color channels of an image, and devising a scheme for the weighted linear combination of the color channel features. Despite the advantages of the wavelet transform in image feature extraction, it is not very effective in capturing line discontinuities, which correspond to directional information in the image. In order to circumvent the lack of directional flexibility of the wavelet-based features, another salient object detection scheme is also presented in this part of the thesis, extracting local and global features from the non-subsampled contourlet coefficients of the image color channels.
The local features are extracted from the local variations of the low-pass coefficients, whereas the global features are obtained from the distribution of the subband coefficients, exploiting the directional flexibility provided by the non-subsampled contourlet transform. In the past few years, there has been a surge of interest in employing deep convolutional neural networks to extract image features for different applications. These networks provide a platform for automatically extracting low-level appearance features and high-level semantic features at different resolutions from raw images. The second part of this thesis is, therefore, concerned with the investigation of salient object detection using multi-resolution deep convolutional features. The existing deep salient object detection schemes are based on the standard convolution. However, performing the standard convolution is computationally expensive, especially when the number of channels increases through the layers of a deep network. In this part of the thesis, using lightweight depthwise separable convolutions, a deep salient object detection network is developed that exploits the fusion of multi-level and multi-resolution image features through judicious skip connections between the layers. The proposed network is aimed at providing good performance with much reduced complexity compared to the existing deep salient object detection methods. Extensive experiments are conducted to evaluate the performance of the proposed salient object detection methods by applying them to natural images from several datasets. It is shown that the performance of the proposed methods is superior to that of the existing salient object detection methods.
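The wavelet-based scheme above rests on the observation that high-pass wavelet coefficients concentrate their energy at contrast edges. A minimal sketch of that idea, assuming a single-level Haar decomposition of one grayscale channel — not the thesis's actual multi-level, multi-channel scheme or its weighted combination step:

```python
# Hedged sketch: single-level 2-D Haar decomposition; a coarse saliency
# cue is the per-location energy of the three high-pass subbands.

def haar_step(v):
    """1-D Haar step: (approximation, detail) halves of an even-length list."""
    a = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    d = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return a, d

def haar2d(img):
    """One 2-D Haar level: returns the LL, LH, HL, HH subbands."""
    lo, hi = [], []
    for row in img:                      # transform rows
        a, d = haar_step(row)
        lo.append(a)
        hi.append(d)
    def cols(mat):                       # transform columns of a half
        approx, detail = [], []
        for col in zip(*mat):
            a, d = haar_step(list(col))
            approx.append(a)
            detail.append(d)
        return (list(map(list, zip(*approx))),
                list(map(list, zip(*detail))))
    LL, LH = cols(lo)
    HL, HH = cols(hi)
    return LL, LH, HL, HH

def highpass_energy(img):
    """Energy of the high-pass subbands at each coarse location."""
    _, LH, HL, HH = haar2d(img)
    h, w = len(LH), len(LH[0])
    return [[LH[y][x] ** 2 + HL[y][x] ** 2 + HH[y][x] ** 2
             for x in range(w)] for y in range(h)]

# A flat image has zero high-pass energy; a bright patch produces
# energy around its boundary.
flat = [[5.0] * 8 for _ in range(8)]
patch = [[1.0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(3, 6):
        patch[y][x] = 9.0
```

In the thesis this kind of cue would be computed per color channel over several decomposition levels and then linearly combined; the sketch shows only the single-level core.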
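The complexity advantage of depthwise separable convolution mentioned above is easy to quantify with a parameter count. The layer sizes below are illustrative, not taken from the thesis:

```python
# Parameter counts for one convolutional layer (bias terms omitted).

def standard_conv_params(c_in, c_out, k):
    # every output channel filters every input channel with a k x k kernel
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution that mixes the channels
    return c_in * k * k + c_in * c_out

std = standard_conv_params(128, 256, 3)        # 294,912 parameters
sep = depthwise_separable_params(128, 256, 3)  # 33,920 parameters
ratio = std / sep                              # roughly 8.7x fewer
```

The gap widens as channel counts grow through the network, which is why the saving matters most in the deeper layers.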

    Deep Saliency with Encoded Low level Distance Map and High Level Features

    Recent advances in saliency detection have utilized deep learning to obtain high-level features to detect salient regions in a scene. These advances have demonstrated superior results over previous works that utilize hand-crafted low-level features for saliency detection. In this paper, we demonstrate that hand-crafted features can provide complementary information to enhance the performance of saliency detection that utilizes only high-level features. Our method utilizes both high-level and low-level features for saliency detection under a unified deep learning framework. The high-level features are extracted using the VGG-net, and the low-level features are compared with other parts of an image to form a low-level distance map. The low-level distance map is then encoded using a convolutional neural network (CNN) with multiple 1x1 convolutional and ReLU layers. We concatenate the encoded low-level distance map and the high-level features, and connect them to a fully connected neural network classifier to evaluate the saliency of a query region. Our experiments show that our method can further improve the performance of state-of-the-art deep learning-based saliency detection methods.
    Comment: Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Project page: https://github.com/gylee1103/SaliencyEL
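A 1x1 convolution, as used in this paper's distance-map encoder, is simply a shared linear map applied independently at every spatial location. A minimal pure-Python sketch of one 1x1 conv + ReLU stage, with illustrative weights and sizes not taken from the paper:

```python
# feat: [C_in][H][W] feature maps; weights: [C_out][C_in].
# A 1x1 convolution mixes channels per pixel without looking at neighbors.

def conv1x1(feat, weights):
    c_in = len(feat)
    h, w = len(feat[0]), len(feat[0][0])
    return [[[sum(wo[c] * feat[c][y][x] for c in range(c_in))
              for x in range(w)] for y in range(h)]
            for wo in weights]

def relu(maps):
    return [[[max(0.0, v) for v in row] for row in m] for m in maps]

# two 2x2 input channels, two output channels
feat = [[[1.0, -1.0], [2.0, 0.0]],
        [[0.5, 0.5], [0.5, 0.5]]]
w = [[1.0, 2.0],    # output channel 0
     [-1.0, 0.0]]   # output channel 1
enc = relu(conv1x1(feat, w))  # enc[0] = [[2.0, 0.0], [3.0, 1.0]]
```

Stacking several such stages, as the paper does, lets the network re-weight and combine the hand-crafted distance features channel-wise while keeping spatial resolution untouched.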

    Learning RGB-D Salient Object Detection using background enclosure, depth contrast, and top-down features

    Recently, deep Convolutional Neural Networks (CNNs) have demonstrated strong performance on RGB salient object detection. Although depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low-level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast and are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods. We also show that the low-level and mid-level depth features both contribute to improvements in the results. In particular, the F-score of our method is 0.848 on the RGBD1000 dataset, which is 10.7% better than the second-best method.
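The depth-contrast idea this paper builds on can be sketched in a few lines: a region is a stronger salient-object candidate when it sits closer to the camera than its surroundings. This is a hedged illustration of the cue, not the paper's actual feature formulation:

```python
# depth: [H][W] depth map (smaller values = closer to the camera)
# region: set of (y, x) pixel coordinates belonging to a candidate region

def depth_contrast(depth, region):
    """Mean background depth minus mean region depth; positive values
    mean the region pops out in front of its surroundings."""
    inside = [depth[y][x] for (y, x) in region]
    outside = [depth[y][x]
               for y in range(len(depth))
               for x in range(len(depth[0]))
               if (y, x) not in region]
    mean = lambda vals: sum(vals) / len(vals)
    return mean(outside) - mean(inside)

depth = [[4.0, 4.0, 4.0, 4.0],
         [4.0, 1.0, 1.0, 4.0],
         [4.0, 1.0, 1.0, 4.0],
         [4.0, 4.0, 4.0, 4.0]]
obj = {(1, 1), (1, 2), (2, 1), (2, 2)}
score = depth_contrast(depth, obj)  # 4.0 - 1.0 = 3.0, region is in front
```

The paper's background-enclosure feature goes further, asking how much of the region's boundary is backed by deeper pixels, which this global-mean sketch does not capture.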

    S4Net: Single Stage Salient-Instance Segmentation

    In this paper, we consider an interesting problem: salient instance segmentation. Other than producing bounding boxes, our network also outputs high-quality instance-level segments. Taking into account the category-independent property of each target, we design a single-stage salient instance segmentation framework with a novel segmentation branch. Our new branch regards not only the local context inside each detection window but also its surrounding context, enabling us to distinguish instances in the same scope even with obstructions. Our network is end-to-end trainable and runs at a fast speed (40 fps when processing an image with resolution 320x320). We evaluate our approach on a publicly available benchmark and show that it outperforms other alternative solutions. We also provide a thorough analysis of the design choices to help readers better understand the functions of each part of our network. The source code can be found at \url{https://github.com/RuochenFan/S4Net}.
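One way the segmentation branch can see context beyond a detection window is to expand the window about its center before pooling features from it. The helper below is a hypothetical illustration of that step; the function name and scale factor are assumptions, not S4Net's actual ROI operation:

```python
def expand_box(box, scale, h, w):
    """Expand a detection window (y0, x0, y1, x1) by `scale` about its
    center, clamped to the image bounds, so that features pooled from it
    cover surrounding context as well as the instance itself."""
    y0, x0, y1, x1 = box
    cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
    half_h = (y1 - y0) * scale / 2
    half_w = (x1 - x0) * scale / 2
    return (max(0, int(cy - half_h)), max(0, int(cx - half_w)),
            min(h, int(cy + half_h)), min(w, int(cx + half_w)))

# a 4x4 window centered in an 8x8 feature map, grown by 1.5x
grown = expand_box((2, 2, 6, 6), 1.5, 8, 8)  # -> (1, 1, 7, 7)
```

Pixels in the expanded-but-outside band tell the branch where the instance ends, which is what lets it separate touching instances within one window.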