34 research outputs found

    Deep Saliency with Encoded Low level Distance Map and High Level Features

    Recent advances in saliency detection have utilized deep learning to obtain high-level features for detecting salient regions in a scene. These advances have demonstrated superior results over previous works that rely on hand-crafted low-level features for saliency detection. In this paper, we demonstrate that hand-crafted features can provide complementary information to enhance the performance of saliency detection that uses only high-level features. Our method utilizes both high-level and low-level features for saliency detection under a unified deep learning framework. The high-level features are extracted using the VGG-net, and the low-level features are compared with other parts of an image to form a low-level distance map. The low-level distance map is then encoded using a convolutional neural network (CNN) with multiple 1×1 convolutional and ReLU layers. We concatenate the encoded low-level distance map and the high-level features, and connect them to a fully connected neural network classifier to evaluate the saliency of a query region. Our experiments show that our method can further improve the performance of state-of-the-art deep learning-based saliency detection methods.
    Comment: Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Project page: https://github.com/gylee1103/SaliencyEL
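    A minimal sketch of the fusion idea described in this abstract, assuming a PyTorch implementation: a low-level distance map is encoded with stacked 1×1 convolution + ReLU layers, concatenated with high-level features, and scored by a small fully connected classifier. The class name SaliencySketch, the layer sizes, and the input shapes are illustrative assumptions, not the paper's exact architecture.
```python
import torch
import torch.nn as nn

class SaliencySketch(nn.Module):
    """Hypothetical sketch: encode a low-level distance map and fuse it
    with high-level (e.g. VGG fc) features for per-region saliency."""

    def __init__(self, high_dim=4096, enc_channels=64, map_size=23):
        super().__init__()
        # Encode the low-level distance map with 1x1 conv + ReLU layers.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, enc_channels, kernel_size=1), nn.ReLU(),
            nn.Conv2d(enc_channels, enc_channels, kernel_size=1), nn.ReLU(),
        )
        fused_dim = high_dim + enc_channels * map_size * map_size
        # Fully connected classifier on the concatenated representation.
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1), nn.Sigmoid(),
        )

    def forward(self, high_level_feat, low_level_distance_map):
        encoded = self.encoder(low_level_distance_map)        # (B, C, H, W)
        encoded = encoded.flatten(start_dim=1)                # (B, C*H*W)
        fused = torch.cat([high_level_feat, encoded], dim=1)  # concatenate
        return self.classifier(fused)                         # saliency score
```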

    SaliencyRank: Two-stage manifold ranking for salient object detection

    Direction Selective Contour Detection for Salient Objects

    The active contour model is a widely used technique for automatic object contour extraction. Existing methods based on this model can perform with high accuracy even in the case of complex contours, but challenging issues remain, such as the need for precise contour initialization for high-curvature boundary segments or the handling of cluttered backgrounds. To deal with such issues, this paper presents a salient object extraction method, the first step of which is the introduction of an improved edge map that incorporates edge direction as a feature. The direction information in small neighborhoods of image feature points is extracted, and the image's prominent orientations are determined for direction-selective edge extraction. Using this improved edge information, we provide a highly accurate shape contour representation, which we also combine with texture features. The principle of the paper is to interpret an object as the fusion of its components: its extracted contour and its inner texture. Our goal in fusing textural and structural information is twofold: it is applied for automatic contour initialization, and it is also used to establish an improved external force field. This fusion then produces highly accurate salient object extractions. We performed extensive evaluations which confirm that the presented object extraction method outperforms parametric active contour models and achieves higher efficiency than the majority of the evaluated automatic saliency methods.
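    A rough sketch of the direction-selective idea, assuming a NumPy implementation: gradient orientations are histogrammed, the image's dominant orientations are selected, and only strong edges aligned with them are kept. The function name direction_selective_edges and all bin counts and thresholds are illustrative assumptions, not the paper's exact procedure.
```python
import numpy as np

def direction_selective_edges(gray, n_bins=36, n_dominant=2,
                              tol_deg=15.0, mag_thresh=0.1):
    """Hypothetical direction-selective edge mask for a grayscale image."""
    # Gradient magnitude and orientation via finite differences.
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0

    # Orientation histogram weighted by gradient magnitude.
    hist, edges = np.histogram(orientation, bins=n_bins,
                               range=(0.0, 180.0), weights=magnitude)
    dominant_bins = np.argsort(hist)[-n_dominant:]
    dominant_dirs = (edges[dominant_bins] + edges[dominant_bins + 1]) / 2.0

    # Keep strong edge pixels whose orientation matches a dominant direction.
    strong = magnitude > mag_thresh * magnitude.max()
    aligned = np.zeros_like(strong)
    for d in dominant_dirs:
        diff = np.abs(orientation - d)
        diff = np.minimum(diff, 180.0 - diff)  # wrap-around angular distance
        aligned |= diff < tol_deg
    return strong & aligned
```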

    Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement.

    Visual attention is a fundamental cognitive capability that allows human beings to focus on regions of interest (ROIs) in complex natural environments. Which ROIs we attend to depends mainly on two distinct attentional mechanisms. The bottom-up mechanism guides our detection of salient objects and regions through externally driven factors, such as color and location, whilst the top-down mechanism biases our attention based on prior knowledge and cognitive strategies provided by the visual cortex. However, how to practically use and fuse both attentional mechanisms for salient object detection has not been sufficiently explored. To this end, we propose in this paper an integrated framework consisting of bottom-up and top-down attention mechanisms that enable attention to be computed at the level of salient objects and/or regions. Within our framework, the bottom-up mechanism is guided by the Gestalt laws of perception. We interpret the Gestalt laws of homogeneity, similarity, proximity and figure-ground in relation to color and spatial contrast at the level of regions and objects to produce a feature contrast map. The top-down mechanism uses a formal computational model to describe the background connectivity of the attention and to produce the priority map. Integrating both mechanisms and applying them to salient object detection, our results demonstrate that the proposed method consistently outperforms a number of existing unsupervised approaches on five challenging and complicated datasets in terms of higher precision and recall rates, AP (average precision) and AUC (area under the curve) values.
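    A minimal sketch of the bottom-up/top-down fusion described in this abstract, assuming NumPy; the border-similarity proxy used for background connectivity, the multiplicative fusion, and the function names are illustrative assumptions rather than the paper's formal model.
```python
import numpy as np

def background_connectivity_prior(contrast_map, border=10):
    """Hypothetical top-down prior: values close to the image-border level
    are treated as likely background and down-weighted."""
    border_vals = np.concatenate([contrast_map[:border].ravel(),
                                  contrast_map[-border:].ravel(),
                                  contrast_map[:, :border].ravel(),
                                  contrast_map[:, -border:].ravel()])
    background_level = border_vals.mean()
    prior = np.clip(contrast_map - background_level, 0.0, None)
    return prior / (prior.max() + 1e-8)

def fuse_saliency(contrast_map):
    """Combine the bottom-up feature contrast map with the top-down prior."""
    prior = background_connectivity_prior(contrast_map)
    fused = contrast_map * prior            # bottom-up * top-down
    return fused / (fused.max() + 1e-8)     # normalize to [0, 1]
```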
