398 research outputs found

    Inner and Inter Label Propagation: Salient Object Detection in the Wild

    In this paper, we propose a novel label propagation based method for saliency detection. A key observation is that saliency in an image can be estimated by propagating the labels extracted from the most certain background and object regions. For most natural images, some boundary superpixels serve as the background labels, and the saliency of the other superpixels is determined by ranking their similarities to the boundary labels via an inner propagation scheme. For images of complex scenes, we further deploy a 3-cue-center-biased objectness measure to pick out and propagate foreground labels. A co-transduction algorithm is devised to fuse both boundary and objectness labels based on an inter propagation scheme. The compactness criterion decides whether the incorporation of objectness labels is necessary, thus greatly enhancing computational efficiency. Results on five benchmark datasets with pixel-wise accurate annotations show that the proposed method achieves superior performance compared with the latest state-of-the-art methods in terms of different evaluation metrics. Comment: The full version of the TIP 2015 publication.
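    To illustrate the general idea of ranking superpixels by their similarity to boundary (background) seeds, here is a minimal, hedged sketch of boundary-seeded propagation on a superpixel graph. It is not the authors' exact formulation: the Gaussian affinity, the propagation parameter `alpha`, and the closed-form solve are illustrative assumptions.

```python
import numpy as np

def propagate_from_boundary(features, adjacency, boundary_idx, alpha=0.99, sigma=0.1):
    """Score superpixels by their propagated similarity to boundary (background) seeds.

    features     : (N, d) array of mean superpixel features (e.g. Lab color)
    adjacency    : (N, N) 0/1 matrix of spatial neighbourhood between superpixels
    boundary_idx : indices of superpixels touching the image border
    Returns a saliency score per superpixel, normalised to [0, 1].
    """
    # Affinity: Gaussian on feature distance, restricted to spatial neighbours
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2)) * adjacency
    D = np.diag(W.sum(axis=1) + 1e-12)

    # Seed vector: 1 for boundary (background) superpixels, 0 elsewhere
    y = np.zeros(len(features))
    y[boundary_idx] = 1.0

    # Closed-form propagation (manifold-ranking style): f = (D - alpha*W)^(-1) y
    f = np.linalg.solve(D - alpha * W, y)

    # High affinity to the boundary implies background; invert and normalise
    sal = 1.0 - (f - f.min()) / (f.max() - f.min() + 1e-12)
    return sal
```

    In practice the resulting per-superpixel scores would be painted back onto the pixel grid to form a saliency map; that step and any foreground (objectness) propagation are omitted here.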

    Improved salient object detection via boundary components affinity

    Existing models that treat the image boundary as the image background are still unable to produce optimum detection. This paper introduces a combination of features at the boundary, known as boundary components affinity, that is capable of producing an optimum measure of the image background. It consists of contrast, spatial location, force interaction and boundary ratio, which together contribute to a novel boundary connectivity measure. The integrated features produce a cleaner background with fewer unwanted foreground patches when compared to the ground truth. The extracted boundary features are integrated as the boundary components affinity and used to measure the image background through its boundary connectivity, from which the final salient object detection is obtained. Using verified datasets, the performance of the proposed model was measured and compared with four state-of-the-art models. In addition, the model was tested on close-contrast images. Detection performance was compared and analysed based on precision, recall, true positive rate, false positive rate, F-measure and Mean Absolute Error (MAE). The model reduced the MAE by a maximum of 9.4%.
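    As a rough illustration only, the sketch below fuses four hypothetical per-superpixel boundary cues into a single background weight and converts it into a saliency score. The cue definitions, the equal weights, and the exponential mapping are assumptions for demonstration, not the paper's formulation.

```python
import numpy as np

def boundary_components_affinity(contrast, spatial, force, boundary_ratio,
                                 weights=(0.25, 0.25, 0.25, 0.25), sigma_b=1.0):
    """Fuse per-superpixel boundary cues into a background weight and a saliency score.

    All inputs are length-N arrays normalised to [0, 1]. The weights and the
    exponential mapping below are illustrative choices, not taken from the paper.
    """
    w1, w2, w3, w4 = weights
    # Higher affinity to the image boundary -> more likely to be background
    bnd_affinity = w1 * contrast + w2 * spatial + w3 * force + w4 * boundary_ratio

    # Map boundary connectivity to a background probability, then invert for saliency
    bg_prob = 1.0 - np.exp(-(bnd_affinity ** 2) / (2 * sigma_b ** 2))
    saliency = 1.0 - bg_prob
    return saliency
```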

    Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement.

    Visual attention is a fundamental cognitive capability that allows human beings to focus on regions of interest (ROIs) in complex natural environments. Which ROIs we attend to depends mainly on two distinct attentional mechanisms. The bottom-up mechanism guides our detection of salient objects and regions through externally driven factors such as color and location, whilst the top-down mechanism biases attention based on prior knowledge and cognitive strategies provided by the visual cortex. However, how to practically use and fuse both attentional mechanisms for salient object detection has not been sufficiently explored. To this end, we propose in this paper an integrated framework consisting of bottom-up and top-down attention mechanisms that enables attention to be computed at the level of salient objects and/or regions. Within our framework, the bottom-up mechanism is guided by the Gestalt laws of perception. We interpret the Gestalt laws of homogeneity, similarity, proximity, and figure and ground in terms of color and spatial contrast at the level of regions and objects to produce a feature contrast map. The top-down mechanism uses a formal computational model to describe the background connectivity of attention and produce a priority map. Integrating both mechanisms and applying them to salient object detection, our results demonstrate that the proposed method consistently outperforms a number of existing unsupervised approaches on five challenging and complicated datasets in terms of higher precision and recall rates, AP (average precision) and AUC (area under curve) values.
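    A minimal sketch of one way the two maps could be fused, assuming a bottom-up feature contrast map and a top-down priority map are already computed and normalised to [0, 1]. The multiplicative fusion rule and the final re-normalisation are illustrative assumptions, not the optimisation described in the paper.

```python
import numpy as np

def fuse_attention_maps(feature_contrast, priority_map, eps=1e-12):
    """Combine a bottom-up feature-contrast map with a top-down priority map.

    Both inputs are (H, W) arrays in [0, 1]. Multiplicative fusion keeps only the
    regions that both mechanisms agree on; this fusion rule is an assumption.
    """
    fused = feature_contrast * priority_map
    # Re-normalise to [0, 1] so the result reads as a comparable saliency map
    fused = (fused - fused.min()) / (fused.max() - fused.min() + eps)
    return fused
```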

    A brief survey of visual saliency detection
