
    Inner and Inter Label Propagation: Salient Object Detection in the Wild

    In this paper, we propose a novel label-propagation-based method for saliency detection. A key observation is that saliency in an image can be estimated by propagating the labels extracted from the most certain background and object regions. For most natural images, some boundary superpixels serve as the background labels, and the saliency of the other superpixels is determined by ranking their similarities to the boundary labels via an inner propagation scheme. For images of complex scenes, we further deploy a 3-cue-center-biased objectness measure to pick out and propagate foreground labels. A co-transduction algorithm is devised to fuse both boundary and objectness labels based on an inter propagation scheme. A compactness criterion decides whether incorporating objectness labels is necessary, greatly enhancing computational efficiency. Results on five benchmark datasets with pixel-wise accurate annotations show that the proposed method achieves superior performance compared with the latest state-of-the-art methods under different evaluation metrics.
    Comment: The full version of the TIP 2015 publication.
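    The boundary-label ranking idea in this abstract can be sketched as manifold-ranking-style propagation over a superpixel affinity graph. This is only an illustrative realization under assumptions: the paper's exact inner/inter propagation schemes, affinity construction, and objectness cue are not reproduced, and `propagate_labels`, the toy graph, and `alpha` are all hypothetical.

```python
import numpy as np

def propagate_labels(W, seed_idx, alpha=0.99):
    """Manifold-ranking-style propagation from background seed labels.

    W: (n, n) symmetric superpixel affinity matrix.
    seed_idx: indices of superpixels used as background (boundary) labels.
    Returns a ranking score f for every superpixel: high score = similar
    to the background seeds, so 1 - normalized score acts as saliency.
    """
    n = W.shape[0]
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt           # symmetrically normalized affinity
    y = np.zeros(n)
    y[list(seed_idx)] = 1.0                   # indicator of boundary seeds
    # Closed-form ranking: f = (I - alpha * S)^{-1} y
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    return f

# Toy graph: 4 superpixels; node 3 is only weakly tied to the border nodes.
W = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.1],
              [0.0, 0.0, 0.1, 0.0]])
scores = propagate_labels(W, seed_idx=[0, 1])   # nodes 0, 1 lie on the image border
saliency = 1.0 - scores / scores.max()          # dissimilar to background = salient
```

    With this affinity, the weakly connected node 3 ranks lowest against the background seeds and therefore receives the highest saliency, which is the intuition behind propagating boundary labels.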

    SaliencyRank: Two-stage manifold ranking for salient object detection


    Partitioning intensity inhomogeneity colour images via Saliency-based active contour

    Partitioning or segmenting intensity-inhomogeneous colour images is a challenging problem in computer vision and image shape analysis. Given an input image, the active contour model (ACM), formulated in a variational framework, is regularly used to partition objects in the image. A selective variational ACM is better suited than a global approach for segmenting specific target objects, which is useful in applications such as tumor segmentation or tissue classification in medical imaging. However, existing selective ACMs yield unsatisfactory outcomes when segmenting colour (vector-valued) images with intensity variations. Our new approach therefore incorporates both local image fitting and saliency maps into a new variational selective ACM to tackle the problem. The Euler-Lagrange (EL) equations are presented to solve the proposed model. Thirty combinations of synthetic and medical images were tested. Visual observation and quantitative results show that the proposed model outperforms the other existing models on average, with accuracy 2.23% higher than the compared model, and Dice and Jaccard coefficients around 12.78% and 19.53% higher, respectively.
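    As a rough illustration of how a saliency map can steer a region-based selective contour, here is a minimal Chan-Vese-style level-set update weighted by saliency. It is a sketch only: the paper's local-image-fitting term and its actual Euler-Lagrange equations are not reproduced, and `acm_step`, `dt`, and `lam` are assumed names and values.

```python
import numpy as np

def acm_step(phi, img, saliency, dt=0.5, lam=1.0):
    """One gradient-descent step of a Chan-Vese-style level-set update,
    with the region force gated by a saliency map so the contour only
    evolves toward salient structures (illustrative, not the paper's model).

    phi: level-set function (contour = zero level set, object = phi > 0)
    img: grayscale image; saliency: map in [0, 1].
    """
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0       # mean intensity inside
    c2 = img[~inside].mean() if (~inside).any() else 0.0   # mean intensity outside
    # Pixels that look like the inside mean (and are salient) push phi up.
    force = saliency * ((img - c2) ** 2 - (img - c1) ** 2)
    return phi + dt * lam * force

# Toy image: bright salient square on a dark background.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
sal = np.zeros_like(img); sal[8:24, 8:24] = 1.0
phi = -np.ones_like(img); phi[12:20, 12:20] = 1.0   # small contour inside the square
for _ in range(20):
    phi = acm_step(phi, img, sal)
seg = phi > 0   # the contour grows out to the salient square and stops there
```

    Because the force is multiplied by the saliency map, non-salient background pixels never attract the contour, which is the "selective" behaviour the abstract describes.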

    DISC: Deep Image Saliency Computing via Progressive Representation Learning

    Salient object detection increasingly receives attention as an important component or step in several pattern recognition and image processing tasks. Although a variety of powerful saliency models have been proposed, they usually involve heavy feature (or model) engineering based on priors (or assumptions) about the properties of objects and backgrounds. Inspired by the effectiveness of recently developed feature learning, we present a novel Deep Image Saliency Computing (DISC) framework for fine-grained image saliency computing. In particular, we model image saliency from both coarse- and fine-level observations and utilize a deep convolutional neural network (CNN) to learn the saliency representation in a progressive manner. Specifically, our saliency model is built upon two stacked CNNs. The first CNN generates a coarse-level saliency map by taking the overall image as input, roughly identifying salient regions in the global context; we further integrate superpixel-based local context information into this first CNN to refine the coarse-level map. Guided by the coarse saliency map, the second CNN focuses on the local context to produce a fine-grained and accurate saliency map while preserving object details. For a test image, the two CNNs collaboratively conduct the saliency computation in one shot. Our DISC framework is capable of uniformly highlighting objects of interest against complex backgrounds while preserving object details well. Extensive experiments on several standard benchmarks suggest that DISC outperforms other state-of-the-art methods and also generalizes well across datasets without additional training. The executable version of DISC is available online: http://vision.sysu.edu.cn/projects/DISC
    Comment: This manuscript is the accepted version for IEEE Transactions on Neural Networks and Learning Systems (T-NNLS), 201
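    The coarse-to-fine pipeline of two stacked networks can be sketched structurally as follows. The box filters here are deliberately trivial stand-ins for the learned CNN stages (DISC's actual architectures, superpixel integration, and training are not reproduced; `box_filter`, `coarse_stage`, and `fine_stage` are hypothetical names), so only the data flow — global coarse map first, then a locally refined map guided by it — matches the abstract.

```python
import numpy as np

def box_filter(x, k=3):
    """Tiny stand-in for a learned convolution: k x k box average."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def coarse_stage(img):
    """Stage 1: whole image in, coarse saliency map out (global context)."""
    return box_filter(img, k=7)

def fine_stage(img, coarse):
    """Stage 2: local refinement guided by the coarse map, preserving detail."""
    return box_filter(img * coarse, k=3)

img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0   # toy "object"
coarse = coarse_stage(img)                         # rough localization
fine = fine_stage(img, coarse)                     # guided, sharper map
```

    The point of the structure is that stage 2 never sees the image alone: its input is already gated by the coarse map, so refinement stays focused on the regions stage 1 identified.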