
    Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement.

    Visual attention is a fundamental cognitive capability that allows human beings to focus on regions of interest (ROIs) in complex natural environments. Which ROIs we attend to depends mainly on two distinct attentional mechanisms. The bottom-up mechanism guides our detection of salient objects and regions through externally driven factors, e.g. color and location, whilst the top-down mechanism biases our attention based on prior knowledge and cognitive strategies provided by the visual cortex. However, how to practically use and fuse both attentional mechanisms for salient object detection has not been sufficiently explored. To this end, we propose in this paper an integrated framework consisting of bottom-up and top-down attention mechanisms that enables attention to be computed at the level of salient objects and/or regions. Within our framework, the bottom-up mechanism is guided by the Gestalt laws of perception. We interpret the Gestalt laws of homogeneity, similarity, proximity, and figure-ground in terms of color and spatial contrast at the level of regions and objects to produce a feature contrast map. The top-down mechanism uses a formal computational model to describe the background connectivity of attention and produce a priority map. Integrating both mechanisms and applying them to salient object detection, our results demonstrate that the proposed method consistently outperforms a number of existing unsupervised approaches on five challenging datasets in terms of higher precision and recall rates, AP (average precision), and AUC (area under curve) values.
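The abstract describes combining a bottom-up feature contrast map with a top-down priority map into one saliency map. A minimal sketch of such a fusion, assuming a simple weighted combination of normalized maps (the paper's actual fusion rule is not given in the abstract; `fuse_saliency` and `alpha` are hypothetical names):

```python
import numpy as np

def fuse_saliency(feature_contrast, priority, alpha=0.5):
    """Hypothetical fusion of a bottom-up feature-contrast map and a
    top-down priority map into a single saliency map (a sketch only)."""
    def norm(m):
        # Normalize each map to [0, 1] so the two maps are comparable.
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    fc, pr = norm(feature_contrast), norm(priority)
    # alpha weights bottom-up (stimulus-driven) vs top-down (prior-driven) cues.
    return alpha * fc + (1 - alpha) * pr

# Toy 2x2 maps: bottom-up and top-down evidence disagree in the top row.
fc = np.array([[0.0, 1.0], [0.5, 0.5]])
pr = np.array([[1.0, 0.0], [0.5, 0.5]])
sal = fuse_saliency(fc, pr, alpha=0.5)
```

With equal weighting, conflicting evidence averages out; a larger `alpha` would let the stimulus-driven contrast map dominate.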

    RGB-D Salient Object Detection: A Survey

    Salient object detection (SOD), which simulates the human visual perception system to locate the most attractive object(s) in a scene, has been widely applied to various computer vision tasks. Now, with the advent of depth sensors, depth maps with affluent spatial information that can be beneficial in boosting the performance of SOD can easily be captured. Although various RGB-D based SOD models with promising performance have been proposed over the past several years, an in-depth understanding of these models and the challenges in this topic remains lacking. In this paper, we provide a comprehensive survey of RGB-D based SOD models from various perspectives, and review related benchmark datasets in detail. Further, considering that the light field can also provide depth maps, we review SOD models and popular benchmark datasets from this domain as well. Moreover, to investigate the SOD ability of existing models, we carry out a comprehensive evaluation, as well as an attribute-based evaluation, of several representative RGB-D based SOD models. Finally, we discuss several challenges and open directions of RGB-D based SOD for future research. All collected models, benchmark datasets, source code links, datasets constructed for attribute-based evaluation, and codes for evaluation will be made publicly available at https://github.com/taozh2017/RGBDSODsurvey. Comment: 24 pages, 12 figures. Has been accepted by Computational Visual Media.

    Visual Saliency Estimation and Its Applications

    The human visual system automatically emphasizes some parts of an image and ignores others when viewing an image or a scene. Visual Saliency Estimation (VSE) aims to imitate this functionality of the human visual system, estimating the degree of human attention attracted by different image regions and locating the salient object. The study of VSE helps us explore how human visual systems extract objects from an image, and it has wide applications, such as robot navigation, video surveillance, object tracking, and self-driving. Current VSE approaches on natural images model generic visual stimuli based on lower-level image features, e.g., locations, local/global contrast, and feature correlation. However, existing models still suffer from some drawbacks. First, these methods fail when objects are near the image borders. Second, due to imperfect model assumptions, many methods cannot achieve good results when images have complicated backgrounds. In this work, I focus on solving these challenges on natural images by proposing a new framework with more robust task-related priors, and I apply the framework to low-quality biomedical images. The new framework formulates VSE on natural images as a quadratic program (QP). It proposes an adaptive center-based bias hypothesis to replace the common image-center bias, which is much more robust even when objects are far from the image center. It also models a new smoothness term that encourages similar colors to have similar saliency statistics, which is more robust than terms based on region dissimilarity when the image has a complicated background or low contrast. The new approach achieves the best performance among 11 recent methods on three public datasets.
Three approaches based on the framework, integrating both high-level domain knowledge and robust low-level saliency assumptions, are used to imitate radiologists' attention to detect breast tumors in breast ultrasound images.
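The abstract formulates saliency with a quadratic program containing a color-similarity smoothness term. One common way to realize such an objective (a sketch under assumed notation, not the thesis's exact model) is to minimize ||s - u||^2 + lam * s^T L s, where u is a unary prior (e.g., an adaptive center bias) and L is the Laplacian of a color-affinity graph; this has the closed-form solution s = (I + lam*L)^(-1) u:

```python
import numpy as np

def smooth_saliency(u, W, lam=1.0):
    """Solve min_s ||s - u||^2 + lam * s^T L s in closed form.
    u: unary saliency prior per region; W: symmetric color-affinity matrix.
    The smoothness term pushes similar-color regions toward similar saliency."""
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian of the affinity graph
    n = len(u)
    # Stationarity: (I + lam*L) s = u, a symmetric positive-definite system.
    return np.linalg.solve(np.eye(n) + lam * L, u)

# Three regions: regions 0 and 1 share color (high affinity); region 2 differs.
W = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
u = np.array([1.0, 0.0, 0.0])                 # only region 0 has a strong prior
s = smooth_saliency(u, W, lam=1.0)
```

Here the smoothness term pulls region 1's saliency toward region 0's, while region 2, having no color affinity to either, keeps its (zero) prior.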