    Pop-out and IOR in Static Scenes with Region Based Visual Attention

    This paper proposes a novel approach to constructing the saliency map by combining region-based maps of distinct features. The multiplicative feature-fusion process of natural visual attention is modelled as a weighted average of the features under the influence of external top-down and internal bottom-up inhibitions. The recently discovered aspect of feature-based inhibition is also included in the inhibition-of-return (IOR) procedure, alongside the commonly implemented spatial and feature-map-based inhibitions. Results obtained with the proposed method are compatible with well-known attention models, with the added advantages of faster computation, direct usability of the focus of attention in machine vision, and broader coverage of visually prominent objects.
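
    As a rough illustration of the fusion step described in this abstract, the Python sketch below combines per-feature region maps as a weighted average, with top-down and bottom-up inhibitions folded into the weights. The function and parameter names are hypothetical and the inhibition scheme is a simplification for illustration, not the paper's actual formulation.

```python
import numpy as np

def fuse_region_maps(feature_maps, top_down_inhibition, bottom_up_inhibition):
    """Combine per-region feature maps into one saliency map (illustrative).

    feature_maps: dict mapping feature name -> 2D array of per-region conspicuity
    top_down_inhibition / bottom_up_inhibition: dict mapping feature name ->
        scalar in [0, 1], where 1 means the feature is fully suppressed.
    """
    names = list(feature_maps)
    # Effective weight of each feature after both inhibitions.
    weights = np.array([
        (1.0 - top_down_inhibition.get(n, 0.0)) * (1.0 - bottom_up_inhibition.get(n, 0.0))
        for n in names
    ])
    stacked = np.stack([feature_maps[n] for n in names], axis=0)
    # Weighted average across features (the fusion step).
    return np.tensordot(weights, stacked, axes=1) / max(weights.sum(), 1e-9)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = {"colour": rng.random((4, 4)), "orientation": rng.random((4, 4))}
    print(fuse_region_maps(maps, {"orientation": 0.5}, {}).round(2))
```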

    Visual Clutter Study for Pedestrian Using Large Scale Naturalistic Driving Data

    Some pedestrian crashes are due to the driver’s late or difficult perception of a pedestrian’s appearance. Recognizing pedestrians while driving is a complex cognitive activity. Visual clutter analysis can be used to study the factors that affect human visual search efficiency and to help design advanced driver assistance systems for better decision making and user experience. In this thesis, we propose a pedestrian perception evaluation model that quantitatively analyzes pedestrian perception difficulty using naturalistic driving data. An efficient detection framework was developed to locate pedestrians within large-scale naturalistic driving data. Visual clutter analysis was used to study the factors that may affect the driver’s ability to perceive a pedestrian’s appearance. Candidate factors were explored in an exploratory study on naturalistic driving data, and a bottom-up, image-based pedestrian clutter metric was proposed to quantify pedestrian perception difficulty. Based on the proposed bottom-up clutter metric and a top-down, pedestrian-appearance-based estimator, a Bayesian probabilistic pedestrian perception evaluation model was constructed to simulate the pedestrian perception process.
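
    The abstract does not specify the clutter metric itself; the sketch below is only a hedged illustration of what a bottom-up, image-based clutter score around a detected pedestrian might look like, using local gradient density as a crude proxy. The function name, the margin parameter and the score definition are assumptions, not the thesis's actual metric.

```python
import numpy as np
from scipy import ndimage

def local_clutter(gray_image, bbox, margin=20):
    """Illustrative bottom-up clutter score around a pedestrian bounding box.

    gray_image: 2D float array with values in [0, 1]
    bbox: (x0, y0, x1, y1) pedestrian box in pixel coordinates
    Returns the mean gradient magnitude in the region surrounding the box,
    a simple stand-in for a visual clutter metric.
    """
    x0, y0, x1, y1 = bbox
    h, w = gray_image.shape
    # Expand the box by a margin and clip it to the image bounds.
    X0, Y0 = max(x0 - margin, 0), max(y0 - margin, 0)
    X1, Y1 = min(x1 + margin, w), min(y1 + margin, h)
    patch = gray_image[Y0:Y1, X0:X1]
    # Gradient magnitude as a crude proxy for visual clutter.
    gx = ndimage.sobel(patch, axis=1)
    gy = ndimage.sobel(patch, axis=0)
    return float(np.mean(np.hypot(gx, gy)))
```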

    Saliency propagation from simple to difficult

    Saliency propagation has been widely adopted for identifying the most attractive object in an image. The propagation sequence generated by existing saliency detection methods is governed by the spatial relationships of image regions, i.e., the saliency value is transmitted between adjacent regions. However, for inhomogeneous, difficult adjacent regions, such a sequence may lead to incorrect propagations. In this paper, we attempt to manipulate the propagation sequence in order to optimize propagation quality. Intuitively, we postpone propagation to difficult regions and advance propagation to less ambiguous, simple regions. Inspired by theoretical results in educational psychology, a novel propagation algorithm employing teaching-to-learn and learning-to-teach strategies is proposed to explicitly improve propagation quality. In the teaching-to-learn step, a teacher is designed to arrange the regions from simple to difficult and assign the simplest regions to the learner. In the learning-to-teach step, the learner delivers its learning confidence to the teacher to help the teacher choose the subsequent simple regions. Through the interactions between teacher and learner, the uncertainty of the originally difficult regions is gradually reduced, yielding manifest salient objects with optimized background suppression. Extensive experimental results on benchmark saliency datasets demonstrate the superiority of the proposed algorithm over twelve representative saliency detectors.
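
    To make the simple-to-difficult ordering concrete, here is a toy sketch of priority-driven propagation over a region adjacency graph, where "difficulty" is approximated by feature distance to an already-labelled neighbour. This is an illustrative reading of the teaching-to-learn idea, not the authors' algorithm; all identifiers and the attenuation rule are hypothetical.

```python
import heapq
import numpy as np

def simple_to_difficult_propagation(features, adjacency, seed_saliency):
    """Toy simple-to-difficult saliency propagation (illustrative).

    features: (n, d) array of region descriptors indexed by region id
    adjacency: dict region id -> list of adjacent region ids
    seed_saliency: dict region id -> initial saliency for seed regions
    Unlabelled regions are visited in order of their feature distance to an
    already-propagated neighbour, i.e. the "simplest" region first.
    """
    saliency = dict(seed_saliency)
    heap = []
    for r in seed_saliency:
        for nb in adjacency.get(r, []):
            if nb not in saliency:
                d = float(np.linalg.norm(features[r] - features[nb]))
                heapq.heappush(heap, (d, nb, r))
    while heap:
        d, region, source = heapq.heappop(heap)
        if region in saliency:
            continue
        # Inherit saliency from the most similar labelled neighbour,
        # attenuated by the feature distance (simpler regions change less).
        saliency[region] = saliency[source] * np.exp(-d)
        for nb in adjacency.get(region, []):
            if nb not in saliency:
                dn = float(np.linalg.norm(features[region] - features[nb]))
                heapq.heappush(heap, (dn, nb, region))
    return saliency
```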

    A brief survey of visual saliency detection

    Maximum saliency bias in binocular fusion

    Subjective experience at any instant consists of a single (“unitary”), coherent interpretation of sense data rather than a “Bayesian blur” of alternatives. However, the computation of Bayes-optimal actions has no role for unitary perception, instead requiring integration over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary, coherent perception and action in humans, and further suggested that the selected percept is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, which uses unsigned utility magnitudes in place of signed utilities in the bias function.
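
    A minimal sketch of how the compared hypotheses might be expressed computationally: a percept-selection score combines the log posterior with a utility bias, using signed utilities for the maximum-utility variant and unsigned magnitudes for the maximum-salience variant. The functional form, names and parameters are assumptions for illustration, not the study's fitted model.

```python
import numpy as np

def percept_choice_probability(log_posterior, utility, beta, use_unsigned=True):
    """Probability of selecting each candidate percept (illustrative).

    log_posterior: (k,) log posterior of each interpretation given the stimulus
    utility: (k,) utility attached to each interpretation
    beta: strength of the utility bias
    use_unsigned=True corresponds to the maximum-salience variant, where the
    bias depends on |utility| rather than on the signed utility.
    """
    bias = np.abs(utility) if use_unsigned else utility
    score = log_posterior + beta * bias
    score -= score.max()            # numerical stability before exponentiating
    p = np.exp(score)
    return p / p.sum()
```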

    Collaborative Artificial Intelligence Algorithms for Medical Imaging Applications

    In this dissertation, we propose novel machine learning algorithms for high-risk medical imaging applications. Specifically, we tackle current challenges in the radiology screening process and introduce cutting-edge methods for image-based diagnosis, detection and segmentation. We incorporate expert knowledge through eye-tracking, making the whole process human-centered. This dissertation contributes to machine learning, computer vision, and medical imaging research by: 1) introducing a mathematical formulation of radiologists' level of attention and sparsifying their gaze data for better extraction and comparison of search patterns; and 2) proposing novel local and global image analysis algorithms. Imaging-based diagnosis and pattern analysis are high-risk Artificial Intelligence applications. A standard radiology screening procedure includes detection, diagnosis and measurement (often done with segmentation) of abnormalities. We hypothesize that a true collaboration is essential for a better control mechanism in such applications. In this regard, we propose to form a collaboration medium between radiologists and machine learning algorithms through eye-tracking. Further, we build a generic platform consisting of novel machine learning algorithms for each of these tasks. Our collaborative algorithm utilizes eye tracking and includes an attention model and gaze-pattern analysis based on data clustering and graph sparsification. Then, we present a semi-supervised multi-task network for local analysis of the image in radiologists' ROIs, extracted in the previous step. To address missed tumors and analyze regions that are completely missed by radiologists during screening, we introduce a detection framework, S4ND: Single Shot Single Scale Lung Nodule Detection. Our proposed detection algorithm is specifically designed to handle tiny abnormalities in the lungs, which are easy for radiologists to miss. Finally, we introduce a novel projective adversarial framework, PAN: Projective Adversarial Network for Medical Image Segmentation, for segmenting complex 3D structures/organs, which can benefit the screening process by guiding radiologists' search areas through segmentation of the desired structure/organ.
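
    As a hedged illustration of the gaze-pattern analysis step (grouping raw eye-tracking samples into attended regions), the sketch below uses DBSCAN clustering. The dissertation's actual clustering and graph-sparsification pipeline is not specified here; the function name and parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def attended_regions(gaze_points, eps_px=40, min_samples=5):
    """Cluster raw gaze samples into attended regions (illustrative only).

    gaze_points: (n, 2) array of (x, y) gaze coordinates on the image
    Returns a list of cluster centroids, one per attended region.
    """
    labels = DBSCAN(eps=eps_px, min_samples=min_samples).fit_predict(gaze_points)
    centroids = []
    for lbl in sorted(set(labels) - {-1}):   # -1 marks noise samples
        centroids.append(gaze_points[labels == lbl].mean(axis=0))
    return centroids
```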

    Saliency Methods for Object Discovery Based on Image and Depth Segmentation

    Object discovery is a recent paradigm in computer and robotic vision in which the process of interpreting an image starts by proposing a set of candidate regions that potentially correspond to objects; these candidates can be validated later on by object recognition modules or by robot interaction. In this thesis, we propose a novel method for object discovery that works on single RGB-D images and aims at achieving higher recall than current state-of-the-art methods with fewer candidates. Our approach uses saliency as a cue to roughly estimate the location and extent of the objects, and segmentation processes to identify the candidates' precise boundaries. We investigate the performance of four different segmentation methods based on colour, depth, and an early and a late fusion of colour and depth, and conclude that the late fusion is the most successful. The object candidates are sorted according to a novel ranking strategy based on a combination of features such as 3D convexity and saliency. We evaluate our method and compare it to other state-of-the-art approaches to object discovery on challenging real-world sequences from three different public datasets containing a high degree of clutter. The results show that our approach consistently outperforms the other methods. In the second part of this thesis, we turn to streams of images. Here, our goal is to generate as few object candidates per frame as necessary in order to find as many objects as possible throughout the sequence. Therefore, we propose to extend our object discovery system with a so-called spatial inhibition-of-return mechanism that inhibits candidates corresponding to objects that have already been generated in the past. The challenge here is to inhibit the candidates consistently under viewpoint change, and we therefore root our inhibition-of-return mechanism in 3D spatial coordinates. In the final part of this thesis, we show an application of our object discovery method to the task of salient object segmentation. The results show that our method achieves state-of-the-art performance.
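
    A minimal sketch of the ranking idea, combining per-candidate saliency and 3D convexity scores into a single ordering. The weights and score definitions are assumptions for illustration, not the thesis's actual ranking strategy.

```python
def rank_candidates(candidates, w_saliency=0.6, w_convexity=0.4):
    """Rank object candidates by a weighted combination of cues (illustrative).

    candidates: list of dicts with per-candidate scores in [0, 1], e.g.
        {"id": 3, "saliency": 0.8, "convexity": 0.5}
    Returns candidate ids sorted from most to least promising.
    """
    scores = {
        c["id"]: w_saliency * c["saliency"] + w_convexity * c["convexity"]
        for c in candidates
    }
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    cands = [{"id": 1, "saliency": 0.9, "convexity": 0.2},
             {"id": 2, "saliency": 0.4, "convexity": 0.9}]
    print(rank_candidates(cands))   # e.g. [2, 1] or [1, 2] depending on weights
```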

    Exploiting visual saliency for assessing the impact of car commercials upon viewers

    Content-based video indexing and retrieval (CBVIR) is a lively area of research which focuses on automating the indexing, retrieval and management of videos. This area has a wide spectrum of promising applications, where assessing the impact of audiovisual productions emerges as a particularly interesting and motivating one. In this paper we present a computational model capable of predicting the impact (i.e. positive or negative) upon viewers of car advertisement videos by using a set of visual saliency descriptors. Visual saliency provides information about the parts of the image perceived as most important, which are instinctively targeted by humans when looking at a picture or watching a video. For this reason we propose to exploit visual information, introducing it as a new feature which reflects high-level semantics objectively, to improve the video impact categorization results. The suggested saliency descriptors are inspired by the mechanisms that underlie the attentional abilities of the human visual system and are organized into seven distinct families according to different measurements over the identified salient areas in the video frames, namely population, size, location, geometry, orientation, movement and photographic composition. The proposed approach starts by computing saliency maps for all the video frames, where two different visual saliency detection frameworks have been considered and evaluated: the popular graph-based visual saliency (GBVS) algorithm and a state-of-the-art DNN-based approach. This work has been partially supported by the National Grants RTC-2016-5305-7 and TEC2014-53390-P of the Spanish Ministry of Economy and Competitiveness.
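
    To illustrate how descriptors might be measured over the salient areas of a frame, the sketch below computes examples from three of the seven families (population, size and location) on a thresholded saliency map. The threshold and exact definitions are assumptions for illustration, not the paper's descriptors.

```python
import numpy as np
from scipy import ndimage

def frame_saliency_descriptors(saliency_map, threshold=0.5):
    """Compute a few per-frame descriptors from a saliency map (illustrative).

    saliency_map: 2D array with values in [0, 1]
    Returns a dict with examples from three descriptor families:
    population (number of salient blobs), size (salient-area fraction)
    and location (centroid of the salient mask, normalised to [0, 1]).
    """
    mask = saliency_map >= threshold
    _, n_blobs = ndimage.label(mask)
    h, w = saliency_map.shape
    if mask.any():
        ys, xs = np.nonzero(mask)
        centroid = (float(xs.mean()) / w, float(ys.mean()) / h)
    else:
        centroid = (0.5, 0.5)       # default to the image centre if nothing is salient
    return {
        "population": int(n_blobs),
        "size": float(mask.mean()),
        "location": centroid,
    }
```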