171 research outputs found

    Quantitative Analysis of Saliency Models

    Full text link
    Previous saliency detection research required the reader to evaluate performance qualitatively, based on renderings of saliency maps on a few shapes. This qualitative approach meant it was unclear which saliency models were better, or how well they compared to human perception. This paper provides a quantitative evaluation framework that addresses this issue. In the first quantitative analysis of 3D computational saliency models, we evaluate four computational saliency models and two baseline models against ground-truth saliency collected in previous work.
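    The abstract does not state the paper's exact evaluation measures, so the following is only a minimal sketch of how a predicted per-vertex saliency map might be scored against ground truth, assuming Pearson correlation and ROC-AUC as the measures and a hypothetical 80th-percentile threshold for labelling ground-truth vertices as salient.

    # A minimal sketch, not the paper's framework: scores one model's per-vertex
    # saliency against ground-truth saliency with two common measures.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import roc_auc_score

    def compare_saliency(predicted, ground_truth, salient_quantile=0.8):
        predicted = np.asarray(predicted, dtype=float)
        ground_truth = np.asarray(ground_truth, dtype=float)
        # Linear correlation between the two per-vertex saliency distributions.
        cc, _ = pearsonr(predicted, ground_truth)
        # Label the top ground-truth vertices as salient (hypothetical threshold)
        # and measure how well the model ranks them above the rest.
        labels = ground_truth >= np.quantile(ground_truth, salient_quantile)
        auc = roc_auc_score(labels, predicted)
        return {"CC": cc, "AUC": auc}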

    Region-based saliency estimation for 3D shape analysis and understanding

    Get PDF
    The detection of salient regions is an important pre-processing step for many 3D shape analysis and understanding tasks. This paper proposes a novel method for saliency detection in 3D free-form shapes. Firstly, we smooth the surface normals with a bilateral filter, which smooths the surface while retaining local details. Secondly, a novel method is proposed for estimating the saliency value of each vertex. To this end, two new features are defined: the Retinex-based Importance Feature (RIF) and the Relative Normal Distance (RND), based on human visual perception characteristics and surface geometry respectively. Since the vertex-based method cannot guarantee that the detected salient regions are semantically continuous and complete, we propose to refine the saliency values based on surface patches. The detected saliency is finally used to guide existing techniques for mesh simplification, interest point detection, and overlapping point cloud registration. Comparative studies on real data from three publicly accessible databases show that the proposed method usually outperforms five selected state-of-the-art methods, both qualitatively and quantitatively, for saliency detection and 3D shape analysis and understanding.
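    The bilateral filtering of surface normals mentioned above can be illustrated with a short sketch; the paper's actual kernel, neighbourhood definition, and parameters are not given in the abstract, so the weights and defaults below are assumptions.

    import numpy as np

    def bilateral_smooth_normals(vertices, normals, neighbors,
                                 sigma_s=0.05, sigma_r=0.3, iterations=3):
        # vertices: (N, 3) positions, normals: (N, 3) unit normals,
        # neighbors: list of vertex-index lists (e.g. one-ring adjacency).
        normals = normals.copy()
        for _ in range(iterations):
            smoothed = np.empty_like(normals)
            for i, nbrs in enumerate(neighbors):
                idx = np.asarray(list(nbrs) + [i])
                d = np.linalg.norm(vertices[idx] - vertices[i], axis=1)
                r = np.linalg.norm(normals[idx] - normals[i], axis=1)
                # Spatial weight keeps the filter local; the range weight
                # down-weights dissimilar normals, preserving sharp features.
                w = np.exp(-d**2 / (2 * sigma_s**2)) * np.exp(-r**2 / (2 * sigma_r**2))
                n = (w[:, None] * normals[idx]).sum(axis=0)
                smoothed[i] = n / (np.linalg.norm(n) + 1e-12)
            normals = smoothed
        return normals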

    Visual attention models and applications to 3D computer graphics

    Get PDF
    Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Ph.D.), Bilkent University, 2012. Includes bibliographical references. With increasing technological and computational opportunities, 3D computer graphics has advanced to the point that very realistic computer-generated scenes can be rendered in real time for games and other interactive environments. However, computer graphics research has not reached its limits: rendering photo-realistic scenes still cannot be achieved in real time, and improving visual quality while decreasing computational cost remains a research area of great interest. Recent efforts in computer graphics have been directed towards exploiting principles of human visual perception to increase the visual quality of rendering. This is natural, since in computer graphics the main source of evaluation is the judgment of people, which is based on their perception. In this thesis, our aim is to extend the use of perceptual principles in computer graphics. Our contribution is two-fold: first, we present several models to determine the visually important, salient, regions in a 3D scene; second, we contribute to the use of saliency metrics in computer graphics. Human visual attention is composed of two components: the first is stimuli-oriented, bottom-up, visual attention; the second is task-oriented, top-down visual attention. The main difference between these components is the role of the user: in the top-down component, the viewer's intention and task affect perception of the visual scene, as opposed to the bottom-up component. We mostly investigate the bottom-up component, where saliency resides. We define saliency computation metrics for two types of graphical content. Our first metric is applicable to 3D mesh models, possibly animating, and extracts saliency values for each vertex of the mesh. The second metric is applicable to animating objects and finds visually important objects based on their motion behaviours. In a third model, we show how to adapt the second metric for animated 3D meshes. Along with the saliency metrics, we also present possible application areas and a perceptual method to accelerate stereoscopic rendering, which is based on binocular vision principles and makes use of saliency information in a stereoscopic rendering scene. Each of the proposed models is evaluated with formal experiments. The proposed saliency metrics are evaluated via eye-tracker-based experiments, and the computationally salient regions are found to attract more attention in practice as well. For the stereoscopic optimization part, we performed a detailed experiment and verified our optimization model. In conclusion, this thesis extends the use of human visual system principles in 3D computer graphics, especially in terms of saliency. Bülbül, Muhammed Abdullah, Ph.D.

    Towards Modelling of Visual Saliency in Point Clouds for Immersive Applications

    Get PDF
    Modelling human visual attention is of great importance in the field of computer vision and has been widely explored for 3D imaging. Yet, in the absence of ground-truth data, it is unclear whether such predictions are in alignment with actual human viewing behavior in virtual reality environments. In this study, we work towards solving this problem by conducting an eye-tracking experiment in an immersive 3D scene that offers 6 degrees of freedom. A wide range of static point cloud models is inspected by human subjects while their gaze is captured in real time. The visual attention information is used to extract fixation density maps, which can be further exploited for saliency modelling. To obtain high-quality fixation points, we devise a scheme that utilizes every recorded gaze measurement from the two eye cameras of our set-up. The obtained fixation density maps, together with the recorded gaze and head trajectories, are made publicly available to enrich visual saliency datasets for 3D models.
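    How the recorded gaze measurements are aggregated into fixation density maps is not detailed in the abstract; the sketch below shows one plausible approach, splatting each fixation onto its k nearest cloud points with a Gaussian kernel (both k and the bandwidth are assumed values, not taken from the dataset).

    import numpy as np
    from scipy.spatial import cKDTree

    def fixation_density(points, fixations, sigma=0.02, k=32):
        # points: (N, 3) point cloud, fixations: (M, 3) gaze hits on the surface.
        density = np.zeros(len(points))
        tree = cKDTree(points)
        # Spread each fixation over its k nearest cloud points with a Gaussian.
        dists, idx = tree.query(fixations, k=k)
        weights = np.exp(-dists**2 / (2 * sigma**2))
        np.add.at(density, idx.ravel(), weights.ravel())
        # Normalise to [0, 1] so maps from different subjects are comparable.
        if density.max() > 0:
            density /= density.max()
        return density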

    Complexity and Salience: Evaluating the Inter-Scene Variability of Animated Choropleth Maps

    Get PDF
    Animated choropleth maps allow for the compilation of potentially massive time-series datasets which can portray space-time change in a congruent manner. They are also becoming increasingly common for data visualization. When users view and interact with these maps, however, the human cognitive-perceptual system may be overwhelmed by a large number of simultaneous changes in each scene: this so-called 'change blindness' is a common malady when viewing successive scenes, unless scene-to-scene graphical changes are salient enough to attract the fixation of the user. Even then, there may be a limit to the number of simultaneous changes that the user can perceive. This thesis examined the saliency of change occurring in map features by conducting a human-subjects study to explore the effect of the intensity, number, and pattern of change-clusters on a map user's ability to detect change. These characteristics can be quantified for a given animated choropleth map using a localized change metric, Magnitude of Change. This study found that, for generalized choropleth maps, clusters in which at least 80% of the polygons changed class were significantly more likely to be successfully detected than clusters with lower levels of class change; additionally, users performed more poorly with maps containing single clusters than with those containing multiple clusters. There were no differences in accuracy for gender, or for whether or not the user played video games regularly, but domain expertise (i.e., having taken a prior geography class) had a positive effect on accuracy. It appears that, for maximum effectiveness, animated choropleth maps should use limited datasets and be made simpler and more user-friendly.
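    The thesis's Magnitude of Change formula is not reproduced in the abstract; the sketch below only computes the quantity the 80% finding refers to, namely the share of polygons within each change-cluster whose class differs between two consecutive scenes (the cluster grouping itself is assumed to be given).

    import numpy as np

    def cluster_change_fraction(classes_prev, classes_next, cluster_ids):
        # classes_prev / classes_next: per-polygon class arrays for two scenes,
        # cluster_ids: per-polygon cluster label (hypothetical grouping).
        changed = np.asarray(classes_prev) != np.asarray(classes_next)
        cluster_ids = np.asarray(cluster_ids)
        # Fractions of 0.8 or more correspond to the clusters that subjects
        # in the study detected reliably.
        return {c: float(changed[cluster_ids == c].mean())
                for c in np.unique(cluster_ids)}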