
    A clustering-based method to estimate saliency in 3D animated meshes

    We present a model to determine the perceptually significant elements in animated 3D scenes using a motion-saliency method. Our model clusters vertices with similar motion-related behaviors. To find these similarities, for each frame of an animated mesh sequence, vertices' motion properties are analyzed and clustered using a Gestalt approach. Each cluster is analyzed as a single unit and representative vertices of each cluster are used to extract the motion-saliency values of each group. We evaluate our method by performing an eye-tracker-based user study in which we analyze observers' reactions to vertices with high and low saliencies. The experiment results verify that our proposed model correctly detects the regions of interest in each frame of an animated mesh. © 2014 Elsevier Ltd
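
    As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below clusters per-vertex velocity trajectories with k-means and scores each cluster as a single unit; the feature choice, cluster count, and the deviation-from-mean-speed scoring are all illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def motion_saliency(frames, n_clusters=8):
            """Toy per-vertex motion saliency for an animated mesh.

            frames: (F, V, 3) array of vertex positions over F frames.
            Returns a (V,) saliency array. A rough sketch of the idea in
            the abstract (cluster vertices by motion, score clusters as
            units); features and scoring here are illustrative only.
            """
            velocities = np.diff(frames, axis=0)          # (F-1, V, 3)
            # Feature per vertex: its velocity trajectory, flattened.
            feats = velocities.transpose(1, 0, 2).reshape(frames.shape[1], -1)
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
            speed = np.linalg.norm(velocities, axis=2).mean(axis=0)
            saliency = np.empty_like(speed)
            for c in range(n_clusters):
                mask = labels == c
                # Score each cluster as a unit: deviation of its mean speed
                # from the global mean (one plausible notion of salient motion).
                saliency[mask] = abs(speed[mask].mean() - speed.mean())
            return saliency / (saliency.max() + 1e-12)    # normalize to [0, 1]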

    Visual attention models and applications to 3D computer graphics

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Ph.D.) -- Bilkent University, 2012. Includes bibliographical references. 3D computer graphics, with increasing technological and computational opportunities, has advanced to such a level that it is possible to generate very realistic computer-generated scenes in real time for games and other interactive environments. However, we cannot claim that computer graphics research has reached its limits. Rendering photo-realistic scenes still cannot be achieved in real time, and improving visual quality while decreasing computational costs remains a research area of great interest. Recent efforts in computer graphics have been directed towards exploiting principles of human visual perception to increase the visual quality of rendering. This is natural since, in computer graphics, the main source of evaluation is the judgment of people, which is based on their perception. In this thesis, our aim is to extend the use of perceptual principles in computer graphics. Our contribution is two-fold: first, we present several models to determine the visually important, salient, regions in a 3D scene; second, we contribute to the use of saliency metrics in computer graphics. Human visual attention is composed of two components: the first is stimuli-oriented, bottom-up visual attention; the second is task-oriented, top-down visual attention. The main difference between these components is the role of the user: in the top-down component, the viewer's intention and task affect perception of the visual scene, as opposed to the bottom-up component. We mostly investigate the bottom-up component, where saliency resides. We define saliency computation metrics for two types of graphical content. Our first metric is applicable to 3D mesh models that are possibly animating, and it extracts saliency values for each vertex of the mesh. The second metric is applicable to animating objects and finds visually important objects based on their motion behaviours. In a third model, we present how to adapt the second metric to animated 3D meshes. Along with the saliency metrics, we also present possible application areas and a perceptual method to accelerate stereoscopic rendering, which is based on binocular vision principles and makes use of saliency information in a stereoscopic rendering scene. Each of the proposed models is evaluated with formal experiments. The proposed saliency metrics are evaluated via eye-tracker-based experiments, and the computationally salient regions are found to attract more attention in practice as well. For the stereoscopic optimization part, we performed a detailed experiment and verified our optimization model. In conclusion, this thesis extends the use of human visual system principles in 3D computer graphics, especially in terms of saliency. Bülbül, Muhammed Abdullah (Ph.D.)
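
    The thesis abstract gives no formulas, but the bottom-up per-vertex saliency it refers to is commonly computed with a center-surround operator over surface curvature; a minimal sketch of that generic scheme (assuming precomputed mean curvature and a brute-force O(V^2) neighborhood, not the thesis's exact metric) follows.

        import numpy as np

        def vertex_saliency(verts, curvature, sigma):
            """Center-surround saliency per vertex (a common bottom-up
            scheme, sketched here as background; not the thesis metric).

            verts: (V, 3) vertex positions; curvature: (V,) e.g. mean
            curvature. Brute-force pairwise distances, fine for a sketch.
            """
            d2 = ((verts[:, None, :] - verts[None, :, :]) ** 2).sum(-1)

            def gauss_avg(s):
                # Gaussian-weighted average of curvature at scale s.
                w = np.exp(-d2 / (2 * s * s))
                return (w * curvature[None, :]).sum(1) / w.sum(1)

            # Fine minus coarse scale: large differences mark salient regions.
            return np.abs(gauss_avg(sigma) - gauss_avg(2 * sigma))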

    Animated mesh simplification based on saliency metrics

    Ankara : The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2008. Thesis (Master's) -- Bilkent University, 2008. Includes bibliographical references, leaves 29-33. Mesh saliency identifies the visually important parts of a mesh. Mesh simplification algorithms that use mesh saliency as the simplification criterion preserve the salient features of a static 3D model. In this thesis, we propose a saliency measure to be used in simplifying animated 3D models. This saliency measure uses acceleration and deceleration information about a dynamic 3D mesh in addition to the saliency information for static meshes, which allows sharp features and visually important cues to be preserved during animation. Since oscillating motions are also important in determining saliency, we propose a technique to detect oscillating motions and incorporate it into the saliency-based animated model simplification algorithm. The proposed technique is evaluated on animated models making oscillating motions, and promising visual results are obtained. Tolgay, Ahmet (M.S.)
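
    A minimal sketch of one way the oscillation-detection step could work, counting sign changes of each vertex's velocity along its dominant motion axis; the projection and the window normalization are our own assumptions, not the thesis algorithm.

        import numpy as np

        def oscillation_score(frames, window=20):
            """Detect oscillating vertices by counting velocity sign flips.

            frames: (F, V, 3). Returns a (V,) score in [0, 1]. A simple
            stand-in for the oscillation cue described in the abstract.
            """
            vel = np.diff(frames, axis=0)                  # (F-1, V, 3)
            # Project velocity onto each vertex's dominant motion axis.
            axis = vel.mean(0)
            axis /= np.linalg.norm(axis, axis=1, keepdims=True) + 1e-12
            v1d = (vel * axis[None]).sum(-1)               # (F-1, V)
            # Count sign changes: many flips over a short window = oscillation.
            flips = (np.sign(v1d[1:]) != np.sign(v1d[:-1])).sum(0)
            return np.clip(flips / window, 0.0, 1.0)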

    Perceived quality assessment in object-space for animated 3D models

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Master's) -- Bilkent University, 2012. Includes bibliographical references. Computational models and methods to handle 3D graphics objects continue to emerge with the wide-ranging use of 3D models and the rapid development of computer graphics technology. Many 3D model modification methods exist to improve the computation and transfer time of 3D models in real-time computer graphics applications. Providing the user with the least visually deformed model is essential for 3D modification tasks. In this thesis, we propose a method to estimate the visually perceived differences on animated 3D models. The method makes use of Human Visual System (HVS) models to mimic visual perception. It can also be used to generate a 3D sensitivity map for a model, to act as a guide during the application of modifications. Our approach gives a perceived quality measure on the 3D geometric representation by incorporating two factors of the HVS that contribute to the perception of differences. First, the spatial processing model of human vision enables us to predict deformations on the surface. Second, the temporal effects of animation velocity are predicted. Psychophysical experiment data is used for both of these HVS models. We used subjective experiments to verify the validity of our proposed method. Yakut, Işıl Doğa (M.S.)
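
    As a toy illustration of combining the two HVS factors mentioned above, the sketch below attenuates per-vertex geometric error by a velocity-dependent sensitivity term; the 1/(1 + kv) falloff and the constant k are illustrative assumptions, not the thesis's psychophysically fitted model.

        import numpy as np

        def perceived_difference(ref, mod, velocity, k=0.1):
            """Toy object-space perceptual difference for one frame.

            ref, mod: (V, 3) vertex positions of the reference and the
            modified mesh; velocity: (V,) per-vertex speed. Geometric
            error is attenuated for fast-moving vertices, mimicking the
            loss of spatial sensitivity at high retinal velocities.
            """
            geom_err = np.linalg.norm(ref - mod, axis=1)  # per-vertex displacement
            sensitivity = 1.0 / (1.0 + k * velocity)      # less visible when fast
            per_vertex = geom_err * sensitivity           # 3D sensitivity map
            return per_vertex, per_vertex.mean()          # map + scalar score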

    Subdivision surface fitting to a dense mesh using ridges and umbilics

    Fitting a sparse surface to approximate vast dense data is of interest for many applications: reverse engineering, recognition, compression, etc. The present work provides an approach to fit a Loop subdivision surface to a dense triangular mesh of arbitrary topology, whilst preserving and aligning the original features. The natural ridge-joined connectivity of umbilics and ridge-crossings is used as the connectivity of the control mesh for subdivision, so that the edges follow salient features on the surface. Furthermore, the chosen features and connectivity characterise the overall shape of the original mesh, since ridges capture extreme principal curvatures and ridges start and end at umbilics. A metric of Hausdorff distance including curvature vectors is proposed and implemented in a distance-transform algorithm to construct the connectivity. Ridge-colour matching is introduced as a criterion for edge flipping to improve feature alignment. Several examples are provided to demonstrate the feature-preserving capability of the proposed approach.
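
    A minimal sketch of a Hausdorff-style metric augmented with curvature vectors, in the spirit of the abstract; the exact combination (positional distance plus a weighted curvature mismatch at the nearest sample) is our guess, not the paper's definition.

        import numpy as np
        from scipy.spatial import cKDTree

        def curvature_hausdorff(P, kP, Q, kQ, alpha=1.0):
            """One-sided Hausdorff-style distance mixing position and curvature.

            P, Q: (N, 3)/(M, 3) sample points of two surfaces; kP, kQ:
            matching per-point curvature vectors. alpha weights how much
            curvature mismatch contributes (an illustrative assumption).
            """
            tree = cKDTree(Q)
            d_pos, idx = tree.query(P)                     # nearest neighbour in Q
            d_curv = np.linalg.norm(kP - kQ[idx], axis=1)  # curvature mismatch
            return (d_pos + alpha * d_curv).max()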

    Investigating human-perceptual properties of "shapes" using 3D shapes and 2D fonts

    Shapes are generally used to convey meaning. They are used in video games, films and other multimedia, in diverse ways. 3D shapes may be destined for virtual scenes or represent objects to be constructed in the real world. Fonts add character to an otherwise plain block of text, allowing the writer to make important points more visually prominent or distinct from other text, and they can indicate the structure of a document at a glance. Rather than studying shapes through traditional geometric shape descriptors, we provide alternative methods to describe and analyse shapes through the lens of human perception. This is done via the concepts of Schelling Points and Image Specificity. Schelling Points are the choices people make when they aim to match what they expect others to choose but cannot communicate with them to determine an answer. We study whole-mesh selections in this setting, where Schelling Meshes are the most frequently selected shapes. The key idea behind Image Specificity is that different images evoke different descriptions, but ‘specific’ images yield more consistent descriptions than others. We apply Specificity to 2D fonts. We show that each concept can be learned, and we predict them for fonts and 3D shapes, respectively, using a depth-image-based convolutional neural network. Results are shown for a range of fonts and 3D shapes, and we demonstrate that font Specificity and the Schelling Meshes concept are useful for visualisation, clustering, and search applications. Overall, we find that each concept captures similarities between shapes of its respective type, even when there are discontinuities between the shape geometries themselves; the ‘context’ of these similarities is some kind of abstract or subjective meaning that is consistent among different people.
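
    A minimal sketch of the Specificity idea: score one shape or font by the mean pairwise similarity of the descriptions people gave it. Jaccard word overlap stands in for the similarity measure; the Image Specificity literature typically uses learned word similarities.

        from itertools import combinations

        def specificity(descriptions):
            """Specificity of one font/shape: mean pairwise similarity of
            the free-text descriptions it evoked (higher = more consistent).
            Jaccard overlap of word sets is an illustrative stand-in.
            """
            sets = [set(d.lower().split()) for d in descriptions]
            sims = [len(a & b) / len(a | b) for a, b in combinations(sets, 2)]
            return sum(sims) / len(sims)

        # e.g. specificity(["thin elegant serif font", "elegant thin serif"])
        # is high; wildly varying descriptions drive the score towards zero.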

    Enabling Viewpoint Learning through Dynamic Label Generation

    Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations, we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving the label ambiguities that arise in this context. Therefore, we additionally propose to incorporate label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows for learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second, compared to state-of-the-art (SOTA) viewpoint quality evaluation. We will further release the code and training data, which will, to our knowledge, be the largest viewpoint quality dataset available.
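
    A minimal sketch of the kind of network this setup implies: a PointNet-style regressor mapping an unstructured point cloud directly to a view direction, with no rendering in the loop. The layer sizes and the unit-vector output parameterization are our own assumptions (PyTorch assumed), not the paper's architecture.

        import torch
        import torch.nn as nn

        class ViewpointNet(nn.Module):
            """PointNet-style regressor: point cloud -> view direction."""

            def __init__(self):
                super().__init__()
                # Shared per-point feature extractor.
                self.per_point = nn.Sequential(
                    nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 256), nn.ReLU(),
                )
                # Regression head on the pooled global feature.
                self.head = nn.Sequential(
                    nn.Linear(256, 128), nn.ReLU(),
                    nn.Linear(128, 3),               # predicted view direction
                )

            def forward(self, pts):                  # pts: (B, N, 3)
                # Symmetric max-pool makes the net order-invariant over points.
                feat = self.per_point(pts).max(dim=1).values
                v = self.head(feat)
                return v / v.norm(dim=1, keepdim=True)  # unit view direction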

    Copyright Protection of 3D Digitized Sculptures by Use of Haptic Device for Adding Local-Imperceptible Bumps

    This research aims to improve approaches for protecting digitized 3D models of cultural heritage objects, such as the approach shown in the authors' previous research on this topic. The technique can be used to protect works of art such as 3D models of sculptures, pottery, and 3D digital characters for animated film and gaming; it can also be used to preserve architectural heritage. In the research presented here, protection was added to the scanned 3D model of the original sculpture using a digital sculpting technique with a haptic device. The original 3D model and the model with added protection were then printed on a 3D printer, and the printed models were scanned. To measure the thickness of the added protection, the original 3D model and the model with added protection were compared. The two scans of the printed sculptures were also compared to determine the amount of added material. The thickness of the added protection is up to 2 mm, whereas the highest difference detected between a matching scan of the original sculpture (or the protected 3D model) and a scan of its printed version (or a scan of the protected printed version) is about 1 mm.
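
    The reported thickness comparisons amount to measuring per-point deviation between aligned scans; a minimal sketch (assuming the two scans are already registered, e.g. by ICP, and share the same units) follows.

        import numpy as np
        from scipy.spatial import cKDTree

        def deviation_stats(original_pts, protected_pts):
            """Per-point deviation between two aligned scans.

            original_pts, protected_pts: (N, 3)/(M, 3) point arrays.
            Returns (peak, mean) nearest-neighbour distances, i.e. the
            kind of numbers behind the ~2 mm bump-thickness figure above.
            """
            d, _ = cKDTree(original_pts).query(protected_pts)
            return d.max(), d.mean()   # peak bump height, average deviation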

    Digital Processing and Management Tools for 2D and 3D Shape Repositories
