Perceived quality assessment in object-space for animated 3D models
Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Master's) -- Bilkent University, 2012. Includes bibliographical references.
Computational models and methods to handle 3D graphics objects continue to
emerge with the wide-range use of 3D models and rapid development of computer
graphics technology. Many 3D model modification methods exist to improve
computation and transfer time of 3D models in real-time computer graphics
applications. Providing the user with the least visually deformed model is essential
for such modification tasks.
In this thesis, we propose a method to estimate the visually perceived differences
on animated 3D models. The method makes use of Human Visual System models
to mimic visual perception. It can also be used to generate a 3D sensitivity map
for a model to act as a guide during the application of modifications.
Our approach gives a perceived quality measure using 3D geometric representation
by incorporating two factors of Human Visual System (HVS) that contribute
to perception of differences. First, spatial processing of human vision model
enables us to predict deformations on the surface. Secondly, temporal effects of
animation velocity are predicted. Psychophysical experiment data is used for both
of these HVS models. We used subjective experiments to verify the validity of
our proposed method.
Yakut, Işıl Doğa (M.S.)
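The combination of spatial and temporal HVS factors described in the abstract could be sketched roughly as follows. This is an illustrative assumption, not the thesis's actual formulation: the function name, the per-vertex roughness inputs, and the exponential falloff of sensitivity with animation velocity are all stand-ins for the psychophysically derived models the thesis uses.

```python
import numpy as np

def perceived_difference(roughness_ref, roughness_mod, velocity,
                         spatial_gain=1.0, temporal_tau=2.0):
    """Hypothetical per-vertex perceived-difference map.

    roughness_ref / roughness_mod: per-vertex surface roughness of the
    reference and modified meshes; velocity: per-vertex animation speed.
    Spatial term: magnitude of the roughness change.
    Temporal term: sensitivity decays with velocity, mimicking reduced
    visual acuity for fast-moving surfaces.
    """
    spatial = spatial_gain * np.abs(np.asarray(roughness_mod)
                                    - np.asarray(roughness_ref))
    temporal = np.exp(-np.asarray(velocity) / temporal_tau)
    return spatial * temporal  # high values = likely visible deformation

# Toy example: identical roughness change, increasing velocities.
ref = np.array([0.1, 0.1, 0.1])
mod = np.array([0.3, 0.3, 0.3])
vel = np.array([0.0, 2.0, 8.0])
sens = perceived_difference(ref, mod, vel)
```

In this sketch the same geometric change becomes progressively less visible as the surface moves faster, which is the qualitative behaviour the abstract's temporal HVS factor describes.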
Visual attention models and applications to 3D computer graphics
Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Ph.D.) -- Bilkent University, 2012. Includes bibliographical references.
With increasing technological and computational opportunities, 3D computer
graphics has advanced to the point where it is possible to generate highly
realistic computer-generated scenes in real-time for games and other interactive
environments. However, we cannot claim that computer graphics research has
reached its limits. Rendering photo-realistic scenes still cannot be achieved in
real-time; and improving visual quality and decreasing computational costs are
still research areas of great interest.
Recent efforts in computer graphics have been directed towards exploiting
principles of human visual perception to increase visual quality of rendering.
This is natural since in computer graphics, the main source of evaluation is the
judgment of people, which is based on their perception. In this thesis, our aim is
to extend the use of perceptual principles in computer graphics. Our contribution
is two-fold: first, we present several models to determine the visually important
(salient) regions in a 3D scene; second, we contribute to the definition and use
of saliency metrics in computer graphics.
Human visual attention is composed of two components: the first is
stimuli-oriented, bottom-up visual attention; the second is task-oriented,
top-down visual attention. The main difference between these
components is the role of the user. In the top-down component, the viewer's intention
and task affect perception of the visual scene, as opposed to the bottom-up component.
We mostly investigate the bottom-up component where saliency resides.
We define saliency computation metrics for two types of graphical content.
Our first metric is applicable to 3D mesh models that are possibly animating, and
it extracts saliency values for each vertex of the mesh. The second metric we propose is applicable to animating objects and finds visually important objects
based on their motion behaviours. In a third model, we show how to adapt the
second metric to animated 3D meshes.
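The per-vertex metric described above could look something like the following center-surround formulation, which is in the spirit of standard mesh-saliency work rather than the thesis's exact definition; the curvature input, the two-scale Gaussian weighting, and the parameter names are all assumptions for illustration.

```python
import numpy as np

def vertex_saliency(verts, curvature, sigma):
    """Hypothetical per-vertex saliency sketch.

    Saliency of a vertex is the absolute difference between the
    Gaussian-weighted mean curvature around it at a fine scale (sigma)
    and at a coarser scale (2 * sigma): regions whose local detail
    stands out from their surroundings score high.
    """
    verts = np.asarray(verts, dtype=float)
    curvature = np.asarray(curvature, dtype=float)
    # Pairwise squared distances between all vertices (O(n^2) toy version).
    d2 = ((verts[:, None, :] - verts[None, :, :]) ** 2).sum(-1)

    def gaussian_mean(s):
        w = np.exp(-d2 / (2.0 * s * s))
        return (w * curvature[None, :]).sum(axis=1) / w.sum(axis=1)

    return np.abs(gaussian_mean(sigma) - gaussian_mean(2.0 * sigma))

# Toy example: five collinear vertices with one curvature spike;
# the spike should be the most salient vertex.
verts = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]]
curv = [0.0, 0.0, 1.0, 0.0, 0.0]
s = vertex_saliency(verts, curv, 1.0)
```

For animating meshes, the same computation could simply be re-run per frame (or the curvature replaced by a motion-derived quantity), which mirrors how the abstract's third model adapts the object-level metric to animated 3D meshes.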
Along with the metrics of saliency, we also present possible application areas
and a perceptual method to accelerate stereoscopic rendering, which is based on
binocular vision principles and makes use of saliency information in a stereoscopic
rendering scene.
Each of the proposed models is evaluated with formal experiments. The
proposed saliency metrics are evaluated via eye-tracker-based experiments, and
the computationally salient regions are found to attract more attention in practice
as well. For the stereoscopic optimization part, we performed a detailed
experiment and verified our optimization model.
In conclusion, this thesis extends the use of human visual system principles
in 3D computer graphics, especially in terms of saliency.
Bülbül, Muhammed Abdullah (Ph.D.)
Occlusion: Creating Disorientation, Fugue, and Apophenia in an Art Game
Occlusion is a procedurally randomized interactive art experience which uses the motifs of repetition, isolation, incongruity and mutability to develop an experience of a Folie à Deux: a madness shared by two. It draws from traditional video game forms, development methods, and tools to situate itself in context with games as well as other forms of interactive digital media. In this way, Occlusion approaches the making of game-like media from the art criticism perspective of Materiality, and the written work accompanying the prototype discusses critical aesthetic concerns for Occlusion both as an art experience borrowing from games and as a text that can be academically understood in relation to other practices of media making. In addition to the produced software artifact and written analysis, this thesis includes primary research in the form of four interviews with artists, authors, game makers and game critics concerning Materiality and dissociative themes in game-like media. The written work first introduces Occlusion in context with other approaches to procedural remixing, Glitch Art, net.art, and analogue and digital collage and décollage, with special attention to recontextualization and apophenia. The experience, visual, and audio design approach of Occlusion is reviewed through a discussion of explicit design choices which define generative space. Development process, release process, post-release distribution, testing, and maintenance are reviewed, and the paper concludes with a description of future work and a post-mortem discussion. Included as appendices are a full specification document, script, and transcripts of all interviews.
Enhancing Mesh Deformation Realism: Dynamic Mesostructure Detailing and Procedural Microstructure Synthesis
We propose a solution for generating dynamic heightmap data to simulate deformations for soft surfaces, with a focus on human skin. The solution incorporates mesostructure-level wrinkles and utilizes procedural textures to add static microstructure details. It offers flexibility beyond human skin, enabling the generation of patterns mimicking deformations in other soft materials, such as leather, during animation.
Existing solutions for simulating wrinkles and deformation cues often rely on specialized hardware, which is costly and not easily accessible. Moreover, relying solely on captured data limits artistic direction and hinders adaptability to changes. In contrast, our proposed solution provides dynamic texture synthesis that adapts to underlying mesh deformations.
Various methods have been explored to synthesize wrinkles directly in the geometry, but they suffer from limitations such as self-intersections and increased storage requirements. Manual intervention by artists using wrinkle maps and tension maps provides control but may be limited for complex deformations or where greater realism is required.
Our research presents the potential of procedural methods to enhance the generation of dynamic deformation patterns, including wrinkles, with greater creative control and without reliance on captured data. Incorporating static procedural patterns improves realism, and the approach can be extended to other soft materials beyond skin.
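The two-layer idea the abstract describes (dynamic mesostructure wrinkles plus a static procedural microstructure) could be sketched as follows. Everything here is an illustrative assumption: the cosine wrinkle profile, the compression parameterization, and the plain random-noise microstructure stand in for the procedural textures the work actually uses.

```python
import numpy as np

def wrinkle_heightmap(compression, size=64, wrinkle_freq=6.0,
                      micro_amp=0.02, seed=0):
    """Hypothetical two-layer heightmap sketch.

    compression in [0, 1]: 0 = rest pose, 1 = fully compressed.
    Mesostructure layer: parallel wrinkles that deepen as the
    underlying mesh compresses. Microstructure layer: static
    procedural detail (here simple seeded noise) independent of pose.
    """
    rng = np.random.default_rng(seed)
    u = np.linspace(0.0, 1.0, size)
    uu, _ = np.meshgrid(u, u)
    # Dynamic mesostructure: wrinkle depth scales with compression.
    meso = compression * 0.5 * (1.0 - np.cos(2.0 * np.pi * wrinkle_freq * uu))
    # Static microstructure: identical for every pose (same seed).
    micro = micro_amp * rng.random((size, size))
    return meso + micro

# Toy example: the same patch near rest pose and strongly compressed.
h_rest = wrinkle_heightmap(0.1)
h_bent = wrinkle_heightmap(0.8)
```

Because the microstructure term ignores `compression`, the fine detail stays fixed while only the wrinkle layer responds to deformation, which mirrors the static/dynamic split the abstract proposes.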
The Rocketbox Library and the Utility of Freely Available Rigged Avatars
As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, here we discuss the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, or sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars. We cover the current main alternatives for face and body animation as well as introduce upcoming capture methods. The second part presents the scientific evidence of the utility of using rigged avatars for embodiment, but also for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.
Sprite Tree: An Efficient Image-Based Representation for Networked Virtual Environments
Ph.D. (Doctor of Philosophy)
Real-time rendering and simulation of trees and snow
Tree models created by an industry-standard package are exported and their structure extracted in order to procedurally regenerate the geometric mesh, addressing the limitations of the application's standard output. The structure, once extracted, is used to fully generate a high-quality skeleton for the tree, individually representing each
section in every branch to give the greatest achievable level of freedom of deformation and animation. Around the generated skeleton, a new geometric mesh is wrapped
using a single, continuous surface, resulting in the removal of intersection-based render artefacts. Surface smoothing and enhanced detail are added to the model dynamically
using the GPU enhanced tessellation engine.
A real-time snow accumulation system is developed to generate snow cover on a dynamic, animated scene. Occlusion techniques are used to project snow-accumulating faces and map exposed areas to applied accumulation maps in the form of dynamic textures. Accumulation maps are fixed to applied surfaces, allowing moving objects to maintain accumulated snow cover. Mesh generation is performed dynamically during the rendering pass using surface offsetting and tessellation to enhance
the required detail.
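The accumulation-map update described above could be sketched per face roughly as follows. This is a minimal sketch under assumed parameters, not the thesis's implementation: the upward-facing factor, the linear accumulation rate, and the depth clamp are all illustrative choices.

```python
import numpy as np

def accumulate_snow(acc_map, normals, occluded, dt, rate=1.0, max_depth=1.0):
    """Hypothetical per-face snow accumulation step.

    acc_map: current per-face snow depth; normals: per-face unit normals;
    occluded: boolean mask of faces hidden from the sky this frame.
    Snow builds on unoccluded faces in proportion to how much they face
    the sky (+Y); occluded faces keep their current cover, which is what
    lets a moving object carry its accumulated snow with it.
    """
    up = np.clip(np.asarray(normals, dtype=float)[:, 1], 0.0, 1.0)
    exposed = (~np.asarray(occluded, dtype=bool)).astype(float)
    gain = rate * dt * up * exposed
    return np.clip(np.asarray(acc_map, dtype=float) + gain, 0.0, max_depth)

# Toy example: two upward faces (one occluded) and one vertical face.
acc = np.zeros(3)
normals = [[0, 1, 0], [0, 1, 0], [1, 0, 0]]
occluded = [False, True, False]
acc = accumulate_snow(acc, normals, occluded, dt=0.5)
```

Calling this once per frame with an occlusion mask recomputed from the current scene reproduces the qualitative behaviour the abstract describes: exposed upward surfaces gather snow, while covered or vertical surfaces do not.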
Network streaming and compression for mixed reality tele-immersion
Bulterman, D.C.A. [Promotor]; Cesar, P.S. [Copromotor]