
    The effect of non-humanoid size cues for the perception of physics plausibility in virtual reality

    Abstract. This thesis studies the relationship between inhabited scale and the perception of physics in virtual reality. The work builds upon the findings of an earlier study on the perception of physics when the user is virtually scaled down. In that study, users evaluated the movement of soda tabs dropped and thrown by a doll-sized humanoid robot while they were either at normal scale or scaled down. This thesis aimed to replicate the study, substituting a cat as a more natural, non-humanoid actor to throw the soda tabs. As in the previous study, it was hypothesized that participants would prefer realistic physics at normal scale and unrealistic physics when virtually scaled down. To this end, a photorealistic virtual environment and a realistically animated cat were created. Participants observed the cat dropping soda tabs from an elevated platform, experienced the event with both realistic physics (dubbed true physics) and unrealistic physics (dubbed movie physics), and were asked to choose the one they perceived as most expected. This procedure was repeated with participants at normal scale and when virtually scaled down. The study recruited 40 participants, and the results could confirm neither hypothesis nor establish a preference for either kind of physics. This differs from Pouke's study, which found a preference for movie physics when participants were virtually scaled down. The thesis discusses the findings and uses supplementary gathered data to offer potential explanations for and insights into the result.
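The contrast between the two conditions can be illustrated with a back-of-the-envelope free-fall calculation. This is a hypothetical sketch, not code from the thesis: it assumes movie physics is approximated by scaling effective gravity with the user's scale factor, so that a drop looks scale-consistent to a shrunken observer, whereas true physics keeps the real-world timescale.

```python
import math

def drop_time(height_m, g=9.81):
    """Free-fall time (no drag) for a drop of the given height."""
    return math.sqrt(2.0 * height_m / g)

scale = 0.1        # user virtually scaled down 10x
real_drop = 0.1    # a 0.1 m drop appears as a 1 m drop to the scaled-down user

# True physics: the tab falls on its real timescale (~0.14 s), which can
# look "too fast" for an apparent 1 m drop.
t_true = drop_time(real_drop)

# Movie physics (one plausible formulation): scale gravity by the same
# factor, so the fall takes as long as an actual 1 m drop would.
t_movie = drop_time(real_drop, g=9.81 * scale)
```

Under this formulation, `t_movie` equals the fall time of a real 1 m drop, which is what a scale-consistent world would show the shrunken observer.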

    Axiom

    Axiom is a short narrative video, using live action and 3D computer-generated graphics to reinterpret the Greek myth of Persephone through the framework of contemporary science fiction. The aim of the video is to use narrative, approached from a design strategy that thematizes the representational role of 3D computer-generated graphics, to raise questions in the minds of thoughtful viewers about the use of genetics and computer-simulated worlds in the context of the vulnerable natural environment. The conceptualization of the video is explained, followed by a summary of the production pipeline and, finally, an evaluation of the finished piece.

    Synthesization and reconstruction of 3D faces by deep neural networks

    The past few decades have witnessed substantial progress in 3D facial modelling and reconstruction, as it is of high importance for many computer vision and graphics applications, including Augmented/Virtual Reality (AR/VR), computer games, movie post-production, image/video editing, medical applications, etc. In traditional approaches, facial texture and shape are represented as a triangle mesh that can cover identity and expression variation with non-rigid deformation. A dataset of 3D face scans is then densely registered into a common topology in order to construct a linear statistical model. Such models are called 3D Morphable Models (3DMMs) and can be used to synthesize or reconstruct a 3D face from a single 2D face image or a few of them. The works presented in this thesis focus on modernizing these traditional techniques in light of recent advances in deep learning and the availability of large-scale datasets. Although 3DMMs were introduced over two decades ago, they have seen much progress since and are still considered one of the best methodologies for modelling 3D faces. Nevertheless, several aspects of them need to be upgraded to the "deep era". Firstly, conventional 3DMMs are built by linear statistical approaches such as Principal Component Analysis (PCA), which by its nature omits high-frequency information. While this does not curtail shape, which is often smooth in the original data, texture models are heavily afflicted, losing high-frequency details and photorealism. Secondly, existing 3DMM fitting approaches rely on very primitive (i.e. RGB values, sparse landmarks) or hand-crafted features (i.e. HOG, SIFT) as supervision; these are sensitive to "in-the-wild" conditions (i.e. lighting, pose, occlusion), or somewhat miss identity/expression resemblance with the target image.
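As a concrete illustration of the linear-model baseline the thesis starts from, a PCA-based shape model can be built and sampled in a few lines. This is a toy sketch in which random vectors stand in for densely registered scans; the names `scans`, `basis`, and `synthesize` are illustrative, not taken from the thesis.

```python
import numpy as np

# Toy "dataset": n registered face scans, each with v vertices in dense
# correspondence, flattened to 3v-dimensional vectors.
rng = np.random.default_rng(0)
n, v = 50, 100
scans = rng.normal(size=(n, 3 * v))

mean = scans.mean(axis=0)
# PCA via SVD of the centred data matrix.
U, S, Vt = np.linalg.svd(scans - mean, full_matrices=False)
k = 10                          # keep the first k principal components
basis = Vt[:k]                  # (k, 3v) orthonormal shape basis
sigma = S[:k] / np.sqrt(n - 1)  # per-component standard deviations

def synthesize(alpha):
    """Generate a face vector from k latent coefficients (in std-dev units)."""
    return mean + (alpha * sigma) @ basis

face = synthesize(rng.normal(size=k))
```

Because the basis is linear and truncated, any detail outside the first k components is irrecoverable, which is exactly the high-frequency loss the thesis targets with GAN-based texture models.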
    Finally, the shape, texture, and expression modalities are modelled separately, ignoring the correlation among them, which places a fundamental limit on the synthesis of semantically meaningful 3D faces. Moreover, photorealistic 3D face synthesis has not been studied thoroughly in the literature. This thesis addresses the above-mentioned issues by harnessing the power of deep neural networks and generative adversarial networks, as explained below. Due to their linear texture models, many state-of-the-art methods are still not capable of reconstructing facial textures with high-frequency details. We take a radically different approach and build a high-quality, detail-preserving texture model with Generative Adversarial Networks (GANs). That is, we utilize GANs to train a very powerful generator of facial texture in UV space, and then show that it is possible to employ this generator network as a statistical texture prior in 3DMM fitting. The resulting texture reconstructions are plausible and photorealistic, as GANs are faithful to the real-data distribution in both the low- and high-frequency domains. We then revisit conventional 3DMM fitting, which uses non-linear optimization to find the latent parameters that best reconstruct the test image, under a new perspective: we propose to optimize the parameters with the supervision of pretrained deep identity features through our end-to-end differentiable framework. To be robust to initialization and to expedite the fitting process, we also propose a novel self-supervised regression-based approach. We demonstrate excellent 3D face reconstructions that are photorealistic and identity-preserving, and achieve, to the best of our knowledge, the first facial texture reconstruction with high-frequency details.
    To extend the non-linear texture model to photorealistic 3D face synthesis, we present a methodology that jointly generates high-quality texture, shape, and normals. To do so, we propose a novel GAN that can generate data from different modalities while exploiting their correlations. Furthermore, we demonstrate how the generation can be conditioned on expression to create faces with various facial expressions. Additionally, we study another approach to photorealistic face synthesis by 3D guidance: generating 3D faces with a linear 3DMM and then translating their 2D renderings into the photorealistic face domain with an image-to-image translation network. Both works demonstrate excellent photorealistic face synthesis and show that the generated faces improve face recognition benchmarks when used as synthetic training data. Finally, we study expression reconstruction for personalized 3D face models, where we improve the generalization and robustness of expression encoding. First, we propose a 3D augmentation approach on 2D head-mounted camera images to increase robustness to perspective changes. We also propose to train a generic expression encoder network by increasing the number of identities through a novel multi-identity personalized model training architecture in a self-supervised manner. Both approaches show promising results in qualitative and quantitative experiments.

    Real-time simulation and visualisation of cloth using edge-based adaptive meshes

    Real-time rendering and animation of realistic virtual environments and characters has progressed at a great pace, following advances in computer graphics hardware in the last decade. The role of cloth simulation is becoming ever more important in the quest to improve the realism of virtual environments. The real-time simulation of cloth and clothing is important for many applications such as virtual reality, crowd simulation, games and software for online clothes shopping. A large number of polygons is necessary to depict the highly flexible nature of cloth, with its wrinkling and frequent changes in curvature. Combined with the physical calculations that model the deformations, simulating cloth in detail is very computationally expensive, making realistic simulation at interactive frame rates difficult. Real-time cloth simulations can lack quality and realism compared to their offline counterparts, since coarse meshes must often be employed for performance reasons. The focus of this thesis is to develop techniques that allow the real-time simulation of realistic cloth and clothing. Adaptive meshes have previously been developed to act as a bridge between low- and high-polygon meshes, aiming to adaptively exploit variations in the shape of the cloth. The mesh complexity is dynamically increased or refined to balance quality against computational cost during a simulation. A limitation of many approaches is that they do not consider the decimation or coarsening of previously refined areas, or otherwise are not fast enough for real-time applications. A novel edge-based adaptive mesh is developed for the fast incremental refinement and coarsening of a triangular mesh. A mass-spring network is integrated into the mesh, permitting the real-time adaptive simulation of cloth, and techniques are developed for the simulation of clothing on an animated character.
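A minimal sketch of the kind of update a mass-spring cloth network performs each frame (the function name, parameter values, and semi-implicit Euler integration are illustrative choices, not taken from the thesis):

```python
import numpy as np

def step(pos, vel, springs, rest, k=500.0, mass=0.01, damping=0.02,
         g=np.array([0.0, -9.81, 0.0]), dt=1e-3):
    """One semi-implicit Euler step of a mass-spring cloth network.

    pos, vel : (n, 3) particle positions and velocities
    springs  : (m, 2) index pairs of connected particles
    rest     : (m,) spring rest lengths
    """
    # External forces: gravity plus simple velocity damping.
    force = np.tile(mass * g, (len(pos), 1)) - damping * vel
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke's law along each edge; guard against zero-length springs.
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, i, f)       # accumulate (indices may repeat)
    np.add.at(force, j, -f)
    vel = vel + dt * force / mass
    return pos + dt * vel, vel
```

On an adaptive mesh, `springs` and `rest` would be rebuilt incrementally as edges are refined or coarsened, which is why fast incremental connectivity updates matter.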

    Visually pleasing real-time global illumination rendering for fully-dynamic scenes

    Global illumination (GI) rendering plays a crucial role in the photorealistic rendering of virtual scenes. With the rapid development of graphics hardware, GI has become increasingly attractive even for real-time applications. However, computing physically correct global illumination is time-consuming and cannot achieve real-time, or even interactive, performance. Although real-time GI is possible with precomputation-based solutions, such solutions cannot handle fully-dynamic scenes. This dissertation focuses on solving these problems by introducing visually pleasing real-time global illumination rendering for fully-dynamic scenes. To this end, we develop a set of novel algorithms and techniques for rendering global illumination effects on graphics hardware. All of these algorithms not only achieve real-time or interactive performance, but also generate quality comparable to previous work in off-line rendering. First, we present a novel implicit visibility technique that circumvents expensive visibility queries in hierarchical radiosity by evaluating visibility implicitly. Thereafter, we focus on rendering visually plausible soft shadows, the most important GI effect caused by visibility determination. Based on pre-filtering shadow-mapping theory, we successively propose two real-time soft shadow mapping methods: "convolution soft shadow mapping" (CSSM) and "variance soft shadow mapping" (VSSM). Furthermore, we successfully apply our CSSM method to computing shadow effects for indirect lighting. Finally, to explore GI rendering in participating media, we investigate a novel technique for interactively rendering volume caustics in single-scattering participating media.
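The core of variance shadow mapping, which VSSM builds on, is a one-sided Chebyshev bound on light visibility computed from two pre-filtered depth moments. A minimal NumPy sketch of that standard test follows; it is illustrative, not the dissertation's implementation, and the names are assumptions.

```python
import numpy as np

def vsm_visibility(depth, mu, mu2, min_variance=1e-6):
    """Upper bound on light visibility from a pre-filtered shadow map.

    depth    : receiver depth as seen from the light
    mu, mu2  : filtered depth moments E[z] and E[z^2] from the shadow map
    """
    # Variance from the two moments; clamped to avoid numerical artifacts.
    var = np.maximum(mu2 - mu * mu, min_variance)
    d = depth - mu
    p_max = var / (var + d * d)          # one-sided Chebyshev inequality
    # A receiver at or in front of the mean occluder depth is fully lit.
    return np.where(depth <= mu, 1.0, p_max)
```

Because the moments can be pre-filtered (mipmapped, blurred) like ordinary textures, a single lookup yields a smooth visibility estimate, which is what makes the family of pre-filtering methods fast enough for real-time soft shadows.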

    Measuring perceived gloss of rough surfaces

    This thesis is concerned with the visual perception of glossy rough surfaces, specifically those characterised by 1/f^β noise. Computer graphics were used to model these natural-looking surfaces, which were generated and animated to provide realistic stimuli for observers. Different methods were employed to investigate the effects of varying surface roughness and reflection-model parameters on perceived gloss. We first investigated how the perceived gloss of a matte Lambertian surface varies with RMS roughness. We then estimated the perceived gloss of moderate-RMS-height surfaces rendered using a gloss reflection model. We found that adjusting the parameters of the gloss reflection model on the moderate-RMS-height surfaces produces levels of gloss similar to those of the high-RMS-height Lambertian surfaces. More realistic stimuli were modelled using improvements in the reflection model, rendering technique, illumination and viewing conditions. In contrast with previous research, a non-monotonic relationship was found between perceived gloss and mesoscale roughness when microscale parameters were held constant. Finally, the joint effect of variations in mesoscale roughness (surface geometry) and microscale roughness (reflection model) on perceived gloss was investigated and tested against conjoint measurement models. It was concluded that the perceived gloss of rough surfaces is significantly affected by surface roughness at both mesoscale and microscale, and can be described by a full conjoint measurement model.
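Height maps with a 1/f^β power spectrum can be synthesized by spectral filtering of white noise; a small NumPy sketch of this standard construction follows (the function name and parameter values are illustrative, not the thesis's stimulus-generation code):

```python
import numpy as np

def fbeta_surface(n=256, beta=1.8, rms=1.0, seed=0):
    """Height map whose power spectrum falls off as 1/f^beta."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)   # radial frequency
    f[0, 0] = 1.0                  # avoid division by zero at DC
    amp = f ** (-beta / 2.0)       # amplitude ~ f^(-beta/2)  =>  power ~ f^-beta
    amp[0, 0] = 0.0                # zero DC term: zero-mean surface
    phase = np.exp(2j * np.pi * rng.random((n, n)))    # random phases
    h = np.fft.ifft2(amp * phase).real
    return h * (rms / h.std())     # scale to the requested RMS roughness

h = fbeta_surface()
```

Varying `beta` changes the balance of mesoscale versus microscale undulation, while the final rescaling controls RMS height, which are the two roughness axes the experiments manipulate.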