
    Approximation of tensor fields on surfaces of arbitrary topology based on local Monge parametrizations

    We introduce a new method, the Local Monge Parametrizations (LMP) method, to approximate tensor fields on general surfaces given by a collection of local parametrizations, e.g., as in finite element or NURBS surface representations. Our goal is to use this method to solve numerically tensor-valued partial differential equations (PDEs) on surfaces. Previous methods either use scalar potentials to numerically describe vector fields on surfaces, at the expense of requiring higher-order derivatives of the approximated fields and of being limited to simply connected surfaces, or represent tangential tensor fields as tensor fields in 3D subject to constraints, thus increasing the essential number of degrees of freedom. In contrast, the LMP method uses an optimal number of degrees of freedom to represent a tensor, is general with regard to the topology of the surface, and does not increase the order of the PDEs governing the tensor fields. The main idea is to construct maps between the element parametrizations and a local Monge parametrization around each node. We test the LMP method by approximating, in a least-squares sense, different vector and tensor fields on simply connected and genus-1 surfaces. Furthermore, we apply the LMP method to two physical models on surfaces, involving a tension-driven flow (vector-valued PDE) and nematic ordering (tensor-valued PDE). The LMP method thus solves the long-standing problem of the interpolation of tensors on general surfaces with an optimal number of degrees of freedom.
    Comment: 16 pages, 6 figures
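
    The core geometric idea, as the abstract describes it, is that a tangent vector or tensor needs only its intrinsic number of components once it is expressed in a local Monge frame (a tangent-plane coordinate system) at each node. The sketch below is only an illustration of that idea, not the authors' LMP implementation; the frame construction and the example point on the sphere are assumptions made for the demo.

        # Illustrative sketch (not the LMP code): express a 3D tangent vector in a
        # local Monge (tangent-plane) frame at a node, using just two components.
        import numpy as np

        def monge_frame(normal):
            """Orthonormal tangent basis (e1, e2) of the plane orthogonal to `normal`."""
            n = normal / np.linalg.norm(normal)
            a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
            e1 = np.cross(n, a); e1 /= np.linalg.norm(e1)
            e2 = np.cross(n, e1)
            return e1, e2, n

        def to_monge_components(v, normal):
            """Project a 3D tangent vector onto the two Monge tangent directions."""
            e1, e2, _ = monge_frame(normal)
            return np.array([v @ e1, v @ e2])   # two dof per tangent vector

        # Example: a tangent vector on the unit sphere at the north pole
        print(to_monge_components(np.array([0.3, -0.2, 0.0]), np.array([0.0, 0.0, 1.0])))

    The two returned components are the "optimal number of degrees of freedom" for a tangent vector; the LMP method's contribution is the construction of consistent maps between such nodal frames and the element parametrizations, which this toy snippet does not attempt.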

    Inverse Rendering of Lambertian Surfaces Using Subspace Methods

    Automatic 3D facial modelling with deformable models.

    Facial modelling and animation has been an active research subject in computer graphics since the 1970s. Due to the extremely complex biomechanical structure of human faces and people's visual familiarity with them, modelling and animating realistic human faces is still one of the greatest challenges in computer graphics. Because we are so familiar with human faces and very sensitive to unnatural subtle changes in them, it usually requires a tremendous amount of artistry and manual work to create a convincing facial model and animation. There is a clear need to develop automatic techniques for facial modelling in order to reduce manual labour. To obtain a realistic facial model of an individual, it is now common to use 3D scanners to capture range scans of the individual and then fit a template to the range scans. However, most existing template-fitting methods require manually selected landmarks to warp the template to the range scans, and selecting landmarks by hand over a large set of range scans is tedious. Another way to reduce repeated work is synthesis by reusing existing data. One example is expression cloning, which copies facial expressions from one face to another instead of creating them from scratch. The aim of this study is to develop a fully automatic framework for template-based facial modelling, facial expression transfer and facial expression tracking from range scans. In this thesis, the author developed an extension of the iterative closest point (ICP) algorithm, which is able to match a template with range scans at different scales, and a deformable model, which can be used to recover the shapes of range scans and to establish correspondences between facial models. With the registration method and the deformable model, the author proposed a fully automatic approach to reconstructing facial models and textures from range scans without requiring any manual intervention. In order to reuse existing data for facial modelling, the author formulated and solved the problem of facial expression transfer in the framework of discrete differential geometry. The author also applied his methods to face tracking for 4D range scans. The results demonstrated the robustness of the registration method and the capabilities of the deformable model. A number of possible directions for future work were pointed out.
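
    The registration component described above hinges on an ICP variant that can match a template to range scans at different scales. The snippet below is a generic, illustrative single iteration of a scale-aware ICP step using a closed-form similarity fit (Umeyama/Kabsch style); it is not the thesis' algorithm, and the data layout (N x 3 point arrays) is an assumption.

        # Illustrative only: one scale-aware ICP iteration (closest points + closed-form
        # similarity transform). Not the thesis code.
        import numpy as np
        from scipy.spatial import cKDTree

        def similarity_icp_step(template, scan):
            """Return (scale, rotation, translation) aligning `template` toward `scan`."""
            _, idx = cKDTree(scan).query(template)        # closest scan point per vertex
            target = scan[idx]
            mu_p, mu_q = template.mean(0), target.mean(0)
            P, Q = template - mu_p, target - mu_q
            U, S, Vt = np.linalg.svd(P.T @ Q)
            D = np.eye(3); D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
            R = Vt.T @ D @ U.T                            # best-fit rotation
            s = np.trace(np.diag(S) @ D) / (P ** 2).sum() # isotropic scale
            t = mu_q - s * R @ mu_p
            return s, R, t

        # Usage sketch: repeatedly apply template = s * (template @ R.T) + t until converged.

    In a full pipeline this step would be iterated, typically with outlier rejection on the closest-point pairs, before a deformable model takes over to recover fine-scale shape.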

    Procedural Generation and Rendering of Realistic, Navigable Forest Environments: An Open-Source Tool

    Simulation of forest environments has applications ranging from entertainment and art creation to commercial and scientific modelling. Due to the unique features and lighting in forests, a forest-specific simulator is desirable; however, many current forest simulators are proprietary or highly tailored to a particular application. Here we review several areas of procedural generation and rendering specific to forest generation, and use this review to create a generalised, open-source tool for generating and rendering interactive, realistic forest scenes. The system uses specialised L-systems to generate trees, which are then distributed using an ecosystem simulation algorithm. The resulting scene is rendered using a deferred rendering pipeline and a Blinn-Phong lighting model, with real-time leaf transparency and post-processing lighting effects. The result is a system that achieves a balance between high natural realism and visual appeal, suitable for tasks including training computer vision algorithms for autonomous robots and visual media generation.
    Comment: 14 pages, 11 figures. Submitted to Computer Graphics Forum (CGF). The application and supporting configuration files can be found at https://github.com/callumnewlands/ForestGenerato
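
    The tree-generation stage described above rests on L-systems, which are at heart a parallel string-rewriting process whose output is interpreted by a 3D turtle to build branch geometry. The rewriter below is a minimal generic illustration; the production rule and symbols are textbook examples, not the grammar shipped with the tool.

        # Minimal L-system rewriter (illustrative; not the tool's actual grammar).
        # 'F' = draw a branch segment, '+'/'-' = turn, '[' / ']' = push/pop turtle state.
        def rewrite(axiom, rules, iterations):
            s = axiom
            for _ in range(iterations):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        rules = {"F": "F[+F]F[-F]F"}     # a classic bracketed branching rule
        print(rewrite("F", rules, 2))
        # The expanded string would then be interpreted by a 3D turtle to emit branch
        # geometry, and an ecosystem-simulation pass would place the trees on the terrain.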

    GPU-Based One-Dimensional Convolution for Real-Time Spatial Sound Generation

    Incorporating spatialized (3D) sound cues in dynamic, interactive videogames and immersive virtual environment applications is beneficial for a number of reasons, ultimately leading to an increase in presence and immersion. Despite these benefits, spatial sound cues are often overlooked in videogames and virtual environments, where emphasis is typically placed on the visual cues. Fundamental to the generation of spatial sound is the one-dimensional convolution operation, which is computationally expensive and does not readily lend itself to such real-time, dynamic applications. Driven by the gaming industry and the great emphasis placed on the visual sense, consumer computer graphics hardware, and the graphics processing unit (GPU) in particular, has advanced greatly in recent years, even outperforming the computational capacity of CPUs. This has allowed for real-time, interactive, realistic graphics-based applications on typical consumer-level PCs. Given the widespread use and availability of computer graphics hardware and the similarities between the fields of spatial audio and image synthesis, here we describe the development of a GPU-based one-dimensional convolution algorithm whose efficiency is superior to the conventional CPU-based convolution method. The primary purpose of the developed GPU-based convolution method is the computationally efficient generation of real-time spatial audio for dynamic and interactive videogames and virtual environments.
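
    The operation at the heart of this work is plain one-dimensional convolution of a dry signal with a head-related impulse response (HRIR) per ear. The snippet below is only a CPU/NumPy reference of that operation (with the usual FFT trick for speed), not the paper's GPU implementation; the sample rate, HRIR length, and noise inputs are placeholders.

        # Reference sketch of the 1D convolution underlying HRIR-based spatial audio.
        # CPU/NumPy illustration only; the paper's contribution is a GPU version.
        import numpy as np

        def convolve_fft(signal, hrir):
            """Linear convolution via FFT: O(N log N) instead of O(N*M)."""
            n = len(signal) + len(hrir) - 1
            nfft = 1 << (n - 1).bit_length()    # next power of two >= output length
            spec = np.fft.rfft(signal, nfft) * np.fft.rfft(hrir, nfft)
            return np.fft.irfft(spec, nfft)[:n]

        fs = 44100
        dry = np.random.randn(fs)                              # 1 s of placeholder audio
        hrir_left = np.random.randn(256) * np.hanning(256)     # placeholder left-ear HRIR
        left = convolve_fft(dry, hrir_left)
        assert np.allclose(left, np.convolve(dry, hrir_left), atol=1e-8)

    A GPU version would evaluate the same per-sample multiply-accumulate (or per-bin complex multiply) pattern in parallel, which is what makes the operation a good fit for graphics hardware.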

    Direct and gestural interaction with relief: A 2.5D shape display

    Actuated shape output provides novel opportunities for experiencing, creating and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach utilizes an array of linear actuators to form 2.5D surfaces. Through identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of freehand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities, while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system
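
    The "2.5D" in the abstract refers to a heightfield formed by an array of linear actuators: each pin holds one height value, so content must be resampled onto the actuator grid and clamped to the pins' travel. The snippet below illustrates only that resampling step; the grid size and travel range are made-up parameters, not the prototype's specifications.

        # Illustrative only: resample a heightfield onto a 2.5D pin/actuator grid.
        import numpy as np

        def sample_to_actuators(heightfield, rows, cols, max_travel_mm):
            h, w = heightfield.shape
            ys = np.linspace(0, h - 1, rows).astype(int)
            xs = np.linspace(0, w - 1, cols).astype(int)
            target = heightfield[np.ix_(ys, xs)]            # nearest-sample downsampling
            return np.clip(target, 0.0, max_travel_mm)      # respect actuator travel limits

        terrain = np.abs(np.random.randn(256, 256)).cumsum(axis=0)   # toy heightfield
        pins = sample_to_actuators(terrain / terrain.max() * 100.0, 12, 12, 100.0)
        print(pins.shape)    # (12, 12) pin extensions in millimetres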

    A Positive-definite Cut-cell Method for Strong Two-way Coupling Between Fluids and Deformable Bodies

    © ACM, 2017. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Zarifi, O., & Batty, C. (2017). A Positive-definite Cut-cell Method for Strong Two-way Coupling Between Fluids and Deformable Bodies. In Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation (p. 7:1–7:11). New York, NY, USA: ACM. https://doi.org/10.1145/3099564.3099572
    We present a new approach to simulation of two-way coupling between inviscid free surface fluids and deformable bodies that exhibits several notable advantages over previous techniques. By fully incorporating the dynamics of the solid into pressure projection, we simultaneously handle fluid incompressibility and solid elasticity and damping. Thanks to this strong coupling, our method does not suffer from instability, even in very taxing scenarios. Furthermore, use of a cut-cell discretization methodology allows us to accurately apply proper free-slip boundary conditions at the exact solid-fluid interface. Consequently, our method is capable of correctly simulating inviscid tangential flow, devoid of grid artefacts or artificial sticking. Lastly, we present an efficient algebraic transformation to convert the indefinite coupled pressure projection system into a positive-definite form. We demonstrate the efficacy of our proposed method by simulating several interesting scenarios, including a light bath toy colliding with a collapsing column of water, liquid being dropped onto a deformable platform, and a partially liquid-filled deformable elastic sphere bouncing.
    Natural Sciences and Engineering Research Council of Canada
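
    The last technical point in the abstract, turning an indefinite coupled pressure system into a positive-definite one, follows a general pattern in saddle-point solvers: reduce the system to a Schur complement that is symmetric positive definite and hand it to conjugate gradients. The toy below demonstrates only that generic pattern on a random dense system; it is not the paper's cut-cell transformation, and all matrix names and sizes are made up.

        # Generic illustration: convert an indefinite saddle-point system
        #     [A  B; B^T  0] [u; p] = [f; g]      (A symmetric positive definite)
        # into an SPD Schur-complement system solvable with conjugate gradients.
        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        rng = np.random.default_rng(0)
        n, m = 40, 10
        M = rng.standard_normal((n, n))
        A = M @ M.T + n * np.eye(n)              # SPD "velocity" block (toy)
        B = rng.standard_normal((n, m))          # coupling block
        f, g = rng.standard_normal(n), rng.standard_normal(m)

        Ainv = np.linalg.inv(A)                  # fine at toy size; use factorizations in practice
        S = LinearOperator((m, m), matvec=lambda p: B.T @ (Ainv @ (B @ p)))   # SPD Schur complement
        rhs = B.T @ (Ainv @ f) - g

        p, info = cg(S, rhs)                     # CG now applies to the reduced system
        u = Ainv @ (f - B @ p)
        assert info == 0

    The paper's contribution is an algebraic transformation specific to its coupled cut-cell discretization; the point of the toy is only why a positive-definite form matters (it unlocks robust, well-understood solvers such as CG).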

    Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses

    Shadow retargeting maps the appearance of real shadows to virtual shadows under a corresponding deformation of the scene geometry, so that appearance is seamlessly maintained. By reconstructing virtual shadows from unoccluded real-shadow samples observed in the camera frame, the method efficiently recovers deformed shadow appearance. In this manuscript, we introduce a light-estimation approach that enables light-source detection using flat Fresnel lenses, allowing the method to work without a set of pre-established conditions. We extend the approach to handle scenarios with multiple receiver surfaces and a non-grounded occluder with high accuracy. Results are presented on a range of objects, deformations, and illumination conditions in real-time Augmented Reality (AR) on a mobile device. We demonstrate the practical application of the method in generating otherwise laborious in-betweening frames for 3D printed stop-motion animation.
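
    Light-source estimation of this kind ultimately reduces to recovering a light direction consistent with observed shadows. The toy below shows only the basic geometric relationship (a directional light lies along the line from a shadow point through the occluder point that cast it); the paper's flat-Fresnel-lens detector is an image-based mechanism that this sketch does not model, and the coordinates are invented.

        # Toy geometry only: direction toward a distant light from one occluder point
        # and its detected shadow point on a receiver plane. Not the paper's method.
        import numpy as np

        def light_direction(occluder_point, shadow_point):
            """Unit vector from the shadow point toward the light (through the occluder)."""
            d = np.asarray(occluder_point, float) - np.asarray(shadow_point, float)
            return d / np.linalg.norm(d)

        tip = np.array([0.0, 0.0, 1.0])      # occluder tip, 1 unit above the ground plane
        shadow = np.array([0.5, 0.0, 0.0])   # its detected shadow on the ground
        print(light_direction(tip, shadow))  # ~[-0.45, 0.00, 0.89]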