    Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects

    The realism of a scene depends essentially on the quality of the geometry, the illumination, and the materials that are used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem. Realistic materials are very important in computer graphics because they describe the reflectance properties of surfaces, which arise from the interaction of light and matter. In the real world, an enormous diversity of materials with very different properties can be found. One important objective in computer graphics is to understand these processes, to formalize them, and finally to simulate them. Various analytical models already exist for this purpose, but their parameterization remains difficult because the number of parameters is usually very high, and they fail for very complex materials that occur in the real world. Measured materials, on the other hand, suffer from long acquisition times and huge input data sizes. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editing features such as the diffuse color or the mesostructure. In this thesis, a material representation is introduced that makes it possible to edit these features, so that acquisition results can be re-used to easily and quickly create variations of the original material. These variations may be subtle but also substantial, allowing for a wide spectrum of material appearances. The approach presented in this thesis is not based on compression but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture. Additionally, depth information, local rotations, and the diffuse color are stored in these textures. As a result of the decomposition, some of the original information is inevitably lost; therefore, an algorithm for the efficient simulation of subsurface scattering is presented as well. Another contribution of this work is a novel perception-based simplification metric that takes the material of an object into account. This metric incorporates features of the human visual system, for example trichromatic color perception and reduced resolution. The proposed metric allows for a more aggressive simplification in regions where geometric metrics do not simplify.
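
    To make the representation concrete, here is a minimal sketch (not the thesis implementation) of the texture layout the abstract describes: diffuse color, depth, and a local rotation stored per texel in ordinary 2D textures, plus a tabulated light-matter interaction term. The resolutions and names are illustrative, and a simple cosine-power lobe stands in for the actual microfacet-based function.

    ```python
    import numpy as np

    H, W = 256, 256
    diffuse  = np.zeros((H, W, 3), dtype=np.float32)  # per-texel diffuse RGB
    depth    = np.zeros((H, W), dtype=np.float32)     # per-texel depth/height
    rotation = np.zeros((H, W), dtype=np.float32)     # local frame rotation (radians)

    # Tabulated interaction function sampled over the half-angle; a cosine-power
    # lobe is used here purely as a stand-in for the microfacet-based term.
    LOBE_RES = 64
    lobe = np.cos(np.linspace(0.0, np.pi / 2, LOBE_RES)) ** 32

    def shade(u, v, n, l, view):
        """Evaluate one texel: Lambertian diffuse plus the tabulated lobe."""
        n, l, view = (x / np.linalg.norm(x) for x in (n, l, view))
        h = (l + view) / np.linalg.norm(l + view)        # half vector
        angle = np.arccos(np.clip(np.dot(n, h), 0.0, 1.0))
        idx = min(int(angle / (np.pi / 2) * LOBE_RES), LOBE_RES - 1)
        kd = diffuse[v, u] * max(np.dot(n, l), 0.0)      # diffuse term
        return kd + lobe[idx]                            # + tabulated specular term

    normal = np.array([0.0, 0.0, 1.0])
    print(shade(10, 20, normal, np.array([0.3, 0.3, 1.0]), np.array([0.0, 0.0, 1.0])))
    ```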

    Architectural rendering and 3D visualization

    The following thesis, “Architectural Render and 3D Visualization,” describes the process of creating, rendering, and optimizing an interior design using a 3D engine as the principal tool. The tool used during development is “Unreal Engine,” which allows real-time rendering of and interaction with the scene. At the end of the process we obtain an interactive scene rendered with high-quality materials, approaching a realistic real-time result by combining modeling, texturing, and illumination techniques. Furthermore, scripting is included in the project scope, both to optimize the environment in which the scene is developed and to build some supporting tools.

    Animated rendering of cardiac model simulations

    Heart disease has been the leading cause of death both worldwide and in the United States over the past decade. Computational cardiac modeling and simulation, especially patient-specific cardiac modeling, has been recognized as one of the best ways to improve the diagnosis of heart disease, as it provides insights into individual disease characteristics that cannot be obtained by other means. However, presenting the results of cardiac simulations to cardiologists in an interactive manner can considerably improve the utility of cardiac models in understanding heart function. In this work, we have developed virtual reality and animated volume rendering techniques to render the results of cardiac simulations. We have developed a GPU-accelerated algorithm that produces a time-varying voxelized representation of the quantities of interest in a cardiac model, which can then be rendered interactively in real time. We voxelize the different time frames of the analysis model and transfer the time-varying data to GPU memory using a flat data structure. This technique allows us to visualize and interact with the animation in real time. As a proof of concept, we test our method on interactively rendering the results of cardiac biomechanics simulations. We also present timing results for post-processing and rendering two different cardiac IGA models at different resolutions. We achieve an interactive frame rate of over 50 fps for all test cases.
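
    The "flat data structure" for GPU transfer can be illustrated with a short sketch. Assuming a regular voxel grid and one scalar quantity per voxel (all sizes illustrative, and numpy standing in for the actual GPU buffer), packing all time frames into one contiguous array lets a shader index any frame with simple linear arithmetic:

    ```python
    import numpy as np

    T, X, Y, Z = 20, 64, 64, 64                  # time frames and grid resolution
    # Stand-in for the voxelized simulation quantity (e.g., strain) per frame.
    frames = np.random.rand(T, X, Y, Z).astype(np.float32)
    # Flat, frame-major buffer: the kind of layout uploaded to GPU memory in
    # a single transfer and indexed linearly from a shader.
    flat = np.ascontiguousarray(frames).ravel()

    def sample(t, x, y, z):
        """Index the flat buffer exactly as a shader would index a linear buffer."""
        return flat[((t * X + x) * Y + y) * Z + z]

    assert sample(3, 10, 20, 30) == frames[3, 10, 20, 30]
    ```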

    Rendering Clouds in Real Time

    This thesis deals with algorithms capable of rendering clouds in real time. The theoretical section describes the physical principles of clouds and surveys selected methods for modeling and rendering them. The aim of the practical section is to implement one of these real-time algorithms and to develop an application that demonstrates it.

    Real time ray tracing of skeletal implicit surfaces

    Modeling and rendering in real time are usually done via rasterization of polygonal meshes. We present a method for modeling with skeletal implicit surfaces and an algorithm to ray trace these surfaces in real time on the GPU. Our skeletal representation of the surfaces makes it easy to create smooth models that can be seamlessly animated and textured. The ray tracing is performed at interactive frame rates thanks to an acceleration data structure based on a BVH and a kd-tree.
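
    The abstract does not spell out the intersection routine; a common way to ray trace implicit surfaces is sphere tracing over a signed distance field. Below is a minimal CPU-side sketch with two spherical skeleton elements blended by a polynomial smooth union; the BVH/kd-tree acceleration from the paper is omitted, and all constants are illustrative.

    ```python
    import numpy as np

    def sd_point(p, center, radius):
        """Signed distance to a spherical skeleton element."""
        return np.linalg.norm(p - center) - radius

    def smooth_union(d1, d2, k=0.3):
        """Polynomial smooth-min: blends two distance fields into one smooth surface."""
        h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
        return d2 * (1 - h) + d1 * h - k * h * (1 - h)

    def scene(p):
        """Two point-skeleton primitives blended smoothly."""
        return smooth_union(sd_point(p, np.array([-0.3, 0.0, 0.0]), 0.4),
                            sd_point(p, np.array([ 0.3, 0.0, 0.0]), 0.4))

    def sphere_trace(origin, direction, max_steps=128, eps=1e-4, t_max=10.0):
        """March along the ray by the distance-field value until hit or escape."""
        t = 0.0
        for _ in range(max_steps):
            d = scene(origin + t * direction)
            if d < eps:
                return t            # hit at parameter t
            t += d
            if t > t_max:
                break
        return None                 # miss

    print(sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
    ```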

    CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer

    Reconstructing personalized animatable head avatars has significant implications in the field of AR/VR. Existing methods for achieving explicit face control with 3D Morphable Models (3DMMs) typically rely on multi-view images or videos of a single subject, making the reconstruction process complex. Additionally, the traditional rendering pipeline is time-consuming, limiting real-time animation possibilities. In this paper, we introduce CVTHead, a novel approach that generates controllable neural head avatars from a single reference image using point-based neural rendering. CVTHead treats the sparse vertices of the mesh as a point set and employs the proposed Vertex-feature Transformer to learn local feature descriptors for each vertex. This enables the modeling of long-range dependencies among all the vertices. Experimental results on the VoxCeleb dataset demonstrate that CVTHead achieves performance comparable to state-of-the-art graphics-based methods. Moreover, it enables efficient rendering of novel human heads with various expressions, head poses, and camera views. These attributes can be explicitly controlled using the coefficients of 3DMMs, facilitating versatile and realistic animation in real-time scenarios.
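
    A hedged PyTorch sketch of the core idea, not the paper's code: treat mesh vertices as a point set, embed per-vertex features, and run a transformer encoder so every vertex can attend to every other one, yielding local descriptors with long-range context. Layer counts and dimensions are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class VertexFeatureTransformer(nn.Module):
        def __init__(self, d_in=3, d_model=128, n_heads=4, n_layers=3):
            super().__init__()
            self.embed = nn.Linear(d_in, d_model)       # lift per-vertex input to tokens
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, d_model)     # per-vertex feature descriptor

        def forward(self, verts):                       # verts: (B, V, d_in)
            tokens = self.embed(verts)
            tokens = self.encoder(tokens)               # global attention over vertices
            return self.head(tokens)                    # (B, V, d_model) descriptors

    # Two batches of 500 mesh vertices (xyz only, for illustration).
    feats = VertexFeatureTransformer()(torch.randn(2, 500, 3))
    print(feats.shape)
    ```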

    LiveHand: Real-time and Photorealistic Neural Hand Rendering

    The human hand is the main medium through which we interact with our surroundings; hence, its digitization is of utmost importance, with direct applications in VR/AR, gaming, and media production, among other areas. While there are several works on modeling the geometry and articulation of hands, little attention has been dedicated to capturing photo-realistic appearance. In addition, for applications in extended reality and gaming, real-time rendering is critical. In this work, we present the first neural-implicit approach to photo-realistically render hands in real time. This is a challenging problem, as hands are textured and undergo strong articulations with various pose-dependent effects. However, we show that this can be achieved with our carefully designed method, which includes training on a low-resolution rendering of a neural radiance field, together with a 3D-consistent super-resolution module and mesh-guided space canonicalization and sampling. In addition, we show that the novel application of a perceptual loss in image space is critical for achieving photorealism. We show rendering results for several identities and demonstrate that our method captures pose- and view-dependent appearance effects. We also show a live demo of our method in which we photo-realistically render the human hand in real time for the first time in the literature. We ablate all our design choices and show that our design optimizes for both photorealism and rendering speed. Our code will be released to encourage further research in this area.
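
    A hedged sketch of a perceptual loss in image space, using the standard VGG-feature formulation; the paper's exact loss and feature network may differ. The layer cutoff is an illustrative choice, and the ImageNet normalization VGG expects is omitted for brevity.

    ```python
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16, VGG16_Weights

    class PerceptualLoss(nn.Module):
        """L1 distance between frozen VGG16 feature maps of two images."""
        def __init__(self, layer_idx=16):
            super().__init__()
            # Frozen VGG16 features up to a mid-level conv layer (downloads weights).
            self.features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:layer_idx].eval()
            for p in self.features.parameters():
                p.requires_grad_(False)

        def forward(self, rendered, target):            # (B, 3, H, W) in [0, 1]
            return nn.functional.l1_loss(self.features(rendered),
                                         self.features(target))

    loss = PerceptualLoss()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
    print(loss.item())
    ```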

    Towards Predictive Rendering in Virtual Reality

    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
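
    For readers unfamiliar with BTFs, here is a minimal sketch of how a bidirectional texture function is commonly tabulated: one reflectance value per texel, per discretized light direction, per discretized view direction. The resolutions and the nearest-direction lookup are illustrative; even this toy table is tens of megabytes, which is what motivates the compression work described above (not attempted here).

    ```python
    import numpy as np

    H, W   = 32, 32     # spatial resolution of the material sample
    N_DIRS = 81         # sampled hemisphere directions for light and view
    # Tabulated BTF: reflectance(texel, light dir, view dir) -> RGB.
    # ~80 MB at this toy resolution; measured BTFs are far larger.
    btf = np.zeros((H, W, N_DIRS, N_DIRS, 3), dtype=np.float32)

    # Illustrative direction table: unit vectors for the sampled directions.
    dir_table = np.random.randn(N_DIRS, 3)
    dir_table /= np.linalg.norm(dir_table, axis=1, keepdims=True)

    def nearest_dir(direction):
        """Index of the tabulated direction closest in angle to the query."""
        d = direction / np.linalg.norm(direction)
        return int(np.argmax(dir_table @ d))

    def btf_lookup(x, y, light, view):
        """Reflectance of texel (x, y) for the given light/view directions."""
        return btf[y, x, nearest_dir(light), nearest_dir(view)]

    print(btf_lookup(5, 7, np.array([0.0, 0.3, 1.0]), np.array([0.0, 0.0, 1.0])))
    ```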

    Hybrid client-server and P2P network for web-based collaborative 3D design

    Our proposed research project enables distributed 3D visualization and manipulation with collaborative effort through the use of web-based technologies. The project stems from a wide range of collaborative application research fields, including Computer Aided Design (CAD), Building Information Modeling (BIM), and Product Lifecycle Management (PLM), where design tasks are often performed in teams and need a fluent communication system. The system allows distributed remote assembly in 3D scenes with real-time updates for the users. This paper covers this feature using a hybrid networking solution: a client-server architecture (REST) for 3D rendering (WebGL) and data persistence (NoSQL), combined with an automatically built peer-to-peer (P2P) mesh for real-time communication between the clients (WebRTC). The approach is demonstrated through the development of a web-platform prototype focusing on easy manipulation, fine rendering, and lightweight update messages for all participating users. We provide an architecture and a prototype that enable users to design in 3D together in real time with the benefits of web-based online collaboration.
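
    A plain-Python architectural sketch of the hybrid pattern described above (the actual system uses REST over a NoSQL store and WebRTC data channels in the browser; the classes and message flow here are illustrative): lightweight edits fan out immediately over the peer mesh, while the server is a write-through persistence layer.

    ```python
    class Server:
        """Stands in for the REST endpoint backed by a NoSQL store."""
        def __init__(self):
            self.scene = {}                          # persisted object -> transform

        def persist(self, obj_id, transform):
            self.scene[obj_id] = transform

    class Peer:
        """Stands in for one browser client in the P2P mesh."""
        def __init__(self, name, server):
            self.name, self.server = name, server
            self.mesh, self.local_scene = [], {}

        def connect(self, other):                    # one data channel per peer pair
            self.mesh.append(other)
            other.mesh.append(self)

        def move_object(self, obj_id, transform):
            self.local_scene[obj_id] = transform
            for peer in self.mesh:                   # real-time path: P2P broadcast
                peer.on_update(obj_id, transform)
            self.server.persist(obj_id, transform)  # durable path: write-through

        def on_update(self, obj_id, transform):
            self.local_scene[obj_id] = transform

    server = Server()
    alice, bob = Peer("alice", server), Peer("bob", server)
    alice.connect(bob)
    alice.move_object("chair", (1.0, 0.0, 2.0))
    assert bob.local_scene["chair"] == server.scene["chair"]
    ```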

    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results in multiple media forms, the procedure is mostly accomplished offline. In our approach, the illumination information extracted from the physical scene is used to render the virtual objects interactively, which results in more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to be working concurrently in real time. The first is the estimation of the direct illumination (incident light) from the physical scene using computer vision techniques applied to a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto virtual objects) using region capture of a 2D texture from the AR camera view. The third is rendering the virtual objects with proper lighting and shadowing characteristics using a shader language over multiple passes. Finally, we tested our work under multiple lighting conditions to evaluate the accuracy of the results based on the shadows cast by the virtual objects, which should be consistent with the shadows cast by the real objects, all at a reduced performance cost.
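
    A hedged sketch of the first step (direct-light estimation): locate the brightest pixel of an equirectangular 360° frame and convert its coordinates to a world-space light direction. The paper's computer-vision pipeline is more involved; this shows only the core geometric idea, assuming +Y is up, rows span the polar angle, and columns span the azimuth.

    ```python
    import numpy as np

    def dominant_light_direction(equirect):
        """equirect: (H, W) luminance image from the 360-degree live feed."""
        h, w = equirect.shape
        y, x = np.unravel_index(np.argmax(equirect), equirect.shape)
        theta = np.pi * (y + 0.5) / h            # polar angle from +Y (up)
        phi   = 2.0 * np.pi * (x + 0.5) / w      # azimuth around the up axis
        return np.array([np.sin(theta) * np.cos(phi),
                         np.cos(theta),
                         np.sin(theta) * np.sin(phi)])

    frame = np.zeros((180, 360))
    frame[45, 90] = 1.0                          # synthetic bright spot (the "sun")
    print(dominant_light_direction(frame))
    ```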