
    Concrete Computation of Global Illumination Using Structured Sampling

    A new methodology is presented for the computation of global illumination using structured sampling. Analytical/numerical solutions for illumination are developed for simple lighting configurations. These solutions are subsequently used to generate accurate reference images. The structured sampling solution for global illumination is then discussed, comprising sample placement for illumination calculation, reconstruction for light transfer, and finally resampling and filtering of illumination samples for display. A first approximation to this technique is presented using a priori placement of samples, irregular polygon reflectors, grid resampling, and a conical filter for display. The new algorithm is evaluated for image quality and compared to the traditional radiosity-based approach. These first results show that the structured sampling solution yields significant computational savings while maintaining high image quality.
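
    The final stage of the pipeline described here, resampling scattered illumination samples onto a grid with a conical filter, is easy to illustrate. Below is a minimal sketch of that step; it is our own illustration, not the paper's code, and the function name, parameters, and test signal are all assumptions.

    ```python
    import numpy as np

    def cone_filter_resample(sample_xy, sample_values, grid_res, radius):
        """Resample scattered illumination samples onto a regular grid
        using a conical (linear-falloff) filter kernel."""
        grid = np.zeros((grid_res, grid_res))
        weights = np.zeros((grid_res, grid_res))
        xs = (np.arange(grid_res) + 0.5) / grid_res  # cell centers in [0, 1]
        for (sx, sy), v in zip(sample_xy, sample_values):
            # distance from every cell center to this sample
            d = np.hypot(xs[None, :] - sx, xs[:, None] - sy)
            w = np.clip(1.0 - d / radius, 0.0, None)  # cone: 1 at center, 0 at radius
            grid += w * v
            weights += w
        return np.where(weights > 0, grid / np.maximum(weights, 1e-9), 0.0)

    # usage: 200 scattered samples of a smooth "illumination" function
    rng = np.random.default_rng(0)
    pts = rng.random((200, 2))
    vals = np.sin(3 * pts[:, 0]) * np.cos(2 * pts[:, 1])
    img = cone_filter_resample(pts, vals, grid_res=64, radius=0.1)
    ```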

    Improving NeRF Quality by Progressive Camera Placement for Unrestricted Navigation in Complex Environments

    Neural Radiance Fields, or NeRFs, have drastically improved novel view synthesis and 3D reconstruction for rendering. NeRFs achieve impressive results on object-centric reconstructions, but the quality of novel view synthesis with free-viewpoint navigation in complex environments (rooms, houses, etc.) is often problematic. While algorithmic improvements play an important role in the resulting quality of novel view synthesis, in this work we show that, because optimizing a NeRF is inherently a data-driven process, good-quality data plays a fundamental role in the final quality of the reconstruction. As a consequence, it is critical to choose the data samples -- in this case the cameras -- in a way that will eventually allow the optimization to converge to a solution that supports free-viewpoint navigation with good quality. Our main contribution is an algorithm that efficiently proposes new camera placements that improve visual quality with minimal assumptions. Our solution can be used with any NeRF model and outperforms baselines and similar work.
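
    As a hedged illustration of the core idea, choosing new cameras that most improve coverage of under-observed regions, here is a greedy next-best-view sketch. It is not the paper's algorithm; the visibility oracle `sees`, the diminishing-returns score, and all names are our assumptions.

    ```python
    import numpy as np

    def greedy_camera_placement(candidates, scene_points, n_select, sees):
        """Greedily pick cameras that most improve coverage of under-observed
        scene points; sees(cam, pts) returns a boolean visibility mask."""
        view_counts = np.zeros(len(scene_points), dtype=int)
        chosen, remaining = [], list(range(len(candidates)))
        for _ in range(n_select):
            if not remaining:
                break
            best, best_gain = remaining[0], -1.0
            for i in remaining:
                vis = sees(candidates[i], scene_points)
                # points already seen many times contribute little gain
                gain = float(np.sum(vis / (1.0 + view_counts)))
                if gain > best_gain:
                    best, best_gain = i, gain
            chosen.append(candidates[best])
            view_counts += sees(candidates[best], scene_points).astype(int)
            remaining.remove(best)
        return chosen
    ```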

    Tightly-Coupled Multiprocessing for a Global Illumination Algorithm

    A prevailing trend in computer graphics is the demand for increasingly realistic global illumination models and algorithms. Despite the fact that the computational power of uniprocessors is increasing, it is clear that much greater computational power is required to achieve satisfactory throughput. The obvious next step is to employ parallel processing. The advent of affordable, tightly-coupled multiprocessors makes such an approach widely available for the first time. We propose a tightly-coupled parallel decomposition of FIAT, a global illumination algorithm based on space subdivision and power balancing that we have recently developed. This algorithm is somewhat ambitious, and severely strains existing uniprocessor environments. We discuss techniques for reducing memory contention and maximising parallelism. We also present empirical data on the actual performance of our parallel solution. Since the model of parallel computation that we have employed is likely to persist for quite some time, our techniques are applicable to other algorithms based on space subdivision.
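
    The decomposition described here, workers consuming spatial-subdivision cells while keeping shared-memory contention low, can be sketched generically. This is a minimal sketch under our own assumptions (FIAT itself is not shown in the abstract); per-thread accumulation with a single merge per worker stands in for the paper's contention-reduction techniques.

    ```python
    import queue
    import threading

    def parallel_over_cells(cells, process_cell, n_workers=4):
        """Workers pull spatial-subdivision cells from a shared queue;
        results are accumulated per thread to reduce lock contention."""
        work = queue.Queue()
        for c in cells:
            work.put(c)
        results, lock = [], threading.Lock()

        def worker():
            local = []                      # thread-local accumulation
            while True:
                try:
                    cell = work.get_nowait()
                except queue.Empty:
                    break
                local.append(process_cell(cell))
            with lock:                      # one merge per thread, not per cell
                results.extend(local)

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results
    ```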

    Flexible Point-Based Rendering on Mobile Devices

    Point-based rendering is a compact and efficient means of displaying complex geometry. For mobile devices, which typically have limited CPU or floating-point speed, limited memory, no graphics hardware, and a small display, a hierarchical packed point-based representation of objects is particularly well adapted. We introduce -grids, which are a generalization of previous octree-based representations, and analyse their memory and rendering efficiency. By storing intermediate node attributes, our structure allows flexible rendering, permitting efficient local image refinement, required for example when zooming into very complex scenes. We also introduce a novel and efficient one-pass shadow-mapping algorithm using this data structure. We show an implementation of our method on a PDA, which can render objects sampled by 1.3 million points at 2.1 frames per second; the model was originally made up of 4.7 million polygons.
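
    The refinement criterion in such hierarchies, descending only while a node's projected size exceeds roughly a pixel, might look like the following sketch. The `Node` layout and the threshold rule are our assumptions for illustration, not the paper's packed representation.

    ```python
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Node:
        center: np.ndarray
        radius: float
        attributes: dict
        children: list = field(default_factory=list)

    def render_points(node, eye, focal, pixel_threshold, emit):
        """Descend the hierarchy only while a node's projected size
        (in pixels) exceeds the threshold; otherwise draw the averaged
        point stored at this intermediate level."""
        dist = np.linalg.norm(node.center - eye)
        projected = focal * node.radius / max(dist, 1e-9)
        if projected < pixel_threshold or not node.children:
            emit(node.center, node.attributes)   # one splat suffices here
        else:
            for child in node.children:
                render_points(child, eye, focal, pixel_threshold, emit)
    ```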

    FreeStyleGAN: Free-view Editable Portrait Rendering with the Camera Manifold

    Current Generative Adversarial Networks (GANs) produce photorealistic renderings of portrait images. Embedding real images into the latent space of such models enables high-level image editing. While recent methods provide considerable semantic control over the (re-)generated images, they can only generate a limited set of viewpoints and cannot explicitly control the camera. Such 3D camera control is required for 3D virtual and mixed reality applications. In our solution, we use a few images of a face to perform 3D reconstruction, and we introduce the notion of the GAN camera manifold, the key element allowing us to precisely define the range of images that the GAN can reproduce in a stable manner. We train a small face-specific neural implicit representation network to map a captured face to this manifold and complement it with a warping scheme to obtain free-viewpoint novel-view synthesis. We show how our approach, due to its precise camera control, enables the integration of a pre-trained StyleGAN into standard 3D rendering pipelines, allowing e.g., stereo rendering or consistent insertion of faces in synthetic 3D environments. Our solution proposes the first truly free-viewpoint rendering of realistic faces at interactive rates, using only a small number of casual photos as input, while simultaneously allowing semantic editing capabilities, such as facial expression or lighting changes.
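
    To make the camera-manifold idea concrete, here is a toy stand-in, entirely our own construction rather than the paper's manifold: represent admissible poses as bounded look-at coordinates around the face, and project any free camera onto that set before rendering.

    ```python
    import numpy as np

    def project_to_camera_manifold(cam_pos, target, az_range, el_range, dist_range):
        """Clamp a free camera to a bounded look-at set (azimuth, elevation,
        distance around the face): only poses inside the set are rendered,
        anything outside is projected to its nearest admissible pose."""
        v = cam_pos - target
        dist = np.linalg.norm(v)
        az = np.arctan2(v[0], v[2])
        el = np.arcsin(np.clip(v[1] / max(dist, 1e-9), -1.0, 1.0))
        az = np.clip(az, *az_range)          # clamp each look-at coordinate
        el = np.clip(el, *el_range)
        dist = np.clip(dist, *dist_range)
        return target + dist * np.array([np.cos(el) * np.sin(az),
                                         np.sin(el),
                                         np.cos(el) * np.cos(az)])
    ```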

    3D Gaussian Splatting for Real-Time Radiance Field Rendering

    Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and, importantly, allow high-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows real-time rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets. (Project page: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting)
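
    The first element, rendering 3D Gaussians by projecting them into screen space, rests on the EWA splatting approximation Sigma' = J W Sigma W^T J^T. The sketch below is our simplification of that published formulation; the intrinsics `fx`, `fy` and the camera convention are assumptions.

    ```python
    import numpy as np

    def project_gaussian(mean3d, cov3d, W, fx, fy):
        """Project a 3D Gaussian (mean, 3x3 covariance) into screen space
        with the EWA approximation Sigma' = J W Sigma W^T J^T, as used by
        Gaussian-splatting renderers. W is the 3x3 world-to-camera rotation."""
        t = W @ mean3d                       # point in camera space
        x, y, z = t
        # Jacobian of the perspective projection at t (top two rows)
        J = np.array([[fx / z, 0.0, -fx * x / z**2],
                      [0.0, fy / z, -fy * y / z**2]])
        cov2d = J @ W @ cov3d @ W.T @ J.T    # 2x2 screen-space covariance
        mean2d = np.array([fx * x / z, fy * y / z])
        return mean2d, cov2d
    ```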

    Free-viewpoint Indoor Neural Relighting from Multi-view Stereo

    We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation. Our method allows illumination to be changed synthetically, while coherently rendering cast shadows and complex glossy materials. We start with multiple images of the scene and a 3D mesh obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is well explained as the sum of a view-independent diffuse component and a view-dependent glossy term concentrated around the mirror reflection direction. We design a convolutional network around input feature maps that facilitate learning of an implicit representation of scene materials and illumination, enabling both relighting and free-viewpoint navigation. We generate these input maps by exploiting the best elements of both image-based and physically-based rendering. We sample the input views to estimate diffuse scene irradiance, and compute the new illumination caused by user-specified light sources using path tracing. To facilitate the network's understanding of materials and synthesize plausible glossy reflections, we reproject the views and compute mirror images. We train the network on a synthetic dataset where each scene is also reconstructed with MVS. We show results of our algorithm relighting real indoor scenes and performing free-viewpoint navigation with complex and realistic glossy reflections, which so far remained out of reach for view-synthesis techniques.
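
    The mirror images mentioned above hinge on the standard mirror reflection direction r = d - 2(d·n)n. A small sketch, with all names ours:

    ```python
    import numpy as np

    def mirror_direction(view_dir, normal):
        """Reflect the viewing direction about the surface normal; the
        glossy term is concentrated around this mirror direction."""
        d = view_dir / np.linalg.norm(view_dir)
        n = normal / np.linalg.norm(normal)
        return d - 2.0 * np.dot(d, n) * n

    # usage: a grazing view onto an upward-facing surface
    r = mirror_direction(np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0]))
    # r is approximately [0.707, 0.707, 0.0]
    ```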

    Interactive Sampling and Rendering for Complex and Procedural Geometry

    We present a new sampling method for procedural and complex geometries, which allows interactive point-based modeling and rendering of such scenes. For a variety of scenes, object-space point sets can be generated rapidly, resulting in a sufficiently dense sampling of the final image. We present an integrated approach that exploits the simplicity of the point primitive. For procedural objects, a hierarchical sampling scheme is presented that adapts sample densities locally according to the projected size in the image. Dynamic procedural objects and interactive user manipulation thus become possible. The same scheme is also applied to on-the-fly generation and rendering of terrains, and enables the use of an efficient occlusion-culling algorithm. Furthermore, by using points the system enables interactive rendering and simple modification of complex objects (e.g., trees). For display, hardware-accelerated 3D point rendering is used, but our sampling method can be used by any other point-rendering approach.
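
    The local density adaptation, fewer samples for patches that project small in the image, reduces to a projected-size estimate. A hedged sketch follows; it is our own pinhole approximation, not the paper's hierarchical scheme.

    ```python
    def samples_for_patch(patch_area, distance, focal, samples_per_pixel=1.0):
        """Pick a sample count proportional to the patch's projected area in
        pixels: under perspective, area scales with (focal / distance)^2."""
        projected_pixels = patch_area * (focal / distance) ** 2
        return max(1, int(round(projected_pixels * samples_per_pixel)))

    # usage: a 1 m^2 patch, 10 m away, focal length 800 px -> 6400 samples
    n = samples_for_patch(1.0, 10.0, 800.0)
    ```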

    Can VR be Useful and Usable in Real-World Contexts? Observations from the Application and Evaluation of VR in Realistic Usage Conditions

    This paper presents our observations from the use of high-end projection-based VR in different real-world settings, with practitioners but also novice users who do not normally use VR in their everyday practice. We developed two applications for two different content domains and present case studies of actual experiences with professionals and students who used these as part of their work or during their museum visit. Emphasis is placed on usability issues and evaluation of effectiveness, as well as on our thoughts on the efficacy of the long-term deployment of VR under realistic usage conditions, especially when the technology becomes mundane and the content takes precedence over the display medium. We present an overall assessment of our experience on issues relating to usability and user satisfaction with VR in real-world contexts.