    Three architectures for volume rendering

    Volume rendering is a key technique in scientific visualization that lends itself to significant exploitable parallelism. The high computational demands of real-time volume rendering and continued technological advances in the area of VLSI give impetus to the development of special-purpose volume rendering architectures. This paper presents and characterizes three recently developed volume rendering engines based on the ray-casting method. A taxonomy of the algorithmic variants of ray-casting and details of each ray-casting architecture are discussed. The paper then compares the machine features and provides an outlook on future developments in the area of volume rendering hardware.
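
    The ray-casting method these architectures accelerate is the standard march-and-composite loop over a voxel grid. Below is a minimal sketch of that inner loop; the transfer function (density mapped directly to opacity and grey-level emission), the step size, and the nearest-neighbor sampling are illustrative assumptions, not details from the paper.

        import numpy as np

        def ray_cast(volume, origin, direction, step=0.5, n_steps=256):
            """Front-to-back alpha compositing along one ray through a voxel grid."""
            color, alpha = 0.0, 0.0
            pos = np.asarray(origin, dtype=float)
            d = np.asarray(direction, dtype=float)
            d /= np.linalg.norm(d)
            for _ in range(n_steps):
                i, j, k = np.floor(pos).astype(int)
                if (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                        and 0 <= k < volume.shape[2]):
                    density = volume[i, j, k]
                    a = density * step          # illustrative transfer function
                    color += (1.0 - alpha) * a * density
                    alpha += (1.0 - alpha) * a
                    if alpha > 0.99:            # early ray termination
                        break
                pos += d * step
            return color

        # Toy usage: a solid ball in a 64^3 volume, one ray through its center.
        vol = np.zeros((64, 64, 64))
        x, y, z = np.mgrid[:64, :64, :64]
        vol[(x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2] = 0.8
        print(ray_cast(vol, origin=(32.0, 32.0, 0.0), direction=(0.0, 0.0, 1.0)))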

    HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting

    3D head animation has seen major quality and runtime improvements over the last few years, particularly empowered by the advances in differentiable rendering and neural radiance fields. Real-time rendering is a highly desirable goal for real-world applications. We propose HeadGaS, the first model to use 3D Gaussian Splats (3DGS) for 3D head reconstruction and animation. In this paper we introduce a hybrid model that extends the explicit representation from 3DGS with a base of learnable latent features, which can be linearly blended with low-dimensional parameters from parametric head models to obtain expression-dependent final color and opacity values. We demonstrate that HeadGaS delivers state-of-the-art results at real-time inference frame rates, surpassing baselines by up to ~2 dB while accelerating rendering by more than 10×.
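
    The abstract's key mechanism is a linear blend of learnable per-Gaussian latent features, weighted by the low-dimensional expression parameters of a parametric head model and decoded into color and opacity. A minimal sketch of that idea follows; the dimensions, the softmax normalization, and the two-layer decoder are hypothetical stand-ins, not the paper's actual architecture.

        import torch

        N_GAUSSIANS, N_BASES, FEAT_DIM = 10000, 16, 32    # hypothetical sizes

        # A learnable basis of latent features per Gaussian, plus a small MLP
        # decoding a blended feature into RGB color and opacity.
        latent_basis = torch.randn(N_GAUSSIANS, N_BASES, FEAT_DIM, requires_grad=True)
        decoder = torch.nn.Sequential(
            torch.nn.Linear(FEAT_DIM, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 4),
        )

        def expression_dependent_appearance(expr):
            """expr: (N_BASES,) low-dimensional expression parameters of a
            parametric head model, used here as linear blend weights."""
            weights = torch.softmax(expr, dim=0)          # illustrative normalization
            feats = torch.einsum('nbf,b->nf', latent_basis, weights)  # (N, FEAT_DIM)
            out = decoder(feats)
            rgb = torch.sigmoid(out[:, :3])               # per-Gaussian color
            opacity = torch.sigmoid(out[:, 3:])           # per-Gaussian opacity
            return rgb, opacity

        rgb, opacity = expression_dependent_appearance(torch.randn(16))
        print(rgb.shape, opacity.shape)                   # (10000, 3), (10000, 1)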

    A Scalable Tile Map Service for Distributing Dynamic Choropleth Maps

    In this paper we propose a solution to several key limitations of current web-based mapping systems: slow rendering speeds, the restriction of online map viewing to a small number of areal units, and support for only a limited number of users. Our approach is implemented as a Scalable Tile Map Service that distributes dynamic choropleth maps in real-time through a new caching methodology. This new Map Service lays the foundation for advances in web-based applications reliant on dynamic map rendering, such as emergency management systems and interactive exploratory spatial data analysis. We present the results of an empirical illustration in which this new methodology is used to facilitate collaborative decision making by visualizing spatial outcomes of simulation results on the fly.
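
    The abstract does not detail the caching methodology, but a common way to cache dynamic tiles is to fold a version of the underlying attribute data into the cache key, so that new simulation results invalidate stale tiles implicitly. The sketch below illustrates that generic pattern; it is an assumption for illustration, not the paper's method.

        from functools import lru_cache

        DATA_VERSION = 0     # bumped whenever new simulation results arrive

        @lru_cache(maxsize=4096)
        def render_tile(z, x, y, version):
            # Stand-in for the expensive choropleth rendering of one map tile.
            return f"tile({z},{x},{y}) rendered for data v{version}"

        def get_tile(z, x, y):
            # Folding the data version into the key invalidates stale tiles implicitly.
            return render_tile(z, x, y, DATA_VERSION)

        get_tile(3, 2, 5)    # rendered once...
        get_tile(3, 2, 5)    # ...then served from the cache
        DATA_VERSION += 1    # new simulation output: old cache entries no longer match
        get_tile(3, 2, 5)    # re-rendered against the new data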

    Evaluation of 3D Voxel Rendering Algorithms for Real-Time Interaction on a SIMD Graphics Processor

    The display of three-dimensional medical data is becoming more common, but current hardware and image rendering algorithms do not generally allow real-time interaction with the image by the user. Real-time interactions, such as image rotation, utilize the motion processing capabilities of the human visual system, allowing a better understanding of the structures being imaged. Recent advances in general-purpose graphics display equipment could make real-time interaction feasible in a clinical setting. We have evaluated the capabilities of one type of advanced display architecture, the PIXAR Imaging Computer, for real-time interaction while displaying three-dimensional medical data as two-dimensional projections. It was discovered during this investigation that the most suitable algorithms for implementation were based on the rendering of voxel rather than surface data. Two voxel-based techniques, back-to-front and front-to-back rendering, produced acceptable but not real-time performance. The quality of the images produced was not high, but it allowed the determination of an image orientation which could then be used by a later high-quality rendering technique. Two conclusions were reached: first, the current performance of display hardware may allow acceptable interactive performance and produce high-quality images if a scheme of adaptive refinement is used, wherein successively higher-quality images are generated for the user. Second, the correct algorithm to use for fast rendering of volume data is highly dependent upon the architecture of the display processor, and in particular upon the ability of the processor to randomly access image data. If the processor is constrained to sequential or near-sequential access to the voxel data, the choice of algorithms and the utilization of parallel processing is severely limited.
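
    The two techniques the paper compares differ only in traversal order: back-to-front applies the classic "over" operator from the farthest sample toward the eye, while front-to-back accumulates opacity outward from the eye, which is what permits early ray termination. A minimal sketch showing that the two orders produce the same result:

        def back_to_front(samples):
            """samples: (color, alpha) pairs ordered farthest to nearest.
            Classic 'over' compositing: nearer samples cover what lies behind."""
            color = 0.0
            for c, a in samples:
                color = a * c + (1.0 - a) * color
            return color

        def front_to_back(samples):
            """samples: (color, alpha) pairs ordered nearest to farthest.
            Accumulates opacity outward, enabling early ray termination."""
            color, alpha = 0.0, 0.0
            for c, a in samples:
                color += (1.0 - alpha) * a * c
                alpha += (1.0 - alpha) * a
                if alpha > 0.99:
                    break
            return color

        samples = [(0.2, 0.3), (0.8, 0.5), (0.5, 0.4)]   # far -> near
        assert abs(back_to_front(samples) - front_to_back(samples[::-1])) < 1e-9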

    Envisioning a Next Generation Extended Reality Conferencing System with Efficient Photorealistic Human Rendering

    Meeting online is becoming the new normal. Creating an immersive experience for online meetings is a necessary step towards more diverse and seamless environments. Efficient photorealistic rendering of human 3D dynamics is the core of immersive meetings. Current popular applications achieve real-time conferencing but fall short in delivering photorealistic human dynamics, either due to limited 2D space or the use of avatars that lack realistic interactions between participants. Recent advances in neural rendering, such as the Neural Radiance Field (NeRF), offer the potential for greater realism in metaverse meetings. However, the slow rendering speed of NeRF poses challenges for real-time conferencing. We envision a pipeline for a future extended reality metaverse conferencing system that leverages monocular video acquisition and free-viewpoint synthesis to enhance data and hardware efficiency. Towards an immersive conferencing experience, we explore an accelerated NeRF-based free-viewpoint synthesis algorithm for rendering photorealistic human dynamics more efficiently. We show that our algorithm achieves comparable rendering quality while performing training and inference 44.5% and 213% faster than state-of-the-art methods, respectively. Our exploration provides a design basis for constructing metaverse conferencing systems that can handle complex application scenarios, including dynamic scene relighting with customized themes and multi-user conferencing that harmonizes real-world people into an extended world. Comment: Accepted to CVPR 2023 ECV Workshop.

    CWIPC-SXR: Point cloud dynamic human dataset for Social XR

    Real-time, immersive telecommunication systems are quickly becoming a reality, thanks to the advances in acquisition, transmission, and rendering technologies. Point clouds in particular serve as a promising representation in these types of systems, offering photorealistic rendering capabilities with low complexity. Further development of transmission, coding, and quality evaluation algorithms, though, is currently hindered by the lack of publicly available datasets that represent realistic scenarios of remote communication between people in real-time.

    Representing Volumetric Videos as Dynamic MLP Maps

    This paper introduces a novel representation of volumetric videos for real-time view synthesis of dynamic scenes. Recent advances in neural scene representations demonstrate their remarkable capability to model and render complex static scenes, but extending them to represent dynamic scenes is not straightforward due to their slow rendering speed or high storage cost. To solve this problem, our key idea is to represent the radiance field of each frame as a set of shallow MLP networks whose parameters are stored in 2D grids, called MLP maps, and dynamically predicted by a 2D CNN decoder shared by all frames. Representing 3D scenes with shallow MLPs significantly improves the rendering speed, while dynamically predicting MLP parameters with a shared 2D CNN instead of explicitly storing them leads to low storage cost. Experiments show that the proposed approach achieves state-of-the-art rendering quality on the NHR and ZJU-MoCap datasets, while being efficient for real-time rendering with a speed of 41.7 fps for 512×512 images on an RTX 3090 GPU. The code is available at https://zju3dv.github.io/mlp_maps/. Comment: Accepted to CVPR 2023. The first two authors contributed equally to this paper. Project page: https://zju3dv.github.io/mlp_maps
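
    The core of the representation is a 2D grid whose cells each hold the parameters of a shallow MLP, with the whole grid predicted per frame by a shared CNN decoder. The sketch below illustrates that decode-then-query pattern with tiny made-up dimensions; the actual network sizes, the ray marching, and the training loop are omitted.

        import torch
        import torch.nn as nn

        GRID, IN_DIM, HID, OUT_DIM = 16, 3, 8, 4                  # tiny illustrative sizes
        N_PARAMS = IN_DIM * HID + HID + HID * OUT_DIM + OUT_DIM   # one shallow MLP

        # Hypothetical decoder: a per-frame latent image is mapped by a shared
        # CNN to a grid whose cells each store the parameters of a shallow MLP.
        cnn = nn.Sequential(
            nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, N_PARAMS, 1),
        )

        def query(mlp_map, points_uv, x):
            """Evaluate each point with the shallow MLP stored at its grid cell.
            mlp_map: (N_PARAMS, GRID, GRID); points_uv: (P, 2) in [0, 1); x: (P, IN_DIM)."""
            ij = (points_uv * GRID).long().clamp(0, GRID - 1)
            p = mlp_map[:, ij[:, 0], ij[:, 1]].T                  # (P, N_PARAMS)
            o = 0
            w1 = p[:, o:o + IN_DIM * HID].reshape(-1, HID, IN_DIM); o += IN_DIM * HID
            b1 = p[:, o:o + HID]; o += HID
            w2 = p[:, o:o + HID * OUT_DIM].reshape(-1, OUT_DIM, HID); o += HID * OUT_DIM
            b2 = p[:, o:]
            h = torch.relu(torch.einsum('phi,pi->ph', w1, x) + b1)
            return torch.einsum('poh,ph->po', w2, h) + b2         # e.g. RGB + density

        frame_latent = torch.randn(1, 8, GRID, GRID)
        mlp_map = cnn(frame_latent)[0]                            # (N_PARAMS, GRID, GRID)
        print(query(mlp_map, torch.rand(5, 2), torch.rand(5, IN_DIM)).shape)  # (5, 4)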

    LightSpeed: Light and Fast Neural Light Fields on Mobile Devices

    Real-time novel-view image synthesis on mobile devices is prohibitive due to the limited computational power and storage. Using volumetric rendering methods, such as NeRF and its derivatives, on mobile devices is not suitable due to the high computational cost of volumetric rendering. On the other hand, recent advances in neural light field representations have shown promising real-time view synthesis results on mobile devices. Neural light field methods learn a direct mapping from a ray representation to the pixel color. The current choice of ray representation is either stratified ray sampling or Plücker coordinates, overlooking the classic light slab (two-plane) representation, the preferred representation for interpolating between light field views. In this work, we find that the light slab is an efficient representation for learning a neural light field. More importantly, it is a lower-dimensional ray representation, enabling us to learn the 4D ray space using feature grids, which are significantly faster to train and render. Although mostly designed for frontal views, we show that the light-slab representation can be further extended to non-frontal scenes using a divide-and-conquer strategy. Our method offers superior rendering quality compared to previous light field methods and achieves a significantly improved trade-off between rendering quality and speed. Comment: Project page: http://lightspeed-r2l.github.io/. Added camera-ready version.
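
    The light slab parameterization the paper revisits encodes a ray by its intersections with two parallel planes, reducing a 6D origin/direction pair to a 4D coordinate suitable for indexing a feature grid. A minimal sketch, with the plane placement as an arbitrary assumption:

        import numpy as np

        def ray_to_light_slab(origin, direction, z_uv=0.0, z_st=1.0):
            """Two-plane parameterization: intersect the ray with the planes
            z = z_uv and z = z_st, yielding a 4D (u, v, s, t) coordinate."""
            o = np.asarray(origin, dtype=float)
            d = np.asarray(direction, dtype=float)
            t_uv = (z_uv - o[2]) / d[2]   # assumes the ray is not parallel to the planes
            t_st = (z_st - o[2]) / d[2]
            u, v = (o + t_uv * d)[:2]
            s, t = (o + t_st * d)[:2]
            return np.array([u, v, s, t])

        # The 4D coordinate is what a feature grid would then be indexed with.
        print(ray_to_light_slab(origin=(0.2, -0.1, -1.0), direction=(0.1, 0.0, 1.0)))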

    Efficient Methods for Computational Light Transport

    In this thesis we present contributions to different challenges of computational light transport. Light transport algorithms are present in many modern applications, from image generation for visual effects to real-time object detection. Light is a rich source of information that allows us to understand and represent our surroundings, but obtaining and processing this information presents many challenges due to its complex interactions with matter. This thesis provides advances in this subject from two different perspectives: steady-state algorithms, where the speed of light is assumed infinite, and transient-state algorithms, which deal with light as it travels not only through space but also time. Our steady-state contributions address problems in both offline and real-time rendering. We target variance reduction in offline rendering by proposing a new efficient method for participating media rendering. In real-time rendering, we target the energy constraints of mobile devices by proposing a power-efficient rendering framework for real-time graphics applications. In transient state, we first formalize light transport simulation under this domain, and then present new efficient sampling methods and algorithms for transient rendering. We finally demonstrate the potential of simulated data to correct multipath interference in Time-of-Flight cameras, one of the pathological problems in transient imaging.
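
    As a small illustration of what the transient domain adds to light transport: each light path carries not only a radiance contribution but also an arrival time, its total optical length divided by the speed of light. A minimal sketch of that bookkeeping (the example path is made up):

        C = 299_792_458.0   # speed of light in m/s

        def path_arrival_time(vertices):
            """Time of flight along a light path given as a list of 3D points."""
            total = 0.0
            for a, b in zip(vertices, vertices[1:]):
                total += sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
            return total / C

        # Source -> diffuse wall -> camera: two 1 m segments arrive after ~6.67 ns.
        print(path_arrival_time([(0, 0, 0), (1, 0, 0), (1, 1, 0)]))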

    DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination

    In this paper we extract surface reflectance and natural environmental illumination from a reflectance map, i.e. from a single 2D image of a sphere of one material under one illumination. This is a notoriously difficult problem, yet key to various re-rendering applications. With the recent advances in estimating reflectance maps from 2D images, their further decomposition has become increasingly relevant. To this end, we propose a Convolutional Neural Network (CNN) architecture to reconstruct both material parameters (i.e. Phong) and illumination (i.e. high-resolution spherical illumination maps), trained solely on synthetic data. We demonstrate decomposition of both synthetic and real photographs of reflectance maps, in High Dynamic Range (HDR) and, for the first time, in Low Dynamic Range (LDR) as well. Results are compared to previous approaches quantitatively as well as qualitatively in terms of re-renderings where illumination, material, view, or shape are changed. Comment: Stamatios Georgoulis and Konstantinos Rematas contributed equally to this work.
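
    Since the decomposition targets a Phong material together with a spherical illumination map, a re-rendering consumes exactly those outputs. Below is a minimal sketch of evaluating the Phong model for one reflectance-map sample under one light direction; the parameter values are placeholders, and a full re-rendering would integrate over the recovered illumination map rather than use a single light.

        import numpy as np

        def phong_reflectance(normal, view, light_dir, light_rgb, kd, ks, shininess):
            """Shade one reflectance-map sample with the Phong model."""
            n = normal / np.linalg.norm(normal)
            l = light_dir / np.linalg.norm(light_dir)
            v = view / np.linalg.norm(view)
            r = 2.0 * np.dot(n, l) * n - l                # mirror reflection of the light
            diffuse = kd * max(np.dot(n, l), 0.0)
            specular = ks * max(np.dot(r, v), 0.0) ** shininess
            return light_rgb * (diffuse + specular)

        # One normal of the sphere, one light direction from the illumination map.
        print(phong_reflectance(normal=np.array([0.0, 0.0, 1.0]),
                                view=np.array([0.0, 0.0, 1.0]),
                                light_dir=np.array([0.3, 0.3, 1.0]),
                                light_rgb=np.array([1.0, 0.9, 0.8]),
                                kd=0.6, ks=0.4, shininess=32.0))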