
    Object-based Illumination Estimation with Rendering-aware Neural Networks

    We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas. Conventional inverse rendering is too computationally demanding for real-time applications, and the performance of purely learning-based techniques may be limited by the meager input data available from individual objects. To address these issues, we propose an approach that takes advantage of physical principles from inverse rendering to constrain the solution, while also utilizing neural networks to expedite the more computationally expensive portions of its processing, to increase robustness to noisy input data, and to improve temporal and spatial stability. This results in a rendering-aware system that estimates the local illumination distribution at an object with high accuracy and in real time. With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene, leading to improved realism. Comment: ECCV 2020
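
    Low-order spherical harmonics are a common lightweight encoding for a local illumination distribution of this kind. The following minimal Python sketch is an illustration under that assumption, not the authors' pipeline: it shades a Lambertian surface normal from nine estimated SH lighting coefficients via the standard irradiance formula.

        # Constants of the 9-term SH irradiance formula (Ramamoorthi & Hanrahan).
        C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

        def sh_irradiance(L, n):
            """Diffuse irradiance at unit normal n = (x, y, z) from 9 estimated
            SH lighting coefficients L, ordered
            L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22."""
            x, y, z = n
            return (C4 * L[0]
                    + 2.0 * C2 * (L[3] * x + L[1] * y + L[2] * z)
                    + C3 * L[6] * z * z - C5 * L[6]
                    + C1 * L[8] * (x * x - y * y)
                    + 2.0 * C1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z))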

    Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality

    Real-time occlusion handling is a major problem in outdoor mixed reality systems because the complexity of the scene makes it computationally costly. Using segmentation alone, it is difficult to accurately render a virtual object occluded by complex objects such as trees and bushes. In this paper, we propose a novel occlusion handling method for real-time, outdoor, omni-directional mixed reality systems that uses only the information from a monocular image sequence. We first present a semantic segmentation scheme for predicting the amount of visibility for different types of objects in the scene. We simultaneously compute a foreground probability map using depth estimation derived from optical flow. Finally, we combine the segmentation result and the probability map to render the computer-generated object and the real scene using a visibility-based rendering method. Our results show great improvement in handling occlusions compared to existing blending-based methods.
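
    As a sketch of the final compositing step (illustrative; the names and the exact blending rule are assumptions, not the paper's code), a per-pixel foreground probability can gate the virtual layer as follows:

        import numpy as np

        def composite(real, virtual, virtual_mask, fg_prob):
            """Visibility-based blending of a rendered virtual object into a
            real frame.
            real         : (H, W, 3) camera image
            virtual      : (H, W, 3) rendered virtual object
            virtual_mask : (H, W) coverage of the virtual object, in [0, 1]
            fg_prob      : (H, W) probability that the real pixel is in front
            """
            # The virtual object shows through only where it is drawn and
            # where the real scene is unlikely to occlude it.
            vis = (virtual_mask * (1.0 - fg_prob))[..., None]
            return vis * virtual + (1.0 - vis) * real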

    On-the-Fly Power-Aware Rendering

    Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework, able to dynamically select the rendering configuration with the best quality within a given power budget. Unlike the current state of the art, our method requires neither precomputation over the whole camera-view space nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: a novel power prediction model and a runtime quality-error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime while remaining transparent to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer and a mobile device. In both cases, we produce results close to the maximum quality while achieving significant power savings.
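
    The selection step can be sketched as a constrained search (hypothetical names; the power and error predictors stand in for the framework's two components described above):

        def select_config(configs, predict_power, predict_error, budget):
            """Among configurations predicted to stay within the power budget,
            return the one with the lowest predicted quality error; if none
            fits, fall back to the cheapest configuration."""
            feasible = [c for c in configs if predict_power(c) <= budget]
            if not feasible:
                return min(configs, key=predict_power)
            return min(feasible, key=predict_error)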

    LIME: Live Intrinsic Material Estimation

    We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that requires only a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder-decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading, and mirror images, which allow the network to learn an effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel, highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time. The estimation of material parameters at real-time frame rates enables exciting mixed reality applications, such as seamless illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and demonstrate its effectiveness through quantitative and qualitative evaluation. Comment: 17 pages, Spotlight paper in CVPR 2018
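
    A rendering loss of this kind re-shades the image from the estimated parameters and penalizes the photometric difference. The sketch below is a simplification: a single directional light stands in for the estimated illumination, a Blinn-Phong model stands in for the paper's image formation, and all names are assumptions.

        import numpy as np

        def rerender(normals, view, light, kd, ks, shininess):
            """Blinn-Phong re-shading from estimated material parameters.
            normals     : (H, W, 3) unit surface normals
            view, light : (3,) unit vectors toward the camera and the light
            kd, ks      : (3,) diffuse / specular albedo; shininess : float
            """
            h = light + view
            h = h / np.linalg.norm(h)                 # half vector
            ndotl = np.clip(normals @ light, 0.0, None)
            ndoth = np.clip(normals @ h, 0.0, None)
            return kd * ndotl[..., None] + ks * (ndoth ** shininess)[..., None]

        def rendering_loss(image, normals, view, light, kd, ks, shininess):
            # L1 photometric loss between the input and the re-rendered image.
            return np.abs(image - rerender(normals, view, light,
                                           kd, ks, shininess)).mean()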

    VIRIM, A Real-Time Volume Rendering System for Medicine

    VIRIM, a real-time direct volume rendering system, is presented. The system is freely programmable and supports models such as α-compositing, front-to-back (back-to-front) techniques, and the slab method. The hardware system is divided into two units, a geometry unit and a raycast unit. The geometry unit performs resampling and gradient estimation and is mapped directly into hardware; it supports different resampling filters in order to reduce resampling artifacts. The raycast unit consists of 16 digital signal processors that perform the programmable ray-casting. The software of VIRIM is layered and provides manipulation tools for the data during real-time visualization, such as arbitrary gray-value mapping and setting the region of interest. The system is under test and will be available as a prototype in 1995.
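
    The α-compositing the abstract refers to is the classic front-to-back accumulation along a ray. A minimal Python sketch of that rule follows (sample generation and the transfer function are assumed to happen upstream, as in VIRIM's geometry unit):

        def raycast_front_to_back(samples, early_stop=0.99):
            """Front-to-back alpha compositing along one ray.
            samples : (color, alpha) pairs ordered front to back."""
            acc_c, acc_a = 0.0, 0.0
            for c, a in samples:
                acc_c += (1.0 - acc_a) * a * c   # contribution not yet occluded
                acc_a += (1.0 - acc_a) * a
                if acc_a >= early_stop:          # early ray termination
                    break
            return acc_c, acc_a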

    Real-time High Resolution Fusion of Depth Maps on GPU

    A system for live, high-quality surface reconstruction using a single moving depth camera on commodity hardware is presented. High accuracy and real-time frame rates are achieved by utilizing graphics hardware computing capabilities via OpenCL and by using a sparse data structure for volumetric surface representation. The depth sensor pose is estimated by combining a serial texture registration algorithm with an iterative closest point (ICP) algorithm that aligns the obtained depth map to the estimated scene model. The aligned surface is then fused into the scene, with a Kalman filter used to improve fusion quality. A truncated signed distance function (TSDF), stored as a block-based sparse buffer, is used to represent the surface. The use of a sparse data structure greatly increases the accuracy of scanned surfaces and the maximum scanning area. Traditional GPU implementations of volumetric rendering and fusion algorithms were modified to exploit sparsity and achieve the desired performance. The incorporation of texture registration for sensor pose estimation and of a Kalman filter for measurement integration improved the accuracy and robustness of the scanning process.
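
    The per-voxel TSDF update behind such fusion is typically the weighted running average of Curless and Levoy; the paper's Kalman-filtered integration refines this idea with per-measurement uncertainty. A minimal sketch of the standard rule (not the paper's exact filter):

        def fuse_tsdf(tsdf, weight, new_d, new_w=1.0, max_weight=128.0):
            """Weighted running-average update for one voxel.
            tsdf, weight : current voxel state
            new_d, new_w : truncated signed distance and weight of the new
                           depth measurement"""
            w = weight + new_w
            tsdf = (weight * tsdf + new_w * new_d) / w
            return tsdf, min(w, max_weight)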