    Rendering HDR images

    Color imaging systems are continuously improving and can now capture high dynamic range scenes. Unfortunately, most commercially available color display devices, such as CRTs and LCDs, are limited in their dynamic range. High dynamic range images must therefore be tone-mapped, or rendered, before they can be displayed on a lower-dynamic-range device. This paper describes the use of an image appearance model, iCAM, to render high dynamic range images for display. Image appearance models offer greater flexibility than dedicated tone-scaling algorithms because they are designed to predict how images perceptually appear, rather than for the singular purpose of rendering. In this paper we discuss the use of an image appearance framework and describe specific implementation details for using that framework to render high dynamic range images.
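The tone-mapping step this abstract describes can be illustrated with a minimal global operator. This is a hedged stand-in, not the paper's method: iCAM additionally performs chromatic adaptation and spatial (local) processing, while the sketch below uses the simple log-average-keyed compressive curve popularized by Reinhard-style global tone mapping.

```python
import numpy as np

def tone_map_global(hdr, key=0.18, eps=1e-6):
    """Map HDR luminance to [0, 1] with a global compressive tone curve.

    Minimal stand-in for an image appearance model such as iCAM, which
    also performs chromatic adaptation and spatially local processing.
    """
    # Log-average ("key") luminance of the scene
    log_avg = np.exp(np.mean(np.log(hdr + eps)))
    # Scale so the scene's log-average maps to the chosen key value
    scaled = key * hdr / log_avg
    # Compressive curve: bright values saturate toward 1
    return scaled / (1.0 + scaled)

# Five orders of magnitude of scene luminance compressed into [0, 1]
hdr = np.array([0.01, 0.18, 1.0, 100.0, 10000.0])
ldr = tone_map_global(hdr)
```

The curve preserves ordering (brighter stays brighter) while fitting an arbitrarily wide luminance range into the display's range, which is the core requirement the abstract states.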

    IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images

    While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination in their representations and fall short of supporting advanced applications like material editing, relighting, and virtual object insertion. The reconstruction of physically based material properties and lighting via inverse rendering promises to enable such applications. However, most inverse rendering techniques require high dynamic range (HDR) images as input, a setting that is inaccessible to most users. We present a method that recovers the physically based material properties and spatially-varying HDR lighting of a scene from multi-view, low-dynamic-range (LDR) images. We model the LDR image formation process in our inverse rendering pipeline and propose a novel optimization strategy for material, lighting, and a camera response model. We evaluate our approach with synthetic and real scenes compared to the state-of-the-art inverse rendering methods that take either LDR or HDR input. Our method outperforms existing methods taking LDR images as input, and allows for highly realistic relighting and object insertion.
    Comment: Project Website: https://irisldr.github.io
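The "LDR image formation process" modeled in this pipeline can be sketched as scene radiance passing through exposure, a camera response function, and 8-bit quantization. This is an illustrative assumption, not IRIS's actual model: the paper optimizes the camera response jointly with material and lighting, whereas the sketch below fixes it to a simple gamma curve.

```python
import numpy as np

def ldr_formation(radiance, exposure=1.0, gamma=2.2):
    """Toy LDR image formation: scene radiance -> 8-bit LDR pixel.

    Inverse-rendering methods that accept LDR input must model (and
    invert) a process like this; the fixed gamma response here is an
    illustrative placeholder for a learned/optimized camera response.
    """
    x = np.clip(radiance * exposure, 0.0, 1.0)   # sensor saturation clips highlights
    x = x ** (1.0 / gamma)                        # camera response function (CRF)
    return np.round(x * 255.0).astype(np.uint8)   # 8-bit quantization

ldr = ldr_formation(np.array([0.0, 0.2, 1.0, 5.0]))
```

Note the clipping step: radiance of 1.0 and 5.0 map to the same code value, which is exactly the information loss that makes recovering HDR lighting from LDR inputs hard.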

    Virtual Home Staging: Inverse Rendering and Editing an Indoor Panorama under Natural Illumination

    We propose a novel inverse rendering method that enables the transformation of existing indoor panoramas with new indoor furniture layouts under natural illumination. To achieve this, we captured indoor HDR panoramas along with real-time outdoor hemispherical HDR photographs. Indoor and outdoor HDR images were linearly calibrated against measured absolute luminance values for accurate scene relighting. Our method consists of three key components: (1) panoramic furniture detection and removal, (2) automatic floor layout design, and (3) global rendering with scene geometry, new furniture objects, and a real-time outdoor photograph. We demonstrate the effectiveness of our workflow in rendering indoor scenes under different outdoor illumination conditions. Additionally, we contribute a new calibrated HDR (Cali-HDR) dataset that consists of 137 calibrated indoor panoramas and their associated outdoor photographs.
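The linear calibration against measured absolute luminance mentioned above reduces, in the simplest case, to a single scale factor. A minimal sketch under an assumed setup (one spot-meter reading over a known reference patch; the paper's actual calibration procedure may differ):

```python
import numpy as np

def calibrate_hdr(hdr_luma, measured_cd_m2, patch_mask):
    """Linearly scale relative HDR luminance to absolute cd/m^2.

    Assumes one spot-meter reading (measured_cd_m2) taken over a known
    reference patch of the panorama, selected by patch_mask.
    """
    scale = measured_cd_m2 / hdr_luma[patch_mask].mean()
    return hdr_luma * scale

hdr = np.array([[0.5, 1.0], [2.0, 4.0]])          # relative HDR luminance
mask = np.array([[False, True], [False, False]])  # reference patch location
abs_luma = calibrate_hdr(hdr, 250.0, mask)        # patch metered at 250 cd/m^2
```

Because HDR capture is linear up to an unknown global scale, a single reliable measurement is enough to anchor the whole image in physical units, which is what makes relighting with a separately captured outdoor photograph consistent.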

    High Dynamic Range Image Watermarking Robust Against Tone-Mapping Operators

    High dynamic range (HDR) images represent the future format for digital images since they allow accurate rendering of a wider range of luminance values. However, today special types of preprocessing, collectively known as tone-mapping (TM) operators, are needed to adapt HDR images to currently existing displays. Tone-mapped images, although of reduced dynamic range, nonetheless have high quality and hence retain some commercial value. In this paper, we propose a solution to the problem of HDR image watermarking, e.g., for copyright embedding, that should survive TM. Therefore, the requirements imposed on the watermark encompass imperceptibility, a certain degree of security, and robustness to TM operators. The proposed watermarking system belongs to the blind, detectable category; it is based on the quantization index modulation (QIM) paradigm and employs higher-order statistics as a feature. Experimental analysis shows positive results and demonstrates the system's effectiveness with current state-of-the-art TM algorithms.
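The QIM paradigm the abstract names can be shown in a few lines: the watermark bit selects one of two interleaved quantization lattices, and blind detection decodes whichever lattice the received feature lies closest to. This sketches only the QIM principle on a scalar; the actual system quantizes higher-order statistics of the HDR image, not raw values.

```python
import numpy as np

def qim_embed(feature, bit, delta=1.0):
    """Snap a feature value to one of two interleaved lattices,
    selected by the watermark bit (lattices offset by delta/2)."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return np.round((feature - offset) / delta) * delta + offset

def qim_detect(feature, delta=1.0):
    """Blind detection: no original needed, decode the nearer lattice."""
    d0 = abs(feature - qim_embed(feature, 0, delta))
    d1 = abs(feature - qim_embed(feature, 1, delta))
    return 0 if d0 <= d1 else 1

marked = qim_embed(3.14, bit=1, delta=1.0)   # snaps to the bit-1 lattice
```

Robustness comes from the lattice spacing: any distortion (here, standing in for a tone-mapping operator acting on the chosen feature) smaller than delta/4 still decodes to the embedded bit.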

    Guest editorial: high dynamic range imaging

    High Dynamic Range (HDR) imagery is a step-change in imaging technology that is not limited to the 8 bits per pixel per color channel that traditional, low-dynamic-range digital images have been constrained to. These restrictions have meant that current and relatively novel imaging technologies, including stereoscopic, HD and ultra-HD imaging, do not provide an accurate representation of the lighting present in a real-world environment. HDR technology has enabled the capture, storage, handling and display of content that supports real-world luminance, and has facilitated rendering methods in special effects, video games and advertising, such as image-based lighting; it is also compatible with the other imaging methods and will certainly be a requirement of future high-fidelity imaging format specifications. However, HDR still has challenges to overcome before it can become a fully fledged, commercially successful technology. This special issue goes some way toward addressing those limitations and also shines a light on potential future uses and directions of HDR.
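The gap between 8-bit encoding and real-world luminance can be made concrete in stops (factors of two). The luminance figures below are illustrative order-of-magnitude assumptions, not values from this editorial:

```python
import math

# An 8-bit channel holds at most 256 code values; taking the ratio of the
# largest to the smallest nonzero code value as its contrast range:
ldr_stops = math.log2(255 / 1)        # roughly 8 stops

# Real scenes span far more, e.g. sunlit surfaces (~1e5 cd/m^2, assumed)
# down to dim night-time surfaces (~1e-3 cd/m^2, assumed):
scene_stops = math.log2(1e5 / 1e-3)   # well over 20 stops
```

Under these assumptions a linearly encoded 8-bit image covers roughly a third of the scene's stops, which is why the editorial describes LDR formats as unable to represent real-world lighting accurately.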

    MAIR: Multi-view Attention Inverse Rendering with 3D Spatially-Varying Lighting Estimation

    We propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, an SVBRDF, and 3D spatially-varying lighting. Because multi-view images provide a variety of information about a scene, their use has been taken for granted in object-level inverse rendering. However, owing to the absence of a multi-view HDR synthetic dataset, scene-level inverse rendering has mainly been studied using single-view images. We successfully perform scene-level inverse rendering from multi-view images by expanding the OpenRooms dataset, designing efficient pipelines to handle multi-view images, and splitting spatially-varying lighting. Our experiments show that the proposed method not only achieves better performance than single-view-based methods, but also performs robustly on unseen real-world scenes. Moreover, our sophisticated 3D spatially-varying lighting volume allows for photorealistic object insertion at any 3D location.
    Comment: Accepted by CVPR 2023; Project Page is https://bring728.github.io/mair.project
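A 3D lighting volume supports "insertion in any 3D location" because it can be queried at continuous points by interpolating its voxel grid. A toy sketch of that lookup, assuming a scalar grid for brevity; MAIR's actual volume stores richer per-voxel lighting than a single value:

```python
import numpy as np

def sample_volume(vol, p):
    """Trilinearly interpolate a 3D grid at a continuous point p.

    Toy scalar grid: each voxel holds one number standing in for
    whatever lighting representation the volume actually stores.
    """
    # Lower-corner voxel index, clamped so the 2x2x2 neighborhood fits
    i = np.clip(np.floor(p).astype(int), 0, np.array(vol.shape) - 2)
    fx, fy, fz = p - i     # fractional position inside the cell
    x, y, z = i
    c = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Weight of this corner falls off linearly per axis
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                c += w * vol[x + dx, y + dy, z + dz]
    return c

vol = np.arange(8.0).reshape(2, 2, 2)                 # toy 2x2x2 grid
val = sample_volume(vol, np.array([0.5, 0.5, 0.5]))   # cell center
```

Querying at the cell center averages the eight corner values equally, while querying exactly at a voxel returns that voxel's value, giving smoothly varying lighting between stored samples.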