
    Tone mapping for high dynamic range images

    Tone mapping is an essential step for the reproduction of "nice looking" images. It provides the mapping from the luminances of the original scene to the output device's display values. When the dynamic range of the captured scene is smaller or larger than that of the display device, tone mapping expands or compresses the luminance ratios. We address the problem of tone mapping high dynamic range (HDR) images to standard displays (CRT, LCD) and to HDR displays. With standard displays, the dynamic range of the captured HDR scene must be compressed significantly, which can induce a loss of contrast resulting in a loss of detail visibility. Local tone mapping operators can be used in addition to the global compression to increase the local contrast and thus improve detail visibility, but this tends to create artifacts. We developed a local tone mapping method that solves the problems generally encountered by local tone mapping algorithms. Namely, it creates neither halo artifacts nor graying-out of low-contrast areas, and it provides good color rendition. We then specifically investigated the rendition of color and confirmed that local tone mapping algorithms must be applied to the luminance channel only. We showed that the correlation between luminance and chrominance plays a role in the appearance of the final image, but that a perfect decorrelation is not necessary. Recently developed HDR monitors enable the display of HDR images with hardly any compression of their dynamic range. The arrival of these displays on the market creates the need for new tone mapping algorithms. In particular, legacy images that were mapped to SDR displays must be re-rendered to HDR displays, taking full advantage of the increase in dynamic range. This operation can be seen as the reverse of tone mapping to SDR. We propose a piecewise linear tone scale function that enhances the brightness of specular highlights so that the sensation of naturalness is improved.
Our tone scale algorithm is based on the segmentation of the image into its diffuse and specular components, as well as on the ranges of display luminance allocated to the specular and diffuse components, respectively. We performed a psychovisual experiment to validate the benefit of our tone scale. The results showed that, with HDR displays, allocating more luminance range to the specular component than was allocated in the image rendered for SDR displays produces more natural-looking images.
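A minimal sketch of such a piecewise linear tone scale, assuming a normalized SDR input, a hypothetical split point `diffuse_max` between the diffuse and specular ranges, and an illustrative `specular_fraction` of the display range reserved for highlights (none of these values come from the abstract):

```python
import numpy as np

def piecewise_tone_scale(lum, diffuse_max, display_peak, specular_fraction=0.33):
    """Map SDR luminance in [0, 1] to HDR display luminance with a
    two-segment piecewise linear tone scale.

    lum               -- input luminance, normalized to [0, 1]
    diffuse_max       -- assumed input level separating diffuse from specular
    display_peak      -- peak luminance of the HDR display (cd/m^2)
    specular_fraction -- illustrative fraction of the display range
                         reserved for specular highlights
    """
    diffuse_peak = display_peak * (1.0 - specular_fraction)
    lum = np.asarray(lum, dtype=float)
    return np.where(
        lum <= diffuse_max,
        # diffuse segment: linear ramp up to diffuse_peak
        lum / diffuse_max * diffuse_peak,
        # specular segment: steeper ramp over the reserved highlight range
        diffuse_peak + (lum - diffuse_max) / (1.0 - diffuse_max)
                     * (display_peak - diffuse_peak),
    )
```

Allocating a larger `specular_fraction` steepens the highlight segment, which is the knob the psychovisual experiment effectively evaluates.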

    Gloss Management for Consistent Reproduction of Real and Virtual Objects

    A good match of material appearance between real-world objects and their digital on-screen representations is critical for many applications such as fabrication, design, and e-commerce. However, faithful appearance reproduction is challenging, especially for complex phenomena such as gloss. In most cases, the view-dependent nature of gloss and the range of luminance values required for reproducing glossy materials exceed the current capabilities of display devices. As a result, appearance reproduction poses significant problems even with accurately rendered images. This paper studies the gap between the gloss perceived from real-world objects and their digital counterparts. Based on our psychophysical experiments on a wide range of 3D printed samples and their corresponding photographs, we derive insights on the influence of geometry, illumination, and the display’s brightness, and measure the change in gloss appearance due to the display limitations. Our evaluation experiments demonstrate that using the prediction to correct material parameters in a rendering system improves the match of gloss appearance between real objects and their visualization on a display device.
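The correction idea, choosing on-display material parameters so that the predicted perceived gloss matches the gloss of the real object, can be sketched as a one-dimensional search. Here `predict_perceived` is a hypothetical, monotonic stand-in for the paper's prediction model, not its actual formulation:

```python
def correct_parameter(target_gloss, predict_perceived, lo=0.0, hi=1.0, tol=1e-4):
    """Find the on-display material parameter whose *perceived* gloss
    (as estimated by predict_perceived, assumed monotonically increasing)
    matches the gloss perceived from the real object.  Implemented as a
    simple bisection over one scalar parameter -- an illustrative sketch,
    not the paper's method."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if predict_perceived(mid) < target_gloss:
            lo = mid  # perceived gloss too low: search the upper half
        else:
            hi = mid  # perceived gloss too high: search the lower half
    return 0.5 * (lo + hi)
```

With a display-compressed predictor such as `lambda p: p * p`, the search returns a boosted parameter that compensates for the compression.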

    Perceptual Modeling and Reproduction of Gloss

    The reproduction of gloss on displays is generally not based on perception and, as a consequence, does not guarantee the best visualization of a real material. The reproduction is composed of four different steps: measurement, modeling, rendering, and display. The minimum number of measurements required to approximate a real material is unknown. The error metrics used to approximate measurements with analytical BRDF models are not based on perception, and the best visual approximation is not always obtained. Finally, the difference in gloss perception between real objects and objects seen on displays has not been sufficiently studied and may influence observers' judgement. This thesis proposes a systematic, scalable, and perceptually based workflow to represent real materials on displays. First, the difference in gloss perception between real objects and objects seen on displays was studied. Second, the perceptual performance of the error metrics currently in use was evaluated. Third, a projection into a perceptual gloss space was defined, enabling the computation of a perceptual gloss distance measure. Fourth, the uniformity of the gloss space was improved by defining a new gloss difference equation. Finally, a systematic, scalable, and perceptually based workflow was defined using cost-effective instruments.
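A toy illustration of the distance step in such a workflow, using an invented two-parameter projection (`specular`, `roughness`) and a cube-root compression as a stand-in for the thesis's actual gloss space and difference equation, neither of which is reproduced here:

```python
import math

def gloss_coords(specular, roughness):
    """Project physical gloss parameters into a hypothetical 2-D
    perceptual gloss space.  The cube-root compression mimics the
    nonlinear response typical of perceptual spaces (e.g. CIELAB
    lightness); the actual projection is an assumption."""
    contrast_gloss = specular ** (1.0 / 3.0)    # compressed specular strength
    doi_gloss = (1.0 - roughness) ** (1.0 / 3.0)  # distinctness-of-image proxy
    return contrast_gloss, doi_gloss

def gloss_distance(a, b):
    """Euclidean distance between two (specular, roughness) materials in
    the hypothetical perceptual space -- a stand-in for a perceptually
    uniform gloss difference equation."""
    (c1, d1), (c2, d2) = gloss_coords(*a), gloss_coords(*b)
    return math.hypot(c1 - c2, d1 - d2)
```

The point of projecting first and differencing second is that equal distances in the perceptual space should correspond to equal perceived gloss differences, which raw parameter differences do not provide.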

    Dynamic Display of BRDFs

    This paper deals with the challenge of physically displaying reflectance, i.e., the appearance of a surface and its variation with the observer position and the illuminating environment. This is commonly described by the bidirectional reflectance distribution function (BRDF). We provide a catalogue of criteria for the display of BRDFs, and sketch a few orthogonal approaches to solving the problem in an optically passive way. Our specific implementation is based on a liquid surface, on which we excite waves in order to achieve a varying degree of anisotropic roughness. The resulting probability density function of the surface normal is shown to follow a Gaussian distribution, similar to most established BRDF models.
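The reported Gaussian statistics can be written down directly. The sketch below evaluates an anisotropic Gaussian slope distribution of the kind used in microfacet BRDF models, with `sigma_x` and `sigma_y` as assumed RMS slopes along and across the wave direction (the paper's measured values are not reproduced here):

```python
import math

def gaussian_slope_pdf(sx, sy, sigma_x, sigma_y):
    """Probability density of surface slopes (sx, sy) for an anisotropic
    Gaussian distribution -- the form the paper reports for the
    wave-excited liquid surface.  sigma_x and sigma_y are assumed RMS
    slopes; increasing either corresponds to rougher waves along that
    axis."""
    norm = 1.0 / (2.0 * math.pi * sigma_x * sigma_y)
    return norm * math.exp(-0.5 * ((sx / sigma_x) ** 2 + (sy / sigma_y) ** 2))
```

Exciting stronger waves along one axis raises that axis's sigma, flattening the density and broadening the reflected highlight anisotropically.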

    Vision technology/algorithms for space robotics applications

    Automation and robotics have been proposed for space applications to increase productivity, reliability, flexibility, and safety; to automate time-consuming tasks; to improve the performance of crew-accomplished tasks; and to perform tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.

    The Computation of Surface Lightness in Simple and Complex Scenes

    The present thesis examined how reflectance properties and the complexity of surface mesostructure (small-scale surface relief) influence perceived lightness in centre-surround displays. Chapters 2 and 3 evaluated the role of surface relief, gloss, and interreflections on lightness constancy, which was examined across changes in background albedo and illumination level. For surfaces with visible mesostructure (“rocky” surfaces), lightness constancy across changes in background albedo was better for targets embedded in glossy versus matte surfaces. However, this improved lightness constancy for gloss was not observed when illumination varied. Control experiments compared the matte and glossy rocky surrounds to two control displays, which matched either pixel histograms or a phase-scrambled power spectrum. Lightness constancy was improved for rocky glossy displays over the histogram-matched displays, but not compared to phase-scrambled variants of these images with equated power spectra. The results were similar for surfaces rendered with 1, 2, 3, and 4 interreflections. These results suggest that lightness perception in complex centre-surround displays can be explained by the distribution of contrast across space and scale, independently of explicit information about surface shading or specularity. The results for surfaces without surface relief (“homogeneous” surfaces) differed qualitatively from those for rocky surfaces, exhibiting abrupt steps in perceived lightness at points at which the targets transitioned from being increments to decrements. Chapter 4 examined whether homogeneous displays evoke more complex mid-level representations similar to conditions of transparency. Matching target lightness in a homogeneous display to that in a textured or rocky display required varying both the lightness and the transmittance of the test patch on the textured display to obtain the most satisfactory matches.
However, transmittance was varied only to match the contrast of targets against homogeneous surrounds, not to explicitly match the amount of transparency perceived in the displays. The results suggest that perceived target-surround edge contrast differs between homogeneous and textured displays. Varying the mid-level property of transparency in textured displays provides a natural means for equating both target lightness and the unique appearance of the edge contrast in homogeneous displays.
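The claim that lightness judgments track the distribution of contrast across space and scale can be illustrated with a crude multiscale contrast summary: RMS differences between Gaussian blurs at successive scales. This is only a stand-in for the thesis's actual analysis; the scales and the band-pass construction are assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding, numpy only."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()  # normalize so a constant image is unchanged
    pad = np.pad(img, radius, mode="reflect")
    # blur rows, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def contrast_by_scale(img, sigmas=(1, 2, 4)):
    """RMS band-pass contrast at each scale: the difference between blurs
    at successive sigmas, a crude Laplacian-pyramid stand-in."""
    blurs = [np.asarray(img, dtype=float)]
    blurs += [gaussian_blur(blurs[0], s) for s in sigmas]
    return [float(np.sqrt(np.mean((a - b) ** 2)))
            for a, b in zip(blurs, blurs[1:])]
```

Two surrounds with the same mean luminance but different mesostructure produce different contrast profiles across scale, which is the kind of statistic the control displays (histogram-matched vs. phase-scrambled) manipulate.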

    AirCode: Unobtrusive Physical Tags for Digital Fabrication

    We present AirCode, a technique that allows the user to tag physically fabricated objects with given information. An AirCode tag consists of a group of carefully designed air pockets placed beneath the object's surface. These air pockets are easily produced during the fabrication process of the object, without any additional material or postprocessing. Meanwhile, the air pockets affect only the scattering light transport under the surface, and are thus hard to notice with the naked eye. Using a computational imaging method, however, the tags become detectable. We present a tool that automates the design of air pockets for the user to encode information. The AirCode system also allows the user to retrieve the information from captured images via a robust decoding algorithm. We demonstrate our tagging technique with applications in metadata embedding, robotic grasping, and conveying object affordances.
    Comment: ACM UIST 2017 Technical Paper
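A toy version of the encoding step, laying information bits out as a grid of air-pocket positions, might look as follows. The real AirCode layout includes fiducial markers and error-correction coding that are not reproduced here:

```python
def bits_to_pocket_grid(data: bytes, cols: int = 8):
    """Lay out the bits of `data` on a 2-D grid, one cell per bit; a True
    cell means 'place an air pocket beneath the surface here'.  This is a
    toy layout -- the actual AirCode design adds markers and redundancy
    for robust decoding from captured images."""
    # unpack each byte most-significant bit first
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    # wrap the bit stream into rows of `cols` cells
    return [[bool(b) for b in bits[r:r + cols]]
            for r in range(0, len(bits), cols)]
```

Decoding would run the inverse mapping on the pocket pattern recovered by the computational imaging step.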