
    Testing HDR image rendering algorithms

    Eight high-dynamic-range image rendering algorithms were tested using ten high-dynamic-range pictorial images. A large-scale paired-comparison psychophysical experiment was developed, comprising two sections that compared overall rendering performance and grayscale tone-mapping performance, respectively. An interval scale of preference was created to evaluate the rendering results. The results showed that tone-mapping performance was consistent with the overall rendering results, and that Durand and Dorsey's bilateral fast filtering technique and Reinhard's photographic tone reproduction performed best overall. The goal of this experiment was to establish a sound testing and evaluation methodology, based on psychophysical experiment results, for future research on the accuracy of rendering algorithms.
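    An interval preference scale of this kind is commonly derived from paired-comparison data via Thurstone's law of comparative judgment (Case V): preference proportions are converted to unit-normal deviates, and each algorithm's scale value is the mean of its deviates. The sketch below illustrates that general procedure only; the win matrix, algorithm labels, and trial counts are hypothetical and do not reproduce the study's data.

```python
# Minimal sketch of building an interval preference scale from paired-comparison
# data via Thurstone's law of comparative judgment (Case V).
# The algorithm labels and win matrix below are hypothetical, for illustration only.
import numpy as np
from scipy.stats import norm

algorithms = ["bilateral_filtering", "photographic", "histogram_adj", "retinex"]

# wins[i, j] = number of trials in which algorithm i was preferred over algorithm j
wins = np.array([
    [ 0, 28, 31, 33],
    [22,  0, 27, 30],
    [19, 23,  0, 26],
    [17, 20, 24,  0],
], dtype=float)

n_trials = wins + wins.T                         # total comparisons per pair
p = wins / np.where(n_trials > 0, n_trials, 1)   # preference proportions
p = np.clip(p, 0.01, 0.99)                       # avoid infinite z-scores at 0 or 1

z = norm.ppf(p)                                  # proportions -> unit-normal deviates
np.fill_diagonal(z, 0.0)

# Case V: each algorithm's interval-scale value is the mean of its deviates,
# shifted so the lowest-scoring algorithm sits at 0.
scale = z.mean(axis=1)
scale -= scale.min()

for name, value in sorted(zip(algorithms, scale), key=lambda t: -t[1]):
    print(f"{name:20s} {value:.3f}")
```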

    Self-Reference Deep Adaptive Curve Estimation for Low-Light Image Enhancement

    In this paper, we propose a two-stage low-light image enhancement method called Self-Reference Deep Adaptive Curve Estimation (Self-DACE). In the first stage, we present an intuitive, lightweight, fast, and unsupervised luminance enhancement algorithm. The algorithm is based on a novel low-light enhancement curve that locally boosts image brightness. We also propose a new loss function with a simplified physical model designed to preserve the color, structure, and fidelity of natural images. We use a vanilla CNN to map each pixel through deep Adaptive Adjustment Curves (AAC) while preserving the local image structure. In the second stage, we introduce a corresponding denoising scheme to remove the latent noise in dark regions. We approximately model the noise in the dark and deploy a Denoising-Net to estimate and remove it after the first stage. Exhaustive qualitative and quantitative analysis shows that our method outperforms existing state-of-the-art algorithms on multiple real-world datasets.
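    The abstract does not give the exact form of the Adaptive Adjustment Curves, but curve-estimation enhancers of this family typically apply a per-pixel quadratic curve iteratively, with the coefficient map predicted by a small CNN. The following is a minimal sketch under that assumption; the curve LE(x) = x + α·x·(1 − x), the iteration count, and the toy inputs are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of a pixel-wise enhancement curve applied iteratively, in the spirit
# of curve-estimation methods. The quadratic curve and iteration count are assumptions,
# not the paper's actual Adaptive Adjustment Curves.
import numpy as np

def apply_curve(image: np.ndarray, alpha: np.ndarray, iterations: int = 4) -> np.ndarray:
    """Iteratively apply LE(x) = x + alpha * x * (1 - x) to each pixel.

    image : float array in [0, 1], shape (H, W, 3)
    alpha : per-pixel curve coefficients in [-1, 1], same shape as image
            (in a learned method these would be predicted by a lightweight CNN)
    """
    x = image.copy()
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)   # stays in [0, 1] for alpha in [-1, 1]
    return np.clip(x, 0.0, 1.0)

# Toy usage: a uniform positive alpha brightens a uniformly dark image.
dark = np.full((4, 4, 3), 0.15)
bright = apply_curve(dark, np.full_like(dark, 0.8))
print(dark[0, 0, 0], "->", round(float(bright[0, 0, 0]), 3))   # 0.15 -> ~0.79
```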

    High-dynamic-range Foveated Near-eye Display System

    Wearable near-eye displays have found widespread applications in education, gaming, entertainment, engineering, military training, and healthcare, just to name a few. However, the visual experience provided by current near-eye displays still falls short of what we can perceive in the real world. Three major challenges remain to be overcome: 1) limited dynamic range in display brightness and contrast, 2) inadequate angular resolution, and 3) the vergence-accommodation conflict (VAC). This dissertation is devoted to addressing these three critical issues from both the display panel development and optical system design viewpoints.

    A high-dynamic-range (HDR) display requires both high peak brightness and an excellent dark state. In Chapters two and three, two mainstream display technologies, namely liquid crystal display (LCD) and organic light-emitting diode (OLED), are investigated to extend their dynamic range. On one hand, an LCD can easily boost its peak brightness to over 1000 nits, but it is challenging to lower its dark state to < 0.01 nits. To achieve HDR, we propose to use a mini-LED local dimming backlight. Based on our simulations and subjective experiments, we establish practical guidelines that correlate the device contrast ratio, viewing distance, and required number of local dimming zones. On the other hand, a self-emissive OLED display exhibits a true dark state, but boosting its peak brightness would unavoidably compromise its lifetime. We propose a systematic approach to enhance OLED's optical efficiency while keeping the angular color shift indistinguishable. These findings will shed new light on future HDR display design.

    In Chapter four, in order to improve angular resolution, we demonstrate a multi-resolution foveated display system with two display panels and an optical combiner. The first display panel provides a wide field of view for peripheral vision, while the second panel offers ultra-high resolution for the central fovea. With an optical minifying system, both 4x and 5x enhanced resolutions are demonstrated. In addition, a Pancharatnam-Berry phase deflector is applied to actively shift the high-resolution region in order to enable eye tracking. The proposed design effectively reduces the pixelation and screen-door effect in near-eye displays.

    The VAC issue in stereoscopic displays is believed to be the main cause of visual discomfort and fatigue when wearing VR headsets. In Chapter five, we propose a novel polarization-multiplexing approach to achieve a multiplane display. A polarization-sensitive Pancharatnam-Berry phase lens and a spatial polarization modulator are employed to simultaneously create two independent focal planes. This method enables the generation of two image planes without the need for temporal multiplexing, and therefore effectively halves the required frame rate. In Chapter six, we briefly summarize our major accomplishments.
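    As a rough illustration of the local dimming idea discussed above, the sketch below simulates a zoned mini-LED backlight: each zone is driven by the peak of its target luminance patch, and the liquid crystal layer compensates, limited by the panel's native contrast ratio. The zone grid, contrast ratio, and toy scene are assumptions for illustration, not the dissertation's simulation setup.

```python
# Hypothetical sketch of mini-LED local dimming: zone backlight follows the local
# peak luminance, and the LC layer compensates so displayed = backlight * transmittance.
# Zone count and native contrast ratio are illustrative assumptions.
import numpy as np

def local_dimming(target: np.ndarray, zones=(8, 8), native_cr: float = 1000.0) -> np.ndarray:
    """target: normalized luminance in [0, 1], shape (H, W); returns displayed luminance."""
    h, w = target.shape
    zh, zw = h // zones[0], w // zones[1]

    # Drive each backlight zone at the peak luminance required inside that zone.
    backlight = np.zeros_like(target)
    for i in range(zones[0]):
        for j in range(zones[1]):
            patch = target[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            backlight[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw] = patch.max()

    # LC transmittance cannot go below 1 / native_cr, so dark pixels leak a fraction
    # of whatever backlight sits behind them.
    transmittance = np.clip(target / np.maximum(backlight, 1e-6), 1.0 / native_cr, 1.0)
    return backlight * transmittance

# Toy scene: a bright square on a black background.
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0
out = local_dimming(scene)
print("black-region leakage with dimming:", out[0, 0])        # 0.0 (zone is off)
print("black-region leakage without dimming:", 1.0 / 1000.0)  # global backlight / CR
```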

    Deep Bilateral Learning for Real-Time Image Enhancement

    Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher. Comment: 12 pages, 14 figures, SIGGRAPH 2017
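    A schematic NumPy sketch of the runtime path described above: a low-resolution grid of affine color coefficients (standing in for the network's prediction) is sliced at full resolution using a guidance map, and the resulting per-pixel affine transforms are applied to the full-resolution image. Nearest-neighbor slicing, the plain luma guide, and the grid dimensions are simplifying assumptions; the paper uses a learned guidance map and trilinear interpolation in its slicing node.

```python
# Schematic sketch of bilateral-grid slicing plus per-pixel affine color transforms.
# The coefficient grid here is hand-built (identity), standing in for a trained
# network's output; slicing is nearest-neighbor instead of the paper's trilinear node.
import numpy as np

def slice_and_apply(image: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) in [0, 1]; grid: (gh, gw, gd, 12) per-cell affine coefficients."""
    h, w, _ = image.shape
    gh, gw, gd, _ = grid.shape

    # Guidance map (plain luma here; the paper learns it) selects the intensity slice.
    guide = image @ np.array([0.299, 0.587, 0.114])

    # Nearest-neighbor slicing for brevity.
    yi = np.clip(np.arange(h) * gh // h, 0, gh - 1)
    xi = np.clip(np.arange(w) * gw // w, 0, gw - 1)
    zi = np.clip((guide * gd).astype(int), 0, gd - 1)
    coeffs = grid[yi[:, None], xi[None, :], zi]            # (H, W, 12)

    A = coeffs[..., :9].reshape(h, w, 3, 3)                # per-pixel 3x3 color matrix
    b = coeffs[..., 9:]                                    # per-pixel bias
    out = np.einsum('hwij,hwj->hwi', A, image) + b
    return np.clip(out, 0.0, 1.0)

# Toy usage: an identity coefficient grid leaves the image unchanged.
rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
grid = np.tile(np.eye(3).reshape(-1), (16, 16, 8, 1))               # identity matrices
grid = np.concatenate([grid, np.zeros((16, 16, 8, 3))], axis=-1)    # zero bias
print(np.allclose(slice_and_apply(img, grid), img))                 # True
```

    Because the network only ever sees the low-resolution input, the per-pixel cost at full resolution reduces to the slicing lookup and one small affine multiply, which is what makes millisecond-scale processing and a real-time viewfinder feasible.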