
    Appearance-based image splitting for HDR display systems

    High dynamic range displays that incorporate two optically coupled image planes have recently been developed. This dual-image-plane design requires that a given HDR input image be split into two complementary standard dynamic range components that drive the coupled systems, which raises the image splitting problem. In this research, two types of HDR display systems (hardcopy and softcopy) are constructed to facilitate the study of HDR image splitting algorithms for building HDR displays. A new HDR image splitting algorithm that incorporates the iCAM06 image appearance model is proposed, seeking to produce displayed HDR images with better image quality. The new algorithm has the potential to improve perceived image detail and colorfulness and to achieve better gamut utilization. Finally, the performance of the new iCAM06-based HDR image splitting algorithm is evaluated and compared with the widely used luminance square-root algorithm through psychophysical studies.
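The baseline named above, the luminance square-root algorithm, can be sketched directly: because the two optically coupled planes modulate light multiplicatively, assigning each plane the square root of the normalized luminance reconstructs the input on screen. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def sqrt_split(hdr_luminance, peak=1.0):
    """Split normalized HDR luminance into two SDR planes whose
    optical product reconstructs the input (dual-modulation idea)."""
    L = np.clip(hdr_luminance / peak, 0.0, 1.0)
    plane = np.sqrt(L)       # each plane carries the square root
    return plane, plane      # back plane * front plane == L

hdr = np.array([0.0, 0.25, 1.0])
back, front = sqrt_split(hdr)
recon = back * front         # product of the two planes gives back the input
```

The iCAM06-based method proposed in the work replaces this uniform split with an appearance-driven one; the sketch only shows the baseline it is compared against.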

    Subjective and objective evaluation of local dimming algorithms for HDR images


    High dynamic range display systems

    High contrast ratio (CR) enables a display system to faithfully reproduce real scenes. However, achieving high contrast, especially a high ambient contrast ratio (ACR), is a challenging task. In this dissertation, two display systems with high CR are discussed: a high-ACR augmented reality (AR) display and a high dynamic range (HDR) display. For the AR display, we improved the ACR by incorporating a tunable-transmittance liquid crystal (LC) film. The film has a wide tunable transmittance range, a fast response time, and is fail-safe. To reduce the weight and size of the display system, we proposed a functional reflective polarizer, which can also help people with color vision deficiency. As for the HDR display, we improved all three aspects of the hardware requirements: contrast ratio, color gamut, and bit depth. By stacking two liquid crystal display (LCD) panels together, we achieved a CR of over one million to one, 14-bit depth with a 5 V operation voltage, and pixel-by-pixel local dimming. To widen the color gamut, both photoluminescent and electroluminescent quantum dots (QDs) were investigated. Our analysis shows that with the QD approach it is possible to cover over 90% of the Rec. 2020 color gamut in an HDR display. Another goal of an HDR display is to reproduce the 12-bit perceptual quantizer (PQ) curve covering 0 to 10,000 nits. Our experimental results indicate that this is difficult with a single LCD panel because of its sluggish response time. To overcome this challenge, we proposed a method that drives the light-emitting diode (LED) backlight and the LCD panel simultaneously. Besides a relatively fast response time, this approach can also mitigate imaging noise. Last but not least, we improved the display pipeline with an HDR gamut mapping approach that displays HDR content adaptively based on display specifications. A psychophysical experiment was conducted to determine the display requirements.
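Two of the quantities above follow directly from standard definitions: the 12-bit PQ target is the SMPTE ST 2084 EOTF, which maps a normalized code value to absolute luminance up to 10,000 nits, and stacking two LCD panels multiplies their native contrast ratios. A sketch of both (the single-panel CR value is a hypothetical example, not a measurement from the dissertation):

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(signal):
    """Map a normalized PQ code value in [0, 1] to absolute luminance (nits)."""
    e = np.power(np.clip(signal, 0.0, 1.0), 1.0 / M2)
    y = np.power(np.maximum(e - C1, 0.0) / (C2 - C3 * e), 1.0 / M1)
    return 10000.0 * y

# Stacking two LCD panels multiplies their contrast ratios:
cr_single = 1000            # assumed native CR of one panel (hypothetical)
cr_stacked = cr_single ** 2 # 1,000,000:1, matching the "million to one" figure
```

The code value 1.0 maps to exactly 10,000 nits and 0.0 to zero, which is why a 12-bit quantization of this curve spans the full luminance range named above.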

    Event-Based Motion Segmentation by Motion Compensation

    In contrast to traditional cameras, whose pixels have a common exposure time, event-based cameras are novel bio-inspired sensors whose pixels work independently and asynchronously output intensity changes (called "events") with microsecond resolution. Since events are caused by the apparent motion of objects, event-based cameras sample visual information based on the scene dynamics and are therefore a more natural fit than traditional cameras for acquiring motion, especially at high speeds, where traditional cameras suffer from motion blur. However, distinguishing between events caused by different moving objects and by the camera's ego-motion is a challenging task. We present the first per-event segmentation method for splitting a scene into independently moving objects. Our method jointly estimates the event-object associations (i.e., segmentation) and the motion parameters of the objects (or the background) by maximizing an objective function that builds upon recent results on event-based motion compensation. We provide a thorough evaluation of our method on a public dataset, outperforming the state of the art by as much as 10%. We also show the first quantitative evaluation of a segmentation algorithm for event cameras, yielding around 90% accuracy at 4 pixels relative displacement.
    Comment: When viewed in Acrobat Reader, several of the figures animate. Video: https://youtu.be/0q6ap_OSBA
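The motion-compensation principle the objective builds on can be illustrated in isolation: warp each event back to a reference time under a candidate velocity, and score the candidate by the sharpness (e.g., variance) of the resulting image of warped events, which peaks when events from the same edge align. This toy sketch uses synthetic events and a single global velocity; the paper's method additionally estimates per-event object associations, which is not shown here:

```python
import numpy as np

def warp_events(xs, ys, ts, vx, vy, t_ref=0.0):
    """Warp event coordinates to a reference time under candidate flow (vx, vy)."""
    return xs - vx * (ts - t_ref), ys - vy * (ts - t_ref)

def contrast(xs, ys, shape=(32, 32)):
    """Variance of the image of warped events: highest when events align."""
    H, _, _ = np.histogram2d(ys, xs, bins=shape,
                             range=[[0, shape[0]], [0, shape[1]]])
    return H.var()

# Synthetic vertical edge translating at vx = 5 px/s (hypothetical data)
rng = np.random.default_rng(0)
ts = rng.uniform(0.0, 1.0, 500)    # event timestamps
ys = rng.uniform(0.0, 32.0, 500)   # the edge spans the image vertically
xs = 10.0 + 5.0 * ts               # x positions follow the true motion
sharp = contrast(*warp_events(xs, ys, ts, 5.0, 0.0))   # true velocity
blurry = contrast(*warp_events(xs, ys, ts, 0.0, 0.0))  # wrong velocity
```

Maximizing this sharpness over the motion parameters recovers the true velocity; the segmentation method extends the same score to multiple motions at once.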

    High dynamic range video merging, tone mapping, and real-time implementation

    Although High Dynamic Range (HDR) imaging has been the subject of significant research over the past fifteen years, the goal of cinema-quality HDR video has not yet been achieved. This work builds on an optical method patented by Contrast Optical that captures sequences of Low Dynamic Range (LDR) images which can be merged into HDR images as the basis for HDR video. Because of the large difference in exposure spacing of the LDR images captured by this camera, existing methods of merging LDR images cannot produce cinema-quality HDR images and video without significant visible artifacts. The focus of the presented research is therefore twofold. The first contribution is a new method of combining LDR images with exposure differences of greater than 3 stops into an HDR image. The second contribution is a method of tone mapping HDR video that addresses HDR video flicker and automates parameter control of the tone mapping operator. A prototype of this HDR video capture technique, along with the combining and tone mapping algorithms, has been implemented in a high-definition HDR video system. Additionally, Field Programmable Gate Array (FPGA) hardware implementation details are given to support real-time HDR video. Still frames from the acquired HDR video, merged and tone mapped with the presented techniques, are presented.
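For context, the conventional LDR-to-HDR merge that the work improves on is a weighted average of per-frame radiance estimates, with a hat weight that discounts near-black and near-saturated pixels. This is a generic baseline sketch on synthetic linear data, not the paper's artifact-free method for widely spaced exposures:

```python
import numpy as np

def merge_ldr(images, exposures):
    """Merge linear LDR exposures into HDR radiance with a hat weighting
    that down-weights near-saturated and near-black pixels."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at mid-gray
        num += w * (img / t)               # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-8)

# Two synthetic exposures of the same scene, 3 stops apart (8x exposure)
radiance = np.array([0.05, 0.1, 0.5])
short = np.clip(radiance * 1.0, 0.0, 1.0)
long_ = np.clip(radiance * 8.0, 0.0, 1.0)
hdr = merge_ldr([short, long_], [1.0, 8.0])
```

Note that the brightest pixel saturates in the long exposure and receives zero weight there, so the merge falls back to the short exposure; with only two frames 3+ stops apart, such hand-offs are exactly where visible artifacts tend to appear.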

    Distilling Style from Image Pairs for Global Forward and Inverse Tone Mapping

    Many image enhancement or editing operations, such as forward and inverse tone mapping or color grading, do not have a unique solution, but rather a range of solutions, each representing a different style. Despite this, existing learning-based methods attempt to learn a unique mapping, disregarding this style. In this work, we show that information about the style can be distilled from collections of image pairs and encoded into a 2- or 3-dimensional vector. This gives us not only an efficient representation but also an interpretable latent space for editing the image style. We represent the global color mapping between a pair of images as a custom normalizing flow, conditioned on a polynomial basis of the pixel color. We show that such a network is more effective than PCA or VAE at encoding image style in a low-dimensional space and lets us obtain an accuracy close to 40 dB, which is about a 7-10 dB improvement over the state-of-the-art methods.
    Comment: Published in the European Conference on Visual Media Production (CVMP '22).
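As a much-simplified stand-in for the conditioned normalizing flow, a global color mapping expressed on a polynomial basis of the pixel color can be fit by plain least squares. This sketch only illustrates the "polynomial basis" conditioning idea on synthetic data (no cross-channel terms, no flow, no style vector), so it should not be read as the paper's model:

```python
import numpy as np

def poly_basis(rgb, degree=2):
    """Per-pixel polynomial basis of the color channels (no cross terms)."""
    return np.concatenate([np.ones((len(rgb), 1))] +
                          [rgb ** d for d in range(1, degree + 1)], axis=1)

def fit_global_mapping(src, dst, degree=2):
    """Least-squares global color mapping: dst ~ poly_basis(src) @ A."""
    A, *_ = np.linalg.lstsq(poly_basis(src, degree), dst, rcond=None)
    return A

rng = np.random.default_rng(1)
src = rng.uniform(0.0, 1.0, (1000, 3))
dst = 0.2 + 0.7 * src + 0.1 * src ** 2   # a synthetic "style" to recover
A = fit_global_mapping(src, dst)
pred = poly_basis(src) @ A
```

A least-squares fit like this yields one fixed mapping per image pair; the paper's contribution is compressing the variation across many such mappings into a 2- or 3-dimensional, editable style vector.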

    07171 Abstracts Collection -- Visual Computing -- Convergence of Computer Graphics and Computer Vision

    From 22.04. to 27.04.2007, the Dagstuhl Seminar 07171 "Visual Computing -- Convergence of Computer Graphics and Computer Vision" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.