182 research outputs found

    Motion blur reduction for high frame rate LCD-TVs

    Today's LCD-TVs reduce their hold time to prevent motion blur. This is best implemented using frame rate up-conversion with motion-compensated interpolation. The registration process of the TV signal, by film or video camera, has been identified as a second source of motion blur, which becomes dominant for TV displays with a frame rate of 100 Hz or higher. To justify any further hold-time reduction in LCDs, this second type of motion blur, referred to as camera blur, needs to be addressed. This paper presents a real-time camera blur estimation and reduction method suitable for TV applications.
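    The frame rate up-conversion mentioned above can be illustrated in miniature with block matching: estimate a motion vector per block, then render an averaged block halfway along that vector. This is a generic sketch, not the paper's algorithm; `motion_compensated_midframe` and its `block`/`search` parameters are hypothetical names.

```python
import numpy as np

def motion_compensated_midframe(f0, f1, block=8, search=4):
    """Interpolate a frame halfway between grayscale frames f0 and f1.

    For each block of f0, find the best-matching block in f1 within a
    small search window (sum of absolute differences), then place the
    average of the two blocks at the midpoint of the motion vector.
    """
    h, w = f0.shape
    mid = np.zeros_like(f0, dtype=np.float64)
    weight = np.zeros_like(mid)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = f0[by:by + block, bx:bx + block].astype(np.float64)
            best, bdy, bdx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= h and 0 <= x and x + block <= w:
                        cand = f1[y:y + block, x:x + block].astype(np.float64)
                        sad = np.abs(ref - cand).sum()
                        if sad < best:
                            best, bdy, bdx = sad, dy, dx
            # place the averaged block halfway along the motion vector
            my, mx = by + bdy // 2, bx + bdx // 2
            avg = 0.5 * (ref + f1[by + bdy:by + bdy + block,
                                  bx + bdx:bx + bdx + block])
            mid[my:my + block, mx:mx + block] += avg
            weight[my:my + block, mx:mx + block] += 1.0
    weight[weight == 0] = 1.0          # leave uncovered pixels at zero
    return mid / weight
```

    Real up-converters add occlusion handling and vector smoothing; this sketch only shows why motion compensation, unlike simple frame blending, keeps moving edges sharp.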

    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect of consumer video applications. Today's state-of-the-art displays require sophisticated color and image reproduction techniques to deliver larger screen sizes, higher luminance, and higher resolution than ever before. However, from a color science perspective, there are clearly opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, the various processes involved in a typical video processing chain for consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for the visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for image and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video.
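    A coordinated color-and-contrast enhancement can be sketched with a standard baseline: stretch the luma channel between two percentiles while applying a mild, hue-preserving gain to the chroma offsets. This is a minimal stand-in, not the thesis's novel algorithm; `enhance` and its parameters are illustrative names.

```python
import numpy as np

def enhance(rgb, low_pct=2, high_pct=98, chroma_gain=1.1):
    """Joint contrast/color enhancement sketch for an RGB image in [0, 1].

    Contrast: percentile stretch on the BT.601 luma channel.
    Color: a small gain on per-channel chroma offsets, keeping hue fixed,
    so the two adjustments do not fight each other.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b           # BT.601 luma
    lo, hi = np.percentile(y, [low_pct, high_pct])
    y2 = np.clip((y - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    out = np.empty_like(rgb)
    for c in range(3):
        chroma = rgb[..., c] - y                    # per-channel chroma offset
        out[..., c] = np.clip(y2 + chroma_gain * chroma, 0.0, 1.0)
    return out
```

    Percentile clipping (rather than min/max stretching) keeps a few outlier pixels from wasting the output range, which is one reason "coordinated" enhancement matters in practice.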

    Stereoscopic high dynamic range imaging

    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of the two eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, providing better contrast and more natural-looking scenes. The combination of the two technologies to gain the advantages of both has been, until now, mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline, outlines the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline. The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR-LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images. Results demonstrated that one of the methods may produce images perceptually indistinguishable from the ground truth. Insights obtained while developing the static image operators guided the design of the SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos. Results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at a size only slightly larger than a single traditional LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to an improved and more realistic representation of captured scenes.
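    One family of approaches for generating the missing HDR view from an HDR-LDR pair is inverse tone mapping: linearize the LDR view and rescale it to the range of the companion HDR view. The sketch below shows only this baseline idea, under the assumption of a simple display gamma; it is not one of the thesis's four operators, and `expand_ldr` is a hypothetical name.

```python
import numpy as np

def expand_ldr(ldr, hdr_ref, gamma=2.2):
    """Promote an 8-bit LDR view toward HDR (a simplistic sketch).

    Undo an assumed display gamma to approximate linear light, then
    rescale so the result spans the same range as the companion HDR
    view captured by the other camera.
    """
    lin = (ldr.astype(np.float64) / 255.0) ** gamma   # approximate linear light
    return lin * (hdr_ref.max() / max(lin.max(), 1e-9))
```

    A real operator must also recover clipped highlights and keep the two views radiometrically consistent, which is exactly where the thesis's user study discriminates between methods.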

    Analysis of MVD and color edge detection for depth map enhancement

    Final degree project carried out in collaboration with the Fraunhofer Heinrich Hertz Institute. MVD (Multiview Video plus Depth) data consists of two components: color video and depth map sequences. Depth maps represent the spatial arrangement (or three-dimensional geometry) of the scene. The MVD representation is used for rendering virtual views in FVV (Free Viewpoint Video) and for 3DTV (3-dimensional TeleVision) applications. Distortions of the silhouettes of objects in the depth maps are a problem when rendering a stereo video pair. This Master's thesis presents a system to improve the depth component of MVD data. For this purpose, it introduces a new method called correlation histograms for analyzing the two components of depth-enhanced 3D video representations, with special emphasis on the improved depth component. This document describes the new method and presents an analysis of six different MVD data sets with different features. Moreover, a modular and flexible system for improving depth maps is introduced. The idea behind it is to use the color video component to extract edges of the scene and to re-shape the depth component according to the edge information. The system essentially describes a framework; hence, it can accommodate changes to specific tasks as long as the overall target is respected. After the improvement process, the MVD data is analyzed again via correlation histograms in order to characterize the depth improvement. The achieved results show that correlation histograms are a good method for analyzing the impact of processing MVD data. It is also confirmed that the presented system is modular and flexible, as it works with three different degrees of change, introducing modifications in depth maps according to the input characteristics. Hence, this system can be used as a framework for depth map improvement. The results show that contours with 1-pixel-width jittering in depth maps have been correctly re-shaped.
    Additionally, constant background and foreground areas of depth maps have also been improved according to the degree of change, attaining better results in terms of temporal consistency. However, future work could address unresolved problems, such as jittering of more than one pixel in width, or make the system more dynamic.
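    The core idea of re-shaping depth along color edges can be shown with a deliberately tiny variant: per image row, find the strongest horizontal color edge and snap the depth step to that column. This toy (`reshape_depth_rows`, a hypothetical name) handles only single, row-wise edges, unlike the thesis's full framework.

```python
import numpy as np

def reshape_depth_rows(color, depth):
    """Toy per-row edge-guided depth re-shaping.

    For each row: locate the column with the largest horizontal color
    gradient, then fill each side of that column with its median depth,
    removing 1-pixel jitter in the depth silhouette.
    """
    out = depth.astype(np.float64).copy()
    grad = np.abs(np.diff(color.astype(np.float64), axis=1))
    for r in range(color.shape[0]):
        e = int(np.argmax(grad[r])) + 1     # column where the color edge sits
        out[r, :e] = np.median(out[r, :e])  # background side
        out[r, e:] = np.median(out[r, e:])  # foreground side
    return out
```

    The medians play the role of the "constant background and foreground areas" in the abstract: jittered pixels on the wrong side of the edge are outvoted and re-assigned.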

    Increasing temporal, structural, and spectral resolution in images using exemplar-based priors

    In the past decade, camera manufacturers have offered smaller form factors, smaller pixel sizes (leading to higher resolution images), and faster processing chips to increase the performance of consumer cameras. However, these conventional approaches have neither capitalized on the spatio-temporal redundancy inherent in images nor adequately provided a solution for finding 3D point correspondences for cameras sampling different bands of the visible spectrum. In this thesis, we pose the following question: given the repetitious nature of image patches, and appropriate camera architectures, can statistical models be used to increase temporal, structural, or spectral resolution? While many techniques have been suggested to tackle individual aspects of this question, the proposed solutions either require prohibitively expensive hardware modifications or rely on overly simplistic assumptions about the geometry of the scene. We propose a two-stage solution to facilitate image reconstruction: 1) design a linear camera system that optically encodes scene information, and 2) recover full scene information using prior models learned from statistics of natural images. By leveraging the tendency of small regions to repeat throughout an image or video, we are able to learn prior models from patches pulled from exemplar images. The quality of this approach is demonstrated for two application domains: using low-speed video cameras for high-speed video acquisition, and multi-spectral fusion using an array of cameras. We also investigate a conventional approach for finding 3D correspondences that enables a generalized assorted array of cameras to operate in multiple modalities, including multi-spectral, high dynamic range, and polarization imaging of dynamic scenes.
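    The exemplar-based prior can be reduced to its simplest form: collect every small patch of an exemplar image into a dictionary, then reconstruct a degraded image by replacing each of its patches with the nearest dictionary entry. This nearest-neighbor sketch (hypothetical names `patch_dictionary`, `restore_with_exemplars`) omits the learned statistical models and optical encoding of the thesis.

```python
import numpy as np

def patch_dictionary(img, p=4):
    """All overlapping p x p patches of an exemplar image, one per row."""
    h, w = img.shape
    return np.array([img[y:y + p, x:x + p].ravel()
                     for y in range(h - p + 1)
                     for x in range(w - p + 1)], dtype=np.float64)

def restore_with_exemplars(degraded, dictionary, p=4):
    """Replace each non-overlapping patch with its nearest exemplar
    patch (L2 distance) -- the exemplar prior in miniature."""
    out = np.empty_like(degraded, dtype=np.float64)
    h, w = degraded.shape
    for y in range(0, h - p + 1, p):
        for x in range(0, w - p + 1, p):
            q = degraded[y:y + p, x:x + p].ravel().astype(np.float64)
            d = ((dictionary - q) ** 2).sum(axis=1)   # distance to every patch
            out[y:y + p, x:x + p] = dictionary[np.argmin(d)].reshape(p, p)
    return out
```

    Because natural-image patches repeat, a modest dictionary already projects noisy patches back onto plausible image content; the thesis builds richer priors on the same redundancy.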

    Three-dimensional media for mobile devices

    This paper aims at providing an overview of the core technologies enabling the delivery of 3-D media to next-generation mobile devices. To succeed in the design of the corresponding system, profound knowledge of the human visual system and the visual cues that form the perception of depth, combined with an understanding of the user requirements for designing the user experience for mobile 3-D media, is required. These aspects are addressed first and related to the critical parts of the generic system within a novel user-centered research framework. Next-generation mobile devices are characterized by their portable 3-D displays, as those are considered critical for enabling a genuine 3-D experience on mobiles. Quality of 3-D content is emphasized as the most important factor for the adoption of the new technology. Quality is characterized through the most typical 3-D-specific visual artifacts on portable 3-D displays and through subjective tests addressing the acceptance of, and satisfaction with, different 3-D video representation, coding, and transmission methods. An emphasis is put on 3-D video broadcast over digital video broadcasting-handheld (DVB-H) in order to illustrate the importance of the joint source-channel optimization of 3-D video for its efficient compression and robust transmission over error-prone channels. The comparative results obtained identify the best coding and transmission approaches and illuminate the interaction between video quality and depth perception, along with the influence of the context of media use. Finally, the paper speculates on the role and place of 3-D multimedia mobile devices in the future Internet continuum, involving the users in co-creation and refinement of rich 3-D media content.

    Metrics for Stereoscopic Image Compression

    Metrics for automatically predicting the compression settings for stereoscopic images, to minimize file size while still maintaining an acceptable level of image quality, are investigated. This research evaluates whether symmetric or asymmetric compression produces the better-quality stereoscopic image. Initially, how Peak Signal to Noise Ratio (PSNR) measures the quality of variously compressed stereoscopic image pairs was investigated. Two trials with human subjects, following the ITU-R BT.500-11 Double Stimulus Continuous Quality Scale (DSCQS), were undertaken to measure the quality of symmetric and asymmetric stereoscopic image compression. Computational models of the Human Visual System (HVS) were then investigated, and a new stereoscopic image quality metric was designed and implemented. The metric point-matches regions of high spatial frequency between the left and right views of the stereo pair and accounts for HVS sensitivity to contrast and luminance changes in these regions. The PSNR results show that symmetric, as opposed to asymmetric, stereo image compression produces significantly better results. The human factors trial suggested that, in general, symmetric compression of stereoscopic images should be used. The new metric, Stereo Band Limited Contrast, has been demonstrated to be a better predictor of human image quality preference than PSNR and can be used to predict a perceptual threshold level for stereoscopic image compression. The threshold is the maximum compression that can be applied without the perceived image quality being altered. Overall, it is concluded that symmetric, as opposed to asymmetric, stereo image encoding should be used for stereoscopic image compression. As PSNR measures of image quality are rightly criticized for correlating poorly with perceived visual quality, the new HVS-based metric was developed. This metric produces a useful threshold that provides a practical starting point for deciding the level of compression to use.
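    The PSNR baseline that the thesis compares against is straightforward to state for a stereo pair: compute PSNR per eye and combine. The sketch below shows the standard definition; `stereo_psnr` and the plain averaging of the two eyes are illustrative choices, not the thesis's Stereo Band Limited Contrast metric.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal to Noise Ratio in dB between two 8-bit-range images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def stereo_psnr(left_ref, left_test, right_ref, right_test):
    """Per-eye PSNR plus their mean, for symmetric vs asymmetric
    compression comparisons: asymmetric coding shows up as a gap
    between the two per-eye values."""
    pl = psnr(left_ref, left_test)
    pr = psnr(right_ref, right_test)
    return pl, pr, 0.5 * (pl + pr)
```

    Averaging the two eyes is exactly what an HVS-based metric tries to improve on, since perceived stereo quality does not degrade symmetrically when only one view is heavily compressed.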