
    Effect of Color Space on High Dynamic Range Video Compression Performance

    High dynamic range (HDR) technology allows for capturing and delivering a greater range of luminance levels than traditional video using standard dynamic range (SDR). At the same time, it has brought multiple challenges in content distribution, one of them being video compression. While a significant amount of work has been conducted on this topic, some aspects remain underexplored. One such aspect is the choice of color space used for coding. In this paper, we evaluate through a subjective study how the performance of HDR video compression is affected by three color spaces: the commonly used Y'CbCr, and the recently introduced ITP (ICtCp) and Ypu'v'. Five video sequences are compressed at four bit rates, selected in a preliminary study, and their quality is assessed using pairwise comparisons. The pairwise-comparison results are further analyzed and scaled to obtain quality scores. We found no evidence of ITP improving compression performance over Y'CbCr. We also found that Ypu'v' results in moderately lower performance for some sequences.
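The ICtCp (ITP) representation evaluated above is standardized in ITU-R BT.2100. As context, a minimal sketch of the conversion from linear BT.2020 RGB to ICtCp via the PQ non-linearity might look as follows; the coefficients are the BT.2100 integer values, while the function names are our own.

```python
# Sketch of the BT.2100 RGB -> ICtCp conversion (PQ variant).
# Input: linear BT.2020 RGB, normalized so that 1.0 = 10,000 cd/m^2.

def pq_oetf(y):
    """SMPTE ST 2084 (PQ) inverse EOTF: linear light -> non-linear signal."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

def rgb_to_ictcp(r, g, b):
    # RGB -> LMS (cone-like responses); BT.2100 coefficients over 4096
    l = (1688 * r + 2146 * g + 262 * b) / 4096
    m = (683 * r + 2951 * g + 462 * b) / 4096
    s = (99 * r + 309 * g + 3688 * b) / 4096
    # Apply the PQ non-linearity to each LMS channel
    lp, mp, sp = pq_oetf(l), pq_oetf(m), pq_oetf(s)
    # L'M'S' -> ICtCp: intensity plus two chroma axes
    i = 0.5 * lp + 0.5 * mp
    ct = (6610 * lp - 13613 * mp + 7003 * sp) / 4096
    cp = (17933 * lp - 17390 * mp - 543 * sp) / 4096
    return i, ct, cp
```

For achromatic input (R = G = B) the three LMS responses coincide, so Ct and Cp collapse to zero and I reduces to the PQ-encoded luminance.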

    Uniform Color Space-Based High Dynamic Range Video Compression

    Recently, there has been significant progress in the research and development of high dynamic range (HDR) video technology, and state-of-the-art video pipelines are able to offer higher bit-depth support to capture, store, encode, and display HDR video content. In this paper, we introduce a novel HDR video compression algorithm, which uses a perceptually uniform color opponent space, a novel perceptual transfer function to encode the dynamic range of the scene, and a novel error minimization scheme for accurate chroma reproduction. The proposed algorithm was objectively and subjectively evaluated against four state-of-the-art algorithms. The objective evaluation was conducted across a set of 39 HDR video sequences, using the latest x265 10-bit video codec along with several perceptual and structural quality assessment metrics at 11 different quality levels. Furthermore, a rating-based subjective evaluation (n = 40) was conducted with six sequences at two different output bitrates. Results suggest that the proposed algorithm exhibits the lowest coding error amongst the five algorithms evaluated. Additionally, the rate-distortion characteristics suggest that the proposed algorithm outperforms the existing state-of-the-art at bitrates ≥ 0.4 bits/pixel.
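The 0.4 bits/pixel figure above normalizes bitrate by resolution and frame rate: bits per pixel = bitrate / (width × height × fps). A small illustration (the example stream parameters are our own, not from the paper):

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average number of coded bits spent on each pixel per frame."""
    return bitrate_bps / (width * height * fps)

# e.g. a 1080p25 stream at 20 Mbit/s lands just below the 0.4 bpp mark
bpp = bits_per_pixel(20_000_000, 1920, 1080, 25)
```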

    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect of consumer video applications. Today’s state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen size, higher luminance, and higher resolution than ever before. However, from a color science perspective, there are clear opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, the various processes involved in a typical video processing chain for consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for performing visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for image and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video.

    Novel VLSI Architecture for Quantization and Variable Length Coding for H-264/AVC Video Compression Standard

    Integrated multimedia systems process text, graphics, and other discrete media such as digital audio and video streams. In an uncompressed state, graphics, audio, and video data, especially moving pictures, require large transmission and storage capacities, which can be very expensive. Hence, video compression has become a key component of any multimedia system or application. The ITU (International Telecommunications Union) and MPEG (Moving Picture Experts Group) combined efforts to put together the next-generation video compression standard, H.264/MPEG-4 Part 10/AVC, which was finalized in 2003. H.264/AVC uses significantly improved and computationally intensive compression techniques to maximize performance. H.264/AVC-compliant encoders achieve the same reproduction quality as encoders compliant with the previous standards while requiring 60% or less of the bit rate [2]. This thesis aims at designing two basic blocks of an ASIC capable of performing H.264 video compression. These two blocks, the Quantizer and the Entropy Encoder, implement the Baseline Profile of the H.264/AVC standard. The architecture is implemented in Register Transfer Level HDL and synthesized with Synopsys Design Compiler using TSMC 0.25 µm technology, giving an estimate of the hardware requirements of a real-time implementation. The quantizer block is capable of running at 309 MHz and has a total area of 785K gates with a power requirement of 88.59 mW. The entropy encoder unit is capable of running at 250 MHz and has a total area of 49K gates with a power requirement of 2.68 mW. The high speed achieved in this thesis indicates that the two blocks, Quantizer and Entropy Encoder, can be used as IP embedded in HDTV systems.
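The forward quantizer in H.264 avoids division by folding the quantization step into an integer multiply-and-shift, with the step size doubling for every increase of 6 in QP. A rough behavioral sketch of the scalar quantization of one transform coefficient (shown here only for the (0,0) scaling position, using the standard multiplication factors; not the thesis's hardware design) might look like:

```python
# H.264-style forward quantization, software sketch.
# Multiplication factors MF for matrix position (0,0), indexed by QP % 6.
MF00 = [13107, 11916, 10082, 9362, 8192, 7282]

def quantize(w, qp, intra=True):
    """Quantize one transform coefficient W at quantization parameter QP.

    |Z| = (|W| * MF + f) >> qbits, where qbits grows by 1 every 6 QP
    steps (so the effective step size doubles), and f is the rounding
    offset (larger for intra-coded blocks).
    """
    qbits = 15 + qp // 6
    f = (1 << qbits) // 3 if intra else (1 << qbits) // 6
    sign = -1 if w < 0 else 1
    return sign * ((abs(w) * MF00[qp % 6] + f) >> qbits)
```

Raising QP by 6 doubles the quantization step and so roughly halves the output level: quantize(100, 0) gives 40, while quantize(100, 6) gives 20.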

    Gamut extension algorithm development and evaluation for the mapping of standard image content to wide-gamut displays

    Wide-gamut display technology has provided an excellent opportunity to produce visually pleasing images, more so than in the past. However, several studies, including Laird and Heynderick, 2008, have shown that linearly mapping standard sRGB content to the gamut boundary of a given wide-gamut display may not yield optimal results. Therefore, several algorithms, including both linear and sigmoidal expansion algorithms, were developed and evaluated for observer preference, in an effort to define a single, versatile gamut expansion algorithm (GEA) that can be applied to current display technology and produce the most preferable images for observers. The outcome provided preference results from two displays, both of which showed large scene dependencies. However, the sigmoidal GEAs (SGEAs) were competitive with the linear GEAs (LGEAs) and in many cases resulted in more pleasing reproductions. The SGEAs provide an excellent baseline which, with minor improvements, could be key to producing more impressive images on a wide-gamut display.
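As an illustration of the two families of algorithms compared above, gamut expansion can scale a chroma coordinate either linearly (with clipping at the gamut boundary) or through a sigmoidal curve that eases the expansion near neutral colors and near the boundary. The following sketch is our own simplified formulation on a normalized chroma value in [0, 1], not the algorithms evaluated in the study:

```python
import math

def linear_gea(c, k=1.3):
    """Linear gamut expansion: scale chroma by k, clip at the boundary."""
    return min(k * c, 1.0)

def sigmoidal_gea(c, gain=6.0):
    """Sigmoidal gamut expansion: an S-shaped remap of [0, 1] onto itself
    that expands mid-range chroma while easing into 0 and 1."""
    lo = 1.0 / (1.0 + math.exp(gain * 0.5))    # raw sigmoid value at c = 0
    hi = 1.0 / (1.0 + math.exp(-gain * 0.5))   # raw sigmoid value at c = 1
    s = 1.0 / (1.0 + math.exp(-gain * (c - 0.5)))
    return (s - lo) / (hi - lo)                # renormalize to [0, 1]
```

The renormalization pins neutral (c = 0) and the gamut boundary (c = 1) in place, so only intermediate chroma is boosted; this is the property that makes sigmoidal expansion gentler than a hard-clipped linear scale.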