
    Adaptive Quantisation in HEVC for Contouring Artefacts Removal in UHD Content

    Contouring artefacts affect the visual experience of certain compressed Ultra High Definition (UHD) sequences characterised by smoothly textured areas and gradual transitions in pixel values. This paper proposes a technique that adjusts the quantisation process at the encoder so that contouring artefacts are avoided. The devised method requires no change at the decoder side and introduces a negligible coding-rate increment (up to 3.4% for the same objective quality). This result compares favourably with the average 11.2% bit-rate penalty introduced by a method in which the quantisation step is reduced in contour-prone areas.
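Contour-prone regions are, roughly, areas that are smooth (low variance) yet not flat (a gentle luma gradient is present). The sketch below flags such blocks so a per-block quantisation adjustment could be applied; the detector and all thresholds are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def contour_prone_blocks(luma, block=16, var_thresh=5.0, grad_thresh=0.05):
    """Flag blocks that are smooth (low variance) but not flat (non-zero
    mean gradient) -- a heuristic stand-in for contour-prone detection.
    Thresholds are invented for illustration."""
    h, w = luma.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            patch = luma[by*block:(by+1)*block,
                         bx*block:(bx+1)*block].astype(np.float64)
            gy, gx = np.gradient(patch)          # per-pixel luma gradient
            grad = np.mean(np.hypot(gx, gy))     # mean gradient magnitude
            flags[by, bx] = patch.var() < var_thresh and grad > grad_thresh
    return flags
```

An encoder could then, for example, lower the quantisation parameter only in flagged blocks, which is essentially the baseline approach the paper improves upon.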

    HDR-ChipQA: No-Reference Quality Assessment on High Dynamic Range Videos

    We present a no-reference video quality model and algorithm that delivers standout performance for High Dynamic Range (HDR) videos, which we call HDR-ChipQA. HDR videos represent wider ranges of luminances, details, and colors than Standard Dynamic Range (SDR) videos. The growing adoption of HDR in massively scaled video networks has driven the need for video quality assessment (VQA) algorithms that better account for distortions on HDR content. In particular, standard VQA models may fail to capture conspicuous distortions at the extreme ends of the dynamic range, because the features that drive them may be dominated by distortions that pervade the mid-ranges of the signal. We introduce a new approach whereby a local expansive nonlinearity emphasizes distortions occurring at the higher and lower ends of the local luma range, allowing for the definition of additional quality-aware features that are computed along a separate path. These features are not HDR-specific, and also improve VQA on SDR video content, albeit to a reduced degree. We show that this preprocessing step significantly boosts the power of distortion-sensitive natural video statistics (NVS) features when used to predict the quality of HDR content. In a similar manner, we separately compute novel wide-gamut color features using the same nonlinear processing steps. We have found that our model significantly outperforms SDR VQA algorithms on the only publicly available, comprehensive HDR database, while also attaining state-of-the-art performance on SDR content.
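The core idea of an expansive nonlinearity can be sketched pointwise: if luma has already been rescaled to [-1, 1] within each local window (that normalization step is omitted here), an exponential map pushes mid-range values toward zero relative to the extremes. The functional form and the delta parameter below are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def expansive_nonlinearity(x, delta=4.0):
    """Pointwise expansive map on locally normalized luma x in [-1, 1].
    Values near +/-1 (the extremes of the local luma range) keep their
    magnitude, while mid-range values are shrunk relative to them;
    delta controls how strongly.  f(0) = 0 and f(+/-1) = +/-1."""
    return np.sign(x) * np.expm1(delta * np.abs(x)) / np.expm1(delta)
```

Features computed on this transformed signal are then dominated by behaviour near the local luma extremes, which is where HDR-specific distortions tend to appear.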

    High dynamic range video compression exploiting luminance masking

    Adaptive Display Intensity Control Using Digital Signal Processor

    One of the major causes of eye strain and related problems while watching video displays is the relative illumination between the screen and its surroundings. This can be mitigated by adjusting the brightness of the screen with respect to the surrounding light. Display systems with human-eye-like features, such as automatic intensity control under varying background luminance conditions, add further challenges to display design. Adaptive intensity control can be achieved by varying the display intensity according to the background intensity level while taking the comfort level of the user into account. In this paper, various parameters important for automatic intensity control design are discussed, and a new methodology based on a look-up table generated from experimental values is devised, by which the display intensity can be adaptively varied while maintaining an adequate contrast ratio in real time. A digital-signal-processor-based adaptive control of display intensity is proposed.
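The look-up-table approach can be sketched as a small calibration table mapping measured ambient illuminance to a target display intensity, with linear interpolation between calibration points. The table values below are invented for illustration; the paper derives its table experimentally.

```python
import numpy as np

# Hypothetical calibration table: ambient illuminance (lux) -> display
# intensity (% of maximum).  These points are illustrative only.
AMBIENT_LUX  = np.array([0.0,  50.0, 200.0, 500.0, 1000.0, 10000.0])
INTENSITY_PC = np.array([20.0, 35.0, 55.0,  70.0,  85.0,   100.0])

def display_intensity(ambient_lux):
    """Look up the target display intensity for the measured ambient
    light, linearly interpolating between calibration points so the
    screen/surround contrast stays within a comfortable range."""
    return np.interp(ambient_lux, AMBIENT_LUX, INTENSITY_PC)
```

On a DSP this reduces to one table lookup and one interpolation per light-sensor reading, which is why the method suits real-time operation.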

    Redistributing the Precision and Content in 3D-LUT-based Inverse Tone-mapping for HDR/WCG Display

    Inverse tone-mapping (ITM) converts SDR (standard dynamic range) footage to HDR/WCG (high dynamic range / wide color gamut) for media production. It is applied not only when remastering legacy SDR footage at the front-end content provider, but also when adapting on-the-air SDR services to user-end HDR displays. The latter requires greater efficiency, so the pre-calculated LUT (look-up table) has become a popular solution. Yet a conventional fixed LUT lacks adaptability, so we follow the research community and combine it with AI. Meanwhile, higher-bit-depth HDR/WCG requires a larger LUT than SDR does, so we draw on traditional ITM for an efficiency-performance trade-off: we use three smaller LUTs, each with a non-uniform packing (precision) denser in the dark, middle, and bright luma range respectively. Since each LUT's result has less error only within its own range, we use a contribution map to combine their best parts into the final result. Guided by this map, the elements (content) of the three LUTs are also redistributed during training. We conduct ablation studies to verify the method's effectiveness, and subjective and objective experiments to show its practicability. Code is available at: https://github.com/AndreGuo/ITMLUT. Comment: Accepted at CVMP2023 (the 20th ACM SIGGRAPH European Conference on Visual Media Production).
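The three-LUT blend can be sketched with 1D LUTs for simplicity (the paper uses 3D LUTs). Each LUT is sampled on its own grid and applied everywhere; a per-pixel contribution map then weights the three outputs. Here the map is a fixed softmax over distance to each LUT's centre luma, which is an assumption: the paper learns its contribution map instead.

```python
import numpy as np

def softmax(z, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def combine_luts(sdr, grids, luts, centers, sharpness=10.0):
    """Apply three 1D LUTs (each sampled on its own, possibly non-uniform
    grid) to normalized SDR luma, then blend the three results with a
    contribution map that favours each LUT near its centre luma."""
    outs = np.stack([np.interp(sdr, g, l) for g, l in zip(grids, luts)])
    dist = np.abs(sdr[None, :] - np.array(centers)[:, None])
    w = softmax(-sharpness * dist)       # weights sum to 1 per pixel
    return (w * outs).sum(axis=0)
```

Non-uniform packing would enter through the grids: the "dark" LUT would place its samples densely near 0, the "bright" LUT densely near 1, and so on.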

    Quality Assessment of Resultant Images after Processing

    Image quality is a characteristic of an image that measures the perceived image degradation, typically compared to an ideal or perfect image. Imaging systems may introduce some amount of distortion or artifacts into the signal, so quality assessment is an important problem. Processing of images involves complicated steps. The aim of any processing is a processed image that is as close as possible to the original; this includes image restoration, enhancement, compression, and more. Whether a reconstructed image has lost fidelity after compression is determined by assessing the quality of the image. Traditional perceptual image quality assessment approaches are based on measuring the errors (signal differences between the distorted and the reference images) and attempt to quantify these errors in a way that simulates human visual error-sensitivity features. A discussion is presented here on assessing the quality of the compressed image and extracting the relevant information from the processed image.
    Keywords: Reference methods, Quality Assessment, Lateral chromatic aberration, Root Mean Squared Error, Peak Signal to Noise Ratio, Signal to Noise Ratio, Human Visual System
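Two of the error-based metrics named in the keywords, RMSE and PSNR, can be sketched directly from their standard definitions:

```python
import numpy as np

def rmse(ref, dist):
    """Root mean squared error between reference and distorted images."""
    return float(np.sqrt(np.mean((ref.astype(np.float64) - dist) ** 2)))

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference.  Returns inf for identical images."""
    mse = np.mean((ref.astype(np.float64) - dist) ** 2)
    return float('inf') if mse == 0 else float(10 * np.log10(peak ** 2 / mse))
```

As the abstract notes, such signal-difference metrics only approximate perceived quality, since they do not model human visual error-sensitivity features.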

    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect of consumer video applications. Today’s state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen sizes, higher luminance, and higher resolution than ever before. However, from a color science perspective, there are clearly opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, the various processes involved in a typical video processing chain in consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for the visual evaluation of color image and consumer video quality. Based on performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for image and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video.
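"Coordinated" color and contrast enhancement means the chroma adjustment is tied to the luma contrast change rather than applied independently. The toy sketch below, which is not the thesis's algorithm, stretches luma about mid-grey and scales chroma with an assumed coupling factor:

```python
import numpy as np

def enhance(ycbcr, contrast=1.2, chroma_gain=None):
    """Toy coordinated enhancement on 8-bit Y'CbCr: stretch luma contrast
    about mid-grey (128), and scale chroma by a gain tied to the contrast
    factor so colourfulness tracks the contrast change.  The coupling
    rule below is an invented assumption for illustration."""
    if chroma_gain is None:
        chroma_gain = 1.0 + 0.5 * (contrast - 1.0)  # assumed coupling
    out = ycbcr.astype(np.float64).copy()
    out[..., 0] = np.clip(128 + contrast * (out[..., 0] - 128), 0, 255)
    out[..., 1:] = np.clip(128 + chroma_gain * (out[..., 1:] - 128), 0, 255)
    return out
```

Without such coupling, a contrast stretch alone tends to make images look desaturated, which is one reason coordinated enhancement is preferred.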