273 research outputs found

    Influence of Chroma Subsampling on Objective Video Quality Assessment for High Resolutions

    This paper deals with the influence of chroma subsampling on video quality measured by objective metrics for the H.264/AVC and H.265/HEVC compression standards. The evaluation is done for eight types of sequences, differing in content, at full HD and ultra HD resolutions. The experimental results showed no impact of chroma subsampling on the measured video quality. According to the results, it can also be said that the H.265/HEVC codec yields better compression efficiency than H.264/AVC, and the difference is more visible at UHD resolution. The difference in quality is larger at lower bitrates; with increasing bitrate, the quality of the H.264/AVC codec approaches that of the H.265/HEVC codec.
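    As a concrete illustration of the operation the abstract studies, 4:2:0 chroma subsampling halves each chroma plane in both dimensions before encoding. A minimal sketch (hypothetical helper name, plain Python lists standing in for a chroma plane; real encoders use proper downsampling filters rather than a box average):

```python
def subsample_420(chroma):
    """4:4:4 -> 4:2:0 on one chroma plane: average each 2x2 block.
    A simple box filter, used here only to illustrate the 4x reduction
    in chroma samples; input dimensions are assumed even."""
    h, w = len(chroma), len(chroma[0])
    return [
        [
            (chroma[2 * r][2 * c] + chroma[2 * r][2 * c + 1]
             + chroma[2 * r + 1][2 * c] + chroma[2 * r + 1][2 * c + 1]) / 4.0
            for c in range(w // 2)
        ]
        for r in range(h // 2)
    ]
```

    The luma plane is left at full resolution, which is one reason objective metrics computed mainly on luma can be insensitive to the subsampling step.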

    Effect of Color Space on High Dynamic Range Video Compression Performance

    High dynamic range (HDR) technology allows for capturing and delivering a greater range of luminance levels compared to traditional video using standard dynamic range (SDR). At the same time, it has brought multiple challenges in content distribution, one of them being video compression. While a significant amount of work has been conducted on this topic, some aspects could still benefit this area. One such aspect is the choice of color space used for coding. In this paper, we evaluate through a subjective study how the performance of HDR video compression is affected by three color spaces: the commonly used Y'CbCr, and the recently introduced ITP (ICtCp) and Ypu'v'. Five video sequences are compressed at four bit rates, selected in a preliminary study, and their quality is assessed using pairwise comparisons. The results of the pairwise comparisons are further analyzed and scaled to obtain quality scores. We found no evidence of ITP improving compression performance over Y'CbCr. We also found that Ypu'v' results in moderately lower performance for some sequences.
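    For reference, the baseline Y'CbCr space compared above is a simple linear transform of non-linear R'G'B'. A minimal sketch using the BT.2020 (non-constant-luminance) luma coefficients, which HDR pipelines commonly pair with wide-gamut content; the choice of coefficients is an assumption here, and real coding chains also quantise and range-limit the result:

```python
def rgb_to_ycbcr_2020(r, g, b):
    """Y'CbCr from normalised non-linear R'G'B' in [0, 1], using the
    BT.2020 luma coefficients (Kr = 0.2627, Kb = 0.0593). Illustration
    only; production pipelines add quantisation and legal-range scaling."""
    y = 0.2627 * r + 0.6780 * g + 0.0593 * b
    cb = (b - y) / 1.8814   # 2 * (1 - Kb)
    cr = (r - y) / 1.4746   # 2 * (1 - Kr)
    return y, cb, cr
```

    The ICtCp transform studied in the paper instead derives its components from an LMS cone-response space after the PQ transfer function, which is what distinguishes it from this classic construction.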

    Challenges and solutions in H.265/HEVC for integrating consumer electronics in professional video systems


    Object Enhancement, Noise Reduction, Conversion and Collection of Spatiotemporal Image Data

    In this report, a variety of cellular dynamics are enhanced and analyzed utilizing various algorithms and filters for contrast enhancement. This report also illustrates the underlying complexities of processing compressed data received from certain types of sensors and their default applications; methods for converting compressed data to the universal uncompressed formats accepted in scientific applications; methods of image and video capture; guidelines for ethical image manipulation; methods of frame extraction; and the analysis and processing of video images. These methods and processes purposely utilize freeware and public-domain software to lower the cost of reproducibility for all.

    Visually lossless coding in HEVC: a high bit depth and 4:4:4 capable JND-based perceptual quantisation technique for HEVC

    Due to the increasing prevalence of high bit depth and YCbCr 4:4:4 video data, it is desirable to develop a JND-based visually lossless coding technique which can account for high bit depth 4:4:4 data in addition to standard 8-bit precision chroma subsampled data. In this paper, we propose a Coding Block (CB)-level JND-based luma and chroma perceptual quantisation technique for HEVC named Pixel-PAQ. Pixel-PAQ exploits both luminance masking and chrominance masking to achieve JND-based visually lossless coding; the proposed method is compatible with high bit depth YCbCr 4:4:4 video data of any resolution. When applied to YCbCr 4:4:4 high bit depth video data, Pixel-PAQ can achieve vast bitrate reductions, of up to 75% (68.6% over four QP data points), compared with a state-of-the-art luma-based JND method for HEVC named IDSQ. Moreover, the participants in the subjective evaluations confirm that visually lossless coding is successfully achieved by Pixel-PAQ (at a PSNR value of 28.04 dB in one test).
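    Pixel-PAQ itself is not reproduced here, but the luminance-masking idea it builds on can be illustrated with the classic piecewise JND threshold of Chou and Li (1995) for 8-bit luma: distortion below the threshold is taken to be invisible, so a perceptual quantiser may coarsen quantisation there. This is a simplified stand-in, not the paper's model:

```python
def luma_jnd(bg):
    """Visibility (JND) threshold as a function of local background luma
    bg in [0, 255], after Chou & Li (1995): tolerance is highest in very
    dark regions, falls to a minimum near mid-grey, then rises slowly
    towards white. Not the Pixel-PAQ model, which also handles chroma
    and high bit depths."""
    if bg <= 127:
        return 17.0 * (1.0 - (bg / 127.0) ** 0.5) + 3.0
    return (3.0 / 128.0) * (bg - 127.0) + 3.0
```

    A JND-based quantiser can then raise the quantisation step wherever the expected coding error stays below this threshold for the local background.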

    Deep Learning Methods for Streaming Image Reconstruction in Fixed-camera Settings

    A streaming video reconstruction system is described and implemented as a convolutional neural network. The system performs combined 2x super-resolution and H.264 artefact removal with a processing speed of about 6 frames per second at 1920×1080 output resolution on current workstation-grade hardware. In 4x super-resolution mode, the system can output 3840×2160 video at a similar rate. The base system provides quality improvements of 0.010–0.025 SSIM over Lanczos filtering. Scene-specific training, in which the system automatically adapts to the current scene viewed by the camera, is shown to achieve up to 0.030 SSIM additional improvement in some scenarios. It is further shown that scene-specific training can provide some improvement even when reconstructing an unfamiliar scene, as long as the camera and capture settings remain the same. Many cameras are permanently mounted and film the same place every day. Imagine if cameras could be trained to remember what they have seen.
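    The quality gains quoted above are differences in the structural similarity index (SSIM). A minimal single-window version of the SSIM formula over flat pixel lists can be sketched as follows (real evaluations apply it over sliding Gaussian windows and average; this global form is an illustration only):

```python
def ssim_global(x, y, L=255.0):
    """Single-window SSIM between two equal-length pixel lists with
    dynamic range L. Standard stabilising constants C1, C2; no sliding
    window, so this is a coarse whole-image score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

    Identical images score 1.0, so an improvement of 0.010–0.025 SSIM is a shift of the reconstruction measurably closer to the reference than the Lanczos baseline.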