302 research outputs found
Adaptive Quantization Matrices for HD and UHD Display Resolutions in Scalable HEVC
HEVC contains an option to enable custom quantization matrices (QMs), which are
designed based on the Human Visual System (HVS) and a 2D Contrast Sensitivity
Function (CSF). Visual Display Units (VDUs) capable of displaying video at High
Definition (HD) and Ultra HD (UHD) resolutions are now in widespread use.
Video compression artifacts caused by high levels of quantization, which are
typically inconspicuous at low display resolutions, are clearly visible on HD
and UHD video data and VDUs. The
default QM technique in HEVC does not take into account the video data
resolution, nor does it take into consideration the associated display
resolution of a VDU to determine the appropriate levels of quantization
required to reduce unwanted video compression artifacts. Based on this fact, we
propose a novel, adaptive quantization matrix technique for the HEVC standard,
including Scalable HEVC. Our technique, which is based on a refinement of the
current HVS-CSF QM approach in HEVC, takes into consideration the display
resolution of the target VDU for the purpose of minimizing video compression
artifacts. In SHVC SHM 9.0, and compared with anchors, the proposed technique
yields important quality and coding improvements for the Random Access
configuration, with a maximum of 56.5% luma BD-Rate reductions in the
enhancement layer. Furthermore, compared with the default QMs and the Sony QMs,
our method yields encoding time reductions of 0.75% and 1.19%, respectively.Comment: Data Compression Conference 201
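The CSF-weighted QM design described above can be illustrated with a small sketch (a hypothetical simplification under assumed viewing conditions, not the paper's exact method): weight each DCT frequency by a Mannos-Sakrison-style CSF, map spatial frequency to cycles per degree using a pixels-per-degree parameter that grows with display resolution, and set quantization steps inversely proportional to sensitivity.

```python
import numpy as np

def csf(f):
    # Mannos-Sakrison contrast sensitivity model (f in cycles/degree).
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-((0.114 * f) ** 1.1))

def adaptive_qm(n=8, ppd=30.0, base_step=16.0):
    """Build an n x n quantization matrix whose steps are inversely
    proportional to CSF sensitivity at each DCT frequency.

    ppd: pixels per degree of visual angle; a hypothetical stand-in for
    the display-resolution dependence (higher resolution at a fixed
    viewing distance -> higher ppd -> block frequencies map higher).
    """
    u = np.arange(n)
    # DCT basis (i, j) has i/2 cycles per n pixels horizontally, so
    # its frequency is (i / (2n)) cycles/pixel * ppd cycles/degree.
    fi = (u / (2.0 * n)) * ppd
    f = np.sqrt(fi[:, None] ** 2 + fi[None, :] ** 2)
    s = csf(f)
    s /= s.max()          # normalise sensitivity to [0, 1]
    s[0, 0] = 1.0         # treat DC as fully sensitive (common in CSF-based QMs)
    qm = np.clip(np.round(base_step / np.maximum(s, 0.05)), 1, 255)
    return qm.astype(int)

qm_hd = adaptive_qm(ppd=30.0)    # e.g. HD viewing conditions
qm_uhd = adaptive_qm(ppd=60.0)   # UHD: more pixels per degree
```

At higher ppd the same DCT basis covers finer visual detail, where the CSF falls off, so the UHD matrix applies coarser steps to the highest frequencies than the HD one.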
Low complexity in-loop perceptual video coding
The tradition of broadcast video is today complemented by user-generated content, as portable devices support video coding. Similarly, computing is becoming ubiquitous: the Internet of Things (IoT) incorporates heterogeneous networks to communicate with personal and/or infrastructure devices. In both cases the emphasis is on bandwidth and processor efficiency, which means increasing the signalling options in video encoding. Consequently, assessment of pixel differences applies a uniform cost in order to be processor efficient; in contrast, the Human Visual System (HVS) has non-uniform sensitivity that depends on lighting, edges and textures. Existing perceptual assessments are natively incompatible and processor demanding, making perceptual video coding (PVC) unsuitable for these environments. This research enables existing perceptual assessment at the native level using low-complexity techniques, before producing new pixel-based image quality assessments (IQAs). To manage these IQAs, a framework was developed and implemented in the High Efficiency Video Coding (HEVC) encoder. This resulted in bit redistribution, where more bits and smaller partitioning were allocated to perceptually significant regions. Using an HEVC-optimised processor, the timing increase was < +4% for video streaming and < +6% for video recording applications, one third of that of an existing low-complexity PVC solution. Future work should be directed towards perceptual quantisation, which offers the potential for perceptual coding gain.
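The bit-redistribution idea can be sketched with a deliberately low-complexity, pixel-based assessment (a hypothetical illustration, not the thesis' actual IQA): use per-block luma variance as a cheap texture/masking proxy and assign negative QP offsets (more bits) to smooth, perceptually fragile blocks and positive offsets to textured ones.

```python
import numpy as np

def block_variance(luma, bs=16):
    """Per-block luma variance as a low-complexity activity measure."""
    h, w = luma.shape
    v = luma[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
    return v.transpose(0, 2, 1, 3).reshape(h // bs, w // bs, -1).var(axis=2)

def qp_offsets(luma, bs=16, max_delta=3):
    """Map block activity rank to QP offsets in [-max_delta, +max_delta].

    Smooth blocks show quantization noise readily, so they receive
    negative offsets (more bits); high-activity textures mask
    distortion and can be coded more coarsely.
    """
    act = block_variance(luma, bs)
    rank = act.argsort(axis=None).argsort(axis=None).reshape(act.shape)
    norm = rank / max(rank.size - 1, 1)          # 0 = smoothest block
    return np.round((norm - 0.5) * 2 * max_delta).astype(int)

# Left half: flat region; right half: dense texture.
rng = np.random.default_rng(0)
frame = np.concatenate([np.full((64, 32), 128.0),
                        rng.uniform(0, 255, (64, 32))], axis=1)
off = qp_offsets(frame)
```

The rank-based mapping keeps the average offset near zero, so bits are redistributed rather than added, which matches the "bit-redistribution" behaviour described above.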
Analysis of the perceptual quality performance of different HEVC coding tools
Each new video coding standard includes encoding techniques that aim to improve on the performance and quality of the previous standards. During the development of these techniques, PSNR was used as the main distortion metric. However, the PSNR metric does not account for the subjectivity of the human visual system, so the performance of some coding tools is questionable from a perceptual point of view. To explore this further, we have carried out a detailed study of the perceptual sensitivity of different HEVC video coding tools. To perform this study, we used several popular objective quality assessment metrics to measure the perceptual response of every single coding tool. The conclusions of this work will help to determine the set of HEVC coding tools that provides, in general, the best perceptual response.
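Why PSNR and perceptual metrics can disagree is easy to demonstrate (a toy sketch, not the study's methodology; the single-window SSIM below is a crude stand-in for the real windowed metric): two distortions with near-identical PSNR can receive different structure-aware scores.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def global_ssim(a, b, peak=255.0):
    """Single-window (global) SSIM: a simplified stand-in for the
    windowed metric, sufficient to show structure-awareness."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

rng = np.random.default_rng(1)
ref = np.tile(np.linspace(0, 255, 64), (64, 1))    # smooth gradient image
noise = ref + rng.normal(0, 10, ref.shape)         # structure-destroying noise
shift = ref + 10.0                                 # structure-preserving shift

p_noise, p_shift = psnr(ref, noise), psnr(ref, shift)      # near-identical
s_noise, s_shift = global_ssim(ref, noise), global_ssim(ref, shift)
```

Both distortions have an RMS error of about 10, hence nearly equal PSNR, yet the structure term rates the uniform brightness shift higher than the noise, which is the kind of divergence the study above probes tool by tool.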
PEA265: Perceptual Assessment of Video Compression Artifacts
The most widely used video encoders share a common hybrid coding framework
that includes block-based motion estimation/compensation and block-based
transform coding. Despite their high coding efficiency, the encoded videos
often exhibit visually annoying artifacts, denoted as Perceivable Encoding
Artifacts (PEAs), which significantly degrade the visual Quality-of-Experience
(QoE) of end users. To monitor and improve visual QoE, it is crucial to develop
subjective and objective measures that can identify and quantify various types
of PEAs. In this work, we make the first attempt to build a large-scale
subject-labelled database composed of H.265/HEVC compressed videos containing
various PEAs. The database, namely the PEA265 database, includes 4 types of
spatial PEAs (i.e. blurring, blocking, ringing and color bleeding) and 2 types
of temporal PEAs (i.e. flickering and floating). Each containing at least
60,000 image or video patches with positive and negative labels. To objectively
identify these PEAs, we train Convolutional Neural Networks (CNNs) using the
PEA265 database. It appears that the state-of-the-art ResNeXt is capable of
identifying each type of PEA with high accuracy. Furthermore, we define PEA
pattern and PEA intensity measures to quantify the PEA levels of compressed
video sequences. We believe that the PEA265 database and our findings will
benefit the
future development of video quality assessment methods and perceptually
motivated video encoders.
Comment: 10 pages, 15 figures, 4 tables
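Objective identification of one of the spatial PEAs above can be sketched without a CNN (a hypothetical classical baseline, not the paper's ResNeXt approach): blocking artifacts show up as gradient discontinuities aligned with the 8x8 coding grid, which a simple boundary-to-interior gradient ratio exposes.

```python
import numpy as np

def blockiness(luma, bs=8):
    """No-reference blocking measure: ratio of the mean absolute
    gradient across vertical block borders to the mean elsewhere.
    Values well above 1 indicate visible blocking artifacts."""
    dh = np.abs(np.diff(luma, axis=1))   # horizontal pixel gradients
    edge = dh[:, bs - 1::bs]             # gradients at block borders
    mask = np.ones(dh.shape[1], bool)
    mask[bs - 1::bs] = False
    inner = dh[:, mask]                  # gradients inside blocks
    return edge.mean() / (inner.mean() + 1e-9)

# Synthetic example: a smooth ramp vs. the same ramp flattened per
# 8-column band, mimicking coarse per-block quantization.
ramp = np.tile(np.linspace(0, 255, 64), (64, 1))
blocky = ramp.copy()
for j in range(0, 64, 8):
    blocky[:, j:j + 8] = ramp[:, j:j + 8].mean(axis=1, keepdims=True)
```

A smooth image scores near 1 on this measure, while the block-flattened version scores far higher; a learned classifier like the one described above generalises this idea to all six PEA types.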
Predictive Coding For Animation-Based Video Compression
We address the problem of efficiently compressing video for conferencing-type
applications. We build on recent approaches based on image animation, which can
achieve good reconstruction quality at very low bitrate by representing face
motions with a compact set of sparse keypoints. However, these methods encode
video in a frame-by-frame fashion, i.e. each frame is reconstructed from a
reference frame, which limits the reconstruction quality when more bandwidth
is available. Instead, we propose a predictive coding scheme which uses image
animation as a predictor, and codes the residual with respect to the actual
target frame. The residuals can in turn be coded in a predictive manner, thus
efficiently removing temporal dependencies. Our experiments indicate a
significant bitrate gain, in excess of 70% compared to the HEVC video standard
and over 30% compared to VVC, on a dataset of talking-head videos.
Comment: Accepted paper: ICIP 202
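The two-level predictive scheme above can be sketched in a few lines (a toy sketch: the learned animation model is replaced here by a trivial reference-frame predictor, and the quantizer is a plain scalar one): each frame is predicted, only the quantized residual is coded, and residuals are themselves coded as differences from the previous reconstructed residual.

```python
import numpy as np

def quantize(x, step=8.0):
    # Toy scalar quantizer standing in for a real residual codec.
    return np.round(x / step) * step

def code_sequence(frames, step=8.0):
    """Predictive coding of residuals. `pred` stands in for the image
    animation predictor; here it is simply the reference frame."""
    ref = frames[0]
    recon, prev_res = [ref], None
    for f in frames[1:]:
        pred = ref                    # animation model would warp ref toward f
        res = f - pred
        if prev_res is None:
            coded = quantize(res, step)                  # intra-coded residual
        else:
            # Temporal prediction of residuals: code only the change
            # from the previously reconstructed residual.
            coded = prev_res + quantize(res - prev_res, step)
        recon.append(pred + coded)
        prev_res = coded
    return recon

rng = np.random.default_rng(2)
base = rng.uniform(0, 255, (32, 32))
frames = [base + t for t in range(5)]   # slowly varying sequence
recon = code_sequence(frames)
```

Because consecutive residuals are highly correlated in slowly varying content, the residual-of-residual signal is small and cheap to code, while the reconstruction error stays bounded by half the quantizer step.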