Study of Compression Statistics and Prediction of Rate-Distortion Curves for Video Texture
Encoding textural content remains a challenge for current standardised video
codecs. It is therefore beneficial to understand video textures in terms of
both their spatio-temporal characteristics and their encoding statistics in
order to optimize encoding performance. In this paper, we analyse the
spatio-temporal features and statistics of video textures, explore the
rate-quality performance of different texture types and investigate models to
mathematically describe them. For all considered theoretical models, we employ
machine-learning regression to predict the rate-quality curves based solely on
selected spatio-temporal features extracted from uncompressed content. All
experiments were performed on homogeneous video textures to ensure validity of
the observations. The results of the regression indicate that using an
exponential model we can more accurately predict the expected rate-quality
curve (with a mean Bjøntegaard Delta rate of 0.46% over the considered
dataset) while maintaining low relative complexity. This is expected to be
adopted by in-loop processes for faster encoding decisions, such as
rate-distortion optimisation, adaptive quantization, and partitioning.
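The exponential rate-quality fit the abstract refers to can be sketched as follows. This is an illustrative guess at the model family, not the paper's exact formulation: it assumes a form R(Q) = a·exp(b·Q), which becomes a straight line after taking logarithms, so an ordinary least-squares fit recovers the parameters. All numbers below are synthetic.

```python
import numpy as np

# Illustrative sketch of fitting an exponential rate-quality model.
# Assumed form (not necessarily the paper's exact one): R(Q) = a * exp(b * Q).
# Taking logs gives ln R = ln a + b * Q, so a least-squares line fits it.

qualities = np.array([30.0, 33.0, 36.0, 39.0, 42.0])     # e.g. PSNR in dB
rates = np.array([250.0, 480.0, 950.0, 1900.0, 3800.0])  # measured bitrates (kbps)

b, ln_a = np.polyfit(qualities, np.log(rates), deg=1)
a = float(np.exp(ln_a))

def predict_rate(q):
    """Predicted bitrate (kbps) needed to reach quality q under the fitted model."""
    return a * np.exp(b * q)
```

In the paper's pipeline, the curve parameters are predicted by machine-learning regression from spatio-temporal features of the uncompressed content rather than fitted to measured encodes; the fit above only illustrates the model family.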
Texture Structure Analysis
Texture analysis plays an important role in applications like automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture Structure Analysis is the process of studying the structure present in textures. This structure can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Similar to the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is in proposing an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best performing visual attention model on textures, the performance of the most popular visual attention models to predict the visual saliency on textures is evaluated. Since there is no publicly available database with ground-truth saliency maps on images with exclusive texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The proposed metric is based on the observation that VSM characteristics differ between textures of differing regularity. The proposed texture regularity metric is based on two texture regularity scores, namely a textural similarity score and a spatial distribution score.
In order to evaluate the performance of the proposed regularity metric, a texture regularity database called RegTEX, is built as a part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations and outperforms some of the popular texture regularity metrics in predicting the perceived regularity. The impact of the proposed metric to improve the performance of many image-processing applications is also presented. The influence of the perceived texture regularity on the perceptual quality of synthesized textures is demonstrated through building a synthesized textures database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularities exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed as part of this work. The metric is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures. The perceived granularity is quantified through a new granularity metric that is proposed in this work. It is shown through subjective testing that the proposed quality metric, using just 2 parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms the state-of-the-art full-reference quality metrics on 3 different texture databases. 
Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established.
Dissertation/Thesis, Ph.D. Electrical Engineering, 201
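The thesis's metric combines a textural similarity score and a spatial distribution score computed from the VSM; the exact definitions are not given here. As a loose, single-score stand-in, the idea that VSM characteristics differ with regularity can be sketched with a normalized-entropy score over the saliency map (all names and numbers below are assumptions for illustration, not the thesis's method):

```python
import numpy as np

def spatial_distribution_score(vsm):
    """Toy regularity cue: normalized entropy of a Visual Saliency Map (VSM).
    A regular, repeating texture tends to spread salient mass evenly over the
    map (high entropy); an irregular texture concentrating saliency in a few
    isolated hotspots scores low."""
    p = vsm / vsm.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    return entropy / np.log2(vsm.size)  # normalize to [0, 1]

# Near-uniform saliency (regular texture) vs. a single dominant hotspot.
regular = np.ones((16, 16))
irregular = np.full((16, 16), 1e-3)
irregular[4, 7] = 10.0
```

A uniform map scores 1.0; the hotspot map scores close to 0, matching the intuition that regularity spreads visual attention.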
Efficient Bitrate Ladder Construction for Content-Optimized Adaptive Video Streaming
One of the challenges faced by many video providers is the heterogeneity of
network specifications, user requirements, and content compression performance.
The universal solution of a fixed bitrate ladder is inadequate in ensuring a
high quality of user experience without re-buffering or introducing annoying
compression artifacts. However, a content-tailored solution, based on
extensively encoding across all resolutions and over a wide quality range is
highly expensive in terms of computational, financial, and energy costs.
Inspired by this, we propose an approach that exploits machine learning to
predict a content-optimized bitrate ladder. The method extracts spatio-temporal
features from the uncompressed content, trains machine-learning models to
predict the Pareto front parameters, and, based on that, builds the ladder
within a defined bitrate range. The method has the benefit of significantly
reducing the number of encodes required per sequence. The presented results,
based on 100 HEVC-encoded sequences, demonstrate a reduction in the number of
encodes required when compared to an exhaustive search and an
interpolation-based method, by 89.06% and 61.46%, respectively, at the cost of
an average Bjøntegaard Delta Rate difference of 1.78% compared to the
exhaustive approach. Finally, a hybrid method is introduced that selects either
the proposed or the interpolation-based method depending on the sequence
features. This results in an overall 83.83% reduction of required encodings at
the cost of an average Bjøntegaard Delta Rate difference of 1.26%.
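The ladder-construction step can be sketched as follows. The per-resolution rate-quality parameters below are hypothetical stand-ins for what the paper's regressors would predict from spatio-temporal features, and the quality model quality(r) = a − b·exp(−c·r) is an assumed form, not the paper's:

```python
import math

# Hypothetical per-resolution parameters (a, b, c) for
# quality(r) = a - b * exp(-c * r); in the paper's method these would be
# predicted by machine-learning models from the uncompressed content.
params = {
    "1920x1080": (46.0, 25.0, 0.0006),
    "1280x720":  (43.0, 18.0, 0.0012),
    "960x540":   (38.0, 12.0, 0.0025),
}

def quality(res, rate):
    a, b, c = params[res]
    return a - b * math.exp(-c * rate)

def build_ladder(bitrates):
    """For each target bitrate, pick the resolution whose predicted
    rate-quality curve is highest (an approximation of the Pareto front)."""
    return [(r, max(params, key=lambda res: quality(res, r))) for r in bitrates]

ladder = build_ladder([300, 800, 2000, 5000])
```

With these illustrative parameters, low bitrates select the lower resolution and high bitrates the higher one, reproducing the familiar resolution cross-over of a content-optimized ladder without any extra encodes.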
Bitrate Ladder Prediction Methods for Adaptive Video Streaming: A Review and Benchmark
HTTP adaptive streaming (HAS) has emerged as a widely adopted approach for
over-the-top (OTT) video streaming services, due to its ability to deliver a
seamless streaming experience. A key component of HAS is the bitrate ladder,
which provides the encoding parameters (e.g., bitrate-resolution pairs) to
encode the source video. The representations in the bitrate ladder allow the
client's player to dynamically adjust the quality of the video stream based on
network conditions by selecting the most appropriate representation from the
bitrate ladder. The most straightforward and lowest complexity approach
involves using a fixed bitrate ladder for all videos, consisting of
pre-determined bitrate-resolution pairs known as one-size-fits-all. Conversely,
the most reliable technique relies on intensively encoding all resolutions over
a wide range of bitrates to build the convex hull, thereby optimizing the
bitrate ladder for each specific video. Several techniques have been proposed
to predict content-based ladders without performing a costly exhaustive search
encoding. This paper provides a comprehensive review of various methods,
including both conventional and learning-based approaches. Furthermore, we
conduct a benchmark study focusing exclusively on various learning-based
approaches for predicting content-optimized bitrate ladders across multiple
codec settings. The considered methods are evaluated on our proposed
large-scale dataset, which includes 300 UHD video shots encoded with both
software and hardware implementations of three state-of-the-art codecs
(AVC/H.264, HEVC/H.265, and VVC/H.266) at various bitrate points. Our analysis
provides baseline methods and insights, which will be valuable for future
research in the field of bitrate ladder prediction. The source code of the
proposed benchmark and the dataset will be made publicly available upon
acceptance of the paper.
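The exhaustive-search baseline the review describes, keeping only the best (rate, quality) operating points across resolutions, can be sketched with a simple Pareto filter. The abstracts speak of a convex hull; the dominance filter below is a simpler stand-in that keeps every non-dominated point, and all encodes listed are invented for illustration:

```python
def pareto_front(points):
    """Keep (rate, quality, label) points not dominated by any other point:
    a point is dominated if another achieves >= quality at <= rate,
    strictly better in at least one of the two."""
    front = []
    for r, q, lbl in points:
        dominated = any(
            (r2 <= r and q2 >= q) and (r2 < r or q2 > q)
            for r2, q2, _ in points
        )
        if not dominated:
            front.append((r, q, lbl))
    return sorted(front)

# Invented encodes: (bitrate kbps, quality score, resolution label).
encodes = [
    (500, 32.0, "540p"), (500, 30.0, "720p"),
    (1500, 36.0, "540p"), (1500, 38.0, "720p"),
    (4000, 37.0, "540p"), (4000, 42.0, "1080p"),
]
```

The surviving points trace the best resolution per bitrate region, which is exactly what ladder-prediction methods try to estimate without producing every encode first.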
VMAF-based Bitrate Ladder Estimation for Adaptive Streaming
In HTTP Adaptive Streaming, video content is conventionally encoded by
adapting its spatial resolution and quantization level to best match the
prevailing network state and display characteristics. It is well known that the
traditional solution, of using a fixed bitrate ladder, does not result in the
highest quality of experience for the user. Hence, in this paper, we consider a
content-driven approach for estimating the bitrate ladder, based on
spatio-temporal features extracted from the uncompressed content. The method
implements a content-driven interpolation. It uses the extracted features to
train a machine learning model to infer the curvature points of the Rate-VMAF
curves in order to guide a set of initial encodings. We employ the VMAF quality
metric as a means of perceptually conditioning the estimation. When compared
to the reference ladder produced by exhaustive encoding, the estimated ladder
shares 74.3% of its Rate-VMAF points with the reference. The proposed method
reduces the number of required encodes by 77.4%, at a small average
Bjøntegaard Delta Rate cost of 1.12%.
Blind Quality Assessment for Image Superresolution Using Deep Two-Stream Convolutional Networks
Numerous image superresolution (SR) algorithms have been proposed for
reconstructing high-resolution (HR) images from input images with lower spatial
resolutions. However, effectively evaluating the perceptual quality of SR
images remains a challenging research problem. In this paper, we propose a
no-reference/blind deep neural network-based SR image quality assessor
(DeepSRQ). To learn more discriminative feature representations of various
distorted SR images, the proposed DeepSRQ is a two-stream convolutional network
including two subcomponents for distorted structure and texture SR images.
Different from traditional image distortions, the artifacts of SR images cause
both image structure and texture quality degradation. Therefore, we choose the
two-stream scheme that captures different properties of SR inputs instead of
directly learning features from one image stream. Considering the human visual
system (HVS) characteristics, the structure stream focuses on extracting
features in structural degradations, while the texture stream focuses on the
change in textural distributions. In addition, to augment the training data and
ensure the category balance, we propose a stride-based adaptive cropping
approach for further improvement. Experimental results on three publicly
available SR image quality databases demonstrate the effectiveness and
generalization ability of our proposed DeepSRQ method compared with
state-of-the-art image quality assessment algorithms.
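The stride-based adaptive cropping idea, choosing a stride per image so that differently sized inputs contribute a balanced number of training patches, can be sketched as below. The function and its grid heuristic are assumptions for illustration, not the paper's algorithm:

```python
import math

def adaptive_crops(h, w, patch, n_target):
    """Stride-based adaptive cropping (toy sketch): pick strides so that a
    patch grid over an h x w image yields roughly n_target crops, balancing
    patch counts across images of different sizes."""
    per_axis = max(1, round(math.sqrt(n_target)))  # aim for a square-ish grid
    if per_axis > 1:
        stride_y = max(1, (h - patch) // (per_axis - 1))
        stride_x = max(1, (w - patch) // (per_axis - 1))
    else:
        stride_y, stride_x = h, w
    tops = list(range(0, h - patch + 1, stride_y))
    lefts = list(range(0, w - patch + 1, stride_x))
    return [(t, l) for t in tops for l in lefts]
```

A large image and a small image both yield about n_target patches, so no single source dominates a training category.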