Improved quality block-based low bit rate video coding.
The aim of this research is to develop algorithms for enhancing the subjective quality and coding efficiency of standard block-based video coders. In the past few years, numerous video coding standards based on a motion-compensated block-transform structure have been established, where block-based motion estimation reduces the correlation between consecutive images and a block transform codes the resulting motion-compensated residual images. Due to the use of predictive differential coding and variable-length coding techniques, the output data rate exhibits extreme fluctuations. A rate control algorithm is devised for achieving a stable output data rate. This rate control algorithm, which is essentially a bit-rate estimation algorithm, is then employed in a bit-allocation algorithm for improving the visual quality of the coded images, based on some prior knowledge of the images. Block-based hybrid coders achieve high compression ratios mainly through the motion estimation and compensation stage in the coding process. The conventional bit-allocation strategy for these coders simply assigns to the motion vectors the bits they require and allocates the rest to the residual image. However, at very low bit-rates this strategy is inadequate, as the motion vector bits take up a considerable portion of the total bit-rate. A rate-constrained selection algorithm is presented in which an analysis-by-synthesis approach chooses the best motion vectors in terms of resulting bit rate and image quality. This selection algorithm is then applied to mode selection, and a simple algorithm based on the above-mentioned bit-rate estimation algorithm is developed for the latter to reduce the computational complexity. For very low bit-rate applications, it is well known that block-based coders suffer from blocking artifacts.
A coding mode is presented for reducing these annoying artifacts by coding a down-sampled version of the residual image with a smaller quantisation step size. Its applications to adaptive source/channel coding and to coding fast-changing sequences are examined.
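The rate-constrained motion-vector selection described above can be illustrated as a Lagrangian search: each candidate vector is scored by its reconstruction distortion plus a multiplier times its coding cost, and the cheapest candidate wins. The bit-cost model, SAD distortion proxy, and candidate list below are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of rate-constrained (analysis-by-synthesis) motion-vector
# selection: minimise J = D + lambda * R over candidate vectors.
# The bit model and SAD distortion here are toy stand-ins.

def mv_bits(mv):
    """Toy bit-cost model: larger vector components cost more bits."""
    return sum(2 * abs(c) + 1 for c in mv)

def residual_sad(block, reference, mv):
    """Sum of absolute differences between the block and the
    MV-displaced patch of the reference frame (distortion proxy)."""
    dx, dy = mv
    return sum(abs(block[i][j] - reference[i + dy][j + dx])
               for i in range(len(block)) for j in range(len(block[0])))

def select_mv(block, reference, candidates, lam):
    """Return the candidate with the lowest cost J = SAD + lam * bits."""
    return min(candidates,
               key=lambda mv: residual_sad(block, reference, mv)
                              + lam * mv_bits(mv))
```

At very low rates a large `lam` penalises expensive vectors heavily, which is exactly why the zero vector (or skip mode) is chosen more often there.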
A novel method for subjective picture quality assessment and further studies of HDTV formats
This is the author's accepted manuscript. The final published article is available from the link below. Copyright © IEEE 2008. This paper proposes a novel method for the assessment of picture quality, called the triple stimulus continuous evaluation scale (TSCES), to allow the direct comparison of different HDTV formats. The method uses an upper picture quality anchor and a lower picture quality anchor with defined impairments. The HDTV format under test is evaluated in a subjective comparison with the upper and lower anchors. The method utilizes three displays in a particular vertical arrangement. In an initial series of tests with the novel method, the HDTV formats 1080p/50, 1080i/25, and 720p/50 were compared at various bit-rates and with seven different content types on three identical 1920 × 1080 pixel displays. It was found that the new method provided stable and consistent results. The method was tested with 1080p/50, 1080i/25, and 720p/50 HDTV images that had been coded with the H.264/AVC High profile. The assessment found that the progressive HDTV formats were rated more highly by the assessors than the interlaced HDTV format. A system chain proposal is given for future media production and delivery to take advantage of this outcome. Recommendations for future research conclude the paper.
A two-stage video coding framework with both self-adaptive redundant dictionary and adaptively orthonormalized DCT basis
In this work, we propose a two-stage video coding framework as an extension of our previous one-stage framework in [1]. The two-stage framework consists of two different dictionaries. Specifically, the first stage directly finds the sparse representation of a block with a self-adaptive dictionary consisting of all possible inter-prediction candidates, by solving an L0-norm minimization problem using an improved orthogonal matching pursuit with embedded orthonormalization (eOMP) algorithm; the second stage codes the residual using a DCT dictionary adaptively orthonormalized to the subspace spanned by the first-stage atoms. The transition from the first stage to the second is determined by both stages' quantization step sizes and a threshold. We further propose a complete context-adaptive entropy coder to efficiently code the locations and coefficients of the chosen first-stage atoms. Simulation results show that the proposed coder significantly improves RD performance over our previous one-stage coder. More importantly, the two-stage coder, using a fixed block size and inter-prediction only, outperforms the H.264 coder (x264) and is competitive with the HEVC reference coder (HM) over a large rate range.
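The first stage relies on a matching-pursuit-style sparse solver. As a rough illustration, the sketch below is plain orthogonal matching pursuit (OMP) with least-squares coefficient re-fitting at every step; the paper's eOMP additionally orthonormalizes the chosen atoms as they are selected, which this sketch omits.

```python
import numpy as np

def omp(D, x, k):
    """Greedy sparse coding: select up to k atoms (unit-norm columns of D)
    that best explain x, re-fitting all coefficients by least squares
    after each selection. Returns the support and the final coefficients."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # orthogonal projection step: refit coefficients on the support
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    return support, coeffs
```

With an orthonormal dictionary this recovers the exact sparse coefficients; with the highly coherent inter-prediction dictionary described in the abstract, the orthonormalization embedded in eOMP is what keeps the greedy steps well conditioned.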
On the Effectiveness of Video Recolouring as an Uplink-model Video Coding Technique
For decades, conventional video compression formats have advanced via incremental improvements, with each subsequent standard achieving better rate-distortion (RD) efficiency at the cost of increased encoder complexity compared to its predecessors. Design efforts have been driven by common multimedia use cases such as video-on-demand, teleconferencing, and video streaming, where the most important requirements are low bandwidth and low video playback latency. Meeting these requirements involves the use of computationally expensive block-matching algorithms, which produce excellent compression rates and quick decoding times.
However, emerging use cases such as Wireless Video Sensor Networks, remote surveillance, and mobile video present new technical challenges in video compression. In these scenarios, the video capture and encoding devices are often power-constrained and have limited computational resources available, while the decoder devices have abundant resources and access to a dedicated power source. To address these use cases, codecs must be power-aware and offer a reasonable trade-off between video quality, bitrate, and encoder complexity. Balancing these constraints requires a complete rethinking of video compression technology. The uplink video-coding model represents a new paradigm to address these low-power use cases, providing the ability to redistribute computational complexity by offloading the motion estimation and compensation steps from encoder to decoder. Distributed Video Coding (DVC) follows this uplink model of video codec design, and maintains high-quality video reconstruction through innovative channel coding techniques. The field of DVC is still early in its development, with many open problems waiting to be solved and no defined video compression or distribution standards. Due to the experimental nature of the field, most DVC codecs to date have focused on encoding and decoding the luma plane only, producing grayscale reconstructed videos.
In this thesis, a technique called "video recolouring" is examined as an alternative to DVC. Video recolouring exploits the temporal redundancies between colour planes, reducing video bitrate by removing chroma information from specific frames and then recolouring them at the decoder.
A novel video recolouring algorithm called Motion-Compensated Recolouring (MCR) is proposed, which uses block motion estimation and bi-directional weighted motion compensation to reconstruct chroma planes at the decoder. MCR is used to enhance a conventional base-layer codec, and is shown to reduce bitrate by up to 16% with only a slight decrease in objective quality. MCR also outperforms other video recolouring algorithms in terms of objective video quality, demonstrating up to 2 dB PSNR improvement in some cases.
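The core operation such a recolouring scheme relies on is bi-directional weighted prediction: the chroma of a frame whose colour was dropped is rebuilt by blending motion-compensated chroma from the nearest past and future keyframes, weighted by temporal distance. The sketch below shows only the blending step, with motion compensation elided (the planes are assumed already aligned); the weighting rule and plane sizes are illustrative assumptions, not MCR's exact formulation.

```python
# Hedged sketch of distance-weighted bi-directional chroma blending.
# `past` and `future` are (already motion-compensated) chroma planes from
# the surrounding keyframes; d_past/d_future are frame distances.

def blend_chroma(past, future, d_past, d_future):
    """Weighted average of two chroma planes: the temporally closer
    keyframe receives the larger weight."""
    w_future = d_past / (d_past + d_future)  # far past -> trust future more
    w_past = 1.0 - w_future
    return [[w_past * p + w_future * f for p, f in zip(row_p, row_f)]
            for row_p, row_f in zip(past, future)]
```

For a frame one step after the past keyframe and three steps before the future one, the past plane contributes 75% of each blended sample.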
Complexity adaptation in video encoders for power limited platforms
With the emergence of video services on power-limited platforms, it is necessary to consider both performance-centric and constraint-centric signal processing techniques. Traditionally, video applications are constrained by bandwidth, computational resources, or both. The recent H.264/AVC video compression standard offers significantly improved efficiency and flexibility compared to previous standards, which reduces the emphasis on bandwidth. However, its high computational complexity is a problem for codecs running on power-limited platforms. Therefore, a technique that integrates both complexity and bandwidth issues in a single framework should be considered.
In this thesis we investigate complexity adaptation of a video coder, focusing on managing computational complexity; the approach provides significant complexity savings when applied to recent standards. It consists of three sub-functions specially designed for reducing complexity, together with a framework for applying them: Variable Block Size (VBS) partitioning, fast motion estimation, skip-macroblock detection, and the complexity adaptation framework itself.
Firstly, the VBS partitioning algorithm based on the Walsh-Hadamard Transform (WHT) is presented. The key idea is to segment regions of an image as edges or flat regions, based on the fact that prediction errors are mainly affected by edges. Secondly, a fast motion estimation algorithm called Fast Walsh Boundary Search (FWBS), operating on the VBS-partitioned images, is presented; its results outperform other commonly used fast algorithms. Thirdly, a skip-macroblock detection algorithm is proposed for use prior to motion estimation, by estimating the Discrete Cosine Transform (DCT) coefficients after quantisation. A new orthogonal transform called the S-transform is presented for predicting integer DCT coefficients from Walsh-Hadamard Transform coefficients. Complexity saving is achieved by deciding which macroblocks need to be processed and which can be skipped without processing. Simulation results show that the proposed algorithm achieves significant complexity savings with a negligible loss in rate-distortion performance. Finally, a complexity adaptation framework combining all three techniques is proposed for maximizing the perceptual quality of coded video on a complexity-constrained platform.
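The WHT-based edge/flat classification idea can be illustrated on a 4×4 block: apply a Walsh-Hadamard butterfly to rows and then columns, and compare the AC energy against the total energy (a flat block concentrates its energy in the DC coefficient; an edge spreads it into AC terms). The coefficient ordering and the threshold below are illustrative choices, not the thesis's exact criterion.

```python
# Sketch of WHT-based edge/flat block classification (illustrative).

def wht4(v):
    """Unnormalised 4-point Walsh-Hadamard transform via butterflies."""
    a, b, c, d = v
    s0, s1, d0, d1 = a + b, c + d, a - b, c - d
    return [s0 + s1, s0 - s1, d0 + d1, d0 - d1]

def is_edge_block(block, threshold=0.1):
    """2-D WHT of a 4x4 block; classify as 'edge' when the AC coefficients
    carry more than `threshold` of the total transform energy."""
    rows = [wht4(r) for r in block]                                 # row pass
    cols = [wht4([rows[i][j] for i in range(4)]) for j in range(4)]  # column pass
    energy = sum(c * c for col in cols for c in col)
    dc = cols[0][0]          # DC = sum of all pixels
    ac_energy = energy - dc * dc
    return ac_energy > threshold * energy if energy else False
```

A uniform block yields zero AC energy and is classified as flat, so it can take large partitions and a cheap motion search; a block split into two intensity halves puts half its energy into AC terms and is flagged as an edge.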
Research and developments of Dirac video codec
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. In digital video compression, apart from storage, successful transmission of the compressed video data over bandwidth-limited, error-prone channels is another important issue. To enable a video codec for broadcasting applications, the corresponding coding tools (e.g. error-resilient coding, rate control, etc.) must be implemented. These are normally non-normative parts of a video codec, and hence their specifications are not defined in the standard. Dirac, likewise, was originally optimized for storage purposes only, so several non-normative encoding tools are still required before it can be used in other types of application.
Under the research title "Research and Developments of the Dirac Video Codec", phase I of the project focuses mainly on error-resilient transmission over a noisy channel. The error-resilient coding method used here is a simple, low-complexity scheme that protects the compressed video bitstream of the Dirac encoder over a packet-erasure wired network. The scheme combines source and channel coding: error-resilient source coding is achieved by data partitioning in the wavelet-transformed domain, and channel coding is achieved through either a Rate-Compatible Punctured Convolutional (RCPC) code or a Turbo Code (TC), with unequal error protection between the header plus motion vectors and the data. The scheme is designed mainly for the packet-erasure channel, i.e. targeted at Internet broadcasting applications.
For a bandwidth-limited channel, however, the amount of bits generated by the encoder must also be limited to the available bandwidth, in addition to the error-resilient coding. In the second phase of the project, a rate control algorithm is therefore presented. The algorithm is based on a Quality Factor (QF) optimization method, in which the QF of the encoded video is adapted so that the average bitrate remains constant over each Group of Pictures (GOP). A relation between the bitrate R and the QF, called the Rate-QF (R-QF) model, is derived in order to estimate the optimum QF of the current encoding frame for a given target bitrate R.
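The QF-driven rate control loop can be sketched as: encode a GOP at the current QF, measure the bits produced, and adjust QF so the average rate converges to the target. The multiplicative feedback update below is an illustrative stand-in; the thesis derives an explicit R-QF model to jump directly to the optimum QF rather than iterating.

```python
# Hedged sketch of per-GOP QF adjustment toward a target bitrate.
# Assumes higher QF -> higher quality -> more bits (Dirac's convention).
# Gain and clamping bounds are illustrative parameters.

def update_qf(qf, bits_produced, bits_target,
              gain=0.5, qf_min=1.0, qf_max=10.0):
    """Raise QF when under budget, lower it when over; `gain` damps the
    step so the loop does not oscillate."""
    ratio = bits_target / bits_produced
    new_qf = qf * ratio ** gain
    return min(qf_max, max(qf_min, new_qf))
```

An explicit R-QF model replaces this feedback step with a single inversion: given the target R, solve the fitted model for QF directly, which settles on the budget within one GOP instead of several.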
In some applications, such as video conferencing, real-time encoding and decoding with minimum delay is crucial, but the ability to encode and decode in real time is largely determined by the complexity of the encoder and decoder. The motion estimation process inside the encoder is the most time-consuming stage, so reducing its complexity brings the codec one step closer to real-time operation. As a partial contribution toward real-time applications, the final phase of the research designs and implements a fast Motion Estimation (ME) strategy, combining a modified adaptive search with a semi-hierarchical approach to motion estimation. The same strategy was implemented in both Dirac and H.264 in order to investigate its performance on different codecs. Together with this fast ME strategy, a method called partial cost function calculation is presented to further reduce the computational load of the cost function. The calculation is based on predefined sets of sampling positions, chosen to give maximum coverage over the whole block.
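Partial cost-function calculation can be sketched as follows: accumulate the SAD over a spread-out subset of pixel positions and abort as soon as the running sum already exceeds the best cost found so far, since that candidate can no longer win. The checkerboard sampling pattern below is a simple stand-in for the thesis's coverage-maximising patterns.

```python
# Sketch of partial SAD with early termination for motion search.
# Sampling every other pixel in a checkerboard keeps coverage spread
# across the whole block (illustrative pattern choice).

def partial_sad(block, ref, dx, dy, best_so_far):
    """SAD of `block` against the (dx, dy)-displaced region of `ref`,
    computed over a checkerboard subset and stopped early once the
    running total reaches `best_so_far`."""
    h, w = len(block), len(block[0])
    total = 0
    for i in range(h):
        for j in range(i % 2, w, 2):  # checkerboard subsample
            total += abs(block[i][j] - ref[i + dy][j + dx])
            if total >= best_so_far:
                return best_so_far    # candidate cannot win; stop early
    return total
```

Because most candidates in a motion search are poor matches, the early exit usually fires within the first few samples, which is where the bulk of the complexity saving comes from.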
In summary, this research work has contributed to the error-resilient transmission of compressed bitstreams of the Dirac video encoder over a bandwidth-limited, error-prone channel. In addition, the final phase of the research has partially contributed toward real-time application of the Dirac video codec by implementing a fast motion estimation strategy together with the partial cost function calculation idea. BBC R&D and Brunel University