A Simple and High Performing Rate Control Initialization Method for H.264 AVC Coding Based on Motion Vector Map and Spatial Complexity at Low Bitrate
The temporal complexity of a video sequence can be characterized by its motion vector map, which consists of the motion vectors of each macroblock (MB). To obtain an optimal initial QP (quantization parameter) for video sequences with differing spatial and temporal complexities, this paper proposes a simple, high-performance method that determines the initial QP for a given target bit rate based on the motion vector map and the sequence's complexity. The proposed algorithm produces reconstructed video of outstanding and stable quality. For any video sequence, the initial QP can be determined from lookup matrices indexed by target bit rate and mapped spatial complexity. Experimental results show that the proposed algorithm delivers better objective and subjective performance than conventional determination methods.
Application of a Bi-Geometric Transparent Composite Model to HEVC: Residual Data Modelling and Rate Control
Among the various transforms, the discrete cosine transform (DCT) is the most widely used in multimedia compression and appears in many image and video coding standards. Throughout the development of image and video compression, much attention has been paid to the statistical distribution of DCT coefficients, which informs the design of compression techniques such as quantization, entropy coding and rate control.
Recently, a bi-geometric transparent composite model (BGTCM) has been developed to model the distribution of DCT coefficients with both simplicity and accuracy. It has been reported that for DCT coefficients of original images, as used in image coding, a transparent composite model (TCM) fits better than a Laplacian distribution.
In video compression standards such as H.264/AVC, the DCT is applied, with different transform sizes, to residual images obtained after prediction. Moreover, in High Efficiency Video Coding (HEVC), the newest video coding standard, discrete sine transform (DST) and transform skip (TS) techniques may also be applied to residual data in small blocks, in addition to the DCT as the main transform tool. As a result, the distribution of transformed residual data differs from that of transformed original image data.
In this thesis, the distribution of coefficients, including those from DCT, DST and TS blocks, is analysed using the BGTCM. Specifically, the distribution of all coefficients in a whole frame is examined first. Second, in HEVC, entropy coding operates on a new unit, the 4×4 coefficient group (CG), in which quantized coefficients are encoded with context models selected by their scan indices within each CG. To mimic this encoding process, coefficients with the same scan index across different CGs are grouped into a set, and the distribution of each set is analysed. Our results show that the BGTCM fits better than other widely used distributions, such as the Laplacian and Cauchy distributions, under both χ² and KL-divergence tests.
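The per-scan-index grouping described above can be sketched directly. This is a minimal illustration with dummy data: each CG is assumed to be a list of its 16 quantized coefficients already in scan order.

```python
# Group quantized coefficients by scan index across 4x4 coefficient groups (CGs).
# Each CG is a list of 16 coefficients in scan order (dummy data below).

def group_by_scan_index(cgs):
    """Return 16 sets: sets[i] collects the i-th scanned coefficient of every CG."""
    sets = [[] for _ in range(16)]
    for cg in cgs:
        for i, coeff in enumerate(cg):
            sets[i].append(coeff)
    return sets

cgs = [list(range(k, k + 16)) for k in (0, 100, 200)]  # three dummy CGs
sets = group_by_scan_index(cgs)
print(sets[0])   # first-scanned coefficient of each CG -> [0, 100, 200]
```

Each of the 16 resulting sets can then be fitted separately, mirroring the context selection that HEVC entropy coding performs per scan position.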
Furthermore, unlike the Laplacian and Cauchy distributions, the BGTCM can be used to derive rate-quantization (R-Q) and distortion-quantization (D-Q) models without approximate expressions. R-Q and D-Q models based on the BGTCM reflect the actual coefficient distribution, which is important for rate control. In video coding, rate control uses these two models to choose a suitable quantization parameter without multi-pass encoding, maintaining coding efficiency while generating the rate needed to satisfy the rate requirement. In this thesis, rate control in HEVC is revised based on the BGTCM, yielding a clear increase in coding efficiency and a reduction in rate fluctuation, measured as rate variance among frames, under a constant-bit-rate requirement.
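How an analytic R-Q model is used inside rate control can be sketched generically: given a monotone model R(q) fitted to the coefficient statistics, the encoder inverts it to find the quantization step that meets a frame's bit budget. The power-law model below is a stand-in for illustration, not the BGTCM-derived expression from the thesis.

```python
# Invert a monotone R-Q model by bisection to hit a target bit budget.
# The power-law form and its constants are illustrative placeholders.

def rate_model(q, alpha=1.0e6, beta=1.2):
    """Illustrative R-Q model: predicted bits as a decreasing function of step q."""
    return alpha * q ** (-beta)

def solve_q(target_bits, lo=1.0, hi=104.0, iters=60):
    """Bisection on the monotone R-Q model to meet target_bits."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if rate_model(mid) > target_bits:
            lo = mid          # too many bits -> need coarser quantization
        else:
            hi = mid
    return 0.5 * (lo + hi)

q = solve_q(50_000)
print(round(rate_model(q)))   # ~50000 bits at the solved step
```

The advantage the thesis claims for the BGTCM is precisely that this R(q) (and the matching D(q)) follows from the fitted distribution in closed form, so no approximation step sits between the model and the solver.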
Algorithms & implementation of advanced video coding standards
Advanced video coding standards are widely deployed in numerous products, such as broadcast, video conferencing, mobile television and Blu-ray Disc. New compression techniques are gradually added to video coding standards so that a 50% compression-rate reduction is achievable roughly every five years. However, this trend has also brought problems: dramatically increased computational complexity, multiple co-existing standards and steadily growing development time. To address these problems, this thesis investigates efficient algorithms for the latest video coding standard, H.264/AVC. Two aspects of the H.264/AVC standard are examined: (1) speeding up intra4x4 prediction with a parallel architecture, and (2) applying an efficient rate control algorithm, based on a deviation measure, to intra frames. Another aim of this thesis is low-complexity algorithms for an MPEG-2 to H.264/AVC transcoder. Three main mapping algorithms and a computational-complexity-reduction algorithm are studied: motion vector mapping, block mapping, field-frame mapping and an efficient mode-ranking algorithm. Finally, a new video coding framework methodology for reducing development time is examined. This thesis explores the implementation of the MPEG-4 Simple Profile with the RVC framework and solves the key problem of automatically generating variable-length decoder tables. Moreover, another important video coding standard, DV/DVCPRO, is modelled with the RVC framework; consequently, besides the available MPEG-4 Simple Profile and the China audio/video standard, a new member is added to the RVC framework family. The research presented in this thesis targets algorithms and implementations of video coding standards; within this broad topic, three main problems are investigated, and the results show that the presented methodologies are efficient and encouraging.
Análise do HEVC escalável: desempenho e controlo de débito (Analysis of scalable HEVC: performance and rate control)
Master's in Electronic Engineering and Telecommunications. This dissertation provides a study of the High Efficiency Video Coding (HEVC) standard and its scalable extension, SHVC. SHVC performs better when encoding several layers simultaneously than an HEVC encoder does in a simulcast configuration. Both reference encoders, for the base layer and for the enhancement layer, use the same rate control model, the R-λ model, which was optimized for HEVC; no optimal bitrate partitioning amongst layers has so far been proposed for the scalable HEVC (SHVC) test model (SHM 8). We derive a new R-λ model suited to the enhancement layer in the spatial-scalability case, which led to a BD-rate gain of 1.81% and a BD-PSNR gain of 0.025 relative to the rate-distortion model in the SHM of SHVC. Nevertheless, we also show in this dissertation that the proposed R-λ model should be used neither in the base layer of SHVC nor in HEVC.
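The R-λ rate-control mapping that both reference encoders share can be sketched as follows: λ is modelled as a power function of bits per pixel, and the QP is then derived from λ. The constants below follow commonly cited HM defaults; treat them, and the example stream parameters, as illustrative rather than as the dissertation's tuned values.

```python
import math

# Sketch of the HEVC R-lambda rate-control mapping: bits-per-pixel -> lambda -> QP.
# alpha/beta and the QP formula constants are commonly cited HM defaults (illustrative).

def bpp(target_bps, fps, width, height):
    """Bits per pixel for a given target bitrate and video format."""
    return target_bps / (fps * width * height)

def lam_from_bpp(b, alpha=3.2003, beta=-1.367):
    """Hyperbolic R-lambda model: lambda = alpha * bpp^beta."""
    return alpha * b ** beta

def qp_from_lambda(lam):
    """Map lambda to a QP via the logarithmic relation used in HM."""
    return round(4.2005 * math.log(lam) + 13.7122)

b = bpp(1_000_000, 30, 1280, 720)   # 1 Mbps, 720p30
print(qp_from_lambda(lam_from_bpp(b)))  # -> 38 with these constants
```

The dissertation's contribution is to refit this model for the enhancement layer, where inter-layer prediction changes the R-λ statistics relative to single-layer HEVC.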
Research and developments of Dirac video codec
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. In digital video compression, apart from storage, successful transmission of the compressed video data over bandwidth-limited, error-prone channels is another important issue. To enable a video codec for broadcasting applications, the corresponding coding tools (e.g. error-resilient coding, rate control, etc.) must be implemented. These are normally non-normative parts of a video codec, so their specifications are not defined in the standard. In Dirac too, the original codec is optimized for storage only, and several non-normative encoding tools are still required before it can be used in other types of application.
Under the research title "Research and Developments of the Dirac Video Codec", phase I of the project focuses mainly on error-resilient transmission over a noisy channel. The error-resilient coding method used here is a simple, low-complexity scheme that provides error-resilient transmission of the compressed bitstream of the Dirac video encoder over a packet-erasure wired network. The scheme combines source and channel coding: error-resilient source coding is achieved by data partitioning in the wavelet-transformed domain, and channel coding by applying either a Rate-Compatible Punctured Convolutional (RCPC) code or a Turbo Code (TC), with unequal error protection between the header plus motion vectors and the data. The scheme is designed mainly for the packet-erasure channel, i.e. targeted at Internet broadcasting applications.
For a bandwidth-limited channel, however, the encoder must also limit the number of bits it generates to match the available bandwidth, in addition to error-resilient coding. So, in the second phase of the project, a rate control algorithm is presented. The algorithm is based on a Quality Factor (QF) optimization method in which the QF of the encoded video adapts so as to achieve an average bitrate that is constant over each Group of Pictures (GOP). A relation between the bitrate R and the QF, called the Rate-QF (R-QF) model, is derived to estimate the optimum QF of the current frame for a given target bitrate R.
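One plausible way to realise such an R-QF scheme is to fit the observed (QF, bits) pairs of recent frames and invert the fit for the next frame's target. The log-linear form below, and the sample numbers, are assumptions for illustration only, not the thesis's derived model; larger QF is taken to mean higher quality and hence more bits, as in Dirac.

```python
import math

# Illustrative R-QF rate control: fit log(R) = a + b*QF to observed frames,
# then invert the fit to pick the QF expected to hit the next target.

def fit_r_qf(samples):
    """Least-squares fit of log(R) = a + b*QF from (qf, bits) samples."""
    n = len(samples)
    sx = sum(qf for qf, _ in samples)
    sy = sum(math.log(r) for _, r in samples)
    sxx = sum(qf * qf for qf, _ in samples)
    sxy = sum(qf * math.log(r) for qf, r in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def qf_for_target(a, b, target_bits):
    """Invert the fitted model for the QF that should produce target_bits."""
    return (math.log(target_bits) - a) / b

samples = [(20, 20_000), (40, 60_000), (60, 180_000)]  # dummy (QF, bits) history
a, b = fit_r_qf(samples)
print(round(qf_for_target(a, b, 40_000)))  # -> 33 for this dummy history
```

Re-fitting over a sliding window of frames lets the model track scene changes while keeping the GOP-average bitrate near the target.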
In some applications, such as video conferencing, real-time encoding and decoding with minimum delay is crucial, but the ability to encode and decode in real time is largely determined by the complexity of the encoder and decoder. The motion estimation process is the most time-consuming stage of the encoder, so reducing its complexity brings the codec one step closer to real-time operation. As a partial contribution toward real-time applications, the final phase of the research designs and implements a fast Motion Estimation (ME) strategy: a combination of a modified adaptive search and a semi-hierarchical approach to motion estimation. The same strategy was implemented in both Dirac and H.264 in order to investigate its performance on different codecs. Together with this fast ME strategy, a method called partial cost function calculation is presented to further reduce the computational load of the cost function. The calculation is based on pre-defined pixel patterns chosen to give as much coverage as possible over the whole block.
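The partial cost function idea can be sketched with a subsampled SAD. The checkerboard pattern below is one plausible choice of pre-defined pattern that spreads evenly over the block; the thesis's actual patterns are not specified here.

```python
# Partial SAD: evaluate the cost only on a checkerboard subset of pixels,
# roughly halving the work while still covering the whole block area.

def partial_sad(cur, ref, step=2):
    """SAD over a checkerboard subset of an NxN block (lists of rows)."""
    n = len(cur)
    total = 0
    for y in range(n):
        for x in range((y % step), n, step):   # offset rows -> checkerboard
            total += abs(cur[y][x] - ref[y][x])
    return total

# Dummy 8x8 blocks whose pixels differ by exactly 1 everywhere.
cur = [[(x + y) % 16 for x in range(8)] for y in range(8)]
ref = [[(x + y + 1) % 16 for x in range(8)] for y in range(8)]
print(partial_sad(cur, ref))   # 32 sampled pixels, each differing by 1 -> 32
```

Because SAD is a sum of independent per-pixel terms, subsampling trades a small ranking error among candidate motion vectors for a proportional cut in arithmetic, which is why pattern coverage matters more than pattern density.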
In summary, this research work has contributed to the error-resilient transmission of compressed bitstreams of the Dirac video encoder over a bandwidth-limited, error-prone channel. In addition, the final phase of the research has partially contributed toward the real-time application of the Dirac video codec by implementing a fast motion estimation strategy together with the partial cost function calculation idea.
BBC R&D and Brunel University