Complexity management of H.264/AVC video compression.
The H.264/AVC video coding standard offers significantly improved compression efficiency and flexibility compared to previous standards. However, the high computational complexity of H.264/AVC is a problem for codecs running on low-power handheld devices and general-purpose computers. This thesis presents new techniques to reduce, control and manage the computational complexity of an H.264/AVC codec. A new complexity reduction algorithm for H.264/AVC is developed. This algorithm predicts "skipped" macroblocks prior to motion estimation by estimating a Lagrange rate-distortion cost function. Complexity savings are achieved by not processing the macroblocks that are predicted as "skipped". The Lagrange multiplier is adaptively modelled as a function of the quantisation parameter and video sequence statistics. Simulation results show that this algorithm achieves significant complexity savings with a negligible loss in rate-distortion performance. The complexity reduction algorithm is further developed to achieve complexity-scalable control of the encoding process. The Lagrangian cost estimation is extended to incorporate computational complexity. A target level of complexity is maintained by using a feedback algorithm to update the Lagrange multiplier associated with complexity. Results indicate that scalable complexity control of the encoding process can be achieved whilst maintaining near-optimal complexity-rate-distortion performance. A complexity management framework is proposed for maximising the perceptual quality of coded video in a real-time, processing-power-constrained environment. A real-time frame-level control algorithm and a per-frame complexity control algorithm are combined in order to manage the encoding process such that a high frame rate is maintained without significantly losing frame quality.
Subjective evaluations show that the managed complexity approach results in higher perceptual quality compared to a reference encoder that drops frames in computationally constrained situations. These novel algorithms are likely to be useful in implementing real-time H.264/AVC standard encoders in computationally constrained environments such as low-power mobile devices and general-purpose computers.
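The skip-prediction idea above can be sketched as follows. This is an illustrative outline, not the thesis's algorithm: the lambda relation used here is the common H.264 reference-model formula (the thesis instead models lambda adaptively from the QP and sequence statistics), and the cost estimates passed in are assumed inputs.

```python
def lagrange_multiplier(qp: int) -> float:
    """Common H.264 reference-model relation: lambda = 0.85 * 2^((QP-12)/3).
    The thesis models lambda adaptively; this fixed formula is a stand-in."""
    return 0.85 * 2 ** ((qp - 12) / 3)

def rd_cost(distortion: float, rate_bits: float, lam: float) -> float:
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits

def predict_skip(sad_zero_mv: float, est_coded_distortion: float,
                 est_coded_bits: float, qp: int) -> bool:
    """Predict SKIP before motion estimation: SKIP transmits ~0 bits, so its
    cost is just the zero-motion distortion (SAD against the co-located
    block). Compare it with an estimate of the fully coded macroblock's
    cost; if SKIP is already cheaper, motion estimation can be bypassed."""
    lam = lagrange_multiplier(qp)
    j_skip = rd_cost(sad_zero_mv, 0.0, lam)
    j_coded = rd_cost(est_coded_distortion, est_coded_bits, lam)
    return j_skip <= j_coded
```

Macroblocks for which `predict_skip` returns `True` would be marked SKIP without running motion estimation, which is where the complexity saving comes from.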
Rate distortion control in digital video coding
Lossy compression is widely applied for coding visual information in applications such as entertainment in order to achieve a high compression ratio. In this case, the video quality worsens as the compression ratio increases. Rate control tries to use the bit budget properly so that the visual distortion is minimized. Rate control for H.264, the state-of-the-art hybrid video coder, is investigated. Based on a Rate-Distortion (R-D) slope analysis, an operational rate-distortion optimization scheme for H.264 using the Lagrangian multiplier method is proposed. The scheme tries to find the best path through the quantization parameter (QP) options at each macroblock. The proposed scheme provides a smoother rate control that is able to cover a wider range of bit rates, and for many sequences it outperforms the H.264 (JM92 version) rate control scheme in the sense of PSNR. The Bath University Matching Pursuit (BUMP) project develops a new matching pursuit (MP) technique as an alternative to transform video coders. By combining MP with precision limited quantization (PLQ) and the multi-pass embedded residual group encoder (MERGE), a very efficient coder is built that is able to produce an embedded bit stream, which is highly desirable for rate control. The problem of optimal bit allocation with a BUMP-based video coder is investigated. An ad hoc scheme of simply limiting the maximum atom number shows an obvious performance improvement, which indicates a potential for efficiency improvement. An in-depth study of the bit Rate-Atom characteristic has been carried out and a rate estimation model has been proposed. The model gives a theoretical description of how the bit number changes. An adaptive rate estimation algorithm has been proposed. Experiments show that the algorithm provides extremely high estimation accuracy. The proposed R-D source model is then applied to bit allocation in the BUMP-based video coder. An R-D slope unifying scheme was applied to optimize the performance of the coder.
It adopts the R-D model and fits well within the BUMP coder. The optimization can be performed in a straightforward way. Experiments show that the proposed method greatly improves the performance of the BUMP video coder, and outperforms H.264 at low and medium bit rates by up to 2 dB.
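The per-macroblock QP selection described above amounts to picking, from a set of QP options, the one minimizing the Lagrangian cost J = D + lambda * R. A minimal sketch, with made-up (distortion, rate) numbers for each QP option, not values from the thesis:

```python
def best_qp(options, lam):
    """options: list of (qp, distortion, rate_bits) tuples for one
    macroblock; return the QP minimizing J = D + lambda * R."""
    return min(options, key=lambda o: o[1] + lam * o[2])[0]

# Illustrative candidate points on one macroblock's R-D curve.
candidates = [(24, 40.0, 900), (28, 70.0, 600), (32, 120.0, 380)]

# With a small lambda, distortion dominates and a low QP wins;
# with a large lambda, rate dominates and a high QP wins.
print(best_qp(candidates, 0.01))  # -> 24
print(best_qp(candidates, 0.5))   # -> 32
```

Chaining this choice across macroblocks, subject to the frame's bit budget, gives the "best path of QP options" the abstract refers to.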
Efficient algorithms for scalable video coding
A scalable video bitstream specifically designed for the needs of various client terminals,
network conditions, and user demands is much desired in current and future video transmission
and storage systems. The scalable extension of the H.264/AVC standard (SVC) has
been developed to satisfy the new challenges posed by heterogeneous environments, as
it permits a single video stream to be decoded fully or partially with variable quality, resolution,
and frame rate in order to adapt to a specific application. This thesis presents
novel improved algorithms for SVC, including: 1) a fast inter-frame and inter-layer coding
mode selection algorithm based on motion activity; 2) a hierarchical fast mode selection
algorithm; 3) a two-part Rate Distortion (RD) model targeting the properties of different
prediction modes for the SVC rate control scheme; and 4) an optimised Mean Absolute
Difference (MAD) prediction model.
The proposed fast inter-frame and inter-layer mode selection algorithm is based on the
empirical observation that a macroblock (MB) with slow movement is more likely to be
best matched by one in the same resolution layer. However, for a macroblock with fast
movement, motion estimation between layers is required. Simulation results show that
the algorithm can reduce the encoding time by up to 40%, with negligible degradation in
RD performance.
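The motion-activity rule can be sketched as a simple candidate-mode filter. The threshold and mode names below are illustrative placeholders, not values from the thesis:

```python
def candidate_modes(mv_magnitude: float, threshold: float = 4.0):
    """For a slowly moving macroblock, only same-layer candidates are
    checked; a fast-moving one additionally triggers inter-layer motion
    estimation. Threshold (in quarter-pel units, say) is a placeholder."""
    modes = ["intra", "inter_same_layer"]
    if mv_magnitude > threshold:
        modes.append("inter_layer")
    return modes
```

Skipping the inter-layer search for slow macroblocks is what yields the reported encoding-time reduction, at the cost of occasionally missing a better inter-layer match.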
The proposed hierarchical fast mode selection scheme comprises four levels and makes
full use of inter-layer, temporal and spatial correlation as well as the texture information of
each macroblock. Overall, the new technique demonstrates the same coding performance
in terms of picture quality and compression ratio as that of the SVC standard, yet produces
a saving in encoding time of up to 84%. Compared with state-of-the-art SVC fast mode
selection algorithms, the proposed algorithm achieves a superior computational time reduction
under very similar RD performance conditions.
The existing SVC rate distortion model cannot accurately represent the RD properties of
the prediction modes, because it is influenced by the use of inter-layer prediction. A separate
RD model for inter-layer prediction coding in the enhancement layer(s) is therefore
introduced. Overall, the proposed algorithms improve the average PSNR by up to 0.34dB
or produce an average saving in bit rate of up to 7.78%. Furthermore, the control accuracy
is maintained to within 0.07% on average.
As a MAD prediction error always exists and cannot be avoided, an optimised MAD
prediction model for the spatial enhancement layers is proposed that considers the MAD
from previous temporal frames and previous spatial frames together, to achieve a more
accurate MAD prediction. Simulation results indicate that the proposed MAD prediction
model reduces the MAD prediction error by up to 79% compared with the JVT-W043
implementation.
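The combined predictor can be sketched as a weighted mix of the two available MAD estimates. The weights below are illustrative; the thesis optimises them, and a practical implementation would update them adaptively (e.g. by linear regression on past frames):

```python
def predict_mad(mad_prev_temporal: float, mad_base_layer: float,
                a1: float = 0.6, a2: float = 0.4) -> float:
    """Predict the MAD of a spatial-enhancement-layer macroblock from
    (1) the MAD of the co-located block in the previous temporal frame and
    (2) the MAD of the corresponding area in the reference (base) layer.
    a1, a2 are placeholder weights, not the thesis's optimised values."""
    return a1 * mad_prev_temporal + a2 * mad_base_layer
```

The predicted MAD then feeds the rate control's quadratic R-Q model, so reducing MAD prediction error directly improves bit-allocation accuracy.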
Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures
Distributed video coding (DVC) is a relatively new video coding architecture originated from two fundamental theorems, namely Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
Joint Reconstruction of Multi-view Compressed Images
The distributed representation of correlated multi-view images is an
important problem that arises in vision sensor networks. This paper concentrates
on the joint reconstruction problem where the distributively compressed
correlated images are jointly decoded in order to improve the reconstruction
quality of all the compressed images. We consider a scenario where the images
captured at different viewpoints are encoded independently using common coding
solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among
different cameras. A central decoder first estimates the underlying correlation
model from the independently compressed images which will be used for the joint
signal recovery. The joint reconstruction is then cast as a constrained convex
optimization problem that reconstructs total-variation (TV) smooth images that
comply with the estimated correlation model. At the same time, we add
constraints that force the reconstructed images to be consistent with their
compressed versions. We show by experiments that the proposed joint
reconstruction scheme outperforms independent reconstruction in terms of image
quality, for a given target bit rate. In addition, the decoding performance of
our proposed algorithm compares advantageously to state-of-the-art distributed
coding schemes based on disparity learning and on the DISCOVER scheme.
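The reconstruction above minimises total variation (TV) subject to correlation- and quantization-consistency constraints. A full constrained convex solver is beyond a short sketch, but the smoothness term being minimised is easy to show; the anisotropic form over a row-major list-of-lists image is:

```python
def total_variation(img):
    """Anisotropic TV: sum of absolute horizontal and vertical differences
    between neighbouring pixels. Smooth images score low; this is the
    objective a TV-regularised joint decoder would minimise, subject to
    the correlation and quantization-bin constraints described above."""
    h, w = len(img), len(img[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                tv += abs(img[y + 1][x] - img[y][x])
    return tv
```

In the joint decoder, this objective is minimised over all views at once, with the estimated inter-view correlation linking the variables of different images.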
Q-AIMD: A Congestion Aware Video Quality Control Mechanism
Given the constant increase in multimedia traffic, it seems necessary for transport protocols to be aware of the video quality of the transmitted flows rather than the throughput. This paper proposes a novel transport mechanism adapted to video flows. Our proposal, called Q-AIMD for video quality AIMD (Additive Increase Multiplicative Decrease), enables fairness in video quality while transmitting multiple video flows. Targeting video quality fairness improves the overall video quality for all transmitted flows, especially when the transmitted videos contain different types of content with different spatial resolutions. In addition, Q-AIMD mitigates the occurrence of network congestion events, and dissolves congestion whenever it occurs by decreasing the video quality and hence the bit rate. Using different video quality metrics, Q-AIMD is evaluated with different video contents and spatial resolutions. Simulation results show that Q-AIMD achieves an improved overall video quality across the multiple transmitted video flows compared to throughput-based congestion control, by significantly decreasing the quality discrepancy between them.
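The core Q-AIMD update can be sketched by applying the familiar AIMD rule to a quality level instead of a sending rate. The parameters and the 0-100 quality scale below are illustrative assumptions, not values from the paper:

```python
def q_aimd_step(quality: float, congested: bool,
                alpha: float = 1.0, beta: float = 0.5,
                q_max: float = 100.0) -> float:
    """One AIMD update on a per-flow video quality level in [0, q_max].
    No congestion: additive increase by alpha; congestion signalled:
    multiplicative decrease by factor beta (which lowers the bitrate)."""
    if congested:
        return max(0.0, quality * beta)   # multiplicative decrease
    return min(q_max, quality + alpha)    # additive increase
```

Because every flow converges on the same quality level rather than the same throughput, flows with cheap-to-encode content naturally use fewer bits, which is the source of the quality fairness the paper targets.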
An efficient fast mode decision algorithm for H.264/AVC intra/inter predictions
H.264/AVC is the newest video coding standard, which outperforms the former standards in video coding efficiency in terms of improved video quality and decreased bitrate. Variable-block-size mode decision (MD) with rate distortion optimization (RDO) is one of the most impressive new techniques employed in H.264/AVC. However, the improvement in performance is achieved at the expense of significantly increased computational complexity, which is a key challenge for real-time applications. An efficient fast mode decision algorithm is therefore proposed in this paper. By exploiting the correlation between macroblocks and the statistical characteristics of sub-macroblocks in MD, the video encoding time can be reduced by 52.19% on average. Furthermore, a motion-speed-based adjustment scheme is introduced to minimize the degradation of performance.