127 research outputs found
No tradeoff between confidentiality and performance: An analysis on H.264/SVC partial encryption
Agency for Science, Technology and Research (A*STAR)
Efficient algorithms for scalable video coding
A scalable video bitstream specifically designed for the needs of various client terminals,
network conditions, and user demands is highly desirable in current and future video transmission
and storage systems. The scalable extension of the H.264/AVC standard (SVC) has
been developed to satisfy the new challenges posed by heterogeneous environments, as
it permits a single video stream to be decoded fully or partially with variable quality, resolution,
and frame rate in order to adapt to a specific application. This thesis presents
novel improved algorithms for SVC, including: 1) a fast inter-frame and inter-layer coding
mode selection algorithm based on motion activity; 2) a hierarchical fast mode selection
algorithm; 3) a two-part Rate Distortion (RD) model targeting the properties of different
prediction modes for the SVC rate control scheme; and 4) an optimised Mean Absolute
Difference (MAD) prediction model.
The proposed fast inter-frame and inter-layer mode selection algorithm is based on the
empirical observation that a macroblock (MB) with slow movement is more likely to be
best matched by one in the same resolution layer. However, for a macroblock with fast
movement, motion estimation between layers is required. Simulation results show that
the algorithm can reduce the encoding time by up to 40%, with negligible degradation in
RD performance.
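The motion-activity heuristic above can be sketched in a few lines. This is an illustrative reconstruction, not the thesis' actual implementation: the function names, candidate mode lists, and the motion-vector threshold are all assumptions.

```python
# Hypothetical sketch of a motion-activity-based fast mode decision for SVC.
# The threshold and mode names are illustrative assumptions only.

def classify_motion(mv_x, mv_y, threshold=4.0):
    """Classify a macroblock as slow- or fast-moving from the magnitude of
    its motion vector (the threshold is an illustrative choice)."""
    return "slow" if (mv_x * mv_x + mv_y * mv_y) ** 0.5 < threshold else "fast"

def candidate_modes(mv_x, mv_y):
    """Restrict the mode search: slow macroblocks are matched within the same
    resolution layer only, while fast ones also trigger inter-layer motion
    estimation, trading an exhaustive search for encoding speed."""
    if classify_motion(mv_x, mv_y) == "slow":
        return ["SKIP", "INTER_16x16", "INTER_8x8"]
    return ["SKIP", "INTER_16x16", "INTER_8x8", "INTER_LAYER"]

print(candidate_modes(1, 0))   # slow motion: same-layer candidates only
print(candidate_modes(8, 6))   # fast motion: inter-layer estimation added
```

The saving comes from pruning the candidate set before rate-distortion optimisation, so the RD loss is small when the slow/fast classification is reliable.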
The proposed hierarchical fast mode selection scheme comprises four levels and makes
full use of inter-layer, temporal and spatial correlation as well as the texture information of
each macroblock. Overall, the new technique demonstrates the same coding performance
in terms of picture quality and compression ratio as that of the SVC standard, yet produces
a saving in encoding time of up to 84%. Compared with state-of-the-art SVC fast mode
selection algorithms, the proposed algorithm achieves a superior computational time reduction
under very similar RD performance conditions.
The existing SVC rate distortion model cannot accurately represent the RD properties of
the prediction modes, because it is influenced by the use of inter-layer prediction. A separate
RD model for inter-layer prediction coding in the enhancement layer(s) is therefore
introduced. Overall, the proposed algorithms improve the average PSNR by up to 0.34 dB
or produce an average saving in bit rate of up to 7.78%. Furthermore, the control accuracy
is maintained to within 0.07% on average.
As a MAD prediction error always exists and cannot be avoided, an optimised MAD prediction
model for the spatial enhancement layers is proposed that considers the MAD from
previous temporal frames and previous spatial frames together, to achieve a more accurate MAD prediction.
Simulation results indicate that the proposed MAD prediction model
reduces the MAD prediction error by up to 79% compared with the JVT-W043 implementation.
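A combined temporal/spatial MAD predictor of the kind described above can be sketched as a simple linear blend. The blend weight and the functional form are assumptions for illustration; the thesis' optimised model may differ.

```python
# Illustrative sketch of a combined temporal/spatial linear MAD predictor.
# The weight w and the linear form are assumptions, not the fitted model.

def predict_mad(mad_temporal, mad_spatial, w=0.6):
    """Blend the MAD of the co-located macroblock in the previous frame
    (temporal) with the MAD observed in the lower spatial layer (spatial)."""
    return w * mad_temporal + (1.0 - w) * mad_spatial

def predict_mad_temporal_only(mad_prev, a1=1.0, a2=0.0):
    """Classic single-source linear MAD model (JVT-G012 style), shown for
    comparison: it ignores the spatial-layer information entirely."""
    return a1 * mad_prev + a2
```

Using both sources hedges against scene changes, where the temporal MAD alone becomes a poor predictor for the enhancement layer.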
Robust and efficient video/image transmission
The Internet has become a primary medium for information transmission. The unreliability of channel conditions, limited channel bandwidth and the explosive growth of transmission requests, however, hinder its further development. Hence, research on robust and efficient delivery of video/image content is in high demand.
Three aspects of this task, error burst correction, efficient rate allocation and random error protection, are investigated in this dissertation. A novel technique, called successive packing, is proposed for combating multi-dimensional (M-D) bursts of errors. A new concept of a basis interleaving array is introduced. By combining different basis arrays, effective M-D interleaving can be realized. It has been shown that this algorithm needs to be implemented only once, yet remains optimal for a set of error bursts of different sizes for a given two-dimensional (2-D) array.
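To illustrate what 2-D interleaving buys, here is a minimal sketch using a generic stride-based permutation. This is NOT the successive-packing construction itself, only a toy interleaver showing how a contiguous 2-D error burst in the channel is dispersed after deinterleaving; the stride values are illustrative and must be coprime with the array dimensions.

```python
import numpy as np

# Generic 2-D block interleaver (illustration only, not successive packing).
# Rows and columns are permuted with strides coprime to the dimensions, so
# neighbouring channel positions map back to scattered array positions.

def interleave_2d(block, row_stride=2, col_stride=3):
    n, m = block.shape
    rows = [(i * row_stride) % n for i in range(n)]
    cols = [(j * col_stride) % m for j in range(m)]
    return block[np.ix_(rows, cols)]

def deinterleave_2d(block, row_stride=2, col_stride=3):
    n, m = block.shape
    out = np.empty_like(block)
    rows = [(i * row_stride) % n for i in range(n)]
    cols = [(j * col_stride) % m for j in range(m)]
    out[np.ix_(rows, cols)] = block  # invert the permutation
    return out

data = np.arange(35).reshape(5, 7)          # 5 and 7 are coprime to the strides
assert np.array_equal(deinterleave_2d(interleave_2d(data)), data)
```

Successive packing improves on such fixed schemes by building the interleaving array recursively from basis arrays, which is what makes one construction simultaneously optimal for bursts of several sizes.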
To adapt to variable channel conditions, a novel rate allocation technique is proposed for Fine Granular Scalability (FGS) coded video, in which rate-distortion modeling based on real data is developed, a constant quality constraint is adopted, and a sliding window approach is proposed to track the varying channel. With the proposed technique, constant quality is realized among frames by solving a set of linear equations, achieving a significant computational simplification compared with the state-of-the-art techniques, while also reducing the overall distortion. To combat random errors during transmission, an unequal error protection (UEP) method and a robust error-concealment strategy are proposed for scalable coded video bitstreams.
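The constant-quality allocation above reduces to linear algebra once each frame has a linear R-D model. The sketch below assumes toy per-frame models D_i = a_i - b_i * R_i; the model form and all parameter values are illustrative assumptions, not the dissertation's fitted real-data models.

```python
import numpy as np

# Toy constant-quality rate allocation under linear per-frame R-D models
# D_i = a_i - b_i * R_i (illustrative assumption). Forcing every frame to
# the same distortion D under a total rate budget gives one linear
# equation in D, hence the computational simplification.

def allocate_constant_quality(a, b, r_total):
    """Return per-frame rates R_i such that all distortions are equal and
    the rates sum to r_total."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = (np.sum(a / b) - r_total) / np.sum(1.0 / b)  # common distortion level
    return (a - d) / b                                # per-frame rates

rates = allocate_constant_quality([40, 50, 45], [2, 4, 3], r_total=30)
print(rates, rates.sum())  # rates sum to the budget; distortions are equal
```

A sliding window would refit (a_i, b_i) and re-solve as channel estimates change, keeping the per-update cost at a single closed-form solve.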
VHDL Modeling of an H.264/AVC Video Decoder
Transmission and storage of video data has necessitated the development of video compression techniques. One of today's most widely used video compression techniques is the MPEG-2 standard, which is over ten years old. A task force sponsored by the same groups that developed MPEG-2 has recently finished defining a new standard that is meant to replace MPEG-2 for future video compression applications. This standard, H.264/AVC, uses significantly improved compression techniques. It is capable of providing similar picture quality at bit rates 30-70% less than MPEG-2, depending on the particular video sequence and application [20].
This thesis developed a complete VHDL behavioral model of a video decoder implementing the Baseline Profile of the H.264/AVC standard. The decoder was verified using a testing environment for comparison with reference software results. Development of a synthesizable hardware description was also shown for two components of the video decoder: the transform unit and the deblocking filter. This demonstrated how a complete video decoder could be developed one module at a time with individual module verification. Analysis was also done to estimate the performance and hardware requirements for a complete implementation on an FPGA device.
MASCOT : metadata for advanced scalable video coding tools : final report
The goal of the MASCOT project was to develop new video coding schemes and tools that provide both an increased coding efficiency as well as extended scalability features compared to technology that was available at the beginning of the project. Towards that goal the following tools would be used: - metadata-based coding tools; - new spatiotemporal decompositions; - new prediction schemes. Although the initial goal was to develop one single codec architecture that was able to combine all new coding tools that were foreseen when the project was formulated, it became clear that this would limit the selection of the new tools. Therefore the consortium decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all tools developed during the course of the project
Error resilient H.264 coded video transmission over wireless channels
The H.264/AVC recommendation was first published in 2003 and builds on the concepts of earlier standards such as MPEG-2 and MPEG-4. The H.264 recommendation represents an evolution of the existing video coding standards and
was developed in response to the growing need for higher compression. Even though H.264 provides greater compression, H.264 compressed video streams are highly
susceptible to channel errors in mobile wireless fading channels, such as 3G, because of the high error rates experienced.
Common video compression techniques include motion compensation, prediction methods, transformation, quantization and entropy coding, which are the common
elements of a hybrid video codec. The ITU-T recommendation H.264 introduces several new error resilience tools, as well as several new features such as Intra Prediction and the Deblocking Filter.
The channel model used for the testing was the Rayleigh Fading channel with the noise component simulated as Additive White Gaussian Noise (AWGN) using QPSK as the modulation technique. The channel was used over several Eb/N0 values to provide similar bit error rates as those found in the literature.
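The test channel described above, QPSK over flat Rayleigh fading with AWGN, can be sketched as a short Monte-Carlo simulation. This is a generic textbook setup assumed for illustration; the function name, bit counts, and Eb/N0 handling are not taken from the thesis.

```python
import numpy as np

# Minimal Monte-Carlo sketch of QPSK over a flat Rayleigh fading channel
# with AWGN and ideal coherent equalisation (illustrative assumptions).

def qpsk_rayleigh_ber(ebn0_db, n_bits=200_000, seed=0):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    # Gray-mapped QPSK: one bit per I/Q rail, unit symbol energy
    sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
    # Rayleigh fading coefficient per symbol (unit average power)
    h = (rng.standard_normal(sym.size) + 1j * rng.standard_normal(sym.size)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    n0 = 1 / (2 * ebn0)  # Es = 1 with 2 bits/symbol, so Es/N0 = 2*Eb/N0
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(sym.size)
                               + 1j * rng.standard_normal(sym.size))
    r = (h * sym + noise) / h  # perfect channel knowledge assumed
    est = np.empty(n_bits, dtype=int)
    est[0::2] = (r.real < 0).astype(int)
    est[1::2] = (r.imag < 0).astype(int)
    return float(np.mean(est != bits))

print(qpsk_rayleigh_ber(10.0))
```

Sweeping ebn0_db over a range of values reproduces the kind of BER-versus-Eb/N0 curves used to match the bit error rates reported in the literature.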
Though further research needs to be conducted, results have shown that when the H.264 error resilience tools are used to protect encoded bitstreams against minor channel errors, an improvement in decoded video quality can be observed. The tools did not perform as well under mild and severe channel errors, as the resultant bitstream was too corrupted. From this, further research into channel coding techniques is needed to determine whether the bitstream can be protected against these sorts of error rates.