Complexity Analysis Of Next-Generation VVC Encoding and Decoding
While the next-generation video compression standard, Versatile Video Coding (VVC), provides superior compression efficiency, its computational complexity increases dramatically. This paper thoroughly analyzes this complexity for both the encoder and decoder of VVC Test Model 6, by quantifying the complexity breakdown of each coding tool and measuring the complexity and memory requirements of VVC encoding/decoding. These extensive analyses are performed on six video sequences at 720p, 1080p, and 2160p, under Low-Delay (LD), Random-Access (RA), and All-Intra (AI) conditions (320 encodings/decodings in total). Results indicate that the VVC encoder and decoder are 5x and 1.5x more complex than HEVC in LD, and 31x and 1.8x in AI, respectively.
respectively. Detailed analysis of coding tools reveals that in LD on average,
motion estimation tools with 53%, transformation and quantization with 22%, and
entropy coding with 7% dominate the encoding complexity. In decoding, loop
filters with 30%, motion compensation with 20%, and entropy decoding with 16%,
are the most complex modules. Moreover, the required memory bandwidth for VVC
encoding/decoding are measured through memory profiling, which are 30x and 3x
of HEVC. The reported results and insights are a guide for future research and
implementations of energy-efficient VVC encoder/decoder.Comment: IEEE ICIP 202
Effect of Video Streaming Space–Time Characteristics on Quality of Transmission over Wireless Telecommunication Networks
The surge in popularity of multimedia applications has led to the need to optimize bandwidth allocation and usage in telecommunication networks. Modern telecommunication networks should, by definition, be able to maintain the quality of different applications with different Quality of Service (QoS) levels. QoS requirements generally depend on parameters of the network and application layers of the OSI model. At the application layer, QoS depends on factors such as resolution, bit rate, frame rate, video type, audio codecs, etc. At the network layer, distortions such as delay, jitter, and packet loss are introduced. This paper presents simulation results of modeling video streaming over wireless communication networks, taking into account the differences in the spatial and temporal characteristics of the different subject groups. An analysis of the influence of bit error rate (BER) and bit rate on video quality is also presented. The simulations showed that different video subject groups affect the perceived quality differently when transmitted over networks. We show conclusively that in a transmission network with small error probabilities (BER = 10^-6, BER = 10^-5), the minimum bit rate (128 kbps) guarantees acceptable video quality, corresponding to MOS > 3, for all types of frames.
Data compression techniques applied to high resolution high frame rate video technology
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, examining compression methods that employ digital computing. The results of the survey are presented, including a description of each method and an assessment of image degradation and video data parameters. An assessment is also made of present and near-term technology for implementing video data compression in high-speed imaging systems, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, along with approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.
Improved Method to Select the Lagrange Multiplier for Rate-Distortion Based Motion Estimation in Video Coding
The motion estimation (ME) process used in the H.264/AVC reference software is based on minimizing a cost function that involves two terms (distortion and rate) that are balanced through a Lagrangian parameter, usually denoted lambda(motion). In this paper we propose an algorithm to improve the conventional way of estimating lambda(motion) and, consequently, the ME process. First, we show that the conventional estimate of lambda(motion) turns out to be significantly less accurate when ME-compromising events, which cause the ME process to perform poorly, occur. Second, with the aim of improving coding efficiency in these cases, an efficient algorithm is proposed that allows the encoder to choose between three different values of lambda(motion) for the Inter 16x16 partition size. More precisely, for this partition size, the proposed algorithm allows the encoder to additionally test lambda(motion) = 0 and lambda(motion) arbitrarily large, which correspond to the minimum-distortion and minimum-rate solutions, respectively. By testing these two extreme values, the algorithm avoids making large ME errors. Experimental results on video segments exhibiting this type of ME-compromising event reveal an average rate reduction of 2.20% at the same coding quality with respect to the JM15.1 reference software of H.264/AVC. The algorithm has also been tested against a state-of-the-art algorithm known as the context-adaptive Lagrange multiplier. Additionally, two illustrative examples of the subjective performance improvement are provided. This work has been partially supported by the National Grant TEC2011-26807 of the Spanish Ministry of Science and Innovation.
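The selection rule described in this abstract minimizes the Lagrangian cost J = D + lambda(motion) * R over candidate motion vectors, and the proposed extension additionally evaluates the two extreme lambda choices. A minimal sketch of that idea follows; the candidate list, distortions, rates, and lambda value here are hypothetical illustrations, not data from the paper or the JM software:

```python
# Sketch of rate-distortion-optimized motion vector selection.
# Each candidate is (motion_vector, distortion, rate); all values hypothetical.

def rd_cost(distortion, rate, lam):
    """Lagrangian cost J = D + lambda * R."""
    return distortion + lam * rate

def best_mv(candidates, lam):
    """Return the candidate minimizing the Lagrangian cost for a given lambda."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

candidates = [
    ((0, 0), 120.0, 2),   # cheap to code, high distortion
    ((1, -1), 80.0, 10),  # balanced
    ((4, 3), 60.0, 40),   # accurate match, expensive to code
]

mv_rd = best_mv(candidates, lam=4.0)         # conventional lambda(motion)
mv_d = best_mv(candidates, lam=0.0)          # lambda = 0: minimum-distortion solution
mv_r = min(candidates, key=lambda c: c[2])   # lambda -> infinity: minimum-rate solution
```

Testing the two extreme solutions alongside the conventional one is what lets the encoder avoid large ME errors when lambda(motion) is poorly estimated: whichever of the three choices yields the best overall coding result is kept.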
New pixel-DCT domain coding technique for object based and frame based prediction error
2004-2005 > Academic research: refereed > Refereed conference paper. Version of Record. Published.
- …