
    Comparative analysis of DIRAC PRO-VC-2, H.264 AVC and AVS CHINA-P7

    A video codec compresses an input video source to reduce storage and transmission bandwidth requirements while maintaining quality. It is an essential technology for applications such as digital television, DVD-Video, mobile TV, videoconferencing, and internet video streaming. Different video codecs are used in industry today, and understanding their operation for specific video applications is the key to optimization. The latest advanced video codec standards have become highly important in the multimedia industry, providing cost-effective encoding and decoding of video with high compression efficiency. Currently, H.264 AVC, AVS, and DIRAC are used in industry to compress video. The H.264 codec standard was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG). The Audio Video coding Standard (AVS) is developed by a working group for audio and video coding standards in China. VC-2, also known as Dirac Pro, was developed by the BBC; it is a royalty-free technology that anyone can use and has been standardized through SMPTE as VC-2. H.264 AVC, Dirac Pro, Dirac, and AVS-P2 target high-definition video, while AVS-P7 targets mobile video. Among these standards, this work performs a comparative analysis of H.264 AVC, DIRAC PRO/SMPTE VC-2, and AVS-P7 in the low-bitrate and high-bitrate regions. Bitrate control and constant QP are the methods employed for the analysis. Evaluation parameters such as compression ratio, PSNR, and SSIM are used for quality comparison. Depending on the target application and the available bitrate, an order of performance is given to indicate the preferred codec.
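The evaluation metrics named above are simple to compute directly; as a minimal illustration (SSIM omitted for brevity, and the frames here are synthetic toy data), compression ratio and PSNR can be sketched as:

```python
import numpy as np

def compression_ratio(raw_bytes: int, compressed_bytes: int) -> float:
    """Ratio of raw stream size to compressed stream size (higher is better)."""
    return raw_bytes / compressed_bytes

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a random 8-bit frame and a slightly perturbed reconstruction.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(frame + rng.integers(-2, 3, size=(64, 64)), 0, 255)
print(compression_ratio(64 * 64, 1024), round(psnr(frame, noisy), 2))
```

A small perturbation of an 8-bit frame yields a PSNR in the 40-50 dB range, which is the regime where the codecs compared in the paper are typically evaluated.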

    The Optimization of Context-based Binary Arithmetic Coding in AVS2.0

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ์ „๊ธฐ์ •๋ณด๊ณตํ•™๋ถ€, 2016. 2. ์ฑ„์ˆ˜์ต.HEVC(High Efficiency Video Coding)๋Š” ์ง€๋‚œ ์ œ๋„ˆ๋ ˆ์ด์…˜ ํ‘œ์ค€ H.264/AVC๋ณด๋‹ค ์ฝ”๋”ฉ ํšจ์œจ์„ฑ์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ๋ฅผ ์œ„ํ•ด์„œ ๊ตญ์ œ ํ‘œ์ค€ ์กฐ์ง๊ณผ(International Standard Organization) ๊ตญ์ œ ์ „๊ธฐ ํ†ต์‹  ์—ฐํ•ฉ(International Telecommunication Union)์— ์˜ํ•ด ๊ณต๋™์œผ๋กœ ๊ฐœ๋ฐœ๋œ ๊ฒƒ์ด๋‹ค. ์ค‘๊ตญ ์ž‘์—… ๊ทธ๋ฃน์ธ AVS(Audio and Video coding standard)๊ฐ€ ์ด๋ฏธ ๋น„์Šทํ•œ ๋…ธ๋ ฅ์„ ๋ฐ”์ณค๋‹ค. ๊ทธ๋“ค์ด ๋งŽ์ด ์ฐฝ์˜์ ์ธ ์ฝ”๋”ฉ ๋„๊ตฌ๋ฅผ ์šด์šฉํ•œ ์ฒซ ์ œ๋„ˆ๋ ˆ์ด์…˜ AVS1์˜ ์••์ถ• ํผํฌ๋จผ์Šค๋ฅผ ๋†’์ด๋„๋ก ์ตœ์‹ ์˜ ์ฝ”๋”ฉ ํ‘œ์ค€(AVS2 or AVS2.0)์„ ๊ฐœ๋ฐœํ–ˆ๋‹ค. AVS2.0 ์ค‘์— ์—”ํŠธ๋กœํ”ผ ์ฝ”๋”ฉ ๋„๊ตฌ๋กœ ์‚ฌ์šฉ๋œ ์ƒํ™ฉ ๊ธฐ๋ฐ˜ 2์ง„๋ฒ• ๊ณ„์‚ฐ ์ฝ”๋”ฉ(CBAC)์€ ์ „์ฒด์  ์ฝ”๋”ฉ ํ‘œ์ค€ ์ค‘์—์„œ ์ค‘์š”ํ•œ ์—ญํ•˜๋ฅผ ํ–ˆ๋‹ค. HEVC์—์„œ ์ฑ„์šฉ๋œ ์ƒํ™ฉ ๊ธฐ๋ฐ˜ ์กฐ์ •์˜ 2์ง„๋ฒ• ๊ณ„์‚ฐ ์ฝ”๋”ฉ(CABAC)๊ณผ ๋น„์Šทํ•˜๊ฒŒ ์ด ๋‘ ์ฝ”๋”ฉ์€ ๋‹ค ์Šน์ˆ˜ ์ž์œ  ๋ฐฉ๋ฒ•์„ ์ฑ„์šฉํ•ด์„œ ๊ณ„์‚ฐ ์ฝ”๋”ฉ์„ ํ˜„์‹คํ•˜๊ฒŒ ๋œ๋‹ค. ๊ทธ๋Ÿฐ๋ฐ ๊ฐ ์ฝ”๋”ฉ๋งˆ๋‹ค ๊ฐ์ž์˜ ํŠน์ •ํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํ†ตํ•ด ๊ณฑ์…ˆ ๋ฌธ์ œ๋ฅผ ์ฒ˜๋ฆฌํ•œ ๊ฒƒ์ด๋‹ค. ๋ณธ์ง€๋Š” AVS2.0์ค‘์˜ CBAC์— ๋Œ€ํ•œ ๋” ๊นŠ์ด ์ดํ•ด์™€ ๋” ์ข‹์€ ํผํฌ๋จผ์Šค ๊ฐœ์„ ์˜ ๋ชฉ์ ์œผ๋กœ 3๊ฐ€์ง€ ์ธก๋ฉด์˜ ์ผ์„ ํ•œ๋‹ค. ์ฒซ์งธ, ์šฐ๋ฆฌ๊ฐ€ ํ•œ ๋น„๊ต ์ œ๋„๋ฅผ ๋‹ค์ž์ธ์„ ํ•ด์„œ AVS2.0ํ”Œ๋žซํผ ์ค‘์˜ CBAC์™€ CABAC๋ฅผ ๋น„๊ตํ–ˆ๋‹ค. ๋‹ค๋ฅธ ์‹คํ–‰ ์„ธ๋ถ€ ์‚ฌํ•ญ์„ ๊ณ ๋ คํ•˜์—ฌ HEVC์ค‘์˜ CABAC ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ AVS2.0์— ์ด์‹ํ•œ๋‹ค.์˜ˆ๋ฅผ ๋“ค๋ฉด, ์ƒํ™ฉ ๊ธฐ๋ฐ˜ ์ดˆ๊ธฐ์น˜๊ฐ€ ๋‹ค๋ฅด๋‹ค. ์‹คํ—˜ ๊ฒฐ๊ณผ๋Š” CBAC๊ฐ€ ๋” ์ข‹์€ ์ฝ”๋”ฉ ํผํฌ๋จผ์Šค๋ฅผ ๋‹ฌ์„ฑํ•œ๋‹ค๊ณ  ์•Œ๋ ค์ง„๋‹ค. ๊ทธ ๋‹ค์Œ์— CBAC ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ตœ์ ํ™”์‹œํ‚ค๊ธฐ๋ฅผ ์œ„ํ•ด์„œ ๋ช‡ ๊ฐ€์ง€ ์•„์ด๋””์–ด๋ฅผ ์ œ์•ˆํ•˜๊ฒŒ ๋๋‹ค. ์ฝ”๋”ฉ ํผํฌ๋จผ์Šค ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ์˜ ๋ชฉ์ ์œผ๋กœ ๊ทผ์‚ฌ ์˜ค์ฐจ ๋ณด์ƒ(approximation error compensation)๊ณผ ํ™•๋ฅ  ์ถ”์ • ์ตœ์ ํ™”(probability estimation)๋ฅผ ๋„์ž…ํ–ˆ๋‹ค. 
๋‘ ์ฝ”๋”ฉ์€ ๋‹ค๋ฅธ ์•ต์ปค๋ณด๋‹ค ๋‹ค ๋ถ€ํ˜ธํ™”ํšจ์œจ ํ–ฅ์ƒ ๊ฒฐ๊ณผ๋ฅผ ์–ป๊ฒŒ ๋๋‹ค. ๋‹ค๋ฅธ ํ•œํŽธ์œผ๋กœ๋Š” ์ฝ”๋”ฉ ์‹œ๊ฐ„์„ ์ค„์ด๊ธฐ๋ฅผ ์œ„ํ•˜์—ฌ ๋ ˆํ…Œ ์ถ”์ • ๋ชจ๋ธ(rate estimation model)๋„ ์ œ์•ˆํ•˜๊ฒŒ ๋œ๋‹ค. ๋ถ€ํ˜ธ์œจ-๋ณ€ํ˜• ์ตœ์ ํ™” ๊ณผ์ •(Rate-Distortion Optimization process)์˜ ๋ถ€ํ˜ธ์œจ-๋ณ€ํ˜• ๋Œ€๊ฐ€ ๊ณ„์‚ฐ(Rate-distortion cost calculation)์„ ์ง€์ง€ํ•˜๋„๋ก ๋ฆฌ์–ผ CBAC ์•Œ๊ณ ๋ฆฌ์ฆ˜(real CBAC algorithm) ๋ ˆํ…Œ ์ถ”์ •(rate estimation)์„ ์‚ฌ์šฉํ–ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ 2์ง„๋ฒ• ๊ณ„์‚ฐ ๋””์ฝ”๋”(decoder) ์‹คํ–‰ ์„ธ๋ถ€ ์‚ฌํ•ญ์„ ์„œ์ˆ ํ–ˆ๋‹ค. AVS2.0 ์ค‘์˜ ์ƒํ™ฉ ๊ธฐ๋ฐ˜ 2์ง„๋ฒ• ๊ณ„์‚ฐ ๋””์ฝ”๋”ฉ(CBAD)์ด ๋„ˆ๋ฌด ๋งŽ์ด ๋ฐ์ดํ„ฐ ์ข…์†์„ฑ๊ณผ ๊ณ„์‚ฐ ๋ถ€๋‹ด์„ ๋„์ž…ํ•˜๊ธฐ ๋•Œ๋ฌธ์— 2๊ฐœ ํ˜น์€ 2๊ฐœ ์ด์ƒ์˜ bin ํ‰ํ–‰ ๋””์ฝ”๋”ฉ์ธ ์ฒ˜๋ฆฌ๋Ÿ‰(CBAD)์„ ๋””์ž์ธ์„ ํ•˜๊ธฐ๊ฐ€ ์–ด๋ ต๋‹ค. 2์ง„๋ฒ• ๊ณ„์‚ฐ ๋””์ฝ”๋”ฉ์˜ one-bin ์ œ๋„๋„ ์—ฌ๊ธฐ์„œ ๋””์ž์ธ์„ ํ•˜๊ฒŒ ๋๋‹ค. ํ˜„์žฌ๊นŒ์ง€ AVS์˜ CBAD ๊ธฐ์กด ๋””์ž์ธ์ด ์—†๋‹ค. ์šฐ๋ฆฌ๊ฐ€ ์šฐ๋ฆฌ์˜ ๋‹ค์ž์ธ์„ ๊ด€๋ จ๋œ HEVC์˜ ์—ฐ๊ตฌ์™€ ๋น„๊ตํ•˜์—ฌ ์„ค๋“๋ ฅ์ด ๊ฐ•ํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์—ˆ๋‹ค.High Efficiency Video Coding (HEVC) was jointly developed by the International Standard Organization (ISO) and International Telecommunication Union (ITU) to improve the coding efficiency further compared with last generation standard H.264/AVC. The similar efforts have been devoted by the Audio and Video coding Standard (AVS) Workgroup of China. They developed the newest video coding standard (AVS2 or AVS2.0) in order to enhance the compression performance of the first generation AVS1 with many novel coding tools. The Context-based Binary Arithmetic Coding (CBAC) as the entropy coding tool used in the AVS2.0 plays a vital role in the overall coding standard. Similar with Context-based Adaptive Binary Arithmetic Coding (CABAC) adopted by HEVC, both of them employ the multiplier-free method to realize the arithmetic coding procedure. However, each of them develops the respective specific algorithm to deal with multiplication problem. 
In this work, we address three aspects in order to better understand CBAC in AVS2.0 and to explore further performance improvements. First, we design a comparison scheme to compare CBAC and CABAC on the AVS2.0 platform. The CABAC algorithm in HEVC was transplanted into AVS2.0 with attention to implementation details that differ between the two standards, such as context initialization. The experimental results show that CBAC achieves the better coding performance. Second, several ideas to optimize the CBAC algorithm in AVS2.0 are proposed. For coding performance, approximation error compensation and probability estimation optimization are introduced; both tools obtain a coding efficiency improvement over the anchor. In addition, a rate estimation model is proposed to reduce coding time: using rate estimation instead of the real CBAC algorithm to support the rate-distortion cost calculation in the Rate-Distortion Optimization (RDO) process significantly saves coding time, owing to the inherent computational complexity of CBAC. Lastly, the implementation of the binary arithmetic decoder is described. Since Context-based Binary Arithmetic Decoding (CBAD) in AVS2.0 introduces strong data dependences and a heavy computation burden, it is difficult to design a high-throughput CBAD that decodes 2 or more bins in parallel; a one-bin binary arithmetic decoder scheme is therefore designed in this work.
Even though there is no previous CBAD design for AVS up to now, we compare our design with related works for HEVC, and it achieves compelling experimental results.
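Both CBAC and CABAC replace the exact product range × p_LPS with a multiplier-free approximation, each via its own standardized lookup tables. The core idea can be sketched purely illustratively, using a hypothetical 4-bit probability quantization rather than either standard's actual tables:

```python
def subdivide(range_, p_lps_q4):
    """Approximate range * p_LPS without a multiplier.  p_LPS is quantized
    to p_lps_q4 / 16, so the product (range_ * p_lps_q4) >> 4 decomposes
    into shift-and-add over the set bits of p_lps_q4."""
    r_lps = sum(range_ >> (4 - bit) for bit in range(4) if p_lps_q4 & (1 << bit))
    r_lps = max(r_lps, 1)                 # keep the LPS interval non-empty
    return r_lps, range_ - r_lps

def encode_bin(low, range_, bin_val, p_lps_q4, lps_val):
    """One simplified coding step: pick the sub-interval for the bin and
    renormalize a 9-bit range register (carry handling and bit output omitted)."""
    r_lps, r_mps = subdivide(range_, p_lps_q4)
    if bin_val == lps_val:
        low, range_ = low + r_mps, r_lps  # LPS takes the upper sub-interval
    else:
        range_ = r_mps
    while range_ < 256:                   # renormalize into [256, 511]
        range_ <<= 1
        low <<= 1
    return low, range_

print(subdivide(510, 4))  # → (127, 383)
```

The actual CBAC and CABAC engines differ precisely in how they build this approximation (and in context initialization and probability update), which is what the thesis's comparison scheme isolates.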

    Video Stream Adaptation In Computer Vision Systems

    Computer Vision (CV) has been deployed recently in a wide range of applications, including the surveillance and automotive industries. According to a recent report, the market for CV technologies will grow to $33.3 billion by 2019, with the surveillance and automotive industries sharing over 20% of this market. This dissertation considers the design of real-time CV systems with live video streaming, especially those over wireless and mobile networks. Such systems include video cameras/sensors and monitoring stations. The cameras should adapt their captured videos based on events and/or the available resources and time requirements. The monitoring station receives video streams from all cameras and runs CV algorithms for decisions, warnings, control, and/or other actions. Real-time CV systems have constraints on power, computational, and communication resources. Most video adaptation techniques have considered video distortion as the primary metric. In CV systems, however, the main objective is enhancing event/object detection/recognition/tracking accuracy. The accuracy can essentially be thought of as the quality perceived by machines, as opposed to human perceptual quality. High-Efficiency Video Coding (HEVC) is a recent encoding standard that seeks to address the limited communication bandwidth resulting from the popularity of High-Definition (HD) video. Unfortunately, HEVC adopts algorithms that greatly slow down the encoding process and thus complicate real-time systems. This dissertation presents a method for adapting live video streams to limited and varying network bandwidth and energy resources. It analyzes and compares the rate-accuracy and rate-energy characteristics of various video stream adaptation techniques in CV systems. We model the video capturing, encoding, and transmission aspects and then provide an overall model of the power consumed by the video cameras and/or sensors.
In addition to modeling the power consumption, we model the achieved bitrate of video encoding. We validate and analyze the power consumption models of each phase, as well as the aggregate power consumption model, through extensive experiments. The analysis includes examining individual parameters separately and examining the impact of changing more than one parameter at a time. For HEVC, we develop an algorithm that predicts the size of a coding block without iterating through the exhaustive Rate-Distortion Optimization (RDO) method. We demonstrate the effectiveness of the proposed algorithm in comparison with existing algorithms: it achieves approximately 5 times the encoding speed of the RDO algorithm and 1.42 times the encoding speed of the fastest analyzed algorithm.
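The dissertation's actual models are not reproduced here, but the general shape of such an additive camera power model, with every coefficient below a hypothetical placeholder, can be sketched as:

```python
def camera_power_watts(frame_rate_hz, pixels_per_frame, bitrate_mbps,
                       k_capture=2.0e-8, k_encode=6.0e-8, k_transmit=0.25):
    """Illustrative additive model: capture and encoding power scale with
    pixel throughput, transmission power with the encoded bitrate.
    All three coefficients are hypothetical placeholders, not fitted values."""
    pixel_rate = frame_rate_hz * pixels_per_frame
    p_capture = k_capture * pixel_rate
    p_encode = k_encode * pixel_rate
    p_transmit = k_transmit * bitrate_mbps
    return p_capture + p_encode + p_transmit

# 1080p at 30 fps, streamed at 4 Mb/s, under the placeholder coefficients.
print(round(camera_power_watts(30, 1920 * 1080, 4.0), 3))  # → 5.977
```

A model of this shape makes the rate-energy trade-off explicit: lowering the bitrate reduces only the transmission term, while lowering resolution or frame rate reduces capture and encoding power as well.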

    Video coding standards


    Lightweight super resolution network for point cloud geometry compression

    This paper presents an approach for compressing point cloud geometry by leveraging a lightweight super-resolution network. The proposed method decomposes a point cloud into a base point cloud and the interpolation patterns needed to reconstruct the original point cloud. While the base point cloud can be efficiently compressed with any lossless codec, such as Geometry-based Point Cloud Compression, a distinct strategy is employed for the interpolation patterns. Rather than compressing them directly, a lightweight super-resolution network learns this information through overfitting, and the network parameters are then transmitted to assist point cloud reconstruction at the decoder side. Notably, this approach differs from lookup-table-based methods in that it obtains more accurate interpolation patterns by accessing a broader range of neighboring voxels at an acceptable computational cost. Experiments on the MPEG Cat1 (Solid) and Cat2 datasets demonstrate the remarkable compression performance achieved by the method.
Comment: 10 pages, 3 figures, 2 tables, and 27 references
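The decomposition into a base cloud plus per-voxel interpolation patterns can be illustrated with a minimal sketch. The octant-bitmask encoding below is only one plausible instantiation under the assumption of a 2x downsampling; the paper's exact scheme and its super-resolution network are not reproduced:

```python
import numpy as np

def decompose(voxels):
    """Split integer voxel coordinates (N, 3) into a half-resolution base
    cloud and, per base voxel, an 8-bit occupancy mask of its 2x2x2 children."""
    base = voxels >> 1
    child = voxels & 1
    octant = child[:, 0] * 4 + child[:, 1] * 2 + child[:, 2]
    patterns = {}
    for b, o in zip(map(tuple, base), octant):
        patterns[b] = patterns.get(b, 0) | (1 << int(o))
    return np.array(sorted(patterns)), patterns

def reconstruct(base, patterns):
    """Invert decompose(): expand each occupied octant back to full resolution."""
    out = [(2 * b[0] + (o >> 2 & 1), 2 * b[1] + (o >> 1 & 1), 2 * b[2] + (o & 1))
           for b in map(tuple, base) for o in range(8) if patterns[b] & (1 << o)]
    return np.array(sorted(out))

pts = np.array([[0, 0, 0], [0, 0, 1], [2, 3, 5]])
base, patterns = decompose(pts)
print(reconstruct(base, patterns).tolist())  # → [[0, 0, 0], [0, 0, 1], [2, 3, 5]]
```

In the paper's setting, only the base cloud is losslessly coded; a network overfitted to the sequence predicts the occupancy masks instead of transmitting them directly.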

    Increased compression efficiency of AVC and HEVC CABAC by precise statistics estimation

    The paper presents an improved adaptive arithmetic coding algorithm for application in future video compression technology. The proposed solution is based on the Context-based Adaptive Binary Arithmetic Coding (CABAC) technique and uses the authors' mechanism of symbol probability estimation, which exploits the Context-Tree Weighting (CTW) technique. This paper proposes a version of the algorithm that allows an arbitrary selection of the depth of the context trees when activating the algorithm within the AVC or HEVC video encoders. The algorithm has been tested in terms of coding efficiency and computational complexity. Results showed that, depending on the depth of the context trees, a bitrate reduction of 0.1% to 0.86% is achieved when using the algorithm in the HEVC video encoder, and a compression gain of 0.4% to 2.3% in the case of AVC. The new solution increases the complexity of the entropy encoder itself; however, this does not translate into an increase in the complexity of the whole video encoder.
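CTW-style probability estimation rests on per-node sequential estimators such as the Krichevsky-Trofimov (KT) estimator. A minimal sketch of that building block (the paper's tree weighting and its integration into CABAC are omitted):

```python
import math

def kt_probability(counts, symbol):
    """KT estimate at a context node: P(s) = (n_s + 1/2) / (n_0 + n_1 + 1)."""
    return (counts[symbol] + 0.5) / (counts[0] + counts[1] + 1.0)

def kt_code_length(bits):
    """Ideal code length in bits of a binary sequence coded with sequentially
    updated KT probabilities -- the quantity a context-tree node aims to minimize."""
    counts = [0, 0]
    total = 0.0
    for b in bits:
        total += -math.log2(kt_probability(counts, b))
        counts[b] += 1  # update the symbol statistics after coding
    return total

# A heavily biased bin sequence codes in far fewer bits than a balanced one.
print(round(kt_code_length([0] * 16), 2), round(kt_code_length([0, 1] * 8), 2))
```

Weighting such estimates across context trees of different depths is what lets the proposed coder adapt its statistics more precisely than a single fixed-depth context model.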
    • โ€ฆ
    corecore