7 research outputs found

    The Optimization of Context-based Binary Arithmetic Coding in AVS2.0

    Master's thesis -- Department of Electrical and Computer Engineering, Seoul National University Graduate School, February 2016. Advisor: Soo-Ik Chae.
    High Efficiency Video Coding (HEVC) was jointly developed by the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU) to improve coding efficiency beyond the previous-generation standard H.264/AVC. Similar efforts have been devoted by the Audio and Video coding Standard (AVS) Workgroup of China, which developed its newest video coding standard (AVS2, or AVS2.0) to enhance the compression performance of the first-generation AVS1 with many novel coding tools. Context-based Binary Arithmetic Coding (CBAC), the entropy coding tool used in AVS2.0, plays a vital role in the overall standard. Like the Context-based Adaptive Binary Arithmetic Coding (CABAC) adopted by HEVC, it employs a multiplier-free method to realize arithmetic coding; however, each coder uses its own specific algorithm to avoid the multiplication. This work covers three aspects, aiming to understand CBAC in AVS2.0 better and to explore further performance improvements. First, we design a comparison scheme to compare CBAC and CABAC on the AVS2.0 platform: the CABAC algorithm of HEVC was transplanted into AVS2.0, taking implementation differences such as context initialization into account. The experimental results show that CBAC achieves the better coding performance. Second, several optimizations of the CBAC algorithm in AVS2.0 are proposed. For coding performance, approximation error compensation and an optimized probability estimation model are introduced; both tools improve coding efficiency over the anchor. To reduce coding time, a rate estimation model is proposed: using rate estimation instead of the real CBAC algorithm for the rate-distortion cost calculation in the Rate-Distortion Optimization (RDO) process saves significant coding time, given the inherent computational complexity of CBAC. Lastly, the implementation of the binary arithmetic decoder is described. Because Context-based Binary Arithmetic Decoding (CBAD) in AVS2.0 introduces strong data dependences and a heavy computational burden, designing a high-throughput CBAD that decodes two or more bins in parallel is difficult, so a one-bin binary arithmetic decoder scheme is designed in this work. Although there is no previous CBAD design for AVS, we compare our design with related work for HEVC and obtain compelling results.
    Contents:
    Chapter 1 Introduction: 1.1 Research Background; 1.2 Key Techniques in AVS2.0; 1.3 Research Contents (1.3.1 Performance Comparison of CBAC; 1.3.2 CBAC Performance Improvement; 1.3.3 Implementation of Binary Arithmetic Decoder in CBAC); 1.4 Organization
    Chapter 2 Entropy Coder CBAC in AVS2.0: 2.1 Introduction of Entropy Coding; 2.2 CBAC Overview (2.2.1 Binarization and Generation of Bin String; 2.2.2 Context Modeling and Probability Estimation; 2.2.3 Binary Arithmetic Coding Engine); 2.3 Two-level Scan Coding CBAC in AVS2.0 (2.3.1 Scan Order; 2.3.2 First-Level Coding; 2.3.3 Second-Level Coding); 2.4 Summary
    Chapter 3 Performance Comparison in CBAC: 3.1 Differences between CBAC and CABAC; 3.2 Comparison of Two BAC Engines (3.2.1 Statistics and Initialization of Context Models; 3.2.2 Adaptive Initialization Probability); 3.3 Experiment Result; 3.4 Conclusion
    Chapter 4 CBAC Performance Improvement: 4.1 Approximation Error Compensation (4.1.1 Error Compensation Table; 4.1.2 Experiment Result); 4.2 Probability Estimation Model Optimization (4.2.1 Probability Estimation; 4.2.2 Probability Estimation Model in CBAC; 4.2.3 The Optimization of the Probability Estimation Model in CBAC; 4.2.4 Experiment Result); 4.3 Rate Estimation (4.3.1 Rate Estimation Model; 4.3.2 Experiment Result); 4.4 Conclusion
    Chapter 5 Implementation of Binary Arithmetic Decoder in CBAC: 5.1 Architecture of BAD (5.1.1 Top Architecture of BAD; 5.1.2 Range Update Module; 5.1.3 Offset Update Module; 5.1.4 Bits Read Module; 5.1.5 Context Modeling); 5.2 Complexity of BAD; 5.3 Conclusion
    Chapter 6 Conclusion and Further Work: 6.1 Conclusion; 6.2 Future Works
    Reference; Appendix: A.1 Co-simulation Environment (A.1.1 Range Update Module (dRangeUpdate.v); A.1.2 Offset Update Module (dOffsetUpdate.v); A.1.3 Bits Read Module (dReadBits.v); A.1.4 Binary Arithmetic Decoding Top Module (BADTop.v); A.1.5 Test Bench)
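    The multiplier-free engine shared in spirit by CBAC and CABAC is easiest to see in code. Below is a minimal C sketch of one encoding step in a table-driven binary arithmetic coder: the LPS sub-range comes from a lookup indexed by the context's probability state and two quantized bits of the current range, so no range-times-probability multiplication is ever performed. The table values, state count, and elided carry handling are illustrative placeholders, not the normative AVS2.0 or HEVC definitions.

        #include <stdint.h>

        #define NUM_STATES 4  /* placeholder; real engines use e.g. 64 states */

        /* range_lps[state][(range >> 6) & 3] -- placeholder entries */
        static const uint16_t range_lps[NUM_STATES][4] = {
            {128, 156, 184, 212},
            { 64,  78,  92, 106},
            { 32,  39,  46,  53},
            { 16,  19,  23,  26},
        };

        typedef struct {
            uint32_t low;    /* base of the current interval             */
            uint32_t range;  /* kept >= 256 (9 bits) by renormalization  */
            int      state;  /* probability state of the context model   */
        } bac_t;

        /* Encode one bin against a context whose MPS value is 'mps'. */
        static void bac_encode_bin(bac_t *c, int bin, int mps)
        {
            uint32_t r_lps = range_lps[c->state][(c->range >> 6) & 3];

            if (bin == mps) {
                c->range -= r_lps;            /* MPS: keep the lower part    */
            } else {
                c->low  += c->range - r_lps;  /* LPS: move to the upper part */
                c->range = r_lps;
            }
            /* state adaptation and carry/bit output are elided */
            while (c->range < 256) {  /* renormalize with shifts,   */
                c->range <<= 1;       /* never with a rescaling     */
                c->low   <<= 1;       /* multiplication             */
            }
        }

    A rate-estimation model of the kind proposed in this thesis would replace this loop inside RDO with a per-state table of approximate fractional bit costs, which is why it saves so much encoding time.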

    Algoritmo de estimação de movimento e sua arquitetura de hardware para HEVC (Motion estimation algorithm and its hardware architecture for HEVC)

    Doctoral thesis in Electrical Engineering. Video coding has been used in applications such as video surveillance, video conferencing, video streaming, video broadcasting, and video storage. In a typical video coding standard, many algorithms are combined to compress a video; among them, motion estimation is the most complex task. Hence, it is necessary to implement this task in real time using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its real-time implementation. The results show that the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum frequency of 241.6 MHz and is able to process 1080p@60Hz video with all variable block sizes specified in the HEVC standard and a motion vector search range of up to ±64 pixels.
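    For readers unfamiliar with the task being accelerated, the C sketch below shows the baseline computation: exhaustive block matching with the sum of absolute differences (SAD) over the ±64-pixel search range quoted in the abstract. The frame layout, block handling, and cost function are simplifying assumptions for illustration, not the thesis' fast algorithm.

        #include <stdint.h>
        #include <stdlib.h>
        #include <limits.h>

        #define SEARCH 64  /* +/-64 pixel motion vector search range */

        /* SAD between two n x n blocks with the same row stride */
        static uint32_t sad(const uint8_t *cur, const uint8_t *ref,
                            int stride, int n)
        {
            uint32_t s = 0;
            for (int y = 0; y < n; y++)
                for (int x = 0; x < n; x++)
                    s += abs(cur[y * stride + x] - ref[y * stride + x]);
            return s;
        }

        /* Exhaustively find the best motion vector for the
           n x n block of 'cur' whose top-left corner is (bx, by). */
        static void full_search(const uint8_t *cur, const uint8_t *ref,
                                int w, int h, int bx, int by, int n,
                                int *mvx, int *mvy)
        {
            uint32_t best = UINT32_MAX;
            *mvx = *mvy = 0;
            for (int dy = -SEARCH; dy <= SEARCH; dy++) {
                for (int dx = -SEARCH; dx <= SEARCH; dx++) {
                    int rx = bx + dx, ry = by + dy;
                    if (rx < 0 || ry < 0 || rx + n > w || ry + n > h)
                        continue;  /* candidate must stay inside the frame */
                    uint32_t s = sad(cur + (size_t)by * w + bx,
                                     ref + (size_t)ry * w + rx, w, n);
                    if (s < best) { best = s; *mvx = dx; *mvy = dy; }
                }
            }
        }

    A full search evaluates up to (2*64+1)^2 = 16,641 candidates per block; fast algorithms such as the one proposed in the thesis prune that set, which is what makes a real-time 1080p@60Hz hardware implementation tractable.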

    Deep Learning Based Point Cloud Processing and Compression

    Title from PDF of title page, viewed August 24, 2022. Dissertation advisors: Zhu Li and Sejun Song. Vita. Includes bibliographical references (pages 116-137). Dissertation (Ph.D.)--Department of Computer Science & Electrical Engineering, University of Missouri--Kansas City, 2022. A point cloud is a 3D data representation that is becoming increasingly popular. Recent significant advances in 3D sensors and capture techniques have led to a surge in the use of 3D point clouds in virtual reality/augmented reality (VR/AR) content creation, as well as in 3D sensing for robotics, smart cities, telepresence, and automated driving. With the growth of point cloud applications and improved capture technologies, we now have high-resolution point clouds with millions of points per frame; because of their size, efficient techniques for the transmission, compression, and processing of point cloud content are still widely sought. This thesis addresses multiple issues in the transmission, compression, and processing pipeline for point cloud data. We employ deep learning to process dense as well as sparse 3D point cloud data, for both static and dynamic content. Employing deep learning on point cloud data, which is inherently sparse, is a challenging task. We propose deep-learning-based frameworks that address each of the following problems.
    Point Cloud Compression Artifact Removal. V-PCC is the current state of the art for dynamic point cloud compression, but at lower bitrates it introduces unpleasant artifacts. We propose a deep learning solution for V-PCC artifact removal that leverages the direction-of-projection property in V-PCC to remove quantization noise.
    Point Cloud Geometry Prediction. Current lossy point cloud compression and processing techniques suffer from quantization loss, which results in a coarser, sub-sampled representation of the point cloud. We address the points lost during voxelization by performing geometry prediction across spatial scales with a deep learning architecture.
    Point Cloud Geometry Upsampling. Loss of detail and irregularities in point cloud geometry can occur during the capture, processing, and compression pipeline. We present a novel geometry upsampling technique, PU-Dense, which can process a diverse set of point clouds, including synthetic mesh-based point clouds, real-world high-resolution point clouds, real-world indoor LiDAR-scanned objects, and outdoor dynamically acquired LiDAR-based point clouds.
    Dynamic Point Cloud Interpolation. Dense photorealistic point clouds can depict real-world dynamic objects in high resolution and at a high frame rate. Frame interpolation of such dynamic point clouds would ease the distribution, processing, and compression of such content. We propose the first point cloud interpolation framework for photorealistic dynamic point clouds.
    Inter-frame Compression for Dynamic Point Clouds. Efficient point cloud compression is essential for applications like virtual and mixed reality, autonomous driving, and cultural heritage. We propose a deep-learning-based inter-frame encoding scheme for dynamic point cloud geometry compression.
    In each case, our method achieves state-of-the-art results with significant improvements over current technologies. Contents: Introduction -- Point cloud compression artifact removal -- Point cloud geometry prediction -- PU-Dense: sparse tensor-based point cloud geometry upsampling -- Dynamic point cloud interpolation -- Inter-frame compression for dynamic point cloud geometry coding
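    The voxelization loss that the geometry-prediction framework targets can be seen in a toy example: quantizing coordinates to a voxel grid merges nearby points, so the voxelized cloud carries fewer points than the input. The grid size and coordinates in this C sketch are arbitrary assumptions, chosen only to trigger a merge.

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            const float pts[][3] = {  /* first two points share a voxel */
                {0.10f, 0.20f, 0.30f},
                {0.12f, 0.21f, 0.33f},
                {0.90f, 0.10f, 0.40f},
            };
            const float voxel = 0.1f;  /* voxel edge length */
            const int n = sizeof pts / sizeof pts[0];

            long seen[8];              /* naive duplicate check, toy-sized */
            int kept = 0;
            for (int i = 0; i < n; i++) {
                /* pack the three voxel indices into one comparable key */
                long key = (long)floorf(pts[i][0] / voxel) * 1000000
                         + (long)floorf(pts[i][1] / voxel) * 1000
                         + (long)floorf(pts[i][2] / voxel);
                int dup = 0;
                for (int j = 0; j < kept; j++)
                    if (seen[j] == key) dup = 1;
                if (!dup) seen[kept++] = key;
            }
            /* prints: 3 input points -> 2 occupied voxels */
            printf("%d input points -> %d occupied voxels\n", n, kept);
            return 0;
        }

    Recovering the merged points from the surviving occupied voxels, across spatial scales, is exactly the prediction problem the thesis attacks.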

    Architectures for Adaptive Low-Power Embedded Multimedia Systems

    This Ph.D. thesis describes novel hardware/software architectures for adaptive low-power embedded multimedia systems. Novel techniques for run-time adaptive energy management are proposed, such that hardware and software adapt together to react to unpredictable scenarios. A complete power-aware H.264 video encoder was developed; comparison with the state of the art demonstrates significant energy savings while meeting the performance constraints and keeping the video quality degradation unnoticeable.

    Compressão de imagem e vídeo com controlador ARM (Image and video compression with an ARM controller)

    This work was developed during an internship at the Portuguese branch of ABS GmbH and focused on image and video compression with the JPEG and H.264 standards, respectively. The LeopardBoard DM368 platform, with an ARM9 controller, was used. The compression performance of both standards was analyzed with C programs running on the DM368 processor. The image compression program takes the name and resolution of the image to be compressed as input parameters and compresses it at 10 different quantization levels. The results show that compression speeds of up to 73 fps (frames per second) are attainable at 1280x720 resolution, and that good-quality images can be obtained at compression ratios of up to about 22:1. In the video compression program, the encoder is configured according to the recommendations for the following applications: video conferencing, video surveillance, storage, and broadcasting/streaming. The settings of each encoding process, the file name, the number of frames, and the resolution are the input parameters. For 1280x720 resolution, compression speeds of up to about 68 fps were obtained, while for 1920x1088 the figure was about 30 fps. An application was also developed that can capture images or video and apply image processing, compression, storage, and transmission to a DVI (Digital Visual Interface) output. The software image processing dynamically enhances the images, and the average capture, compression, and storage rate is about 5 fps at 1280x720, which suits the capture of individual images. Without software processing, the rate rises to about 23 fps at 1280x720 and about 28 fps at 1280x1088, which is favorable for video capture.
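    As an illustration of the quality sweep the image-compression program performs, the C sketch below compresses one raw YUV420 frame at 10 quantization levels and reports each compression ratio. jpeg_compress() is a hypothetical stand-in for the DM368 codec call used in the work, with a placeholder body that fabricates a size just so the sketch compiles and runs.

        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical encoder wrapper, not a real DM368 API; returns the
           compressed size in bytes.  The body is a fake placeholder. */
        static size_t jpeg_compress(const unsigned char *yuv, int w, int h,
                                    int quality, unsigned char *out)
        {
            (void)yuv; (void)out;
            /* fabricated size: grows with the quality setting */
            return (size_t)w * h * 3 / 2 * quality / 100 + 1;
        }

        static void quality_sweep(const unsigned char *yuv, int w, int h,
                                  unsigned char *out)
        {
            size_t raw = (size_t)w * h * 3 / 2;    /* YUV420 frame size      */
            for (int q = 10; q <= 100; q += 10) {  /* 10 quantization levels */
                size_t sz = jpeg_compress(yuv, w, h, q, out);
                printf("quality %3d: %zu bytes, ratio %.1f:1\n",
                       q, sz, (double)raw / (double)sz);
            }
        }

        int main(void)
        {
            static unsigned char yuv[1280 * 720 * 3 / 2], out[sizeof yuv];
            quality_sweep(yuv, 1280, 720, out);
            return 0;
        }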

    High-performance hardware accelerators for image processing in space applications

    Mars is a hard place to reach. While there have been many notable successes in getting probes to the Red Planet, the historical record is full of bad news, and the success rate for actually landing on the Martian surface is even worse, roughly 30%. This low success rate is mainly due to the characteristics of the Martian environment. Strong winds frequently blow in the Martian atmosphere; this phenomenon tends to divert a lander's descent trajectory from the target one. Moreover, the Martian surface is not an easy place to land safely: it is pitted by many closely spaced craters and huge stones, and characterized by huge mountains and hills (e.g., Olympus Mons is 648 km in diameter and 27 km tall). For these reasons, mission failure due to landing in a large crater, on big stones, or on highly sloped terrain is quite probable. In recent years, all space agencies have increased their research efforts to improve the success rate of Mars missions. The two hottest research topics are active debris removal and guided landing on Mars. The former aims at finding new methods to remove space debris using unmanned spacecraft, which must be able to autonomously detect a piece of debris, analyze it to extract its characteristics in terms of weight, speed, and dimensions, and eventually rendezvous with it. To perform these tasks, the spacecraft must have strong vision capabilities: it must be able to take pictures and process them with very complex image processing algorithms in order to detect, track, and analyze the debris. The latter aims at increasing the precision of the landing point (i.e., shrinking the landing ellipse) on Mars. Future space missions will increasingly adopt video-based navigation systems to assist the entry, descent, and landing (EDL) phase of space modules (e.g., spacecraft), enhancing the precision of automatic EDL navigation systems. For instance, recent space exploration missions such as Spirit, Opportunity, and Curiosity used an EDL procedure that follows a fixed, precomputed descent trajectory to reach a precise landing point; this approach guarantees a landing-point precision of no better than about 20 km. Comparing this figure with the characteristics of the Martian environment shows that the probability of mission failure remains very high. A very challenging problem is to design an autonomous guided EDL system able to reduce the landing ellipse even further, while avoiding landings in dangerous areas of the Martian surface (e.g., large craters or big stones) that could lead to mission failure. Autonomy is mandatory, since a manually driven approach is not feasible given the distance between Earth and Mars: this distance varies from approximately 56 to 100 million km due to orbital eccentricity, so even with transmission at the speed of light the one-way delay would be about 3.1 minutes in the best case (see the short calculation below), and a command round trip would span much of the EDL phase itself. In both applications, the algorithms must be self-adaptive to the environmental conditions: since the harsh conditions on Mars (and in space generally) are difficult to predict at design time, the algorithms must be able to automatically tune their internal parameters to the current conditions. Moreover, real-time performance is another key factor.
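    A quick sanity check of that delay figure, assuming straight-line propagation at the speed of light over the minimum quoted distance:

        t_{\min} = \frac{d_{\min}}{c}
                 = \frac{5.6 \times 10^{10}\,\mathrm{m}}{3.0 \times 10^{8}\,\mathrm{m/s}}
                 \approx 187\,\mathrm{s} \approx 3.1\,\mathrm{min}

    A command round trip therefore takes more than six minutes even at closest approach, on the order of the duration of the entire EDL sequence, which rules out any ground-in-the-loop control.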
    Since a software implementation of these computationally intensive tasks cannot reach the required performance, the algorithms must be accelerated in hardware. For these reasons, this thesis presents my research work on advanced image processing algorithms for space applications and the associated hardware accelerators. My research activity covered both the algorithms and their hardware implementations. Concerning the former, I mainly focused on integrating self-adaptability features into the existing algorithms; concerning the latter, I studied and validated a methodology to efficiently develop, verify, and validate hardware components aimed at accelerating video-based applications. This approach allowed me to develop and test high-performance hardware accelerators that strongly outperform the current state-of-the-art implementations. The thesis is organized in four main chapters. Chapter 2 starts with a brief introduction to the history of digital image processing; its main content is a description of space missions in which digital image processing plays a key role. A major effort is devoted to the missions on which my research activity has a substantial impact, and for these missions the chapter analyzes and evaluates the state-of-the-art approaches and algorithms in depth. Chapter 3 analyzes and compares the two technologies used to implement high-performance hardware accelerators, Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs). This comparison explains why space agencies choose FPGAs rather than ASICs for high-performance hardware accelerators in space missions, even though FPGAs are more susceptible to Single Event Upsets (SEUs, transient errors induced in hardware components by alpha particles and solar radiation in space). Moreover, this chapter describes in depth the three available space-grade FPGA technologies (one-time programmable, flash-based, and SRAM-based) and the main fault-mitigation techniques against SEUs that are mandatory when employing space-grade FPGAs in actual missions (the majority-voting idea behind one classic technique is sketched after this abstract). Chapter 4 describes one of the main contributions of my research work: a library of high-performance hardware accelerators for image processing in space applications. The basic idea behind this library is to offer designers a set of validated hardware components able to strongly speed up the basic image processing operations commonly used in an image processing chain. In other words, these components can be used directly as elementary building blocks to create a complex image processing system without wasting time in the debug and validation phase. The library groups the proposed hardware accelerators into IP-core families; the components of a family share the same functionality and input/output interface. This harmonization of the I/O interface makes it possible to substitute components of the same family inside a complex image processing system without modifying the system's communication infrastructure. Besides the analysis of the internal architecture of the proposed components, another important aspect of this chapter is the methodology used to develop, verify, and validate the proposed high-performance image processing hardware accelerators.
    This methodology involves the use of different programming and hardware description languages to support the designer from algorithm modelling up to hardware implementation and validation. Chapter 5 presents the proposed complex image processing systems. In particular, it uses a set of actual case studies, associated with the most recent space-agency needs, to show how the hardware accelerator components can be assembled into a complex image processing system. In addition to the hardware accelerators contained in the library, the described systems embed innovative ad hoc hardware components and software routines that provide high-performance, self-adaptive image processing functionalities. To prove the benefits of the proposed methodology, each case study concludes with a comparison against the current state-of-the-art implementations, highlighting the benefits in terms of performance and self-adaptability to the environmental conditions.
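    As an aside on the SEU fault-mitigation techniques surveyed in this thesis, the classic triple modular redundancy (TMR) approach reduces, at its core, to a majority vote over three redundant copies of the same logic, so that a single upset copy is masked. The C function below is only an intuition aid; real designs instantiate the redundant logic and voters in the HDL netlist, not in software.

        #include <stdint.h>

        /* Bitwise two-out-of-three majority of three redundant results:
           any single corrupted input is outvoted by the other two. */
        static inline uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
        {
            return (a & b) | (a & c) | (b & c);
        }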