
    A Deeply Pipelined CABAC Decoder for HEVC Supporting Level 6.2 High-tier Applications

    High Efficiency Video Coding (HEVC) is the latest video coding standard that specifies video resolutions up to 8K Ultra-HD (UHD) at 120 fps to support the next decade of video applications. This results in high-throughput requirements for the context adaptive binary arithmetic coding (CABAC) entropy decoder, which was already a well-known bottleneck in H.264/AVC. To address the throughput challenges, several modifications were made to CABAC during the standardization of HEVC. This work leverages these improvements in the design of a high-throughput HEVC CABAC decoder. It also supports the high-level parallel processing tools introduced by HEVC, including tile and wavefront parallel processing. The proposed design uses a deeply pipelined architecture to achieve a high clock rate. Additional techniques such as state prefetch logic, a latch-based context memory, and separate finite state machines are applied to minimize stall cycles, while multi-bypass-bin decoding is used to further increase the throughput. The design is implemented in an IBM 45nm SOI process. After place-and-route, its operating frequency reaches 1.6 GHz. The corresponding throughput reaches up to 1696 and 2314 Mbin/s under common and theoretical worst-case test conditions, respectively. The results show that the design is sufficient to decode high-tier video bitstreams at level 6.2 (8K UHD at 120 fps) in real time, or main-tier bitstreams at level 5.1 (4K UHD at 60 fps) for applications requiring sub-frame latency, such as video conferencing.
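
    As a rough illustration of the serial work such a pipeline accelerates, the sketch below decodes one context-coded bin in the style of HEVC CABAC: subdivide the range, take the MPS or LPS branch, update the context state, and renormalize bit by bit. It is a minimal software model, not the paper's hardware design; the lps_range and state-transition tables are placeholders for the values defined in the standard.

```c
/* Illustrative sketch of one regular-bin CABAC decode step (not the
 * standard's exact tables and not the paper's hardware).  The lookup
 * table lps_range[] and the state-transition arrays are hypothetical
 * placeholders for the values defined in the HEVC specification. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t range;            /* current interval width            */
    uint32_t offset;           /* value read from the bitstream     */
    const uint8_t *buf;        /* bitstream                         */
    size_t pos;                /* next bit position                 */
} ArithDecoder;

typedef struct {
    uint8_t state;             /* probability state index           */
    uint8_t mps;               /* value of the most probable symbol */
} ContextModel;

static uint32_t read_bit(ArithDecoder *d)
{
    uint32_t bit = (d->buf[d->pos >> 3] >> (7 - (d->pos & 7))) & 1u;
    d->pos++;
    return bit;
}

/* Decode one context-coded bin. */
int decode_regular_bin(ArithDecoder *d, ContextModel *ctx,
                       const uint8_t lps_range[64][4],
                       const uint8_t next_state_mps[64],
                       const uint8_t next_state_lps[64])
{
    uint32_t r_lps = lps_range[ctx->state][(d->range >> 6) & 3];
    int bin;

    d->range -= r_lps;
    if (d->offset < d->range) {              /* MPS path */
        bin = ctx->mps;
        ctx->state = next_state_mps[ctx->state];
    } else {                                 /* LPS path */
        d->offset -= d->range;
        d->range = r_lps;
        bin = 1 - ctx->mps;
        if (ctx->state == 0)                 /* LPS at state 0 flips the MPS */
            ctx->mps = 1 - ctx->mps;
        ctx->state = next_state_lps[ctx->state];
    }
    while (d->range < 256) {                 /* renormalize one bit at a time */
        d->range <<= 1;
        d->offset = (d->offset << 1) | read_bit(d);
    }
    return bin;
}
```

    The serial dependency between consecutive bins that is visible in this loop is exactly what the paper attacks with state prefetching, stall minimization, and multi-bypass-bin decoding.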

    Parallel algorithms and architectures for low power video decoding

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 197-204). Parallelism coupled with voltage scaling is an effective approach to achieve high processing performance with low power consumption. This thesis presents parallel architectures and algorithms designed to deliver the power and performance required for current and next generation video coding. Coding efficiency, area cost and scalability are also addressed. First, a low power video decoder is presented for the current state-of-the-art video coding standard H.264/AVC. Parallel architectures are used along with voltage scaling to deliver high definition (HD) decoding at low power levels. Additional architectural optimizations such as reducing memory accesses and using multiple frequency/voltage domains are also described. An H.264/AVC Baseline decoder test chip was fabricated in 65-nm CMOS. It can operate at 0.7 V for HD (720p, 30 fps) video decoding with a measured power of 1.8 mW. The highly scalable decoder can trade off power and performance across a >100x range. Second, this thesis demonstrates how serial algorithms, such as Context-based Adaptive Binary Arithmetic Coding (CABAC), can be redesigned for parallel architectures to enable high throughput with low coding efficiency cost. A parallel algorithm called Massively Parallel CABAC (MP-CABAC) is presented that uses syntax element partitions and interleaved entropy slices to achieve better throughput-coding efficiency and throughput-area tradeoffs than H.264/AVC. The parallel algorithm also improves scalability by providing a third dimension along which to trade coding efficiency for power and performance. Finally, joint algorithm-architecture optimizations are used to increase performance and reduce area with almost no coding penalty. The MP-CABAC is mapped to a highly parallel architecture with 80 parallel engines, which together deliver >10x higher throughput than existing H.264/AVC CABAC implementations. An MP-CABAC test chip was fabricated in 65-nm CMOS to demonstrate the power-performance-coding efficiency tradeoff. By Vivienne Sze. Ph.D.
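
    The core idea behind entropy slices and syntax element partitions is that separate portions of the bitstream carry enough independent state to be decoded concurrently. The toy sketch below is not the MP-CABAC algorithm; it only shows the structural pattern of giving each partition its own decoder state and running the engines in parallel threads, with decode_partition() as a placeholder stub.

```c
/* Conceptual sketch of decoding independent bitstream partitions in
 * parallel, in the spirit of syntax-element partitions / entropy slices.
 * The partition layout and decode_partition() body are hypothetical
 * stand-ins, not the MP-CABAC algorithm itself. */
#include <pthread.h>
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct {
    const uint8_t *data;   /* this partition's slice of the bitstream */
    size_t         size;
    unsigned long  bins;   /* bins decoded (output)                   */
} Partition;

/* Stub: a real engine would run a full arithmetic decoder here, with its
 * own context models, so partitions never synchronize per bin. */
static void *decode_partition(void *arg)
{
    Partition *p = (Partition *)arg;
    for (size_t i = 0; i < p->size; i++)
        p->bins += 8;      /* placeholder: pretend each byte yields 8 bins */
    return NULL;
}

int main(void)
{
    enum { ENGINES = 4 };                  /* the test chip uses many more  */
    static uint8_t bitstream[4096];
    Partition part[ENGINES];
    pthread_t tid[ENGINES];

    for (int i = 0; i < ENGINES; i++) {    /* split the stream into chunks  */
        part[i].data = bitstream + i * (sizeof bitstream / ENGINES);
        part[i].size = sizeof bitstream / ENGINES;
        part[i].bins = 0;
        pthread_create(&tid[i], NULL, decode_partition, &part[i]);
    }
    for (int i = 0; i < ENGINES; i++) {
        pthread_join(tid[i], NULL);
        printf("partition %d: %lu bins\n", i, part[i].bins);
    }
    return 0;
}
```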

    Decoder Hardware Architecture for HEVC

    This chapter provides an overview of the design challenges faced in the implementation of hardware HEVC decoders. These challenges can be attributed to the larger and more diverse coding block sizes and transform sizes, the longer interpolation filters for motion compensation, the increased number of steps in intra prediction, and the introduction of a new in-loop filter. Several solutions to address these implementation challenges are discussed. As a reference, results for an HEVC decoder test chip are also presented. Texas Instruments Incorporated.
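
    One concrete instance of the longer interpolation filters is HEVC's 8-tap luma filter, compared with the 6-tap filter of H.264/AVC. The sketch below applies the half-sample filter coefficients to a single row with simplified edge clamping; it illustrates the per-pixel arithmetic only, not the chapter's hardware datapath.

```c
/* Simplified 1-D illustration of HEVC's 8-tap half-sample luma
 * interpolation filter (coefficients {-1, 4, -11, 40, 40, -11, 4, -1}).
 * Real decoders filter 2-D blocks with intermediate precision and
 * proper reference padding; this sketch just clamps at the row ends. */
#include <stdint.h>

static uint8_t clip255(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

void hevc_halfpel_row(const uint8_t *src, uint8_t *dst, int width)
{
    static const int c[8] = { -1, 4, -11, 40, 40, -11, 4, -1 };

    for (int x = 0; x < width; x++) {
        int acc = 0;
        for (int k = 0; k < 8; k++) {
            int idx = x + k - 3;                 /* taps centered on x..x+1 */
            if (idx < 0)       idx = 0;          /* simplified edge clamp   */
            if (idx >= width)  idx = width - 1;
            acc += c[k] * src[idx];
        }
        dst[x] = clip255((acc + 32) >> 6);       /* normalize by 64, round  */
    }
}
```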

    The Optimization of Context-based Binary Arithmetic Coding in AVS2.0

    Thesis (Master's) -- Graduate School of Seoul National University, Department of Electrical and Computer Engineering, February 2016. Advisor: Soo-Ik Chae. High Efficiency Video Coding (HEVC) was jointly developed by the International Standard Organization (ISO) and the International Telecommunication Union (ITU) to further improve coding efficiency compared with the previous-generation standard H.264/AVC. Similar efforts have been devoted by the Audio and Video coding Standard (AVS) Workgroup of China, which developed the newest video coding standard (AVS2, or AVS2.0) to enhance the compression performance of the first-generation AVS1 with many novel coding tools. Context-based Binary Arithmetic Coding (CBAC), the entropy coding tool used in AVS2.0, plays a vital role in the overall coding standard. Like the Context-based Adaptive Binary Arithmetic Coding (CABAC) adopted by HEVC, it employs a multiplier-free method to realize the arithmetic coding procedure; however, each standard develops its own specific algorithm to deal with the multiplication problem. This work covers three aspects, aiming to understand CBAC in AVS2.0 better and to explore further performance improvements. First, we design a comparison scheme to compare CBAC and CABAC on the AVS2.0 platform: the CABAC algorithm of HEVC was transplanted into AVS2.0 with attention to implementation differences such as context initialization. The experimental results show that CBAC achieves better coding performance. Next, several ideas to optimize the CBAC algorithm in AVS2.0 are proposed. For coding performance, approximation error compensation and a probability estimation optimization are introduced; both tools improve coding efficiency compared with the anchor. In addition, a rate estimation model is proposed to reduce coding time: using rate estimation instead of the real CBAC algorithm to support the rate-distortion cost calculation in the Rate-Distortion Optimization (RDO) process saves coding time significantly, because the real CBAC algorithm is computationally complex by nature. Lastly, the implementation of the binary arithmetic decoder is described. Since Context-based Binary Arithmetic Decoding (CBAD) in AVS2.0 introduces strong data dependencies and a heavy computation burden, it is difficult to design a high-throughput CBAD that decodes two or more bins in parallel; a one-bin binary arithmetic decoder is therefore designed in this work. Although there is no previous CBAD design for AVS, we compare our design with related work for HEVC and obtain compelling results.
    Table of Contents --
    Chapter 1 Introduction: 1.1 Research Background; 1.2 Key Techniques in AVS2.0; 1.3 Research Contents (1.3.1 Performance Comparison of CBAC; 1.3.2 CBAC Performance Improvement; 1.3.3 Implementation of Binary Arithmetic Decoder in CBAC); 1.4 Organization
    Chapter 2 Entropy Coder CBAC in AVS2.0: 2.1 Introduction of Entropy Coding; 2.2 CBAC Overview (2.2.1 Binarization and Generation of Bin String; 2.2.2 Context Modeling and Probability Estimation; 2.2.3 Binary Arithmetic Coding Engine); 2.3 Two-level Scan Coding CBAC in AVS2.0 (2.3.1 Scan Order; 2.3.2 First Level Coding; 2.3.3 Second Level Coding); 2.4 Summary
    Chapter 3 Performance Comparison in CBAC: 3.1 Differences between CBAC and CABAC; 3.2 Comparison of Two BAC Engines (3.2.1 Statistics and Initialization of Context Models; 3.2.2 Adaptive Initialization Probability); 3.3 Experiment Result; 3.4 Conclusion
    Chapter 4 CBAC Performance Improvement: 4.1 Approximation Error Compensation (4.1.1 Error Compensation Table; 4.1.2 Experiment Result); 4.2 Probability Estimation Model Optimization (4.2.1 Probability Estimation; 4.2.2 Probability Estimation Model in CBAC; 4.2.3 The Optimization of the Probability Estimation Model in CBAC; 4.2.4 Experiment Result); 4.3 Rate Estimation (4.3.1 Rate Estimation Model; 4.3.2 Experiment Result); 4.4 Conclusion
    Chapter 5 Implementation of Binary Arithmetic Decoder in CBAC: 5.1 Architecture of BAD (5.1.1 Top Architecture of BAD; 5.1.2 Range Update Module; 5.1.3 Offset Update Module; 5.1.4 Bits Read Module; 5.1.5 Context Modeling); 5.2 Complexity of BAD; 5.3 Conclusion
    Chapter 6 Conclusion and Further Work: 6.1 Conclusion; 6.2 Future Works
    Reference; Appendix A.1 Co-simulation Environment: A.1.1 Range Update Module (dRangeUpdate.v); A.1.2 Offset Update Module (dOffsetUpdate.v); A.1.3 Bits Read Module (dReadBits.v); A.1.4 Binary Arithmetic Decoding Top Module (BADTop.v); A.1.5 Test Bench
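
    As a generic illustration of the rate-estimation idea mentioned in the abstract above (replacing the real arithmetic coder during RD optimization with a table of fractional bit costs), the sketch below accumulates a precomputed -log2(p) cost per bin. The state-to-probability mapping and the table values are hypothetical examples, not the model proposed in the thesis.

```c
/* Generic illustration of table-based rate estimation for RD optimization:
 * instead of running the full arithmetic coder, the encoder accumulates a
 * precomputed fractional-bit cost per bin, indexed by the context's
 * probability state.  The table is derived here from -log2(p) with a
 * hypothetical state-to-probability mapping. */
#include <math.h>
#include <stdio.h>

#define STATES     64
#define FRAC_BITS  15            /* fixed-point scale: 1 bit == 1 << 15 */

static unsigned est_bits[STATES][2];   /* [state][bin is LPS?] */

static void init_rate_table(void)
{
    for (int s = 0; s < STATES; s++) {
        /* Hypothetical mapping from state index to LPS probability. */
        double p_lps = 0.5 * pow(0.95, s);
        est_bits[s][1] = (unsigned)(-log2(p_lps)       * (1 << FRAC_BITS));
        est_bits[s][0] = (unsigned)(-log2(1.0 - p_lps) * (1 << FRAC_BITS));
    }
}

/* Accumulate the estimated cost of coding a bin in probability state s. */
static unsigned long rate_cost(unsigned long acc, int s, int is_lps)
{
    return acc + est_bits[s][is_lps];
}

int main(void)
{
    init_rate_table();
    unsigned long acc = 0;
    acc = rate_cost(acc, 10, 0);   /* an MPS bin in state 10 */
    acc = rate_cost(acc, 10, 1);   /* an LPS bin in state 10 */
    printf("estimated bits = %.3f\n", (double)acc / (1 << FRAC_BITS));
    return 0;
}
```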

    Bitplane image coding with parallel coefficient processing

    Image coding systems have traditionally been tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible.
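
    For readers unfamiliar with bitplane coding, the sketch below shows the generic scan such coders start from: coefficients are visited plane by plane from the most significant magnitude bit downward, producing a significance bit for coefficients that are not yet significant and a refinement bit afterwards. It is not BPC-PaCo's reformulated scanning order, context formation, or arithmetic coder, and sign coding is omitted.

```c
/* Generic bitplane scan over a block of quantized coefficients: emit a
 * significance bit per insignificant coefficient and a refinement bit per
 * already-significant one, from the most significant plane downward.
 * Bits are just appended to an array here; a real coder would feed them,
 * together with their contexts, to an arithmetic coder. */
#include <stdlib.h>
#include <stdint.h>

size_t bitplane_scan(const int32_t *coef, int n, int num_planes,
                     uint8_t *out_bits, size_t max_bits)
{
    uint8_t *significant = calloc((size_t)n, 1);
    size_t k = 0;

    for (int p = num_planes - 1; p >= 0 && k < max_bits; p--) {
        uint32_t mask = 1u << p;
        for (int i = 0; i < n && k < max_bits; i++) {
            uint32_t mag = (uint32_t)(coef[i] < 0 ? -coef[i] : coef[i]);
            uint8_t bit = (mag & mask) ? 1 : 0;
            out_bits[k++] = bit;              /* significance or refinement  */
            if (!significant[i] && bit)
                significant[i] = 1;           /* first 1-bit: now significant */
        }
    }
    free(significant);
    return k;                                  /* number of bits produced    */
}
```

    The inner loop over coefficients is the part that BPC-PaCo maps to SIMD lanes, once the probability model is reformulated so that every lane can proceed in lockstep.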

    GPU-oriented architecture for an end-to-end image/video codec based on JPEG2000

    Modern image and video compression standards employ computationally intensive algorithms that provide advanced features to the coding system. Current standards often need to be implemented in hardware or using expensive solutions to meet the real-time requirements of some environments. Contrary to this trend, this paper proposes an end-to-end codec architecture running on inexpensive Graphics Processing Units (GPUs) that is based on, though not compatible with, the JPEG2000 international standard for image and video compression. When executed on a commodity Nvidia GPU, it achieves real-time processing of 12K video. The proposed software architecture utilizes four CUDA kernels that minimize memory transfers, use registers instead of shared memory, and employ a double-buffer strategy to optimize the streaming of data. The analysis of throughput indicates that the proposed codec yields results at least 10× superior on average to those achieved with JPEG2000 implementations devised for CPUs, and approximately 4× superior to those achieved with hardwired solutions of the HEVC/H.265 video compression standard.
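
    The double-buffer strategy mentioned above overlaps the transfer of the next frame with the processing of the current one. The sketch below shows only the CPU-side ping-pong pattern, with hypothetical load_frame/encode_frame stand-ins; in the actual codec the load would be an asynchronous host-to-device copy overlapped with the CUDA kernels that process the previous frame.

```c
/* Minimal ping-pong (double-buffer) pattern: while the codec processes
 * buffer `cur`, the next frame is being loaded into buffer `cur ^ 1`.
 * load_frame()/encode_frame() are hypothetical stand-ins. */
#include <stdio.h>
#include <string.h>

#define FRAME_BYTES  (1920 * 1080 * 3 / 2)   /* e.g. one 1080p YUV420 frame */

static unsigned char buffer[2][FRAME_BYTES];

static int load_frame(unsigned char *dst, int frame_no, int total)
{
    if (frame_no >= total) return 0;
    memset(dst, frame_no & 0xFF, FRAME_BYTES);   /* placeholder input */
    return 1;
}

static void encode_frame(const unsigned char *src, int frame_no)
{
    printf("encoding frame %d (first byte %u)\n", frame_no, src[0]);
}

int main(void)
{
    const int total = 8;
    int cur = 0;

    if (!load_frame(buffer[cur], 0, total)) return 0;
    for (int f = 0; f < total; f++) {
        /* Start filling the other buffer; on a GPU this copy would run
         * concurrently with the encode call below. */
        int have_next = load_frame(buffer[cur ^ 1], f + 1, total);
        encode_frame(buffer[cur], f);
        if (!have_next) break;
        cur ^= 1;                               /* swap buffer roles */
    }
    return 0;
}
```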

    Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions

    In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated into the existing deep learning ecosystem to provide a tunable balance between performance, power consumption and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete and in-depth evaluation of CNN-to-FPGA toolflows. (Accepted for publication in the ACM Computing Surveys (CSUR) journal, 2018.)

    A 249-Mpixel/s HEVC Video-Decoder Chip for 4K Ultra-HD Applications

    High Efficiency Video Coding, the latest video standard, uses larger and variable-sized coding units and longer interpolation filters than H.264/AVC to better exploit redundancy in video signals. These algorithmic techniques enable a 50% decrease in bitrate at the cost of computational complexity, external memory bandwidth, and, for ASIC implementations, on-chip SRAM of the video codec. This paper describes architectural optimizations for an HEVC video decoder chip. The chip uses a two-stage subpipelining scheme to reduce on-chip SRAM by 56 kbytes, a 32% reduction. A high-throughput read-only cache combined with DRAM-latency-aware memory mapping reduces DRAM bandwidth by 67%. The chip is built for the HEVC Working Draft 4 Low Complexity configuration and occupies 1.77 mm² in 40-nm CMOS. It performs 4K Ultra HD 30-fps video decoding at 200 MHz while consuming 1.19 nJ/pixel of normalized system power. Texas Instruments Incorporated.
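
    The headline figures can be cross-checked directly from the abstract: 3840 × 2160 pixels at 30 fps is about 249 Mpixel/s, and at 1.19 nJ/pixel this corresponds to roughly 0.3 W of normalized system power. The snippet below just reproduces that arithmetic.

```c
/* Reproduce the abstract's headline figures: pixel rate for 4K Ultra HD
 * at 30 fps and the normalized system power implied by 1.19 nJ/pixel. */
#include <stdio.h>

int main(void)
{
    const double width = 3840, height = 2160, fps = 30;
    const double energy_per_pixel_nj = 1.19;

    double pixel_rate = width * height * fps;             /* pixels per second */
    double power_w = pixel_rate * energy_per_pixel_nj * 1e-9;

    printf("pixel rate : %.1f Mpixel/s\n", pixel_rate / 1e6);   /* ~248.8 */
    printf("power      : %.2f W\n", power_w);                   /* ~0.30  */
    return 0;
}
```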

    Virtualized Reconfigurable Resources and Their Secured Provision in an Untrusted Cloud Environment

    The cloud computing business grows year after year. To keep up with increasing demand and to offer more services, data center providers are always searching for novel architectures. One of them is the FPGA: reconfigurable hardware with high compute power and energy efficiency. But some clients cannot make use of these remote processing capabilities: not every involved party is trustworthy, and the complex management software has potential security flaws, so clients' sensitive data or algorithms cannot be sufficiently protected. In this thesis, state-of-the-art hardware, cloud and security concepts are analyzed and combined. On one side are reconfigurable virtual FPGAs; they are a flexible resource and fulfill the cloud characteristics, but at the price of security. On the other side stands a strong requirement for exactly that security. To provide it, an immutable controller is embedded, enabling a direct, confidential and secure transfer of clients' configurations. This establishes a trustworthy compute space inside an untrusted cloud environment. Clients can securely transfer their sensitive data and algorithms without involving vulnerable software or the data center provider. This concept is implemented as a prototype. Based on it, necessary changes to current FPGAs are analyzed. To fully enable reconfigurable yet secure hardware in the cloud, a new hybrid architecture is required.
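
    The provisioning flow described above can be pictured as follows. Everything in this sketch is a hypothetical placeholder (the "cipher" is a toy XOR), meant only to show that the client's configuration stays opaque to the cloud management stack and is reconstructed solely inside the immutable controller; it is not the thesis's protocol or any vendor API.

```c
/* Conceptual illustration only: every function here is a placeholder (the
 * "encryption" is a toy XOR, not a real cipher).  It shows the flow in
 * which a client's configuration remains opaque to the untrusted cloud
 * stack and is only opened by the embedded, immutable controller. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define CFG_LEN 16

/* Toy stand-in for a cipher keyed by a secret shared only between the
 * client and the embedded controller (established out of band). */
static void toy_crypt(uint8_t *buf, size_t len, uint8_t key)
{
    for (size_t i = 0; i < len; i++) buf[i] ^= key;
}

/* Untrusted management software: can move the blob but not read it. */
static void cloud_forward(const uint8_t *blob, size_t len, uint8_t *dest)
{
    memcpy(dest, blob, len);
}

int main(void)
{
    const uint8_t shared_key = 0x5A;
    uint8_t config[CFG_LEN] = "clientbitstream";
    uint8_t in_transit[CFG_LEN], at_controller[CFG_LEN];

    /* Client side: seal the configuration before it leaves the premises. */
    memcpy(in_transit, config, CFG_LEN);
    toy_crypt(in_transit, CFG_LEN, shared_key);

    /* Cloud provider: forwards opaque ciphertext to the vFPGA slot. */
    cloud_forward(in_transit, CFG_LEN, at_controller);

    /* Immutable controller: the only place the plaintext reappears. */
    toy_crypt(at_controller, CFG_LEN, shared_key);
    printf("controller loads: %s\n", (const char *)at_controller);
    return 0;
}
```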

    Performance evaluation of H.264/AVC decoding and visualization using the GPU

    The coding efficiency of the H.264/AVC standard makes the decoding process computationally demanding. This has limited the availability of cost-effective, high-performance solutions. Modern computers are typically equipped with powerful yet cost-effective Graphics Processing Units (GPUs) to accelerate graphics operations. These GPUs can be addressed by means of a 3-D graphics API such as Microsoft Direct3D or OpenGL, using programmable shaders as generic processing units for vector data. The new CUDA (Compute Unified Device Architecture) platform of NVIDIA provides a straightforward way to address the GPU directly, without the need for a 3-D graphics API in the middle. In CUDA, a compiler generates executable code from C code with specific modifiers that determine the execution model. This paper first presents a custom-developed H.264/AVC renderer, which is capable of executing motion compensation (MC), reconstruction, and Color Space Conversion (CSC) entirely on the GPU. To steer the GPU, Direct3D combined with programmable pixel and vertex shaders is used. Next, we also present a GPU-enabled decoder utilizing the new CUDA architecture from NVIDIA. This decoder performs MC, reconstruction, and CSC on the GPU as well. Our results compare both GPU-enabled decoders, as well as a CPU-only decoder, in terms of speed, complexity, and CPU requirements. Our measurements show that a significant speedup is possible relative to a CPU-only solution. As an example, real-time playback of high-definition video (1080p) was achieved with our Direct3D- and CUDA-based H.264/AVC renderers.
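
    The CSC stage maps each decoded YCbCr sample to RGB. The sketch below shows the per-pixel arithmetic in plain C with common BT.601 coefficients; in the paper this computation runs on the GPU, as a Direct3D pixel shader or a CUDA kernel, over every pixel of the decoded frame, and the exact coefficients and range handling used there may differ.

```c
/* Per-pixel YCbCr -> RGB conversion with common BT.601-style coefficients,
 * shown in plain C as an illustration of the color-space-conversion step. */
#include <stdint.h>

static uint8_t clamp8(double v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v + 0.5));
}

void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                  uint8_t *r, uint8_t *g, uint8_t *b)
{
    double yd  = (double)y;
    double cbd = (double)cb - 128.0;
    double crd = (double)cr - 128.0;

    *r = clamp8(yd + 1.402    * crd);
    *g = clamp8(yd - 0.344136 * cbd - 0.714136 * crd);
    *b = clamp8(yd + 1.772    * cbd);
}
```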