
    Towards flexible hardware/software encoding using H.264

    As the electronics world continues to expand, bringing smaller and more portable devices to consumers, demand for media access continues to rise. Consumers seek the ability to view the wealth of information available on the Internet from devices such as smart phones, tablets, and music players. In addition to Internet browsing, smart phones and tablets in particular look to reinvent phone communication by adding video chat through services such as Skype and FaceTime. Bringing video to mobile platforms requires trade-offs between size, channel capacity, hardware cost, quality, loading times and power consumption. H.264, the current standard for video encoding, specifies multiple profiles to support different modes of operation and environments. Creating an H.264 video encoder for a mobile platform requires a proper balance between the aforementioned trade-offs while maintaining flexibility in a real-time environment such as video chatting. The goal of this thesis was to investigate the trade-offs of implementing the H.264 Baseline encoding process, specifically at low bit rates, in hardware and software using Field Programmable Gate Array (FPGA) reconfigurable resources with an embedded processor core on the same chip. To further preserve encoding flexibility, existing encoding parameters were left intact. The Joint Model (JM) reference encoder, modified to include only the Baseline Profile, was used as an initial reference point to evaluate the efficacy of the finished encoder. To improve upon the initial software implementation, major software bottlenecks were identified and hardware accelerators were designed, aimed at producing a speedup capable of encoding 176x144 Quarter Common Intermediate Format (QCIF) video in real time at 24 Frames Per Second (FPS) or greater. Finally, the hardware/software implementation was analyzed in comparison with the original JM reference software encoder. This analysis included FPS, bit rate, encoding time, luminance Peak Signal-to-Noise Ratio (Y-PSNR) and the associated hardware costs.
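The Y-PSNR figure of merit used in that analysis can be computed directly from the original and reconstructed luma planes. A minimal NumPy sketch (function and array names are illustrative, not taken from the thesis):

```python
import numpy as np

def y_psnr(original, reconstructed, peak=255.0):
    """Luminance Peak Signal-to-Noise Ratio between two 8-bit luma planes."""
    orig = original.astype(np.float64)
    recon = reconstructed.astype(np.float64)
    mse = np.mean((orig - recon) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a flat gray QCIF luma plane vs. a slightly perturbed copy
ref = np.full((144, 176), 128, dtype=np.uint8)
enc = ref.copy()
enc[0, 0] = 130  # introduce a tiny reconstruction error
print(round(y_psnr(ref, enc), 2))
```

Higher Y-PSNR means the reconstructed luma is closer to the source; typical encoder comparisons plot it against bit rate.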

    Side information exploitation, quality control and low complexity implementation for distributed video coding

    Distributed video coding (DVC) is a new video coding methodology that shifts the highly complex motion search components from the encoder to the decoder. Such a video coder has a great advantage in encoding speed while still achieving rate-distortion (RD) performance similar to conventional coding solutions. Applications include wireless video sensor networks, mobile video cameras and wireless video surveillance. Although much progress has been made in DVC over the past ten years, there is still a gap in RD performance between conventional video coding solutions and DVC, and the latest developments in DVC are still far from standardization and practical use. The key problems remain in areas such as accurate and efficient side information (SI) generation and refinement, quality control between Wyner-Ziv (WZ) frames and key frames, correlation noise modelling, and decoder complexity. In this context, this thesis proposes solutions to improve state-of-the-art side information refinement schemes, enable consistent quality control over decoded frames during the coding process, and implement a highly efficient DVC codec. This thesis investigates the impact of reference frames on side information generation and reveals that reference frames have the potential to be better side information than the extensively used interpolated frames. Based on this investigation, we also propose a motion range prediction (MRP) method to exploit reference frames and precisely guide the statistical motion learning process. Extensive simulation results show that choosing reference frames as SI performs competitively with, and sometimes even better than, interpolated frames. Furthermore, the proposed MRP method is shown to significantly reduce the decoding complexity without degrading RD performance.
To minimize block artifacts and achieve consistent improvement in both the subjective and objective quality of side information, we propose a novel side information synthesis framework working at pixel granularity. We synthesize the SI at pixel level to minimize block artifacts and adaptively change the correlation noise model according to the new SI. Furthermore, we have fully implemented a state-of-the-art DVC decoder with the proposed framework using serial and parallel processing technologies to identify bottlenecks and areas in which to further reduce the decoding complexity, which is another major challenge for future practical DVC system deployments. The performance is evaluated based on the latest transform-domain DVC codec and compared with different standard codecs. Extensive experimental results show substantial and consistent rate-distortion gains over standard video codecs and significant speedup over the serial implementation. In order to bring state-of-the-art DVC one step closer to practical use, we address the problem of distortion variation introduced by typical rate control algorithms, especially in a variable bit rate environment. Simulation results show that the proposed quality control algorithm is capable of meeting a user-defined target distortion and maintaining a rather small variation for sequences with slow motion, and performs similarly to fixed quantization for fast-motion sequences at the cost of some RD performance. Finally, we propose the first implementation of a distributed video encoder on a Texas Instruments TMS320DM6437 digital signal processor. The WZ encoder is efficiently implemented using rate-adaptive low-density parity-check accumulate (LDPCA) codes, exploiting hardware features and optimization techniques to improve the overall performance. Implementation results show that the WZ encoder is able to encode at 134M instruction cycles per QCIF frame on a TMS320DM6437 DSP running at 700MHz.
This results in an encoder 29 times faster than the non-optimized encoder implementation. We also implemented a highly efficient DVC decoder using both serial and parallel technology based on a PC-HPC (high performance cluster) architecture, where the encoder runs on a general-purpose PC and the decoder runs on a multicore HPC. The experimental results show that the parallelized decoder can achieve about 10 times speedup under various bit rates and GOP sizes compared to the serial implementation, and significant RD gains with regard to the state-of-the-art DISCOVER codec.
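The interpolated side information that the thesis compares against is conventionally generated by motion-compensated interpolation between the two key frames surrounding a Wyner-Ziv frame: for each block, find a symmetric linear motion that best matches the key frames, then average the two motion-compensated blocks. A simplified sketch (block size, search range and SAD matching are illustrative; the thesis's scheme additionally uses statistical motion learning and pixel-level synthesis):

```python
import numpy as np

def block_si(prev_key, next_key, block=8, search=4):
    """Sketch of side information for a missing Wyner-Ziv frame: per block,
    pick the symmetric displacement (-d in the previous key frame, +d in the
    next) minimizing SAD, then average the two motion-compensated blocks."""
    h, w = prev_key.shape
    si = np.zeros_like(prev_key)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            best = (1 << 30, 0, 0)  # (SAD, dy, dx); zero motion always valid
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = by - dy, bx - dx  # block position in prev key
                    ny, nx = by + dy, bx + dx  # block position in next key
                    if not (0 <= py and py + block <= h and 0 <= px and px + block <= w
                            and 0 <= ny and ny + block <= h and 0 <= nx and nx + block <= w):
                        continue  # candidate falls outside the frame
                    p = prev_key[py:py + block, px:px + block].astype(np.int32)
                    n = next_key[ny:ny + block, nx:nx + block].astype(np.int32)
                    sad = int(np.abs(p - n).sum())
                    if sad < best[0]:
                        best = (sad, dy, dx)
            _, dy, dx = best
            p = prev_key[by - dy:by - dy + block, bx - dx:bx - dx + block].astype(np.uint16)
            n = next_key[by + dy:by + dy + block, bx + dx:bx + dx + block].astype(np.uint16)
            si[by:by + block, bx:bx + block] = ((p + n) // 2).astype(np.uint8)
    return si
```

The linear-motion assumption is what breaks down for fast or irregular motion, which is why the thesis finds plain reference frames sometimes beat interpolation as SI.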

    Implementing video compression algorithms on reconfigurable devices

    The increasing density offered by Field Programmable Gate Arrays (FPGAs), coupled with their short design cycle, has made them a popular choice for implementing a wide range of algorithms and complete systems. In this thesis the implementation of video compression algorithms on FPGAs is studied. Two areas are specifically focused on: the integration of a video encoder into a complete system, and the power consumption of FPGA-based video encoders. Two FPGA-based video compression systems are described, one targeting surveillance applications and one targeting video conferencing applications. The FPGA video surveillance system makes use of a novel memory format to improve the efficiency with which input video sequences can be loaded over the system bus. The power consumption of an FPGA video encoder is then analyzed, with the results indicating that the motion estimation stage consumes the most power. An algorithm which reuses the intra prediction results generated during the encoding process is then proposed to reduce the power consumed on an FPGA video encoder's external memory bus. Finally, the power reduction algorithm is implemented within an FPGA video encoder. Results show that, in addition to reducing power on the external memory bus, the algorithm also reduces power in the motion estimation stage of an FPGA-based video encoder.
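That motion estimation dominates power is unsurprising given a simple operation count: full-search block matching over a ±p search range evaluates (2p+1)² candidate blocks per macroblock, each costing one absolute difference per pixel. A rough sketch of the count (the frame size, block size and search range are illustrative, not the thesis's measured configuration):

```python
def fullsearch_ad_ops(width, height, block=16, search_range=16):
    """Absolute-difference operations per frame for full-search block
    matching with one candidate per integer position (borders ignored)."""
    blocks = (width // block) * (height // block)       # macroblocks per frame
    candidates = (2 * search_range + 1) ** 2            # positions per macroblock
    return blocks * candidates * block * block          # ADs per frame

# CIF at 30 fps: order-of-magnitude illustration of the arithmetic load
per_frame = fullsearch_ad_ops(352, 288)
print(per_frame * 30)
```

Billions of memory-touching operations per second at even CIF resolution is what makes the ME stage, and its external memory traffic, the natural target for power reduction.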

    Parallel H.264/AVC Fast Rate-Distortion Optimized Motion Estimation using Graphics Processing Unit and Dedicated Hardware

    Heterogeneous systems on a single chip composed of a CPU, a Graphics Processing Unit (GPU) and a Field Programmable Gate Array (FPGA) are expected to emerge in the near future. In this context, the System on Chip (SoC) can be dynamically adapted to employ different architectures for the execution of data-intensive applications. Motion estimation is one such task that can be accelerated using an FPGA and a GPU for high-performance H.264/AVC encoder implementation. In most work on parallel implementations of motion estimation, the bit rate cost of the motion vectors is ignored. In contrast, this paper presents a fast rate-distortion optimized parallel motion estimation algorithm implemented on a GPU using OpenCL and on an FPGA/ASIC using VHDL. The predicted motion vectors are estimated from temporally preceding motion vectors and used to evaluate the bit rate cost of the motion vectors simultaneously. The experimental results show that the proposed scheme achieves significant speedup on the GPU and FPGA, and has rate-distortion performance comparable to a sequential fast motion estimation algorithm.
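The motion vector bit cost that this work accounts for is conventionally folded into a Lagrangian matching cost J = SAD + λ·R(mvd), where mvd is the difference between the candidate vector and the predicted vector. A minimal sketch (the signed Exp-Golomb bit count is the standard H.264 mvd code; λ and the helper names are illustrative):

```python
def ue_golomb_bits(v):
    """Bits needed to code a non-negative integer with unsigned Exp-Golomb."""
    return 2 * (v + 1).bit_length() - 1

def mv_rate_bits(mv, pred_mv):
    """Approximate bit cost of the motion vector difference, coding each
    signed component with signed Exp-Golomb as H.264 does for mvd."""
    bits = 0
    for d in (mv[0] - pred_mv[0], mv[1] - pred_mv[1]):
        code = 2 * abs(d) - 1 if d > 0 else -2 * d  # signed -> unsigned mapping
        bits += ue_golomb_bits(code)
    return bits

def rd_motion_cost(sad, mv, pred_mv, lam=4.0):
    """J = D + lambda * R: distortion plus weighted motion-vector rate."""
    return sad + lam * mv_rate_bits(mv, pred_mv)
```

Ignoring the λ·R term biases the search toward slightly lower SAD at the cost of expensive, erratic vectors; using temporally preceding vectors for pred_mv is what lets all candidates be costed in parallel.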

    Variable block size motion estimation hardware for video encoders

    Li, Man Ho. Thesis (M.Phil.), The Chinese University of Hong Kong, 2007; submitted November 2006. Includes bibliographical references (leaves 137-143). Abstracts in English and Chinese.
    Table of contents:
    Chapter 1, Introduction: motivation; objectives of this thesis; contributions; thesis structure.
    Chapter 2, Digital video compression: fundamentals of lossy video compression (human visual systems, representation of color, sampling methods - frames and fields, compression methods, motion estimation, motion compensation, transform, quantization, entropy encoding, intra-prediction unit, deblocking filter, complexity analysis of the different compression stages); the motion estimation process (block-based matching, motion estimation procedure, matching criteria, motion vectors, quality judgment); block-based matching algorithms (full search (FS), three-step search (TSS), two-dimensional logarithmic search (2D-log), diamond search (DS), fast full search (FFS)); complexity analysis of motion estimation (different search algorithms, fixed and variable block size, sub-pixel, multi-reference frame); picture quality analysis; summary.
    Chapter 3, Arithmetic for video encoding: number systems (non-redundant, redundant); addition/subtraction algorithms (non-redundant, carry-save, signed-digit); bit-serial algorithms (least-significant-bit (LSB) first and most-significant-bit (MSB) first modes); absolute difference algorithms (non-redundant, redundant); multi-operand addition algorithms (bit-parallel non-redundant, bit-parallel carry-save and bit-serial signed-digit adder trees); comparison algorithms (non-redundant, signed-digit); summary.
    Chapter 4, VLSI architectures for video encoding: the FPGA implementation platform (basic FPGA architecture, DSP blocks, advantages of employing FPGAs, commercial devices); top-level architecture of the motion estimation processor; bit-parallel architectures (systolic arrays, algorithm mapping onto systolic arrays, 1-D and 2-D systolic arrays (LA-1D, LA-2D), 1-D and 2-D tree architectures (GA-1D, GA-2D), variable block size support); bit-serial architecture (data processing direction, algorithm mapping and dataflow design, early termination scheme, top-level architecture, non-redundant to signed-digit conversion, signed-digit adder tree, SAD merger, signed-digit comparator, early termination controller, data scheduling and timeline); decision metrics for the different architectural types (throughput, memory bandwidth, silicon area and power consumption); architecture selection for CIF/QCIF, SDTV and HDTV resolutions; summary.
    Chapter 5, Results and comparison: implementation details (bit-parallel 1-D and 2-D systolic arrays, bit-parallel tree architecture, MSB-first bit-serial design); comparison between motion estimation architectures (throughput and latency, occupied resources, memory bandwidth, motion estimation algorithm, power consumption); comparison to ASIC and FPGA architectures in past literature; summary.
    Chapter 6, Conclusion: algorithmic optimizations; architecture and arithmetic optimizations; implementation on an FPGA platform; future work.
    Appendix A, VHDL sources: online full adder; online signed-digit full adder; online full adder tree; SAD merger; signed-digit adder tree stage; absolute element and stage; online comparator element and stage; MSB-first motion estimation processor.
    Bibliography.
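Of the block-matching algorithms surveyed in Chapter 2.4, the three-step search illustrates the fast-search idea well: test the centre and its eight neighbours at a coarse step, recentre on the best match, halve the step and repeat. A minimal sketch (the gradient test pattern and parameters are illustrative):

```python
import numpy as np

def sad(ref, cur, y, x, block):
    """Sum of absolute differences between cur and the block at (y, x) in ref."""
    h, w = ref.shape
    if y < 0 or x < 0 or y + block > h or x + block > w:
        return float("inf")  # candidate outside the frame
    return int(np.abs(ref[y:y + block, x:x + block].astype(np.int32)
                      - cur.astype(np.int32)).sum())

def three_step_search(ref, cur_block, y0, x0, step=4):
    """Three-step search: 9 candidates per step, recentre, halve the step."""
    block = cur_block.shape[0]
    by, bx = y0, x0
    while step >= 1:
        best = (sad(ref, cur_block, by, bx, block), by, bx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                c = (sad(ref, cur_block, by + dy, bx + dx, block),
                     by + dy, bx + dx)
                if c[0] < best[0]:
                    best = c
        _, by, bx = best
        step //= 2
    return (by - y0, bx - x0), best[0]

# Demo: search for an 8x8 block cut from a smooth gradient test pattern
ref = np.add.outer(np.arange(24) * 5, np.arange(24) * 3)
cur = ref[8:16, 10:18]
mv, cost = three_step_search(ref, cur, 8, 8)
print(mv, cost)
```

With step sizes 4, 2, 1 the search tests at most 25 positions instead of the 1089 of a ±16 full search, at the risk of locking onto a local minimum, which is the throughput/quality trade-off Chapter 2.5 quantifies.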

    Algorithms & implementation of advanced video coding standards

    Advanced video coding standards have become widely deployed coding techniques used in numerous products, such as broadcast, video conferencing, mobile television and Blu-ray Disc. New compression techniques are gradually included in video coding standards, so that a 50% compression rate reduction is achievable every five years. However, this trend has also brought many problems, such as dramatically increased computational complexity, multiple co-existing standards and gradually increased development time. To address these problems, this thesis investigates efficient algorithms for the latest video coding standard, H.264/AVC. Two aspects of the H.264/AVC standard are inspected in this thesis: (1) speeding up intra 4x4 prediction with a parallel architecture, and (2) applying an efficient rate control algorithm based on a deviation measure to intra frames. Another aim of this thesis is low-complexity algorithms for an MPEG-2 to H.264/AVC transcoder. Three main mapping algorithms and a computational complexity reduction algorithm are the focus of this thesis: motion vector mapping, block mapping, field-frame mapping, and an efficient mode-ranking algorithm. Finally, a new video coding framework methodology to reduce development time is examined. This thesis explores the implementation of the MPEG-4 Simple Profile with the Reconfigurable Video Coding (RVC) framework. A key technique, automatic generation of the variable-length decoder table, is solved in this thesis. Moreover, another important video coding standard, DV/DVCPRO, is further modelled in the RVC framework. Consequently, besides the available MPEG-4 Simple Profile and the Chinese audio/video standard, a new member is added to the RVC framework family. The research work presented in this thesis targets algorithms and implementations of video coding standards; within this wide topic, three main problems are investigated. The results show that the methodologies presented in this thesis are efficient and encouraging.
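Motion vector mapping in an MPEG-2 to H.264/AVC transcoder must at minimum convert between the two standards' motion precisions: MPEG-2 vectors are in half-pel units while H.264 uses quarter-pel. A minimal sketch of that precision mapping plus a small refinement window (the ±1 quarter-pel refinement strategy and function names are illustrative, not the thesis's exact algorithm):

```python
def map_mv_precision(mpeg2_mv):
    """Convert an MPEG-2 half-pel motion vector to H.264 quarter-pel units."""
    return (mpeg2_mv[0] * 2, mpeg2_mv[1] * 2)

def refine_candidates(h264_mv, radius=1):
    """Candidate quarter-pel positions around the mapped vector, forming a
    small refinement search window for the transcoder's re-encoding stage."""
    cy, cx = h264_mv
    return [(cy + dy, cx + dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)]
```

Reusing the decoded MPEG-2 vector as a seed and refining only a small window is what keeps the transcoder's complexity far below a full re-encode.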

    Algorithms and Hardware Co-Design of HEVC Intra Encoders

    Digital video has become extremely important, and its importance has greatly increased over the last two decades. Due to the rapid development of information and communication technologies, the demand for Ultra-High Definition (UHD) video applications is growing stronger. However, the most prevalent video compression standard, H.264/AVC, released in 2003, is inefficient when it comes to UHD video. The desire for compression efficiency superior to H.264/AVC led to the standardization of High Efficiency Video Coding (HEVC). Compared with the H.264/AVC standard, HEVC offers double the compression ratio at the same level of video quality, or a substantial improvement of video quality at the same bitrate. Although HEVC/H.265 possesses superior compression efficiency, its complexity is several times that of H.264/AVC, impeding high-throughput implementation. Currently, most researchers have focused merely on algorithm-level adaptations of the HEVC/H.265 standard to reduce computational intensity, without considering hardware feasibility. Moreover, the exploration of efficient hardware architecture design has not been exhaustive; only a few research works have explored efficient hardware architectures for the HEVC/H.265 standard. In this dissertation, we investigate efficient algorithm adaptations and hardware architecture design of HEVC intra encoders. We also explore a deep learning approach to mode prediction. From the algorithm point of view, we propose three efficient hardware-oriented algorithm adaptations: mode reduction, fast coding unit (CU) cost estimation, and group-based CABAC (context-adaptive binary arithmetic coding) rate estimation. Mode reduction aims to reduce the mode candidates of each prediction unit (PU) in the rate-distortion optimization (RDO) process, which is both computation-intensive and time-consuming.
Fast CU cost estimation is applied to reduce the complexity of the rate-distortion (RD) calculation for each CU. Group-based CABAC rate estimation parallelizes syntax element processing to greatly improve rate estimation throughput. From the hardware design perspective, a fully parallel hardware architecture for an HEVC intra encoder is developed to sustain UHD video compression at 4K@30fps. The fully parallel architecture introduces four prediction engines (PEs), each of which independently performs the full cycle of mode prediction, transform, quantization, inverse quantization, inverse transform, reconstruction and rate-distortion estimation. PU blocks of different sizes are processed by the different prediction engines simultaneously. An efficient hardware implementation of the group-based CABAC rate estimator is also incorporated into the proposed HEVC intra encoder for accurate and high-throughput rate estimation. To take advantage of deep learning, we also propose a fully-connected-layer based neural network (FCLNN) mode preselection scheme to reduce the number of RDO modes for luma prediction blocks. All angular prediction modes are classified into 7 prediction groups; each group contains 3-5 prediction modes that exhibit a similar prediction angle. A rough angle detection algorithm is designed to determine the prediction direction of the current block, and a small-scale FCLNN is then exploited to refine the mode prediction.
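The angular-mode grouping described above can be sketched as follows. HEVC defines angular intra modes 2 through 34; the particular contiguous 7-way partition below is an assumption for illustration, not the dissertation's exact grouping:

```python
def angular_mode_groups(n_groups=7):
    """Partition HEVC angular intra modes 2..34 into contiguous groups of
    neighbouring (hence similar-angle) modes, with sizes as even as possible."""
    modes = list(range(2, 35))  # the 33 HEVC angular modes
    base, extra = divmod(len(modes), n_groups)
    groups, i = [], 0
    for g in range(n_groups):
        size = base + (1 if g < extra else 0)  # first groups take the remainder
        groups.append(modes[i:i + size])
        i += size
    return groups

groups = angular_mode_groups()
print([len(g) for g in groups])  # 33 modes split into groups of 4-5
```

With such a partition, the rough angle detector only has to pick a group, and the small FCLNN then ranks the handful of modes inside it, so full RDO is run on far fewer than 33 candidates.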

    A baseline H.264 video encoder hardware design

    The recently developed H.264 / MPEG-4 Part 10 video compression standard achieves better video compression efficiency than previous video compression standards at the expense of increased computational complexity and power consumption. Multiple reference frame (MRF) Motion Estimation (ME) is the most computationally intensive and power-consuming part of H.264 video encoders. Therefore, in this thesis, we designed and implemented a reconfigurable baseline H.264 video encoder hardware for real-time portable applications, in which the number of reference frames used for MRF ME can be configured based on the application requirements in order to trade off video coding efficiency against power consumption. The proposed H.264 video encoder hardware is based on an existing low-cost H.264 intra frame coder hardware, and it includes new reconfigurable MRF ME, mode decision and motion compensation hardware. We first proposed a low-complexity H.264 MRF ME algorithm and a low-energy adaptive hardware for its real-time implementation. The proposed MRF ME algorithm reduces the computational complexity of MRF ME by using a dynamically determined number of reference frames for each macroblock, together with early termination. The proposed MRF ME hardware architecture is implemented in Verilog HDL and mapped to a Xilinx Spartan 6 FPGA. The FPGA implementation is verified with post place & route simulations. The proposed H.264 MRF ME hardware has 29-72% less energy consumption on this FPGA than an H.264 MRF ME hardware using 5 reference frames for all MBs, with a negligible PSNR loss. We then designed the H.264 video encoder hardware and implemented it in Verilog HDL. The proposed video encoder hardware is mapped to a Xilinx Virtex 6 FPGA and verified with post place & route simulations.
The bitstream generated by the proposed video encoder hardware for an input frame is successfully decoded by the H.264 Joint Model reference software decoder, and the decoded frame is displayed using a YUV player tool for visual verification. The FPGA implementation of the proposed H.264 video encoder hardware works at 135 MHz; it can code 55 CIF (352x288) frames per second, and its power consumption ranges between 115mW and 235mW depending on the number of reference frames used for MRF ME.
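The early-termination idea behind the MRF ME algorithm can be sketched as follows: search reference frames nearest-first and stop as soon as one yields a good-enough match, so distant frames are only touched for hard macroblocks. A simplified sketch (the SAD threshold, search range and per-macroblock logic are illustrative, not the thesis's exact scheme):

```python
import numpy as np

def block_sad(ref, cur, y, x):
    """SAD between cur and the co-sized block at (y, x) in ref."""
    b = cur.shape[0]
    return int(np.abs(ref[y:y + b, x:x + b].astype(np.int32)
                      - cur.astype(np.int32)).sum())

def mrf_me_early_term(cur_block, ref_frames, y, x, search=2, threshold=256):
    """Search reference frames nearest-first; terminate early once a frame's
    best SAD falls below the threshold (good-enough match found)."""
    best = (float("inf"), 0, (0, 0))  # (SAD, frame index, motion vector)
    b = cur_block.shape[0]
    for idx, ref in enumerate(ref_frames):
        h, w = ref.shape
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy and yy + b <= h and 0 <= xx and xx + b <= w:
                    s = block_sad(ref, cur_block, yy, xx)
                    if s < best[0]:
                        best = (s, idx, (dy, dx))
        if best[0] < threshold:
            break  # early termination: skip the remaining reference frames
    return best
```

Because most macroblocks find a good match in the nearest reference frame, this kind of gating is what converts the 5-reference worst case into the 29-72% energy savings reported above.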