186 research outputs found

    High-Efficient Parallel CAVLC Encoders on Heterogeneous Multicore Architectures

    This article presents two highly efficient parallel realizations of context-based adaptive variable length coding (CAVLC) on heterogeneous multicore processors. By restructuring the CAVLC encoder, three kinds of dependences are eliminated or weakened: the context-based data dependence, the memory-access dependence and the control dependence. The CAVLC pipeline is divided into three stages: two scans, coding, and lag packing, and is implemented on two typical heterogeneous multicore architectures. One is a block-based SIMD parallel CAVLC encoder on the multicore stream processor STORM; the other is a component-oriented SIMT parallel encoder on a massively parallel GPU architecture. Both exploit rich data-level parallelism. Experimental results show that, compared with the CPU version, a speedup of more than 70x is obtained on STORM and more than 50x on the GPU. The STORM implementation supports real-time processing of 1080p video at 30 fps, and the GPU-based version meets the requirements for real-time 720p encoding. The throughput of the presented CAVLC encoders is more than 10 times higher than that of published software encoders on DSP and multicore platforms.
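
    The context-based data dependence that the authors weaken comes from the way CAVLC selects its coefficient-token VLC table: the context value nC for a 4x4 luma block is derived from the nonzero-coefficient counts of its left and top neighbours, so a block cannot normally be coded before its neighbours have been scanned. A minimal C sketch of this standard context derivation (illustrative only, not the paper's code):

```c
#include <stdint.h>

/* Illustrative sketch: the CAVLC context value nC for a 4x4 luma block is
 * derived from the numbers of nonzero coefficients (nA, nB) of the left and
 * top neighbour blocks, following H.264 clause 9.2.1. This neighbour
 * dependence is the "context-based data dependence" that a parallel encoder
 * must break or defer to a separate scan pass. */
static int cavlc_nC(int nA, int nB, int left_avail, int top_avail)
{
    if (left_avail && top_avail) return (nA + nB + 1) >> 1; /* rounded average */
    if (left_avail)              return nA;
    if (top_avail)               return nB;
    return 0;                                               /* no neighbours */
}
```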

    Motion estimation and CABAC VLSI co-processors for real-time high-quality H.264/AVC video coding

    Real-time, high-quality video coding is attracting wide interest in the research and industrial communities for different applications. H.264/AVC, a recent standard for high-performance video coding, can be successfully exploited in several scenarios, including digital video broadcasting, high-definition TV and DVD-based systems, which require sustained rates of up to tens of Mbit/s. To that purpose, this paper proposes optimized architectures for the most critical tasks of H.264/AVC: motion estimation and context-adaptive binary arithmetic coding (CABAC). Post-synthesis results on sub-micron CMOS standard-cell technologies show that the proposed architectures can process 720 × 480 video sequences at 30 frames/s in real time and sustain more than 50 Mbit/s. The achieved circuit complexity and power consumption budgets are suitable for integration in complex VLSI multimedia systems based either on an AHB bus-centric on-chip communication system or on novel Network-on-Chip (NoC) infrastructures for MPSoC (Multi-Processor System-on-Chip) designs.
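
    For orientation, the inner loop that a motion-estimation co-processor accelerates is block matching by sum of absolute differences (SAD). The sketch below shows a 16x16 SAD in plain C with illustrative names; the proposed hardware evaluates many candidate positions concurrently rather than one at a time.

```c
#include <stdlib.h>

/* Simplified sketch of the block-matching cost at the heart of motion
 * estimation: the sum of absolute differences between a 16x16 macroblock of
 * the current frame and a candidate block in the reference frame. Function
 * and parameter names are illustrative, not the paper's interface. */
static unsigned sad_16x16(const unsigned char *cur, int cur_stride,
                          const unsigned char *ref, int ref_stride)
{
    unsigned sad = 0;
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            sad += abs(cur[y * cur_stride + x] - ref[y * ref_stride + x]);
    return sad;
}
```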

    Hardware study on the H.264/AVC video stream parser

    The H.264/AVC video standard is the latest standard jointly developed in 2003 by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). It improves on previous standards, such as MPEG-1 and MPEG-2, as it aims to be efficient for a wide range of applications and resolutions, including high-definition broadcast television and video for mobile devices. Because the formatted bit stream and the video decoder are standardized, many more applications can take advantage of the abstraction this standard provides by implementing a desired video encoder and simply adhering to the bit-stream constraints. The increase in application flexibility and variable-resolution support calls for more sophisticated decoder implementations, and hardware designs become a necessity. It is desirable to consider architectures that focus on the first stage of the video decoding process, where all data and parameter information are recovered, to understand how influential this initial step is to the decoding process and how influential the choice of target platform can be. The focus of this thesis is to study the differences between targeting an original video stream parser architecture at a 65 nm ASIC (Application-Specific Integrated Circuit) and at an FPGA (Field-Programmable Gate Array). Previous works have concentrated on designing parts of the parser on numerous platforms; comparing a single architecture across different target platforms can therefore lead to further insight into the video stream parser. Overall, the ASIC implementations showed higher performance and lower area than the FPGA, with a 60% increase in performance and a 6x decrease in area. The results also show the presented design to be a low-power architecture when compared to other published work.
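
    As a point of reference for the parsing stage studied here, most H.264 header syntax elements are unsigned Exp-Golomb codes, so the dominant operation is ue(v) decoding. The following C sketch, with a deliberately minimal bit reader, is illustrative only and is not the thesis architecture:

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal MSB-first bit reader over a byte buffer (illustrative only). */
struct bitreader { const uint8_t *buf; size_t bitpos; };

static int read_bit(struct bitreader *br)
{
    int bit = (br->buf[br->bitpos >> 3] >> (7 - (br->bitpos & 7))) & 1;
    br->bitpos++;
    return bit;
}

static uint32_t read_bits(struct bitreader *br, int n)
{
    uint32_t v = 0;
    while (n-- > 0) v = (v << 1) | read_bit(br);
    return v;
}

/* ue(v) Exp-Golomb decode, the workhorse of H.264 parameter-set and
 * slice-header parsing (clause 9.1): count the zero prefix, then read the
 * same number of suffix bits. codeNum = 2^zeros - 1 + suffix. */
static uint32_t decode_ue(struct bitreader *br)
{
    int zeros = 0;
    while (read_bit(br) == 0) zeros++;
    return (1u << zeros) - 1 + read_bits(br, zeros);
}
```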

    A DSP Based H.264 Decoder for a Multi-Format IP Set-Top Box

    In this paper, the implementation of a digital signal processor (DSP) based H.264 decoder for a multi-format set-top box is described. The Baseline and Main profiles are supported. Using several software optimization techniques, the decoder has been fitted into a low-cost DSP. The decoder alone has been tested in simulation, achieving real-time performance with a 600 MHz system clock. Moreover, it has been integrated in a multi-format IP set-top box, allowing tests in a real deployment environment with excellent results. Finally, the decoder has been ported to a latest-generation DSP.

    Hardware Software Synthesis of a H.264 / AVC Baseline Profile Decoder

    The latest video compression standard is a joint effort between the ITU and MPEG known as H.264/AVC. As with any video compression standard, H.264/AVC uses computationally intensive algorithms to maximize performance. During decompression, these algorithms must be applied in real time, processing 30 frames a second. This can be done in software, in specialized hardware, or in a combination of the two. Software solutions allow for maximum portability and ease of design, but general-purpose processors (GPPs) cannot take full advantage of the parallelizable algorithms on which the H.264 decoder is based. Specialized hardware solutions, on the other hand, allow concurrent data and instruction paths, but do not offer a high level of abstraction for cross-platform development. Recent work by Xilinx has resulted in the MicroBlaze soft processor, a stand-alone processor core built from FPGA fabric. The MicroBlaze provides a specialized hardware medium to run software on-chip alongside VHDL entities. The goal of this thesis was to model and simulate a software/hardware hybrid H.264/AVC Baseline Profile decoder using VHDL and a soft processor. It was proposed to assign all highly sequential calculations (run-length and CAVLC decoding) and control data flow to software, and to perform the remaining calculations (prediction, inverse transform, inverse quantization, etc.) in hardware modules. The software runs on Xilinx's MicroBlaze soft processor and the hardware was designed using VHDL. A major advantage of soft processors over GPPs is that hardware instantiations reside on-chip with the processor. The software and MicroBlaze soft processor were simulated in a test bench, and the results showed that the MicroBlaze could not handle the encoded bit stream in real time. For this reason, the hardware interface and hardware decoder were never fully implemented. The scope of the thesis covers the H.264 Baseline Profile standard, the MicroBlaze processor, the implemented software solution, and the proposed hardware counterpart.
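
    To illustrate why the remaining calculations suit hardware, the 4x4 inverse integer transform of H.264 needs only additions and shifts, mapping naturally onto a small VHDL datapath. The C sketch below follows the row-then-column formulation of the standard and is illustrative only, not the thesis design:

```c
/* Sketch of the H.264 4x4 inverse integer transform (clause 8.5.12), one of
 * the blocks proposed for hardware: additions and shifts only. Input is the
 * dequantized coefficient block d[4][4]; output is the residual r[4][4]. */
static void inverse_transform_4x4(const int d[4][4], int r[4][4])
{
    int tmp[4][4];

    /* Horizontal (row) pass. */
    for (int i = 0; i < 4; i++) {
        int e0 = d[i][0] + d[i][2];
        int e1 = d[i][0] - d[i][2];
        int e2 = (d[i][1] >> 1) - d[i][3];
        int e3 = d[i][1] + (d[i][3] >> 1);
        tmp[i][0] = e0 + e3;
        tmp[i][1] = e1 + e2;
        tmp[i][2] = e1 - e2;
        tmp[i][3] = e0 - e3;
    }

    /* Vertical (column) pass, then final rounding: (h + 32) >> 6. */
    for (int j = 0; j < 4; j++) {
        int g0 = tmp[0][j] + tmp[2][j];
        int g1 = tmp[0][j] - tmp[2][j];
        int g2 = (tmp[1][j] >> 1) - tmp[3][j];
        int g3 = tmp[1][j] + (tmp[3][j] >> 1);
        r[0][j] = (g0 + g3 + 32) >> 6;
        r[1][j] = (g1 + g2 + 32) >> 6;
        r[2][j] = (g1 - g2 + 32) >> 6;
        r[3][j] = (g0 - g3 + 32) >> 6;
    }
}
```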

    CAL Dataflow Components for an MPEG RVC AVC Baseline Encoder

    In this paper, an efficient H.264/AVC baseline encoder described in the RVC-CAL actor language is introduced. The aim of the paper is twofold: a) to demonstrate the flexibility and ease provided by RVC-CAL, which allows for an efficient implementation of the presented encoder, and b) to shed light on the advantages that can be brought into the RVC framework by including such encoding tools. The main modules of the designed encoder are inter-frame prediction (motion estimation/compensation), intra-frame prediction, and entropy coding. Descriptions of the designed modules are provided, together with the RVC-CAL design issues encountered. A comparison between different development approaches is also provided. The obtained results show that specifying complex video codecs (e.g. an H.264/AVC encoder) in RVC-CAL, followed by automatic translation into HDL, which is achievable with the tools that support the standard, results in a more efficient hardware implementation than the traditional hardware design flow. A discussion explaining the reasons behind these results concludes the paper.

    Efficient reconfigurable architectures for 3D medical image compression

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US), has generated a massive amount of volumetric data. This has provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In these fields, medical image compression is important, since both efficient storage and transmission of data through high-bandwidth digital communication lines are of crucial importance. Despite their advantages, most 3-D medical imaging algorithms are computationally intensive, with matrix transformation as the most fundamental operation involved in the transform-based methods. There is therefore a real need for high-performance systems, whilst keeping architectures flexible to allow for quick upgradeability with real-time applications. Moreover, in order to obtain efficient solutions for large volumes of medical data, an efficient implementation of these operations is of significant importance. Reconfigurable hardware, in the form of field-programmable gate arrays (FPGAs), has been proposed as a viable system building block in the construction of high-performance systems at an economical price. Consequently, FPGAs seem an ideal candidate, offering inherent advantages such as massive parallelism, multimillion gate counts, and special low-power packages. The key achievements of the work presented in this thesis are summarised as follows. Two architectures for the 3-D Haar wavelet transform (HWT) have been proposed, based on transpose-based computation and partial reconfiguration, suitable for 3-D medical imaging applications. These applications require continuous hardware servicing, and as a result dynamic partial reconfiguration (DPR) has been introduced. A comparative study of non-partial and partial reconfiguration implementations has shown that DPR offers many advantages and leads to a compelling solution for implementing computationally intensive applications such as 3-D medical image compression. Using DPR, several large systems are mapped to small hardware resources, and the area, power consumption and maximum frequency are optimised and improved. Moreover, an FPGA-based architecture of the finite Radon transform (FRAT) with three design strategies has been proposed: direct implementation of the pseudo-code with a sequential or a pipelined description, and a block random access memory (BRAM)-based method. An analysis with various medical imaging modalities has been carried out. Results obtained for an image de-noising implementation using FRAT are promising for reducing Gaussian white noise in medical images. In terms of hardware implementation, promising trade-offs between maximum frequency, throughput and area are also achieved. Furthermore, a novel hardware implementation of a 3-D medical image compression system with context-based adaptive variable length coding (CAVLC) has been proposed. An evaluation of the 3-D integer transform (IT) and the discrete wavelet transform (DWT) with lifting scheme (LS) for the transform blocks reveals that the 3-D IT has lower computational complexity than the 3-D DWT, whilst the 3-D DWT with LS provides lossless compression, which is particularly useful for medical image compression.
Additionally, an architecture of CAVLC that is capable of compressing high-definition (HD) images in real time, without any buffer between the quantiser and the entropy coder, is proposed. Through judicious parallelisation, promising results have been obtained with limited resources. In summary, this research tackles the issue of massive 3-D medical data volumes that require compression, as well as hardware implementation to accelerate the slowest operations in the system. The results obtained also reveal significant achievements in terms of architecture efficiency and application performance. Ministry of Higher Education Malaysia (MOHE), Universiti Tun Hussein Onn Malaysia (UTHM) and the British Council.
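
    As background for the transpose-based 3-D HWT architectures, the 1-D building block can be written as a reversible integer Haar (lifting) step applied along one axis, after which the volume is transposed or re-strided and the step is repeated along the other two axes. The C sketch below is illustrative only; the data layout and naming are assumptions, not the thesis architecture:

```c
#include <stddef.h>

/* One level of a reversible integer Haar (S-transform) lifting step, the
 * 1-D building block of a transpose-based 3-D HWT. For each input pair
 * (a, b): detail d = a - b, approximation s = b + (d >> 1) = floor((a+b)/2). */
static void haar_lift_1d(const int *in, int *approx, int *detail, size_t pairs)
{
    for (size_t i = 0; i < pairs; i++) {
        int a = in[2 * i];
        int b = in[2 * i + 1];
        int d = a - b;              /* high-pass (detail) */
        int s = b + (d >> 1);       /* low-pass (approximation) */
        detail[i] = d;
        approx[i] = s;
    }
}

/* Exact inverse, since the forward step is integer-reversible:
 * b = s - (d >> 1), a = d + b. This is what makes the lifting form
 * attractive for lossless medical image compression. */
static void haar_unlift_1d(const int *approx, const int *detail,
                           int *out, size_t pairs)
{
    for (size_t i = 0; i < pairs; i++) {
        int b = approx[i] - (detail[i] >> 1);
        int a = detail[i] + b;
        out[2 * i]     = a;
        out[2 * i + 1] = b;
    }
}
```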

    Seminar on the MPEG-4 Standard: use and implementation aspects

    One of the key technologies that enabled the great development of digital television is video compression. The video coding technology known as MPEG-2, developed in the early 1990s, became the DTV (Digital TV) transmission standard, both satellite and terrestrial, in almost every country in the world. Since then, the speed of microprocessors and the memory capacity of hardware devices for encoding and decoding have improved significantly, making it possible to develop and implement innovative coding algorithms capable of pushing well beyond the compression limits of the MPEG-2 standard. These innovations, which led in 2003 to the MPEG-4 AVC (Advanced Video Coding) standard, did not preserve backward compatibility with MPEG-2, and this initially limited their introduction into DTV transmission systems. In recent years, however, MPEG-4 AVC coding has spread rapidly: it has been adopted by the DVB project and, more recently, by the ATSC, and it is the coding standard used in IPTV. The goal of this seminar, held over two days, is to present the MPEG-4 AVC coding standard with particular attention to the implementation aspects of the video coding layer. 2008-11-18, Sardegna Ricerche, Edificio 2, Località Piscinamanna, 09010 Pula (CA), Italy.

    Optimization of scientific algorithms in heterogeneous systems and accelerators for high performance computing

    General-purpose GPU computing is currently one of the cornerstones of high-performance computing. Although hundreds of applications have been accelerated on GPUs, there are still scientific algorithms that have received little attention. The motivation of this thesis has therefore been to investigate whether a set of such algorithms can be significantly accelerated on the GPU. First, an optimized implementation of the CAVLC (Context-Adaptive Variable Length Coding) video and image compression algorithm, the most widely used entropy coding method in the H.264 video coding standard, has been obtained. The speedup over the best previous implementation is between 2.5x and 5.4x. This solution can serve as the entropy coding component of software H.264 encoders, and can be used in video and image compression systems for formats other than H.264, such as medical imaging. Second, GUD-Canny, an unsupervised and distributed Canny edge detector, has been developed. The system addresses the main limitations of existing implementations of the Canny algorithm, namely the bottleneck caused by the hysteresis process and the use of fixed hysteresis thresholds. A given image is divided into a set of sub-images and, for each of them, a pair of hysteresis thresholds is computed in an unsupervised way using the Medina-Carnicer method. The detector meets real-time requirements, taking 0.35 ms on average to detect the edges of a 512x512 image. Third, an optimized implementation of the VLE (Variable-Length Encoding) data compression method has been produced, which is on average 2.6x faster than the best previous implementation. This solution also includes a new inter-block scan method, which can be used to accelerate the scan operation itself and other algorithms, such as stream compaction. For the scan operation, a speedup of 1.62x is achieved when the proposed method is used instead of the one employed in the best previous VLE implementation. The thesis concludes with a chapter on future lines of research that follow from its contributions.
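
    For context on the role of the scan in parallel VLE, an exclusive prefix sum over the per-symbol codeword lengths yields the bit offset at which each codeword must be written, after which all codewords can be packed independently. The sequential C sketch below only shows the arithmetic being parallelised; the thesis's contribution is the inter-block method used to combine per-block scans on the GPU, which is not reproduced here.

```c
#include <stdint.h>
#include <stddef.h>

/* Exclusive prefix sum over codeword lengths: bit_offset[i] is where the
 * i-th codeword starts in the packed output stream; the return value is the
 * total bitstream length in bits. On a GPU this is computed per thread
 * block, with an extra pass combining the per-block totals. */
static uint64_t exclusive_scan_bit_offsets(const uint32_t *code_len,
                                           uint64_t *bit_offset, size_t n)
{
    uint64_t total = 0;
    for (size_t i = 0; i < n; i++) {
        bit_offset[i] = total;   /* start position of symbol i's codeword */
        total += code_len[i];
    }
    return total;
}
```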