13 research outputs found

    Fast intra mode decision algorithm for H.263 to H.264/AVC transcoding

    Get PDF

    Improved Intra Prediction of H.264/AVC

    Get PDF

    Optimizations for real-time implementation of H264/AVC video encoder on DSP processor

    Get PDF
    Real-time H.264/AVC high-definition video encoding represents a challenging workload for most existing programmable processors. New programmable-processor technologies such as the Graphics Processing Unit (GPU) and the multicore Digital Signal Processor (DSP) offer a very promising way to overcome these constraints. In this paper, an optimized implementation of an H.264/AVC video encoder on a single core of the six-core TMS320C6472 DSP is presented for Common Intermediate Format (CIF, 352x288) resolution, as a step toward a multicore implementation for standard and high definition (SD, HD). Algorithmic optimization is applied to the intra prediction module to reduce computation time. Furthermore, based on the DSP architectural features, various structural and hardware optimizations are adopted to minimize external memory accesses. The parallelism between CPU processing and data transfers is fully exploited using the Enhanced Direct Memory Access (EDMA) controller. Experimental results show that, on a single core running at 700 MHz at CIF resolution, the proposed optimizations together improve encoding speed by up to 42.91%. They reach real-time encoding at 25 frames/s without any Peak Signal-to-Noise Ratio (PSNR) degradation or bit-rate increase, and make real-time implementation at SD and HD resolutions achievable when the multicore features are exploited.
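    The EDMA-based overlap of CPU processing and data transfers described above can be pictured as a ping-pong (double-buffered) scheme: while macroblock i is being encoded from one on-chip buffer, macroblock i+1 is transferred into the other. The C sketch below is illustrative only, assuming CIF resolution and 4:2:0 macroblocks; dma_start() and dma_wait() are hypothetical stand-ins for the DSP's EDMA driver calls (simulated here with memcpy so the example compiles on any host).

    /* Minimal ping-pong buffering sketch (plain C, host-compilable). */
    #include <stdio.h>
    #include <string.h>

    #define MB_BYTES 384   /* one 4:2:0 macroblock: 256 luma + 128 chroma bytes */
    #define NUM_MBS  396   /* CIF (352x288): 22 x 18 macroblocks */

    static unsigned char ext_frame[NUM_MBS][MB_BYTES]; /* stands for external DDR2 */
    static unsigned char onchip[2][MB_BYTES];           /* two on-chip buffers      */

    static void dma_start(unsigned char *dst, const unsigned char *src, size_t n)
    {
        memcpy(dst, src, n);   /* a real implementation would program an EDMA channel */
    }
    static void dma_wait(void) { /* a real implementation would wait for the EDMA event */ }
    static void encode_mb(const unsigned char *mb) { (void)mb; /* intra/inter coding */ }

    int main(void)
    {
        int ping = 0;
        dma_start(onchip[ping], ext_frame[0], MB_BYTES);          /* prefetch MB 0 */
        for (int i = 0; i < NUM_MBS; ++i) {
            dma_wait();                                           /* MB i is on chip */
            if (i + 1 < NUM_MBS)                                  /* fetch MB i+1 ... */
                dma_start(onchip[ping ^ 1], ext_frame[i + 1], MB_BYTES);
            encode_mb(onchip[ping]);                              /* ... while coding MB i */
            ping ^= 1;
        }
        printf("encoded %d macroblocks\n", NUM_MBS);
        return 0;
    }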

    An Efficient Intra Prediction Algorithm for H.264/AVC High Profile

    Get PDF
    A simple, highly efficient intra prediction algorithm to reduce the computational complexity of H.264/AVC High Profile is proposed. The algorithm combines two methods. The first method is a quant-based block-size selection decision based on the sum of the quantized AC coefficients among intra 8 × 8 mode predictions, combined with an error adjustment, to select either intra 4 × 4 or intra 16 × 16 mode predictions. The second method is a novel direction-based prediction mode decision used to reduce the candidate prediction modes for the rate-distortion (RD) optimization. Our experimental results demonstrate that the proposed algorithm reduces the encoding time by approximately 54% compared with an exhaustive search using the joint model reference software. The peak signal-to-noise ratio degradation is negligible, and the bit-rate increment is minimal. Furthermore, the results show that our algorithm achieves a significant improvement in both computational performance and RD performance compared with existing algorithms.
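    As a rough illustration of the quant-based block-size decision described above, the C fragment below sums the quantized AC coefficients of the four 8 × 8 residual blocks of a macroblock and selects intra 16 × 16 when little AC energy survives quantization, otherwise intra 4 × 4. The coefficient layout, the threshold value, and the omission of the paper's error-adjustment term are assumptions made for this sketch.

    #include <stdio.h>
    #include <stdlib.h>

    enum intra_size { INTRA_4x4, INTRA_16x16 };

    /* Sum of absolute quantized AC coefficients over the four 8x8 blocks of a
     * macroblock; index 0 of each block is the DC coefficient and is skipped. */
    static int sum_quant_ac(int coeff[4][64])
    {
        int s = 0;
        for (int b = 0; b < 4; ++b)
            for (int k = 1; k < 64; ++k)
                s += abs(coeff[b][k]);
        return s;
    }

    /* Few surviving AC coefficients -> flat area -> intra 16x16; otherwise 4x4. */
    static enum intra_size choose_block_size(int coeff[4][64], int threshold)
    {
        return (sum_quant_ac(coeff) < threshold) ? INTRA_16x16 : INTRA_4x4;
    }

    int main(void)
    {
        int coeff[4][64] = {{0}};                     /* nearly flat macroblock */
        coeff[0][0] = 40; coeff[1][0] = 38;           /* only DC terms are set  */
        coeff[2][0] = 41; coeff[3][0] = 39;
        puts(choose_block_size(coeff, 32) == INTRA_16x16 ? "intra 16x16" : "intra 4x4");
        return 0;
    }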

    Performance analysis of H.264 encoder for high-definition video transmission over ultra-wideband communication link.

    Get PDF
    With technological advancement, entertainment has been revolutionized, and high-definition (HD) video has become a common feature of modern consumer devices. The demand for wireless transmission of HD video keeps rising because of its ubiquity and easy installation and relocation, but the high bandwidth requirement is the main concern for wireless transmission of high-quality video streams, and the consumer electronics industry has been pursuing different solutions to this issue for the last few years. In this work, the feasibility of HD video transmission over an Ultra-wideband (UWB) communication channel is analyzed. The UWB channel is selected for its short-range, high-speed data transmission at low cost and low power consumption; its maximum transmission range is about 10 m at a 100 Mbps data rate. Simulations are conducted by controlling key parameters of an H.264/AVC encoder, such as the in-loop deblocking filter, the group of pictures, and the quantization parameter. Standard HD video streams with different motion characteristics are used, and the impact of these parameter changes on reconstructed video quality and broadcast data rate is analyzed. Finally, generalized parameter settings and video-content-dependent settings for an H.264/AVC encoder are proposed for different bandwidth requirements and acceptable video quality. Performance evaluation of these parameter settings gives satisfactory results as long as the symbol energy to noise power density ratio, Es/No, is above 15. With the proposed parameter settings, a maximum data rate of 20 Mbps is achieved with 33.5 dB Y-PSNR.
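    Since the results above are reported as Y-PSNR, the short C routine below shows the standard way that figure is computed from the original and reconstructed luma planes, assuming 8-bit samples (peak value 255). It is the generic definition, not code from the cited work.

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    /* Y-PSNR in dB between two 8-bit luma planes of the same size. */
    static double y_psnr(const unsigned char *orig, const unsigned char *recon,
                         int width, int height)
    {
        long   n   = (long)width * height;
        double sse = 0.0;
        for (long i = 0; i < n; ++i) {
            double d = (double)orig[i] - (double)recon[i];
            sse += d * d;
        }
        if (sse == 0.0)
            return INFINITY;                      /* identical planes */
        return 10.0 * log10(255.0 * 255.0 * (double)n / sse);
    }

    int main(void)
    {
        unsigned char a[64], b[64];
        memset(a, 128, sizeof a);
        memcpy(b, a, sizeof b);
        b[0] = 120;                               /* introduce a small error */
        printf("Y-PSNR = %.2f dB\n", y_psnr(a, b, 8, 8));   /* about 48 dB */
        return 0;
    }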

    Real-time signal and image processing ("implementation of H.264 on MPSoC")

    Get PDF
    This thesis was carried out under joint supervision between Badji Mokhtar University (LERICA Laboratory) and the University of Burgundy (LE2I Laboratory, UMR CNRS 5158). It is a contribution to the study and implementation of the H.264/AVC encoder. Throughout the evolution of video coding standards, one fact has been confirmed repeatedly: good compression performance demands equipment with far greater computing power, flexibility, and portability in order to meet the requirements of the different processing stages and to satisfy real-time constraints. To ensure real-time performance for such applications, one possible solution is to use systems on chip (SoC) or multiprocessor systems on chip (MPSoC) built on reconfigurable FPGA-based platforms. The objective of this thesis is the study and implementation of signal and image processing algorithms, in particular the H.264/AVC standard, with the goal of achieving a real-time coding-decoding cycle. We use two Xilinx FPGA platforms (ML501 and XUPV5) to implement our architectures. Several implementations of the decoder already exist in the literature; for the encoder, despite the enormous efforts made, work remains to optimize the algorithms and extract the available parallelism, especially across the variety of profiles and levels of H.264/AVC. First, we propose a hardware implementation of a memory controller targeted specifically at the H.264/AVC encoder. This controller is obtained by adding, on top of the DDR2 memory controller of the two Xilinx platforms, an intelligent layer capable of computing addresses and retrieving the data needed by the encoder's processing modules. We then propose hardware (RTL) implementations of the H.264 encoder's processing modules. In these implementations, we exploit parallelism and pipelining as far as the strong inter-block dependencies of the encoder allow, and we propose several improvements and new techniques in the intra prediction chain and the deblocking filter. Finally, the hardware modules are used in a hardware/software implementation of the H.264/AVC encoder. Synthesis and simulation results on both Xilinx platforms are presented and compared with other existing implementations.
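    To make the role of such an address-generating layer concrete, the sketch below shows the kind of computation it performs: locating macroblock (mbx, mby) in a raster-stored luma plane and issuing one burst per pixel row. burst_read(), the base address, and the frame width are hypothetical placeholders, not values taken from the thesis.

    #include <stdio.h>

    #define FRAME_W 352   /* luma width in pixels (CIF example) */
    #define MB       16   /* macroblock dimension */

    /* Placeholder for a DDR2 burst access issued by the memory controller. */
    static void burst_read(unsigned long ddr_addr, int bytes)
    {
        printf("burst @0x%08lx, %d bytes\n", ddr_addr, bytes);
    }

    /* Fetch the 16x16 luma samples of macroblock (mbx, mby): one burst per line. */
    static void fetch_luma_mb(unsigned long frame_base, int mbx, int mby)
    {
        unsigned long top_left = frame_base
                               + (unsigned long)(mby * MB) * FRAME_W
                               + (unsigned long)(mbx * MB);
        for (int row = 0; row < MB; ++row)
            burst_read(top_left + (unsigned long)row * FRAME_W, MB);
    }

    int main(void)
    {
        fetch_luma_mb(0x80000000UL, 3, 2);   /* macroblock at column 3, row 2 */
        return 0;
    }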

    Cross-layer Optimized Wireless Video Surveillance

    Get PDF
    A wireless video surveillance system contains three major components: video capture and preprocessing, video compression and transmission over wireless sensor networks (WSNs), and video analysis at the receiving end. Coordinating these components is important for improving end-to-end video quality, especially under communication resource constraints, and cross-layer control proves to be an efficient means of optimal system configuration. In this dissertation, we address the problem of implementing cross-layer optimization in a wireless video surveillance system. The thesis work is based on three research projects. In the first project, a single PTU (pan-tilt-unit) camera is used for video object tracking; the problem studied is how to improve the quality of the received video by jointly considering the coding and transmission processes. The cross-layer controller determines the optimal coding and transmission parameters according to the dynamic channel condition and the transmission delay, and multiple error-concealment strategies are developed that exploit the special properties of PTU camera motion. In the second project, a binocular PTU camera is adopted for video object tracking. This work studies fast disparity estimation and 3D video transcoding over the WSN for real-time applications: the disparity/depth information is estimated in a coarse-to-fine manner using both local and global methods, and the transcoding is coordinated by the cross-layer controller based on the channel condition and the data-rate constraint so as to achieve the best view-synthesis quality. The third project addresses multi-camera motion capture for remote healthcare monitoring, where the challenge is resource allocation across multiple video sequences; the presented cross-layer design incorporates delay-sensitive, content-aware video coding and transmission, and adaptive video coding and transmission, to ensure optimal and balanced quality for the multi-view videos. Across these projects, interdisciplinary study is conducted to synergize the surveillance system under the cross-layer optimization framework. Experimental results demonstrate the efficiency of the proposed schemes, and the challenges of cross-layer design in existing wireless video surveillance systems are analyzed to inform future work. Adviser: Song C
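    As a toy illustration of a cross-layer rule of the kind mentioned above (not the controller developed in the dissertation), the sketch below picks the smallest quantization parameter whose predicted bit-rate fits the currently estimated channel throughput, using the common rule of thumb that H.264 bit-rate roughly halves for every increase of 6 in QP; the rate model and constants are invented for the example.

    #include <math.h>
    #include <stdio.h>

    /* Crude rate model: bit-rate roughly halves for every +6 in QP,
     * anchored at a measured operating point (r0 kbps at qp0). */
    static double est_bitrate_kbps(int qp, double r0_kbps, int qp0)
    {
        return r0_kbps * pow(2.0, -(double)(qp - qp0) / 6.0);
    }

    /* Smallest QP (best quality) whose predicted rate fits the channel budget. */
    static int pick_qp(double channel_kbps, double r0_kbps, int qp0)
    {
        for (int qp = 20; qp <= 51; ++qp)
            if (est_bitrate_kbps(qp, r0_kbps, qp0) <= channel_kbps)
                return qp;
        return 51;   /* fall back to the coarsest quantization */
    }

    int main(void)
    {
        /* example: 1500 kbps measured at QP 26; link currently sustains 900 kbps */
        printf("chosen QP = %d\n", pick_qp(900.0, 1500.0, 26));
        return 0;
    }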
