17 research outputs found

    Application-Specific Cache and Prefetching for HEVC CABAC Decoding

    Get PDF
    Context-based Adaptive Binary Arithmetic Coding (CABAC) is the entropy coding module in the HEVC/H.265 video coding standard. As in its predecessor, H.264/AVC, CABAC is a well-known throughput bottleneck due to its strong data dependencies. Among other optimizations, replacing the context model memory with a smaller cache has been proposed for hardware decoders, resulting in an improved clock frequency. However, the effect of potential cache misses has not been properly evaluated. This work fills that gap by performing an extensive evaluation of different cache configurations. Furthermore, it demonstrates that application-specific context model prefetching can effectively reduce the miss rate and increase the overall performance. The best results are achieved with two cache lines consisting of four or eight context models. The 2 × 8 cache allows a performance improvement of 13.2 to 16.7 percent compared to a non-cached decoder, owing to a 17 percent higher clock frequency and highly effective prefetching. The proposed HEVC/H.265 CABAC decoder allows the decoding of high-quality Full HD videos in real time using few hardware resources on a low-power FPGA. (EC/H2020/645500/EU/Improving European VoD Creative Industry with High Efficiency Video Delivery/Film26)
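    To make the cache organisation above concrete, the following is a minimal software model of a two-line context-model cache with eight models per line. The direct-mapped indexing, tag layout and the prefetch_line hook are assumptions made for illustration; they are not taken from the paper's hardware design.

```c
#include <stdint.h>
#include <string.h>

#define NUM_LINES        2   /* cache lines (assumed direct-mapped) */
#define MODELS_PER_LINE  8   /* context models per line */

typedef struct {
    int      valid;
    uint16_t tag;                      /* upper bits of the context address */
    uint8_t  models[MODELS_PER_LINE];  /* cached probability states */
} CacheLine;

typedef struct {
    CacheLine lines[NUM_LINES];
    uint8_t   backing[512];            /* full context model memory */
    unsigned  hits, misses;
} CtxCache;

/* Map a context model address to (line, tag, offset). */
static void addr_split(unsigned addr, unsigned *line, uint16_t *tag, unsigned *off)
{
    *off  = addr % MODELS_PER_LINE;
    *line = (addr / MODELS_PER_LINE) % NUM_LINES;
    *tag  = (uint16_t)(addr / (MODELS_PER_LINE * NUM_LINES));
}

/* Fetch a context model, filling the line from backing memory on a miss. */
uint8_t ctx_read(CtxCache *c, unsigned addr)
{
    unsigned line, off; uint16_t tag;
    addr_split(addr, &line, &tag, &off);
    CacheLine *l = &c->lines[line];
    if (!l->valid || l->tag != tag) {
        c->misses++;
        memcpy(l->models, &c->backing[addr - off], MODELS_PER_LINE);
        l->tag = tag;
        l->valid = 1;
    } else {
        c->hits++;
    }
    return l->models[off];
}

/* Application-specific prefetch: load the line that the next syntax element
 * is expected to use, hiding the miss latency (counted as a normal access here). */
void prefetch_line(CtxCache *c, unsigned next_addr)
{
    (void)ctx_read(c, next_addr);
}
```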

    Reconfigurable Computing For Video Coding

    Get PDF
    Video coding is widely used in our daily life. Due to its high computational complexity, hardware implementation is usually preferred. In this research, we investigate both the ASIC hardware design approach and the reconfigurable hardware design approach for video coding applications. First, we present a unified architecture that can perform the Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), and DCT-domain motion estimation and compensation (DCT-ME/MC). The proposed architecture is a Wavefront Array-based Processor with a highly modular structure consisting of 8×8 Processing Elements (PEs). By exploiting statistical properties and simple arithmetic operations, it can be used as a high-performance hardware accelerator for video transcoding applications. We show how different core algorithms can be mapped onto the same hardware fabric and executed by the pre-defined PEs. In addition to the simplified design process and the savings in hardware resources, we demonstrate that a high throughput rate can be achieved for IDCT and DCT-MC by fully exploiting the sparseness of the DCT coefficient matrix. Compared to a fixed ASIC architecture, the reconfigurable hardware design approach offers higher flexibility, lower cost, and faster time-to-market. We propose a self-reconfigurable platform that can reconfigure the DCT architecture at run-time using dynamic partial reconfiguration. The scalable DCT architecture can compute different numbers of DCT coefficients in zig-zag scan order to adapt to different requirements, such as power consumption, hardware resources, and performance. A configuration manager, implemented in the embedded processor, adaptively controls the reconfiguration of the scalable DCT architecture at run-time. In addition, we use the LZSS algorithm to compress the partial bitstreams and on-chip BlockRAM as a cache to reduce the latency of loading the partial bitstreams from off-chip memory for run-time reconfiguration. A hardware module is designed for parallel reconfiguration of the partial bitstreams. Experimental results show that our approach reduces external memory accesses by 69% and achieves a 400 MBytes/s reconfiguration rate. A prediction algorithm for zero-quantized DCT (ZQDCT) coefficients is used to control the run-time reconfiguration of the proposed scalable architecture, and 12 different modes of DCT computation, including zonal coding, multi-block processing, and parallel-sequential stage modes, are supported to reduce power consumption, hardware resources, and computation time with only a small quality degradation. Detailed trade-offs between power, throughput, and quality are investigated and used as a criterion for self-reconfiguration to meet the requirements set by the users.
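    As a software analogue of the scalable DCT described above, the sketch below computes only the first k coefficients of an 8×8 block in zig-zag scan order (zonal coding). The direct O(N^4) transform and the lookup table are illustrative simplifications, not the wavefront-array hardware itself.

```c
#include <math.h>

#define N 8
static const double PI = 3.14159265358979323846;

/* Zig-zag scan order for an 8x8 block: zigzag[k] = {row, col} of the k-th coefficient. */
static const int zigzag[64][2] = {
    {0,0},{0,1},{1,0},{2,0},{1,1},{0,2},{0,3},{1,2},
    {2,1},{3,0},{4,0},{3,1},{2,2},{1,3},{0,4},{0,5},
    {1,4},{2,3},{3,2},{4,1},{5,0},{6,0},{5,1},{4,2},
    {3,3},{2,4},{1,5},{0,6},{0,7},{1,6},{2,5},{3,4},
    {4,3},{5,2},{6,1},{7,0},{7,1},{6,2},{5,3},{4,4},
    {3,5},{2,6},{1,7},{2,7},{3,6},{4,5},{5,4},{6,3},
    {7,2},{7,3},{6,4},{5,5},{4,6},{3,7},{4,7},{5,6},
    {6,5},{7,4},{7,5},{6,6},{5,7},{6,7},{7,6},{7,7}
};

/* Compute one 2D DCT-II coefficient X(u,v) of an 8x8 block directly. */
static double dct_coeff(const double x[N][N], int u, int v)
{
    double cu = (u == 0) ? sqrt(0.5) : 1.0;
    double cv = (v == 0) ? sqrt(0.5) : 1.0;
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += x[i][j]
                 * cos((2 * i + 1) * u * PI / (2.0 * N))
                 * cos((2 * j + 1) * v * PI / (2.0 * N));
    return 0.25 * cu * cv * sum;    /* 2/N scaling for N = 8 */
}

/* Zonal coding: compute only the first k coefficients in zig-zag order,
 * leaving the rest zero, to trade quality for computation. */
void partial_dct(const double block[N][N], double coeffs[N][N], int k)
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++)
            coeffs[u][v] = 0.0;
    for (int n = 0; n < k && n < N * N; n++) {
        int u = zigzag[n][0], v = zigzag[n][1];
        coeffs[u][v] = dct_coeff(block, u, v);
    }
}
```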

    Étude et mise en place d'une plateforme d'adaptation multiservice embarquée pour la gestion de flux multimédia à différents niveaux logiciels et matériels (Study and implementation of an embedded multiservice adaptation platform for managing multimedia streams at different software and hardware levels)

    Get PDF
    On the one hand, technology advances have led to the expansion of the handheld-device market. Thanks to this expansion, people are more and more connected, and more and more data are exchanged over the Internet. On the other hand, this huge amount of data imposes drastic constraints in order to achieve sufficient quality, and the Internet is now showing its limits in assuring such quality. To answer these limitations, a next-generation Internet is envisioned. This new network takes the content nature (video, audio, ...) and the context (network state, terminal capabilities, ...) into account to better manage its own resources. To this end, video manipulation is one of the key concepts highlighted in this arising context. Video content is more and more consumed and at the same time requires more and more resources. Adapting videos to the network state (reducing the bitrate to match the available bandwidth) or to the terminal capabilities (screen size, supported codecs, ...) appears mandatory and is foreseen to take place in real time in networking devices such as home gateways. However, video adaptation is a resource-intensive task and must be implemented using hardware accelerators to meet the desired low-cost and real-time constraints. In this thesis, content- and context-awareness is first analyzed so that it can be taken into account at the network side. Secondly, a generic low-cost video adaptation system is proposed and compared to existing solutions as a trade-off between system complexity and quality. Then, hardware design is tackled as this system is implemented on an FPGA-based architecture. Finally, this system is used to evaluate the indirect effects of video adaptation: energy consumption is reduced at the terminal side by reducing the video characteristics, thus improving the experience for end users.
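    As a purely illustrative sketch (not the adaptation system described in the thesis), the following shows the kind of decision an adaptation node might take, picking a target codec, resolution and bitrate from the measured bandwidth and the terminal's capabilities. All names and thresholds here are hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical description of a terminal and of the network state. */
typedef struct {
    const char *supported_codec;   /* e.g. "h264" or "hevc" */
    int         max_width;         /* screen width in pixels */
} Terminal;

typedef struct {
    double available_kbps;         /* measured available bandwidth */
} NetworkState;

typedef struct {
    const char *codec;
    int         width;
    double      bitrate_kbps;
} AdaptationDecision;

/* Transcode if the codec is unsupported, downscale if the video is wider than
 * the screen, and cap the bitrate at a fraction of the measured bandwidth
 * (90% here, an arbitrary safety margin). */
AdaptationDecision decide(const char *in_codec, int in_width, double in_kbps,
                          const Terminal *t, const NetworkState *n)
{
    AdaptationDecision d;
    d.codec = strcmp(in_codec, t->supported_codec) ? t->supported_codec : in_codec;
    d.width = (in_width > t->max_width) ? t->max_width : in_width;
    double cap = 0.9 * n->available_kbps;
    d.bitrate_kbps = (in_kbps > cap) ? cap : in_kbps;
    return d;
}

int main(void)
{
    Terminal t = { "h264", 1280 };
    NetworkState n = { 2500.0 };
    AdaptationDecision d = decide("hevc", 1920, 8000.0, &t, &n);
    printf("codec=%s width=%d bitrate=%.0f kbps\n", d.codec, d.width, d.bitrate_kbps);
    return 0;
}
```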

    GPU Parallelization of HEVC In-Loop Filters

    Get PDF
    In the High Efficiency Video Coding (HEVC) standard, multiple decoding modules have been designed to take advantage of parallel processing. In particular, the HEVC in-loop filters (i.e., the deblocking filter and sample adaptive offset) were conceived to be exploited by parallel architectures. However, the type of parallelism offered mostly suits the capabilities of multi-core CPUs, making it a real challenge to efficiently exploit massively parallel architectures such as Graphics Processing Units (GPUs), mainly due to the data dependencies between the HEVC decoding procedures. Accordingly, this paper presents a novel strategy to increase the amount of parallelism and the resulting performance of the HEVC in-loop filters on GPU devices. For this purpose, the proposed algorithm performs the HEVC filtering at frame level and employs intrinsic GPU vector instructions. When compared to state-of-the-art HEVC in-loop filter implementations, the proposed approach also reduces the amount of required memory transfers, thus further boosting performance. Experimental results show that the proposed GPU in-loop filters deliver a significant improvement in decoding performance. For example, average frame rates of 76 frames per second (FPS) and 125 FPS for Ultra HD 4K are achieved on an embedded NVIDIA GPU for the All Intra and Random Access configurations, respectively.
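    To illustrate why the sample adaptive offset (SAO) filter maps well to massively parallel hardware, the sketch below applies HEVC's band-offset mode to 8-bit samples in plain C: every sample is classified and corrected independently, so a GPU version can simply assign one thread (or one vector lane) per sample. The function and parameter names are illustrative and not taken from the paper.

```c
#include <stdint.h>

/* HEVC SAO band-offset mode for 8-bit samples: the value range is split into
 * 32 bands of 8 values each, offsets are signalled for 4 consecutive bands
 * starting at band_position (wrapping modulo 32), and samples falling in one
 * of those bands are corrected by the corresponding offset. */
static inline uint8_t clip8(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

void sao_band_offset(uint8_t *samples, int num_samples,
                     int band_position, const int offsets[4])
{
    for (int i = 0; i < num_samples; i++) {
        int band = samples[i] >> 3;          /* band index 0..31 */
        int k = (band - band_position) & 31; /* distance from the first signalled band */
        if (k < 4)                           /* sample lies in one of the 4 bands */
            samples[i] = clip8(samples[i] + offsets[k]);
    }
}
```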

    Robust P2P Live Streaming

    Get PDF
    Project carried out in collaboration with the i2CAT Foundation. The provisioning of robust real-time communication services (voice, video, etc.) or media contents through the Internet in a distributed manner is an important challenge, which will strongly influence current and future Internet evolution. Aware of this, we are developing a project named Trilogy, led by the i2CAT Foundation, whose main pillar is the study, development and evaluation of Peer-to-Peer (P2P) live streaming architectures for the distribution of high-quality media contents. In this context, this work concretely covers the media coding aspects and proposes the use of Multiple Description Coding (MDC) as a flexible solution for providing robust and scalable live streaming over P2P networks. This work describes the current state of the art in media coding techniques and P2P streaming architectures, and presents the implemented prototype as well as its simulation and validation results
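    As a toy illustration of Multiple Description Coding (and not the scheme used in the Trilogy prototype), the sketch below splits a video sequence into two descriptions by temporal subsampling of odd and even frames: either description alone can be decoded at half the frame rate, and both together restore the full sequence.

```c
#include <stddef.h>

/* Split a sequence of frame indices into two descriptions by parity.
 * Receiving only one description still yields a decodable half-rate video;
 * receiving both restores the original frame rate. */
void mdc_split(const int *frames, size_t n,
               int *desc0, size_t *n0,   /* even frames */
               int *desc1, size_t *n1)   /* odd frames  */
{
    *n0 = *n1 = 0;
    for (size_t i = 0; i < n; i++) {
        if (i % 2 == 0)
            desc0[(*n0)++] = frames[i];
        else
            desc1[(*n1)++] = frames[i];
    }
}

/* Merge whatever descriptions arrived back into display order.
 * A missing description simply leaves gaps that a player can conceal,
 * for example by frame repetition. */
size_t mdc_merge(const int *desc0, size_t n0,
                 const int *desc1, size_t n1,
                 int *out)
{
    size_t i = 0, j = 0, k = 0;
    while (i < n0 || j < n1) {
        if (i < n0) out[k++] = desc0[i++];   /* even slot */
        if (j < n1) out[k++] = desc1[j++];   /* odd slot  */
    }
    return k;
}
```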

    A 2D DWT architecture suitable for the Embedded Zerotree Wavelet Algorithm

    Get PDF
    Digital imaging has had an enormous impact on applications such as the Internet and video-phone systems, and demand for such applications is growing enormously; Internet users in particular are growing at a near-exponential rate. The sharp increase in applications using digital images has placed much emphasis on the fields of image coding, storage, processing and communications, and new techniques are continuously being developed with the main aim of increasing efficiency. Image coding is a field of particular commercial interest: a digital image requires a large amount of data, which causes problems when storing, transmitting or processing the image, so reducing the amount of data needed to represent an image is the main objective of image coding. The JPEG image coding standard has enjoyed widespread acceptance, and industry continues to explore its various implementation issues. However, recent research indicates that multiresolution-based image coding is a far superior alternative. A recent development in the field is the use of the Embedded Zerotree Wavelet (EZW) technique to achieve image compression. One of the aims of this thesis is to explain how this technique is superior to other current coding standards. An essential part of this method of image coding is the use of multiresolution analysis, a subband system in which the subbands are logarithmically spaced in frequency and represent an octave-band decomposition. The block structure that implements this function is the two-dimensional Discrete Wavelet Transform (2D-DWT). The 2D DWT can be realised by several architectures, and these are analysed in order to choose the architecture best suited to the EZW coder. Finally, this architecture is implemented and verified using the Synopsys Behavioural Compiler, and recommendations are made based on the experimental findings
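    For reference, the following is a minimal software sketch of one level of a separable 2D DWT using the reversible LeGall 5/3 lifting scheme, a filter commonly paired with embedded zerotree-style coders (e.g. in JPEG 2000). It is an illustrative software model under those assumptions, not the hardware architecture developed in the thesis.

```c
#include <stdlib.h>

/* One level of the reversible LeGall 5/3 lifting DWT on a 1-D signal of even
 * length n. Low-pass coefficients go to out[0..n/2-1], high-pass coefficients
 * to out[n/2..n-1]. Symmetric extension is used at the borders. */
static void dwt53_1d(const int *x, int *out, int n)
{
    int half = n / 2;
    int *lo = out, *hi = out + half;

    /* Predict step: high-pass = odd sample minus the mean of its even neighbours. */
    for (int i = 0; i < half; i++) {
        int left  = x[2 * i];
        int right = (2 * i + 2 < n) ? x[2 * i + 2] : x[n - 2];  /* mirror at the border */
        hi[i] = x[2 * i + 1] - ((left + right) >> 1);
    }
    /* Update step: low-pass = even sample plus a correction from the high-pass. */
    for (int i = 0; i < half; i++) {
        int left  = (i > 0) ? hi[i - 1] : hi[0];                 /* mirror at the border */
        hi_update:
        lo[i] = x[2 * i] + ((left + hi[i] + 2) >> 2);
    }
}

/* One level of the separable 2-D DWT: filter every row, then every column,
 * producing the LL, HL, LH and HH subbands in place in img[rows][cols]. */
void dwt53_2d(int *img, int rows, int cols)
{
    int maxdim = rows > cols ? rows : cols;
    int *tmp = malloc(maxdim * sizeof(int));
    int *buf = malloc(maxdim * sizeof(int));

    for (int r = 0; r < rows; r++) {                  /* horizontal pass */
        dwt53_1d(&img[r * cols], tmp, cols);
        for (int c = 0; c < cols; c++) img[r * cols + c] = tmp[c];
    }
    for (int c = 0; c < cols; c++) {                  /* vertical pass */
        for (int r = 0; r < rows; r++) buf[r] = img[r * cols + c];
        dwt53_1d(buf, tmp, rows);
        for (int r = 0; r < rows; r++) img[r * cols + c] = tmp[r];
    }
    free(tmp);
    free(buf);
}
```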
