153 research outputs found

    DCT-based video downscaling transcoder using split and merge technique


    Dynamic region of interest transcoding for multipoint video conferencing

    This paper presents a region-of-interest transcoding scheme for multipoint video conferencing that enhances visual quality. In a multipoint videoconference there are usually only one or two active conferees at a time, and these are the regions of interest for the other participants. We propose a Dynamic Sub-Window Skipping (DSWS) scheme that first identifies the active participants among the multiple incoming encoded video streams by computing the motion activity of each sub-window, and then reduces the frame rate of the inactive participants by skipping these less important sub-windows. The bits saved by the skipping operation are reallocated to the active sub-windows to enhance the regions of interest. We also propose a low-complexity scheme to compose and trace, with good accuracy, the motion vectors that become unavailable in the dropped inactive sub-windows after DSWS is applied. Simulation results show that the proposed methods not only significantly improve the visual quality of the active sub-windows without introducing serious degradation in the inactive ones, but also reduce computational complexity and avoid whole-frame skipping. Moreover, the proposed algorithm is fully compatible with the H.263 video coding standard.
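    The core DSWS idea described above can be sketched as follows: rank incoming sub-windows by motion activity and skip the least active ones. This is a minimal illustration only; the function names and the activity measure are assumptions, not the paper's actual algorithm.

```python
def motion_activity(motion_vectors):
    """Sum of motion-vector magnitudes in one sub-window (a simple proxy)."""
    return sum(abs(dx) + abs(dy) for dx, dy in motion_vectors)

def select_subwindows(subwindows, activity_threshold):
    """Split incoming sub-windows into active (kept at full frame rate)
    and inactive (frame rate reduced by skipping)."""
    active, skipped = [], []
    for name, mvs in subwindows.items():
        if motion_activity(mvs) >= activity_threshold:
            active.append(name)
        else:
            skipped.append(name)
    return active, skipped

# Example: two talking participants, two nearly static ones.
streams = {
    "conferee_a": [(3, 2), (4, 1), (2, 2)],   # high motion -> active speaker
    "conferee_b": [(5, 3), (3, 3)],
    "conferee_c": [(0, 0), (1, 0)],           # nearly static
    "conferee_d": [(0, 1)],
}
active, skipped = select_subwindows(streams, activity_threshold=5)
```

    In the paper, the bits freed by the skipped sub-windows would then be reallocated to the active ones.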

    Deep Video Precoding

    Several groups worldwide are currently investigating how deep learning may advance the state-of-the-art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG H.264/AVC, H.265/HEVC, VVC, Google VP9 and AOMedia AV1, AV2, as well as existing container and transport formats, without imposing any changes at the client side. Such compatibility is a crucial aspect when it comes to practical deployment, especially when considering the fact that the video content industry and hardware manufacturers are expected to remain committed to supporting these standards for the foreseeable future. We propose to use deep neural networks as precoders for current and future video codecs and adaptive video streaming systems. In our current design, the core precoding component comprises a cascaded structure of downscaling neural networks that operates during video encoding, prior to transmission. This is coupled with a precoding mode selection algorithm for each independently-decodable stream segment, which adjusts the downscaling factor according to scene characteristics, the utilized encoder, and the desired bitrate and encoding configuration. Our framework is compatible with all current and future codec and transport standards, as our deep precoding network structure is trained in conjunction with linear upscaling filters (e.g., the bilinear filter), which are supported by all web video players. Extensive evaluation on FHD (1080p) and UHD (2160p) content and with widely-used H.264/AVC, H.265/HEVC and VP9 encoders, as well as a preliminary evaluation with the current test model of VVC (v.6.2rc1), shows that coupling such standards with the proposed deep video precoding allows for 8% to 52% rate reduction under encoding configurations and bitrates suitable for video-on-demand adaptive streaming systems. 
The use of precoding can also reduce encoding complexity, which is essential for cost-effective cloud deployment of complex encoders such as H.265/HEVC, VP9 and VVC, especially given the prominence of high-resolution adaptive video streaming.
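    The per-segment precoding mode selection described above can be sketched as choosing, for each independently decodable segment, the downscaling factor whose estimated post-upscaling quality at the target bitrate is highest. The quality model below is a toy stand-in for the paper's trained networks; every constant in it is an assumption for illustration.

```python
import math

def select_downscale_factor(segment_bitrate_kbps, candidates=(1.0, 1.5, 2.0)):
    """Pick the downscaling factor with the best estimated delivered quality.

    Intuition: at low bitrates, encoding fewer pixels and linearly upscaling
    at the client can beat full-resolution encoding, because each pixel gets
    more bits; at high bitrates, downscaling only loses detail.
    """
    def estimated_quality(factor):
        # Toy model: quality saturates with bits per pixel, while the
        # achievable quality cap shrinks as the downscaling factor grows.
        bits_per_pixel = 0.002 * segment_bitrate_kbps * factor ** 2
        quality_cap = 100.0 / factor
        return quality_cap * (1.0 - math.exp(-bits_per_pixel))
    return max(candidates, key=estimated_quality)
```

    With this toy model a starved segment selects aggressive downscaling while a well-provisioned one stays at full resolution, mirroring the behaviour the abstract describes.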

    Study and implementation of an embedded multiservice adaptation platform for managing multimedia streams at different software and hardware levels (Etude et mise en place d'une plateforme d'adaptation multiservice embarquĂ©e pour la gestion de flux multimĂ©dia Ă  diffĂ©rents niveaux logiciels et matĂ©riels)

    On the one hand, technology advances have led to the expansion of the handheld-device market. Thanks to this expansion, people are more and more connected, and more and more data are exchanged over the Internet. On the other hand, this huge amount of data imposes drastic constraints on achieving sufficient quality, and the Internet is now showing its limits in assuring it. To address these limitations, a next-generation Internet is envisioned. This new network takes into account the nature of the content (video, audio, ...) and the context (network state, terminal capabilities, ...) to better manage its own resources. To this end, video manipulation is one of the key concepts highlighted in this emerging context. Video content is more and more consumed and at the same time requires more and more resources. Adapting videos to the network state (reducing the bitrate to match the available bandwidth) or to the terminal capabilities (screen size, supported codecs, ...) appears mandatory and is foreseen to take place in real time in networking devices such as home gateways. However, video adaptation is a resource-intensive task and must be implemented using hardware accelerators to meet the desired low-cost and real-time constraints. In this thesis, content- and context-awareness is first analyzed from the network side. Secondly, a generic low-cost video adaptation system is proposed and compared to existing solutions as a trade-off between system complexity and quality. Then, hardware design is tackled as this system is implemented on an FPGA-based architecture. Finally, this system is used to evaluate the indirect effects of video adaptation: reducing video characteristics lowers energy consumption at the terminal side, thereby improving the experience for end users.
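    The adaptation decision this thesis revolves around, matching a video to the network state and the terminal's capabilities, can be sketched as picking the best pre-encoded representation that fits both. The representation list and field names below are invented for illustration; a gateway that finds no match would fall back to transcoding, the costly operation the thesis accelerates in hardware.

```python
# Hypothetical representation ladder; bitrates/codecs are illustrative only.
REPRESENTATIONS = [
    {"codec": "h264", "bitrate_kbps": 4000, "height": 1080},
    {"codec": "h264", "bitrate_kbps": 1500, "height": 720},
    {"codec": "h264", "bitrate_kbps": 600,  "height": 480},
    {"codec": "hevc", "bitrate_kbps": 2500, "height": 1080},
]

def adapt(bandwidth_kbps, supported_codecs):
    """Highest-bitrate representation that fits the link and the terminal."""
    candidates = [r for r in REPRESENTATIONS
                  if r["codec"] in supported_codecs
                  and r["bitrate_kbps"] <= bandwidth_kbps]
    if not candidates:
        return None  # no fit: the gateway would need to transcode
    return max(candidates, key=lambda r: r["bitrate_kbps"])
```
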

    Video-assisted Overtaking System enabled by V2V Communications

    V2X (Vehicle-to-Everything) is a promising technology for diminishing road hazards and increasing driving safety. This thesis focuses on the transmission of video between vehicles (V2V, Vehicle-to-Vehicle) in an overtaking situation, helping drivers to be more aware and less error-prone. In the implementation, the system reads CAN and GPS data from the vehicle to set itself up, streams the vehicle's line of sight to the overtaking vehicle, and uses DSRC as the communication technology
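    One way such a system might decide when to request the leading vehicle's line-of-sight stream is a simple trigger on CAN and positioning data. The signal names and thresholds below are entirely hypothetical; the thesis does not publish its trigger logic.

```python
def should_request_video(can_signals, distance_to_lead_m):
    """Request the lead vehicle's camera stream over V2V when the driver
    signals an overtake, is actually moving, and is close enough for the
    video to be relevant. All field names/thresholds are assumptions."""
    overtaking_intent = can_signals.get("left_indicator", False)
    moving = can_signals.get("speed_kmh", 0) > 30
    return overtaking_intent and moving and distance_to_lead_m < 100
```
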

    Efficient HEVC-based video adaptation using transcoding

    In a video transmission system, it is important to take into account the great diversity of network and end-user constraints. On the one hand, video content is typically streamed over a network whose segments have different bandwidth capacities; in many cases, the bandwidth is insufficient to transfer the video at its original quality. On the other hand, a single video is often played on multiple devices such as PCs, laptops, and cell phones, and a single encoding cannot satisfy their different constraints. These diversities of network and device capacities lead to the need for video adaptation techniques, e.g., a reduction of the bit rate or spatial resolution. Video transcoding, which modifies a property of the video without changing the coding format, is well known as an efficient adaptation solution. However, this approach comes with high computational complexity, resulting in huge energy consumption in the network and possibly network latency. This presentation provides several optimization strategies for transcoding HEVC (High Efficiency Video Coding, the latest standard) video streams. First, the computational complexity of a bit-rate transcoder (transrater) is reduced: we propose several techniques to speed up the encoder of a transrater, notably a machine-learning-based approach and a novel coding-mode evaluation strategy. Moreover, the motion estimation process of the encoder is optimized using decision theory and the proposed fast search patterns. Second, the issues and challenges of a spatial transcoder are addressed using machine-learning algorithms. Thanks to their strong performance, the proposed techniques are expected to significantly help HEVC gain popularity in a wide range of modern multimedia applications
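    The machine-learning speed-up mentioned above typically works by predicting, from features that are cheap to extract in a transrater (the decoded stream's own partitioning and residual), whether the re-encoder can skip evaluating further CU depths. The tiny hand-written "model" and its thresholds below are invented stand-ins for a trained classifier, not the presentation's actual method.

```python
def predict_skip_split(features):
    """Hand-written decision rule standing in for a trained classifier.

    features: dict with the CU depth used by the original decoded HEVC
    stream and a residual-energy measure, both cheap for a transrater.
    """
    if features["decoded_depth"] == 0 and features["residual_energy"] < 100.0:
        return True   # homogeneous 64x64 CU: reuse it, skip the split search
    if features["residual_energy"] < 20.0:
        return True   # near-zero residual: the decoded partitioning suffices
    return False

def modes_to_evaluate(features, all_depths=(0, 1, 2, 3)):
    """Restrict the encoder's CU-depth search when the model is confident."""
    if predict_skip_split(features):
        return (features["decoded_depth"],)
    return all_depths
```

    Skipping depth evaluation this way trades a small rate-distortion loss for a large cut in encoder complexity, which is the transrater trade-off the abstract describes.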

    Efficient Support for Application-Specific Video Adaptation

    As video applications become more diverse, video must be adapted in different ways to meet the requirements of different applications when resources are insufficient. In this dissertation, we address two sorts of requirements that cannot be met by existing video adaptation technologies: (i) accommodating large variations in resolution and (ii) collecting video effectively in a multi-hop sensor network. In addition, we address the requirements for implementing video adaptation in a sensor network. Accommodating large variations in resolution is required by the existence of display devices with widely disparate screen sizes. Existing resolution adaptation technologies usually aim at adapting video between two resolutions. We examine the limitations that prevent these technologies from supporting a large number of resolutions efficiently. We propose several hybrid schemes and study their performance. Among them, Bonneville, a framework that combines multiple encodings with limited scalability, can make good trade-offs when organizing compressed video to support a wide range of resolutions. Video collection in a sensor network requires adapting video in a multi-hop store-and-forward network with multiple video sources. This task cannot be supported effectively by existing adaptation technologies, which are designed for real-time streaming applications from a single source over IP-style end-to-end connections. We propose to adapt video in the network instead of at the network edge, and we propose a framework, Steens, to compose adaptation mechanisms on multiple nodes. We design two signaling protocols in Steens to coordinate multiple nodes. Our simulations show that in-network adaptation can use buffer space on intermediate nodes for adaptation and achieve better video quality than conventional network-edge adaptation. They also show that explicit collaboration among multiple nodes through signaling can improve video quality, waste less bandwidth, and maintain bandwidth-sharing fairness. Implementing video adaptation in a sensor network requires system support for programmability, retaskability, and high performance. We propose Cascades, a component-based framework, to provide this support. A prototype implementation of Steens in this framework shows that the performance overhead is less than 5% compared to a hard-coded C implementation
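    The in-network adaptation idea, using buffer space on intermediate store-and-forward nodes rather than tail-dropping at the edge, can be sketched as priority-aware buffering: when the buffer is full, evict the least important frame instead of refusing the new one. The frame-type priorities and eviction policy below are assumptions for illustration, not the Steens design.

```python
FRAME_PRIORITY = {"I": 2, "P": 1, "B": 0}  # higher value = more important

def enqueue(buffer, frame, capacity):
    """Insert a frame into a bounded buffer on an intermediate node.

    If the buffer is full, evict the lowest-priority buffered frame, or
    drop the new frame if nothing buffered is less important than it.
    """
    if len(buffer) < capacity:
        buffer.append(frame)
        return True
    victim = min(buffer, key=lambda f: FRAME_PRIORITY[f["type"]])
    if FRAME_PRIORITY[victim["type"]] < FRAME_PRIORITY[frame["type"]]:
        buffer.remove(victim)
        buffer.append(frame)
        return True
    return False  # new frame is the least important: drop it

# Example: a B-frame is evicted to make room for a later I-frame.
buffer = []
for frame in [{"type": "B"}, {"type": "P"}, {"type": "I"}]:
    enqueue(buffer, frame, capacity=2)
```
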