
    DEEP LEARNING FOR IMAGE RESTORATION AND ROBOTIC VISION

    Traditional model-based approaches require the formulation of a mathematical model, and such models often have limited performance. The quality of an image may degrade for a variety of reasons: the scene may be affected by weather conditions such as haze, rain, and snow, or noise may be introduced during image processing and transmission (e.g., compression artifacts). The goal of image restoration is to restore the image to a desirable quality, both subjectively and objectively. Agricultural robotics is also gaining interest, since most agricultural work is lengthy and repetitive, and computer vision is crucial to robots, especially autonomous ones. However, it is challenging to devise a precise mathematical model that describes the aforementioned problems. Compared with the traditional approach, the learning-based approach has an edge since it does not require any model to describe the problem. Moreover, the learning-based approach now delivers best-in-class performance on most vision problems, such as image dehazing, super-resolution, and image recognition. In this dissertation, we address the problems of image restoration and robotic vision with deep learning. These two problems are closely related from a network-architecture perspective: it is essential to select an appropriate network for each problem. Specifically, we solve the problems of single image dehazing, High Efficiency Video Coding (HEVC) loop filtering and super-resolution, and computer vision for an autonomous robot. Our technical contributions are threefold. First, we propose to reformulate haze as a signal-dependent noise, which allows us to remove it by learning a structural residual. Based on this reformulation, we solve dehazing with a recursive deep residual network and a generative adversarial network, which emphasize objective and perceptual quality, respectively. Second, we replace traditional filters in HEVC with a Convolutional Neural Network (CNN) filter. We show that our CNN filter achieves a 7% BD-rate saving compared with traditional filters such as the bilateral and deblocking filters. We also propose to incorporate a multi-scale CNN super-resolution module into HEVC; such a post-processing module can improve visual quality under extremely low bandwidth. Third, a transfer learning technique is implemented to support vision and autonomous decision making for a precision pollination robot. Good experimental results are reported with real-world data.
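    The haze reformulation described above admits a compact illustration. The following PyTorch sketch is illustrative only and is not the dissertation's actual architecture: a plain residual CNN predicts the haze component, treated as signal-dependent noise, and the restored image is obtained by subtracting that residual from the hazy input. The layer count and channel width are arbitrary assumptions.

```python
# Minimal sketch of residual learning for dehazing (illustrative only; not the
# dissertation's recursive residual network or GAN). The network estimates the
# haze residual, and the clean image is recovered as input minus residual.
import torch
import torch.nn as nn

class ResidualDehazeNet(nn.Module):
    def __init__(self, channels=64, num_layers=5):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 3, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, hazy):
        residual = self.body(hazy)   # estimated signal-dependent haze component
        return hazy - residual       # restored (dehazed) image

hazy = torch.rand(1, 3, 128, 128)    # dummy hazy image
clean = ResidualDehazeNet()(hazy)
```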

    Overview of the Low Complexity Enhancement Video Coding (LCEVC) Standard

    The Low Complexity Enhancement Video Coding (LCEVC) specification is a recent standard approved by ISO/IEC JTC 1/SC 29/WG 04 (MPEG Video Coding). The main goal of LCEVC is to provide a standalone toolset for the enhancement of any other existing codec. It works on top of other coding schemes, resulting in a multi-layer video coding technology, but unlike existing scalable video codecs, it adds enhancement layers that are completely independent of the base video. LCEVC takes as input the decoded video at a lower resolution and adds up to two enhancement sub-layers of residuals encoded with specialized low-complexity coding tools, such as simple temporal prediction, a frequency transform, quantization, and entropy encoding. This paper provides an overview of the main features of the LCEVC standard: high compression efficiency, low complexity, and minimal memory and processing-power requirements.
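    The layered structure described above can be summarized in a short sketch. The Python snippet below is an illustrative, non-normative outline of the decoder-side data flow under stated assumptions: the base codec's output arrives at lower resolution, a first sub-layer of residuals is added, the result is upsampled to full resolution, and a second sub-layer of residuals (optionally combined with a simple temporal prediction) is added. All function names and the nearest-neighbour upsampler are placeholders, not the standard's actual tools.

```python
# Minimal sketch of an LCEVC-style decoder-side data flow (illustrative only;
# helper names are placeholders, not the normative LCEVC decoding process).
import numpy as np

def upsample(frame, factor=2):
    # Nearest-neighbour upsampling stands in for the standard's upsampler.
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def lcevc_style_decode(base_frame, sublayer1_residuals, sublayer2_residuals,
                       temporal_buffer=None):
    recon = base_frame + sublayer1_residuals        # enhancement sub-layer 1
    recon = upsample(recon)                         # to full output resolution
    if temporal_buffer is not None:                 # simple temporal prediction
        sublayer2_residuals = sublayer2_residuals + temporal_buffer
    return recon + sublayer2_residuals              # enhancement sub-layer 2

base = np.zeros((540, 960))                         # e.g. half-resolution base
out = lcevc_style_decode(base, np.zeros_like(base), np.zeros((1080, 1920)))
```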

    Challenges and solutions in H.265/HEVC for integrating consumer electronics in professional video systems


    New inter-frame prediction methods for image and video compression

    Due to the wide availability of video cameras and new social media practices, as well as the emergence of cloud services, images and videos today constitute a significant share of the total data transmitted over the internet. Video streaming applications account for more than 70% of the world's internet bandwidth, while billions of images are already stored in the cloud and millions are uploaded every day. The ever-growing streaming and storage requirements of these media call for constant improvement of image and video coding tools. This thesis explores novel approaches for improving current inter-prediction methods. Such methods leverage redundancies between similar frames and were originally developed in the context of video compression. In a first approach, novel global and local inter-prediction tools are combined to improve the efficiency of image-set compression schemes based on video codecs. By combining a global geometric and photometric compensation with a locally linear prediction, significant improvements can be obtained. A second approach is then proposed which introduces a region-based inter-prediction scheme. The proposed method improves coding performance compared to existing solutions by estimating and compensating geometric and photometric distortions at a semi-local level. This approach is then adapted and validated in the context of video compression. Bit-rate improvements are obtained, especially for sequences displaying complex real-world motions such as zooms and rotations. The last part of the thesis focuses on deep learning approaches for inter-prediction. Deep neural networks have shown striking results on a large number of computer vision tasks in recent years. Deep-learning-based methods proposed for frame interpolation are studied here in the context of video compression. Coding performance improvements over traditional motion estimation and compensation methods highlight the potential of these deep architectures.
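    To make the combination of global and local prediction concrete, the sketch below illustrates the general idea under simplifying assumptions: a global photometric (gain/offset) and geometric compensation of a reference frame (a plain translation stands in for a full geometric warp), followed by a per-block linear least-squares refinement standing in for the locally linear prediction stage. All helpers and parameters are hypothetical; in an actual codec the per-block parameters would be estimated at the encoder and signaled to the decoder.

```python
# Illustrative sketch: global geometric/photometric compensation of a reference
# frame followed by a local (per-block) linear refinement. Not the thesis's
# actual method; helpers and parameters are hypothetical.
import numpy as np

def global_compensation(reference, gain=1.0, offset=0.0, shift=(0, 0)):
    # Global photometric model (gain/offset) plus a simple translation standing
    # in for a full geometric (e.g. homographic) warp.
    warped = np.roll(reference, shift=shift, axis=(0, 1))
    return gain * warped + offset

def local_linear_prediction(compensated, target, block=16):
    # Per-block least-squares fit of the compensated reference to the target,
    # mimicking a locally linear prediction stage (encoder-side estimation).
    pred = np.empty_like(target)
    h, w = target.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            ref = compensated[y:y + block, x:x + block]
            tgt = target[y:y + block, x:x + block]
            A = np.stack([ref.ravel(), np.ones(ref.size)], axis=1)
            a, b = np.linalg.lstsq(A, tgt.ravel(), rcond=None)[0]
            pred[y:y + block, x:x + block] = a * ref + b
    return pred

target = np.random.rand(64, 64)
reference = 0.9 * np.roll(target, 2, axis=1) + 0.05   # shifted, dimmed reference
prediction = local_linear_prediction(
    global_compensation(reference, shift=(0, -2)), target)
```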