7 research outputs found

    Merge Operation Effect On Image Compression Using Fractal Technique

    Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but suffers from a high encoding time. Fractal image compression requires partitioning the image into range blocks. In this work, we introduce an improved partitioning process based on a merge approach, exploiting the fact that some range blocks are connected to others. This paper presents a method to reduce the encoding time of the technique by reducing the number of range blocks, merging them according to statistical measures computed between neighbouring blocks. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the decoded image quality visually acceptable.
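
    A minimal sketch of the kind of range-block merging the abstract describes: adjacent range blocks whose statistics are close get grouped so that fewer ranges have to be encoded. The choice of mean and standard deviation as measures, the 8x8 block size, the tolerances, and the left-neighbour-only merging are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def merge_ranges(image: np.ndarray, block: int = 8,
                 mean_tol: float = 4.0, std_tol: float = 4.0) -> np.ndarray:
    """Group horizontally adjacent, statistically similar blocks under one label."""
    h, w = image.shape
    by, bx = h // block, w // block
    labels = -np.ones((by, bx), dtype=int)
    stats = {}          # label -> (mean, std) of the block that started the group
    next_label = 0
    for i in range(by):
        for j in range(bx):
            blk = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            m, s = blk.mean(), blk.std()
            left = labels[i, j - 1] if j > 0 else -1
            if left >= 0 and abs(stats[left][0] - m) < mean_tol \
                         and abs(stats[left][1] - s) < std_tol:
                labels[i, j] = left          # merge with the left neighbour
            else:
                labels[i, j] = next_label    # start a new (merged) range
                stats[next_label] = (m, s)
                next_label += 1
    return labels  # each label is one merged range, encoded once instead of per block
```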

    Fitting and tracking of a scene model in very low bit rate video coding

    Nouvelles mĂ©thodes de prĂ©diction inter-images pour la compression d’images et de vidĂ©os

    Due to the wide availability of video cameras and new social media practices, as well as the emergence of cloud services, images and videos constitute today a significant share of the total data transmitted over the internet. Video streaming applications account for more than 70% of the world's internet bandwidth, while billions of images are already stored in the cloud and millions are uploaded every day. The ever-growing streaming and storage requirements of these media call for constant improvements of image and video coding tools. This thesis explores novel approaches for improving current inter-prediction methods. Such methods leverage redundancies between similar frames and were originally developed in the context of video compression. In a first approach, novel global and local inter-prediction tools are combined to improve the efficiency of image-set compression schemes based on video codecs. By coupling a global geometric and photometric compensation with a locally linear prediction, significant improvements can be obtained. A second approach then introduces a region-based inter-prediction scheme. The proposed method improves coding performance over existing solutions by estimating and compensating geometric and photometric distortions at a semi-local level. This approach is then adapted and validated in the context of video compression; bit-rate savings are obtained, especially for sequences displaying complex real-world motions such as zooms and rotations. The last part of the thesis focuses on deep learning approaches for inter-prediction. Deep neural networks have shown striking results for a large number of computer vision tasks over the last years. Deep-learning-based methods originally proposed for frame interpolation are studied here in the context of video compression. Coding performance improvements over traditional motion estimation and compensation methods highlight the potential of these deep architectures.

    Regionenbasierte Partitionierung bei fraktaler Bildkompression mit Quadtrees

    Fractal image coding is a powerful technique for compressing image data. This thesis investigates two different approaches to the partitioning of the image to be coded that fractal coding requires. Both belong to the region-based, highly adaptive partitioning methods, in which the image is first decomposed into basic blocks that are subsequently merged into suitable regions. In the first method, already studied in detail in earlier work, the base partition consists of square blocks of equal size. In the second method under investigation, the basic blocks are generated by a quadtree decomposition and therefore have different sizes. After applying a corresponding region-merging procedure, the resulting partitions differ both in structure and in the number of bits required to store them. On the one hand, the region-based partitions built on quadtrees have a more rectilinear structure and can therefore be compressed better with arithmetic coding than region-based partitions built on uniform basic blocks. On the other hand, the quadtree-based approach yields a measurably lower quality of the decoded image for the same number of regions. These differences are examined and explained in this thesis. Existing approaches from the literature are taken up, and further methods are presented that lead to a more efficient storage of the partition. Experiments have shown that, with the proposed refinements, the quadtree-based approach achieves slightly better results with respect to the reconstruction error than the uniform approach. The values obtained represent the currently best results for fractal image compression in the spatial domain. The quadtree scheme is also advantageous with regard to fast encoding compared to the uniform approach: better image quality is achieved in shorter encoding time.
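
    A hedged sketch of the two-stage idea described above: a quadtree split into basic blocks followed by a simple grouping of similar blocks. The variance and mean criteria, the thresholds, and the assumption of a square power-of-two image are illustrative; the thesis's actual merging criterion is not reproduced here, and a real region merge would also require spatial adjacency, which is omitted for brevity.

```python
import numpy as np

def quadtree_blocks(img, y=0, x=0, size=None, min_size=4, var_thresh=100.0):
    """Recursively split a square (power-of-two) image into blocks of low variance."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        return [(y, x, size)]          # homogeneous or minimal block: stop splitting
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += quadtree_blocks(img, y + dy, x + dx, half, min_size, var_thresh)
    return blocks

def merge_similar(img, blocks, mean_tol=5.0):
    """Greedily group quadtree blocks whose mean intensities are close."""
    regions = []
    for (y, x, s) in blocks:
        m = img[y:y + s, x:x + s].mean()
        for region in regions:
            if abs(region["mean"] - m) < mean_tol:
                region["blocks"].append((y, x, s))
                break
        else:
            regions.append({"mean": m, "blocks": [(y, x, s)]})
    return regions  # each region would be coded with one fractal transform
```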

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
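
    To make the "affine transforms on image blocks" concrete, a hedged sketch of the standard per-block step in fractal coding: for a range block r, find the scale s and offset o such that s*d + o on a downsampled domain block d approximates r in the least-squares sense. The non-overlapping domain grid, the fixed 8x8 range size, and the omission of isometries are illustrative simplifications, not the dissertation's exact setup.

```python
import numpy as np

def fit_affine(d: np.ndarray, r: np.ndarray):
    """Least-squares scale s and offset o minimising ||s*d + o - r||^2."""
    d, r = d.ravel().astype(float), r.ravel().astype(float)
    n = d.size
    denom = n * (d @ d) - d.sum() ** 2
    s = 0.0 if denom == 0 else (n * (d @ r) - d.sum() * r.sum()) / denom
    o = (r.sum() - s * d.sum()) / n
    err = ((s * d + o - r) ** 2).sum()
    return s, o, err

def encode_range(image: np.ndarray, ry: int, rx: int, rsize: int = 8):
    """Search domain blocks (twice the range size, averaged down 2x2) for the
    affine map that best approximates the range block at (ry, rx)."""
    r = image[ry:ry + rsize, rx:rx + rsize]
    dsize = 2 * rsize
    h, w = image.shape
    best = None
    for y in range(0, h - dsize + 1, dsize):
        for x in range(0, w - dsize + 1, dsize):
            d = image[y:y + dsize, x:x + dsize]
            d = d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))  # downsample 2x2
            s, o, err = fit_affine(d, r)
            if best is None or err < best[-1]:
                best = (y, x, s, o, err)
    return best  # (domain position, scale, offset, approximation error)
```

    The degree to which such maps approximate the image well, relative to what generic codebooks achieve, is exactly the "self-affinity" property the abstract examines.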