    Efficient Coding of Shape and Transparency for Video Objects

    The contour tree image encoding technique and file format

    The process of contourization converts a raster image into a discrete set of plateaux, or contours. These contours can be grouped into a hierarchical structure defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. The contour tree can be simplified by merging contour tree nodes, lowering its entropy; the contour coder can exploit this to increase the image compression ratio. By applying simple, general rules derived from physiological experiments on the human visual system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a lossy compression system complementary to the QDCT (Quantised Discrete Cosine Transform). The artifacts introduced by the two methods are very different: QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights and introduces a degree of false contouring. A format based on the contourization technique which caters for most image types, called the contour tree image format, is defined. Image operations directly on this compressed format have been studied; for certain manipulations these can offer significant speed increases over a standard raster image format. A couple of examples of operations specific to the contour tree format are presented, showing some of the features of the new format. (Funded by the Science and Engineering Research Council.)
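
    A minimal sketch of the two steps described above, under simplifying assumptions: `contourize` labels the plateaux (maximal 4-connected regions of equal grey value) and `merge_plateaus` greedily merges neighbouring plateaux whose grey levels differ by no more than a visibility threshold, which is the lossy contour-merging simplification in spirit. The function names and the flat merge rule are illustrative, not the paper's coder; in particular, the full contour tree (the inclusion hierarchy) is not built here.

```python
# Illustrative sketch only: plateau labelling plus threshold-based plateau
# merging; the full contour tree and the actual contour coder are omitted.
from collections import deque

def contourize(img):
    """Label each pixel with a plateau id (4-connected equal-value regions)."""
    h, w = len(img), len(img[0])
    labels = [[-1] * w for _ in range(h)]
    values = []                                   # plateau id -> grey value
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            pid, val = len(values), img[sy][sx]
            values.append(val)
            labels[sy][sx] = pid
            q = deque([(sy, sx)])
            while q:                              # flood-fill one plateau
                y, x = q.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and \
                            labels[ny][nx] == -1 and img[ny][nx] == val:
                        labels[ny][nx] = pid
                        q.append((ny, nx))
    return labels, values

def merge_plateaus(img, labels, values, threshold):
    """Merge touching plateaus whose grey levels differ by <= threshold."""
    h, w = len(img), len(img[0])
    parent = list(range(len(values)))             # union-find forest
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y + 1, x), (y, x + 1)):
                if ny < h and nx < w:
                    a, b = find(labels[y][x]), find(labels[ny][nx])
                    if a != b and abs(values[a] - values[b]) <= threshold:
                        parent[b] = a             # fewer contours, lower entropy
    # Rebuild the simplified image; each merged group keeps its root's value.
    return [[values[find(labels[y][x])] for x in range(w)] for y in range(h)]

demo = [[10, 10, 12], [10, 11, 12], [40, 40, 12]]
labels, values = contourize(demo)
print(merge_plateaus(demo, labels, values, threshold=2))
```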

    Context-based coding of bilevel images enhanced by digital straight line analysis

    Band Ordering in Lossless Compression of Multispectral Images

    In this paper, we consider a model of lossless image compression in which each band of a multispectral image is coded using a prediction function involving values from a previously coded band, and we examine how the ordering of the bands affects the achievable compression. We present an efficient algorithm for computing the optimal band ordering for a multispectral image. This algorithm has time complexity O(n²) for an n-band image, while the naive algorithm takes time Ω(n!). A slight variant of the optimal ordering problem, motivated by some practical concerns, is shown to be NP-hard and hence computationally infeasible in all but the most trivial cases. In addition, we report experimental findings from applying the algorithms designed in this paper to real multispectral satellite data. The results show that the techniques described here hold great promise for application to real-world compression needs.
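
    One plausible reading of the O(n²) bound: with a symmetric pairwise prediction cost, finding the cheapest band ordering reduces to building a minimum spanning tree over the complete band graph, and Prim's algorithm on a complete graph runs in O(n²). The sketch below is a hedged illustration of that reading; the residual-entropy cost under a trivial difference predictor is an assumption for illustration, and the paper's predictor and cost model may differ.

```python
# Illustrative sketch: pairwise residual-entropy costs plus a Prim-style
# greedy ordering; predictor and cost model are assumptions, not the paper's.
import math

def residual_entropy(ref, tgt):
    """Zeroth-order entropy (bits/sample) of tgt after subtracting ref."""
    hist = {}
    for r, t in zip(ref, tgt):
        hist[t - r] = hist.get(t - r, 0) + 1
    n = len(tgt)
    return -sum(c / n * math.log2(c / n) for c in hist.values())

def order_bands(bands):
    """Prim's algorithm on the complete cost graph: O(n^2) for n bands.
    Each new band is attached to the already-coded band that predicts it
    most cheaply; the insertion order is the coding order."""
    n = len(bands)
    cost = [[residual_entropy(bands[i], bands[j]) for j in range(n)]
            for i in range(n)]
    order, coded = [0], {0}                       # code band 0 standalone
    best = cost[0][:]                             # cheapest link into the tree
    while len(coded) < n:
        j = min((k for k in range(n) if k not in coded), key=lambda k: best[k])
        order.append(j)
        coded.add(j)
        for k in range(n):
            if k not in coded and cost[j][k] < best[k]:
                best[k] = cost[j][k]
    return order

# Three toy 4-pixel bands; bands 0 and 1 predict each other perfectly.
bands = [[1, 2, 3, 4], [2, 3, 4, 5], [9, 9, 8, 8]]
print(order_bands(bands))                         # -> [0, 1, 2]
```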

    The CCSDS 123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression Standard: A comprehensive review

    The Consultative Committee for Space Data Systems (CCSDS) published the CCSDS 123.0-B-2, “Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression” standard. This standard extends the previous issue, CCSDS 123.0-B-1, which supported only lossless compression, while maintaining backward compatibility. The main novelty of the new issue is support for near-lossless compression, i.e., lossy compression with user-defined absolute and/or relative error limits in the reconstructed images. This new feature is achieved via closed-loop quantization of prediction errors. Two further additions arise from the new near-lossless support: first, the calculation of predicted sample values using sample representatives that may not be equal to the reconstructed sample values, and, second, a new hybrid entropy coder designed to provide enhanced compression performance for low-entropy data, prevalent when non-lossless compression is used. These new features enable significantly smaller compressed data volumes than those achievable with CCSDS 123.0-B-1 while controlling the quality of the decompressed images. As a result, larger amounts of valuable information can be retrieved given a set of bandwidth and energy consumption constraints.
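
    The closed-loop quantization named above can be illustrated with a minimal sketch: the encoder quantizes each prediction residual under a user-defined absolute error limit and feeds the reconstructed (not the original) sample back into the predictor, so encoder and decoder predictions stay synchronized and the per-sample error is bounded. The previous-sample predictor and the uniform mid-tread quantizer below are illustrative assumptions, not the CCSDS 123.0-B-2 predictor or coder.

```python
# Minimal closed-loop (in-loop) near-lossless quantization sketch.
# Guarantee: |reconstruction - sample| <= a_max for every sample.

def encode_near_lossless(samples, a_max):
    """Return quantizer indices using a previous-sample predictor."""
    step = 2 * a_max + 1
    prev, indices = 0, []
    for s in samples:
        residual = s - prev                       # predict from *reconstruction*
        q = (residual + a_max) // step if residual >= 0 \
            else -((-residual + a_max) // step)   # uniform mid-tread quantizer
        indices.append(q)
        prev = prev + q * step                    # closed loop: feed back recon
    return indices

def decode_near_lossless(indices, a_max):
    """Mirror the encoder's loop exactly, so the states never drift apart."""
    step = 2 * a_max + 1
    prev, out = 0, []
    for q in indices:
        prev = prev + q * step
        out.append(prev)
    return out

data = [100, 103, 101, 90, 95]
idx = encode_near_lossless(data, a_max=2)
rec = decode_near_lossless(idx, a_max=2)
assert all(abs(r - s) <= 2 for r, s in zip(rec, data))
print(idx, rec)
```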

    Depth-based Multi-View 3D Video Coding

    Distributed single source coding with side information

    A Universal Scheme for Wyner–Ziv Coding of Discrete Sources

    We consider the Wyner–Ziv (WZ) problem of lossy compression where the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm that takes advantage of side information. The new algorithms not only universally attain the fundamental limits but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
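
    A toy, hedged sketch of the decoder component named above: a two-pass binary DUDE whose contexts are augmented with a co-located side-information symbol, for a binary symmetric channel with known crossover probability delta. The context shape and the way side information enters the context are illustrative assumptions; the sliding-window processing and LZ compression stages of the full scheme are omitted.

```python
# Binary DUDE with side-information-augmented contexts (toy sketch).
# Pass 1 gathers context statistics; pass 2 applies the standard binary
# DUDE flip rule: flip z_i when m(c, 1-z_i)/m(c, z_i) exceeds
# ((1-d)^2 + d^2) / (2 d (1-d)) for BSC crossover d.
from collections import Counter

def dude_with_side_info(z, y, delta):
    """Denoise bit sequence z using side info y; BSC crossover delta."""
    n = len(z)
    counts = Counter()
    for i in range(1, n - 1):                  # pass 1: context statistics
        ctx = (z[i - 1], z[i + 1], y[i])       # neighbours + side info bit
        counts[(ctx, z[i])] += 1
    thresh = ((1 - delta) ** 2 + delta ** 2) / (2 * delta * (1 - delta))
    out = list(z)
    for i in range(1, n - 1):                  # pass 2: flip decisions
        ctx = (z[i - 1], z[i + 1], y[i])
        same, flip = counts[(ctx, z[i])], counts[(ctx, 1 - z[i])]
        if flip > thresh * same:               # binary DUDE flip rule
            out[i] = 1 - z[i]
    return out

# Demo: two isolated bit flips; side info is taken equal to the clean
# source here purely to keep the example short.
clean = [0] * 50 + [1] * 50
noisy = list(clean)
noisy[10], noisy[70] = 1, 0
denoised = dude_with_side_info(noisy, clean, delta=0.1)
print(sum(a != b for a, b in zip(denoised, clean)))   # -> 0
```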

    Depth Map Coding by Deformation of Elastic Curves

    In multiple-view video plus depth, depth maps can be represented as grayscale images, and the corresponding temporal sequence can be treated as a standard grayscale video sequence. However, depth maps have properties different from those of natural images: they contain large areas of smooth surfaces separated by sharp edges. Arguably, the most important information lies in the object contours; consequently, an interesting approach is to perform lossless coding of the contour map, possibly followed by lossy coding of the per-object depth values. In this context, we propose a new technique for the lossless coding of object contours, based on the elastic deformation of curves. A continuous evolution of elastic deformations between two reference contour curves can be modelled, and an elastically deformed version of the reference contours can be sent to the decoder at an extremely small coding cost and used as side information to improve the lossless coding of the actual contour. Once the main discontinuities have been captured by the contour description, the depth field inside each region is rather smooth. We propose and test two different techniques for coding the depth field inside each region. The first applies the shape-adaptive wavelet transform followed by the shape-adaptive version of SPIHT. The second predicts the depth field from its subsampled version and the set of coded contours. It is generally recognized that high-quality view rendering at the receiver side is possible only if the contour information is preserved, since distortions on edges introduced during encoding would cause a noticeable degradation of the synthesized view and of the 3D perception. We investigate this claim by conducting a subjective quality assessment comparing an object-based technique and a hybrid block-based technique for the coding of depth maps.
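
    A hedged sketch of the second depth-coding technique described above: reconstruct the full-resolution depth field from its subsampled version plus the region map implied by the coded contours, predicting each pixel only from transmitted samples inside the same region so that no prediction crosses an edge. The nearest-same-region-sample rule is an illustrative choice, not necessarily the predictor used in the thesis.

```python
# Edge-aware depth prediction from a subsampled depth field (toy sketch).
# regions: full-resolution label map derived from the coded contours;
# sub: depth values kept on a regular grid every `factor` pixels.

def predict_depth(sub, regions, factor):
    h, w = len(regions), len(regions[0])
    # Positions and values of the transmitted (subsampled) depth samples.
    samples = [(y * factor, x * factor, sub[y][x])
               for y in range(len(sub)) for x in range(len(sub[0]))]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r = regions[y][x]
            # Nearest transmitted sample in the *same* region (L1 distance),
            # so smooth interpolation never leaks across a contour.
            best = min(((abs(sy - y) + abs(sx - x), v)
                        for sy, sx, v in samples if regions[sy][sx] == r),
                       default=(None, 0))
            out[y][x] = best[1]
    return out

regions = [[0, 0, 1, 1]] * 4          # two flat regions split by a contour
sub = [[10, 80], [10, 80]]            # depth kept every 2 pixels
print(predict_depth(sub, regions, factor=2))
```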

    Efficient Algorithms for Large-Scale Image Analysis

    This work develops highly efficient algorithms for analyzing large images. Applications include object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm-engineering methodology, so that both may also be applied to other applications.
    • 

    corecore