138 research outputs found

    Simple and fast subband de-blocking technique by discarding the high band signals

    Get PDF
    In this paper, we propose a simple and fast post-processing de-blocking technique to reduce blocking artifacts. The block-based coded image is first decomposed into several subbands. Only the low-frequency subband signals are retained and the high-frequency subband signals are discarded. The remaining subband signals are then reconstructed to obtain a less blocky image. The idea is demonstrated with a cosine filter bank and a modulated sine filter bank. Simulation results show that the proposed algorithm is effective in reducing blocking artifacts.
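
    As an illustration of the keep-low/discard-high idea, the sketch below substitutes an off-the-shelf wavelet filter bank (PyWavelets) for the cosine and modulated sine filter banks used in the paper; the wavelet and decomposition depth are arbitrary choices, not the authors' settings.

        import numpy as np
        import pywt

        def deblock_lowband(image, wavelet="db2", levels=1):
            """Decompose into subbands, zero the high-frequency subbands,
            and reconstruct -- a crude low-pass de-blocking step."""
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
            # keep the approximation band, discard every detail band
            coeffs = [coeffs[0]] + [
                tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]
            ]
            smoothed = pywt.waverec2(coeffs, wavelet)
            # crop any padding added for odd-sized images
            return np.clip(smoothed[: image.shape[0], : image.shape[1]], 0, 255)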

    A family of stereoscopic image compression algorithms using wavelet transforms

    Get PDF
    With the standardization of JPEG-2000, wavelet-based image and video compression technologies are gradually replacing the popular DCT-based methods. In parallel to this, recent developments in autostereoscopic display technology are now threatening to revolutionize the way in which consumers are used to enjoying traditional 2D display-based electronic media such as television, computers and movies. However, due to the two-fold bandwidth/storage space requirement of stereoscopic imaging, an essential requirement of a stereo imaging system is efficient data compression. In this thesis, seven wavelet-based stereo image compression algorithms are proposed to take advantage of the higher data compaction capability and better flexibility of wavelets. In the proposed CODEC I, block-based disparity estimation/compensation (DE/DC) is performed in the pixel domain. However, this results in an inefficiency when the DWT is applied to the whole predictive error image produced by the DE process, because of the artificial block boundaries between error blocks in the predictive error image. To overcome this problem, in the remaining proposed CODECs, DE/DC is performed in the wavelet domain. Due to the multiresolution nature of the wavelet domain, two methods of disparity estimation and compensation have been proposed. The first method performs DE/DC in each subband of the lowest/coarsest resolution level and then propagates the disparity vectors obtained to the corresponding subbands of higher/finer resolution. Note that DE is not performed in every subband due to the high overhead bits that would be required for coding the disparity vectors of all subbands. This method is used in CODEC II. In the second method, DE/DC is performed in the wavelet-block domain. This enables disparity estimation to be performed in all subbands simultaneously without increasing the overhead bits required for coding the disparity vectors. This method is used by CODEC III. Performing disparity estimation/compensation in all subbands results in a significant improvement in the performance of CODEC III. To further improve the performance of CODEC III, a pioneering wavelet-block search technique is implemented in CODEC IV. The pioneering wavelet-block search technique enables the right/predicted image to be reconstructed at the decoder end without the need to transmit the disparity vectors. In the proposed CODEC V, pioneering block search is performed in all subbands of the DWT decomposition, which results in an improvement of its performance. Further, CODECs IV and V are able to perform at very low bit rates (< 0.15 bpp). In CODEC VI and CODEC VII, Overlapped Block Disparity Compensation (OBDC) is used with and without the need to code disparity vectors. Our experimental results showed that no significant coding gains could be obtained for these CODECs over CODECs IV and V. All proposed CODECs in this thesis are wavelet-based stereo image coding algorithms that maximise the flexibility and benefits offered by wavelet transform technology when applied to stereo imaging. In addition, the use of a baseline-JPEG coding architecture would enable easy adaptation of the proposed algorithms within systems originally built for DCT-based coding. This is an important feature that would be useful during an era in which DCT-based technology is only slowly being phased out to give way to DWT-based compression technology.
    In addition, this thesis proposes a stereo image coding algorithm that uses JPEG-2000 technology as the basic compression engine. The proposed CODEC, named RASTER, is a rate-scalable stereo image CODEC that has a unique ability to preserve image quality at binocular depth boundaries, an important requirement in the design of stereo image CODECs. The experimental results have shown that the proposed CODEC is able to achieve PSNR gains of up to 3.7 dB compared to directly transmitting the right frame using JPEG-2000.
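
    The pixel-domain block-based DE/DC of CODEC I can be pictured with a minimal exhaustive block-matching sketch like the one below; the block size, search range and SAD criterion are illustrative assumptions rather than the thesis's settings, and the wavelet-domain and pioneering-search variants are not shown.

        import numpy as np

        def estimate_disparity(left, right, block=8, max_disp=32):
            """Per-block horizontal disparity by exhaustive SAD search
            (pixel-domain DE). Assumes equal-sized greyscale arrays whose
            dimensions are multiples of `block`; the sign convention is
            arbitrary."""
            h, w = left.shape
            disp = np.zeros((h // block, w // block), dtype=int)
            for by in range(h // block):
                for bx in range(w // block):
                    y, x = by * block, bx * block
                    target = left[y:y + block, x:x + block].astype(float)
                    best, best_d = np.inf, 0
                    for d in range(0, min(max_disp, x) + 1):
                        cand = right[y:y + block, x - d:x - d + block].astype(float)
                        sad = np.abs(target - cand).sum()   # sum of absolute differences
                        if sad < best:
                            best, best_d = sad, d
                    disp[by, bx] = best_d
            return disp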
    Distortion-constraint compression of three-dimensional CLSM images using image pyramid and vector quantization

    Get PDF
    The confocal microscopy imaging techniques, which allow optical sectioning, have been successfully exploited in biomedical studies. Biomedical scientists can benefit from more realistic visualization and much more accurate diagnosis by processing and analysing three-dimensional image data. The lack of efficient image compression standards makes such large volumetric image data slow to transfer over limited-bandwidth networks. It also imposes large storage space requirements and high costs in archiving and maintenance. Conventional two-dimensional image coders do not take into account inter-frame correlations in three-dimensional image data. The standard multi-frame coders, like video coders, although they perform well in capturing motion information, are not efficiently designed for coding multiple frames representing a stack of optical planes of a real object. Therefore a real three-dimensional image compression approach should be investigated. Moreover, the reconstructed image quality is a very important concern in compressing medical images, because it can be directly related to diagnosis accuracy. Most state-of-the-art methods are based on transform coding; for instance, JPEG is based on the discrete cosine transform (DCT) and JPEG2000 is based on the discrete wavelet transform (DWT). However, in DCT and DWT methods the control of the reconstructed image quality is inconvenient and involves considerable computational cost, since they are fundamentally rate-parameterized rather than distortion-parameterized methods. Therefore it is very desirable to develop a transform-based distortion-parameterized compression method, which is expected to have high coding performance and also to be able to conveniently and accurately control the final distortion according to a user-specified quality requirement. This thesis describes our work in developing a distortion-constraint three-dimensional image compression approach, using vector quantization techniques combined with image pyramid structures. We expect our method to have: 1. High coding performance in compressing three-dimensional microscopic image data, compared to the state-of-the-art three-dimensional image coders and other standardized two-dimensional image coders and video coders. 2. Distortion-control capability, which is a very desirable feature in medical image compression applications and is superior to rate-parameterized methods in achieving a user-specified quality requirement. The result is a three-dimensional image compression method which has outstanding compression performance, measured objectively, for volumetric microscopic images. The distortion-constraint feature, by which users can expect to achieve a target image quality rather than a compressed file size, offers more flexible control of the reconstructed image quality than its rate-constraint counterparts in medical image applications. Additionally, it effectively reduces the artifacts present in other approaches at low bit rates and also attenuates noise in the pre-compressed images. Furthermore, its advantages in progressive transmission and fast decoding make it suitable for bandwidth-limited telecommunications and web-based image browsing applications.
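
    For readers unfamiliar with vector quantization, the sketch below shows plain codebook VQ of non-overlapping blocks trained with k-means; it is not the distortion-constrained pyramid VQ developed in the thesis, and the block and codebook sizes are arbitrary assumptions.

        import numpy as np
        from scipy.cluster.vq import kmeans, vq

        def vq_compress(image, block=4, codebook_size=64):
            """Train a k-means codebook on the image's own non-overlapping
            blocks and store one codebook index per block.
            Assumes image dimensions are multiples of `block`."""
            h, w = image.shape
            blocks = (image.astype(float)
                      .reshape(h // block, block, w // block, block)
                      .swapaxes(1, 2)
                      .reshape(-1, block * block))
            codebook, _ = kmeans(blocks, codebook_size)
            indices, _ = vq(blocks, codebook)
            return indices, codebook

        def vq_decompress(indices, codebook, shape, block=4):
            """Rebuild the image by pasting the indexed codewords back."""
            h, w = shape
            blocks = codebook[indices].reshape(h // block, w // block, block, block)
            return blocks.swapaxes(1, 2).reshape(h, w)

    The rate is then roughly log2(codebook size) bits per block plus the cost of the codebook itself, which is what makes the codebook size the natural tuning knob in such a scheme.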

    Application of multirate digital signal processing to image compression

    Full text link
    With the increasing emphasis on digital communication and digital processing of images and video, image compression is drawing considerable interest as a means of reducing computer storage and communication channel bandwidth requirements. This thesis presents a method for the compression of grayscale images based on a multirate digital signal processing system. The input image spectrum is decomposed into octave-wide subbands by critically resampling and filtering the image using separable FIR digital filters. These filters are chosen to satisfy the perfect reconstruction requirement. Simulation results on rectangularly sampled images (including a text image) are presented. The algorithm is then applied to hexagonally resampled images, and the results show a slight increase in compression efficiency. Comparison of the results against the JPEG standard indicates that this method does not exhibit the blocking effect of JPEG and that it preserves edges even in the presence of high noise levels.
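
    The sketch below illustrates one level of a separable perfect-reconstruction subband split using the Haar pair, the simplest filter bank that satisfies the requirement; the thesis designs its own longer FIR filters and also a hexagonal-resampling variant, neither of which is shown here.

        import numpy as np

        def haar_analysis_2d(img):
            """One level of separable two-band critically sampled analysis
            (rows then columns). Assumes even image dimensions."""
            a = img.astype(float)
            lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)   # row low band
            hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)   # row high band
            ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
            lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
            hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
            hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
            return ll, lh, hl, hh

        def haar_synthesis_2d(ll, lh, hl, hh):
            """Inverse of haar_analysis_2d: recovers the input exactly,
            demonstrating the perfect-reconstruction property."""
            lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
            hi = np.empty_like(lo)
            lo[0::2, :], lo[1::2, :] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
            hi[0::2, :], hi[1::2, :] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
            out = np.empty((lo.shape[0], lo.shape[1] * 2))
            out[:, 0::2], out[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
            return out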

    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Get PDF
    Bibliography: p. 208-225. Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but it transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
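
    The affine block-transform representation can be made concrete with a toy PIFS-style encoder like the sketch below; the block sizes, contractivity bound and brute-force search (with no isometries or block classification) are illustrative simplifications, not any scheme evaluated in the dissertation.

        import numpy as np

        def fractal_encode(img, r=8):
            """Toy PIFS encoder: map each r x r range block to the best-matching
            2r x 2r domain block (averaged down to r x r) under the affine map
            s*D + o. Assumes a greyscale array with dimensions divisible by 2r."""
            img = img.astype(float)
            h, w = img.shape
            domains = []
            for y in range(0, h - 2 * r + 1, 2 * r):
                for x in range(0, w - 2 * r + 1, 2 * r):
                    d = img[y:y + 2 * r, x:x + 2 * r]
                    domains.append(d.reshape(r, 2, r, 2).mean(axis=(1, 3)))
            code = []
            for y in range(0, h - r + 1, r):
                for x in range(0, w - r + 1, r):
                    rng = img[y:y + r, x:x + r]
                    best = None
                    for i, d in enumerate(domains):           # brute-force search
                        if d.std() < 1e-6:                    # flat domain block
                            s, o = 0.0, rng.mean()
                        else:                                 # least-squares s and o
                            s, o = np.polyfit(d.ravel(), rng.ravel(), 1)
                            s = float(np.clip(s, -0.9, 0.9))  # keep the map contractive
                        err = float(((s * d + o - rng) ** 2).sum())
                        if best is None or err < best[0]:
                            best = (err, i, s, o)
                    # store (range position, domain index, contrast s, offset o)
                    code.append(((y, x),) + best[1:])
            return code

    Decoding would iterate the stored maps from an arbitrary starting image until the result converges towards the fixed point, which is what gives fractal coding its rapid, resolution-independent decoding.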

    VHDL design and simulation for embedded zerotree wavelet quantisation

    Get PDF
    This thesis discusses a highly effective still image compression algorithm: the Embedded Zerotree Wavelets (EZW) coding technique. The technique is simple but achieves remarkable results. The image is wavelet-transformed, symbolically coded and successively quantised, so compression and transmission/storage savings can be achieved by exploiting the zerotree structure. The algorithm was first proposed by Jerome M. Shapiro in 1993; however, to minimise memory usage and speed up the EZW processor, a depth-first search method is used to traverse the image rather than the breadth-first search method initially discussed in Shapiro's paper (Shapiro, 1993). The project's primary objective is to simulate the EZW algorithm from a basic building block of an 8 by 8 matrix up to a well-known reference image such as Lenna, a 256 by 256 matrix. Hence the algorithm's performance can be measured; for instance, its peak signal-to-noise ratio can be calculated. The software environment used for the simulation is a Very High Speed Integrated Circuit Hardware Description Language (VHDL) tool, Peak VHDL, PC-based version. This leads to the second phase of the project. The secondary objective is to test the algorithm at the hardware level, such as on an FPGA for a rapid prototype implementation, if project time permits.
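
    A rough sketch of the zerotree idea is given below: a coefficient is a zerotree root when it and all of its quad-tree descendants are insignificant against the current threshold. The symbol names follow Shapiro's paper, but the scan order, subordinate pass, coarsest-band bookkeeping and the actual bit coding are omitted, and the depth-first traversal used in this project is not reproduced; the coefficient layout is assumed to be the standard Mallat arrangement in one square array.

        import numpy as np

        def descendants(y, x, n):
            """Children of coefficient (y, x) under the usual (2y, 2x) quad-tree
            mapping in an n x n Mallat-ordered array. The DC term is treated as
            having no tree here (real EZW handles the coarsest band differently)."""
            if y == 0 and x == 0:
                return []
            kids = [(2 * y, 2 * x), (2 * y, 2 * x + 1),
                    (2 * y + 1, 2 * x), (2 * y + 1, 2 * x + 1)]
            return [(cy, cx) for cy, cx in kids
                    if cy < n and cx < n and (cy, cx) != (y, x)]

        def is_zerotree(c, y, x, T):
            """True if (y, x) and every descendant is insignificant w.r.t. T --
            the condition for a ZTR (zerotree root) symbol."""
            if abs(c[y, x]) >= T:
                return False
            return all(is_zerotree(c, cy, cx, T)
                       for cy, cx in descendants(y, x, c.shape[0]))

        def dominant_pass(c, T):
            """Assign POS/NEG to significant coefficients, ZTR to zerotree roots
            and IZ (isolated zero) otherwise. Real EZW skips descendants of ZTR
            roots in its scan; that bookkeeping is left out of this sketch."""
            symbols = {}
            for y in range(c.shape[0]):
                for x in range(c.shape[1]):
                    if abs(c[y, x]) >= T:
                        symbols[(y, x)] = "POS" if c[y, x] > 0 else "NEG"
                    elif is_zerotree(c, y, x, T):
                        symbols[(y, x)] = "ZTR"
                    else:
                        symbols[(y, x)] = "IZ"
            return symbols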

    Application of Bandelet Transform in Image and Video Compression

    Get PDF
    The need for large-scale storage and transmission of data is growing exponentially with the widespread use of computers, so efficient ways of storing data have become important. With the advancement of technology, the world has found itself amid a vast amount of information, and efficient methods are needed to deal with it. Data compression is a technique which minimizes the size of a file while keeping the quality essentially unchanged, so more data can be stored in a given memory space. There are various image compression standards, such as JPEG, which uses the discrete cosine transform, and JPEG 2000, which uses the discrete wavelet transform. The discrete cosine transform gives excellent compaction for highly correlated information; its computational complexity is low and it has good information-packing ability. However, it produces blocking artifacts, graininess and blurring in the output, which are overcome by the discrete wavelet transform, where the image size is reduced by discarding values smaller than a prespecified quantity without losing much information. But the wavelet transform also has limitations as the complexity of the image increases. Wavelets are optimal for point singularities, but they are not optimal for line and curve singularities, and they do not take into account the image geometry, which is a vital source of redundancy. Here we analyze a new type of basis, known as bandelets, which can be constructed from a wavelet basis and which exploits an important source of regularity: geometric redundancy. The image is decomposed along the direction of its geometry. This is better than other methods because the geometry is described by a flow vector rather than by edges; the flow indicates the direction in which the image intensity varies smoothly. It gives better compression than wavelet bases. A fast subband coding scheme is used for the image decomposition in a bandelet basis, and the approach has been extended to video compression. The bandelet-transform-based image and video compression method is compared with the corresponding wavelet scheme. Performance measures such as peak signal-to-noise ratio (PSNR), compression ratio, bits per pixel (bpp) and entropy are evaluated for both image and video compression.
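
    The evaluation metrics named at the end of the abstract are standard; a minimal version of PSNR, bits per pixel and first-order entropy might look like the sketch below (8-bit greyscale images assumed).

        import numpy as np

        def psnr(original, reconstructed, peak=255.0):
            """Peak signal-to-noise ratio in dB."""
            mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        def bits_per_pixel(compressed_size_bytes, height, width):
            """Average number of coded bits spent per image pixel."""
            return 8.0 * compressed_size_bytes / (height * width)

        def entropy(image):
            """First-order entropy (bits/pixel) of an 8-bit image histogram."""
            hist = np.bincount(image.astype(np.uint8).ravel(), minlength=256)
            p = hist[hist > 0] / hist.sum()
            return float(-(p * np.log2(p)).sum())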

    Méthodes hybrides pour la compression d'image

    Get PDF
    Abstract: The storage and transmission of images is the basis of digital electronic communication. In order to communicate a maximum amount of information in a given period of time, one needs to look for efficient ways to represent the information communicated. Designing optimal representations is the subject of data compression. In this work, the compression methods generally consist of two steps: encoding and decoding. During encoding, one expresses the image with less data than the original and stores this information; during decoding, one decodes the compressed data to display the decompressed image. In Chapter 1, we review some basic compression methods which are important for understanding the concepts of encoding and information theory as tools to build compression models and measure their efficiency. Further on, we focus on transform methods for compression; in particular, we discuss the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) in detail. We also analyse a hybrid method which combines the DCT and DWT to compress image data. For the sake of comparison, we discuss a totally different method, fractal image compression, which compresses image data by taking advantage of the self-similarity of images. We propose a hybrid method of fractal image compression and DCT based on their characteristics. Several experimental results are provided to show the outcome of the comparison between the discussed methods. This allows us to conclude that the hybrid method performs more efficiently and offers relatively better compressed image quality than some of the individual methods, although there is still room for improvement in the future.
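
    As a pointer to the DCT building block used by the hybrid methods, the sketch below applies a blockwise 2-D DCT and keeps only the largest coefficients; the thresholding stands in for real quantisation and entropy coding, and the block size and number of kept coefficients are arbitrary assumptions rather than the thesis's settings.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_compress_block(block, keep=10):
            """JPEG-style step on one block: 2-D DCT, keep only the `keep`
            largest-magnitude coefficients, then inverse DCT."""
            c = dctn(block.astype(float), norm="ortho")
            thresh = np.sort(np.abs(c).ravel())[-keep]
            c[np.abs(c) < thresh] = 0.0          # crude stand-in for quantisation
            return idctn(c, norm="ortho")

        def dct_compress(image, block=8, keep=10):
            """Apply the blockwise DCT step over a greyscale image whose
            dimensions are assumed to be multiples of `block`."""
            h, w = image.shape
            out = np.zeros((h, w), dtype=float)
            for y in range(0, h, block):
                for x in range(0, w, block):
                    out[y:y + block, x:x + block] = dct_compress_block(
                        image[y:y + block, x:x + block], keep)
            return np.clip(out, 0, 255)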

    Digital image compression

    Get PDF