
    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and stimulated widespread research: the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle, that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two normally disparate strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and this work adds another to their number. The tree-structured vector quantizer presented here is online and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work extrapolates some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
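
    Since the argument turns on the orientation selectivity of the conventional separable wavelet transform, a minimal sketch of one level of that transform may help fix ideas. The Haar filters are used purely for brevity (the thesis's actual filters and its more selective transform are not reproduced here); the three detail subbands are the conventional orientation channels whose selectivity the thesis argues is insufficient.

        import numpy as np

        def haar2d(img):
            # One level of the conventional separable 2D Haar wavelet
            # transform on an even-sized grayscale image, yielding the
            # approximation subband and three orientation-detail subbands.
            a = img.astype(float)
            # Rows: orthonormal sums and differences of adjacent pairs.
            lo = (a[:, ::2] + a[:, 1::2]) / np.sqrt(2)
            hi = (a[:, ::2] - a[:, 1::2]) / np.sqrt(2)
            # Columns: the same operation on each half.
            ll = (lo[::2, :] + lo[1::2, :]) / np.sqrt(2)  # approximation
            lh = (lo[::2, :] - lo[1::2, :]) / np.sqrt(2)  # horizontal-edge detail
            hl = (hi[::2, :] + hi[1::2, :]) / np.sqrt(2)  # vertical-edge detail
            hh = (hi[::2, :] - hi[1::2, :]) / np.sqrt(2)  # diagonal detail
            return ll, lh, hl, hh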

    Adaptive Fractal and Wavelet Image Denoising

    The need for image enhancement and restoration is encountered in many practical applications. For instance, distortion due to additive white Gaussian noise (AWGN) can be caused by poor-quality image acquisition, images observed in a noisy environment, or noise inherent in communication channels. In this thesis, image denoising is investigated. After reviewing standard image denoising methods as applied in the spatial, frequency, and wavelet domains of the noisy image, the thesis embarks on developing and experimenting with new image denoising methods based on fractal and wavelet transforms. In particular, three new image denoising methods are proposed: context-based wavelet thresholding, predictive fractal image denoising, and fractal-wavelet image denoising. The proposed context-based thresholding strategy adopts localized hard and soft thresholding operators which take into consideration the content of the immediate neighborhood of a wavelet coefficient before thresholding it. The two fractal-based predictive schemes are based on a simple yet effective algorithm for estimating the fractal code of the original noise-free image from the noisy one. From this predicted code, one can then reconstruct a fractally denoised estimate of the original image. This fractal-based denoising algorithm can be applied in the pixel and wavelet domains of the noisy image using standard fractal and fractal-wavelet schemes, respectively. Furthermore, the cycle-spinning idea is implemented in order to enhance the quality of the fractally denoised estimates. Experimental results show that the proposed image denoising methods are competitive with, and sometimes compare favorably to, the existing image denoising techniques reviewed in the thesis. This work broadens the application scope of fractal transforms, which have been used mainly for image coding and compression purposes.
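
    As background for the context-based strategy, the following is a minimal sketch of the standard hard and soft thresholding operators it builds on, together with one illustrative neighborhood rule. The local-energy switch shown here is an assumption made for illustration only, not the thesis's actual context model.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def hard_threshold(c, t):
            # Keep coefficients whose magnitude exceeds t; zero the rest.
            return c * (np.abs(c) > t)

        def soft_threshold(c, t):
            # Shrink coefficient magnitudes toward zero by t.
            return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

        def context_threshold(c, t, win=3):
            # Illustrative context rule: soft-threshold where the local
            # neighborhood energy is low (likely noise), hard-threshold
            # where it is high (likely signal detail).
            energy = uniform_filter(c ** 2, size=win)
            return np.where(energy < t ** 2,
                            soft_threshold(c, t),
                            hard_threshold(c, t))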

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
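
    The affine block representation described above can be made concrete with a small sketch of the encoding step: each range block is matched against a pool of spatially contracted domain blocks by a least-squares fit of a grayscale scaling s and offset o. This is the textbook construction only; the function and variable names are illustrative, and the dissertation's stochastic-model analysis is not reproduced here.

        import numpy as np

        def encode_range_block(r_blk, domain_pool):
            # Find the best affine match s*D + o over a pool of domain
            # blocks already contracted to the range-block size.
            r = r_blk.ravel().astype(float)
            best = None
            for i, d_blk in enumerate(domain_pool):
                d = d_blk.ravel().astype(float)
                var = d.var()
                # Least-squares grayscale scaling and offset.
                s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var
                s = np.clip(s, -1.0, 1.0)  # keep the map contractive
                o = r.mean() - s * d.mean()
                err = np.sum((s * d + o - r) ** 2)
                if best is None or err < best[3]:
                    best = (i, s, o, err)
            return best  # (domain index, scaling, offset, squared error)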

    Depth-first search embedded wavelet algorithm for hardware implementation

    The emerging technology of image communication over wireless transmission channels requires several new challenges to be met simultaneously at the algorithm and architecture levels. At the algorithm level, desirable features include high coding performance, bit-stream scalability, robustness to transmission errors, and suitability for content-based coding schemes. At the architecture level, we require efficient architectures for the construction of portable devices with small size and low power consumption. An important question is whether a single coding algorithm can be designed to meet these diverse requirements. Recently, researchers working on improving different features have converged on a set of coding schemes commonly known as embedded wavelet algorithms. Currently, these algorithms enjoy the highest coding performances reported in the literature. In addition, embedded wavelet algorithms have the natural feature of being able to meet a target bit rate precisely. Furthermore, work on improving the robustness of these algorithms has shown much promise. The potential of embedded wavelet techniques has been acknowledged by their inclusion in the new JPEG2000 and MPEG-4 image and video coding standards.
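
    The ability of an embedded coder to meet a target bit rate precisely follows from its bit-plane structure: the stream is ordered from most to least significant information, so it can be truncated at any point and still decoded. The toy sketch below shows only that truncation property; real embedded wavelet coders such as EZW, SPIHT, or EBCOT add zerotree and context modelling that this sketch omits.

        import numpy as np

        def embedded_bitplanes(coeffs, max_bits):
            # Emit coefficient bit-planes from most significant to least
            # significant, stopping exactly at the bit budget.
            mags = np.abs(coeffs).astype(int).ravel()
            bits = []
            plane = int(mags.max()).bit_length() - 1
            while plane >= 0 and len(bits) < max_bits:
                for m in mags:
                    bits.append((m >> plane) & 1)
                    if len(bits) == max_bits:
                        return bits  # truncated mid-plane: still decodable
                plane -= 1
            return bits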

    A Scalable, Secure, and Energy-Efficient Image Representation for Wireless Systems

    The recent growth in wireless communications presents a new challenge to multimedia communications. Digital image transmission is a very common form of multimedia communication. Due to the limited bandwidth and broadcast nature of the wireless medium, it is necessary to compress and encrypt images before they are sent. On the other hand, it is important to efficiently utilize the limited energy in wireless devices. In a wireless device, the two major sources of energy consumption are energy used for computation and energy used for transmission. Computation energy can be reduced by minimizing the time spent on compression and encryption. Transmission energy can be reduced by sending a smaller image file obtained by compressing the original highest-quality image. Image quality is often sacrificed in the compression process, so users should have the flexibility to control the image quality and decide whether such a tradeoff is acceptable. It is also desirable for users to have control over image quality in different areas of the image, so that less important areas can be compressed more while retaining the details in important areas. To reduce the computation required for encryption, a partial encryption scheme can be employed to encrypt only the critical parts of an image file, without sacrificing security. This thesis proposes a scalable and secure image representation scheme that allows users to select different image quality and security levels. The binary space partitioning (BSP) tree representation is selected because it allows convenient compression and scalable encryption. The Advanced Encryption Standard (AES) is chosen as the encryption algorithm because it is fast and secure. Our experimental results show that our new tree construction method and pruning formula reduce execution time, and hence computation energy, by about 90%. Our image quality prediction model accurately predicts image quality to within 2-3 dB of the actual image PSNR.
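
    Because the quality-prediction claim is stated in PSNR, a short reference implementation of that metric may be useful; this is the standard definition, not code from the thesis.

        import numpy as np

        def psnr(original, reconstructed, peak=255.0):
            # Peak signal-to-noise ratio in dB between two images.
            diff = original.astype(float) - reconstructed.astype(float)
            mse = np.mean(diff ** 2)
            if mse == 0:
                return float('inf')  # identical images
            return 10.0 * np.log10(peak ** 2 / mse)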

    Fast Edge Preserving Fractal System

    Electrical Engineering

    Development of Some Efficient Lossless and Lossy Hybrid Image Compression Schemes

    Digital imaging generates a large amount of data which needs to be compressed, without loss of relevant information, to economize storage space and allow speedy data transfer. Though both storage and transmission medium capacities have been continuously increasing over the last two decades, they do not match present requirements. Many lossless and lossy image compression schemes exist for the compression of images in the space domain and the transform domain. Employing more than one traditional image compression algorithm results in a hybrid image compression technique. Building on existing schemes, novel hybrid image compression schemes are developed in this doctoral research work to compress images effectively while maintaining their quality.
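
    As a minimal illustration of what "hybrid" means here, the sketch below chains a lossy stage with a lossless one. The particular stages (uniform quantization followed by DEFLATE) are placeholders chosen for brevity, not the schemes developed in this work.

        import zlib
        import numpy as np

        def hybrid_compress(img, step=16):
            # Lossy stage: uniform quantization of 8-bit pixel values.
            q = (img.astype(np.int32) // step).astype(np.uint8)
            # Lossless stage: DEFLATE on the quantized bytes.
            return zlib.compress(q.tobytes())

        def hybrid_decompress(blob, shape, step=16):
            q = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)
            # Reconstruct at quantization-bin midpoints.
            return q.reshape(shape).astype(np.int32) * step + step // 2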

    Multiscale Methods in Image Modelling and Image Processing

    The field of modelling and processing of 'images' has fairly recently become important, even crucial, to areas of science, medicine, and engineering. The inevitable explosion of imaging modalities and approaches stemming from this fact has become a rich source of mathematical applications. 'Imaging' is quite broad, and suffers somewhat from this broadness. The general question of 'what is an image?' or perhaps 'what is a natural image?' turns out to be difficult to address. To make real headway, one may need to strongly constrain the class of images being considered, as will be done in part of this thesis. On the other hand, there are general principles that can guide research in many areas. One such principle considered here is the assertion that (classes of) images have multiscale relationships, whether at the pixel level, between features, or in other variants. There are both practical reasons (in terms of computational complexity) and more philosophical ones (mimicking the human visual system, for example) that suggest looking at such methods. Looking at scaling relationships may also have the advantage of opening a problem up to many mathematical tools. This thesis will detail two investigations into multiscale relationships, in quite different areas. One will involve Iterated Function Systems (IFS), and the other a stochastic approach to the reconstruction of binary images (binary phase descriptions of porous media). The use of IFS in this context, often called 'fractal image coding', has primarily been viewed as an image compression technique. We revisit this approach, proposing it as a more general tool. Some study of the implications of that idea will be presented, along with applications suggested by the results. In the area of reconstruction of binary porous media, a novel multiscale, hierarchical annealing approach is proposed and investigated.
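
    The IFS machinery behind fractal coding rests on a fixed-point property: repeatedly applying contractive affine maps drives any starting point onto a unique attractor. A minimal chaos-game sketch of this principle follows; the Sierpinski maps are a textbook example, not the application developed in the thesis.

        import numpy as np

        def ifs_attractor(maps, probs, n=100_000, seed=0):
            # Chaos game: iterate randomly chosen affine maps x -> A@x + b
            # and record the visited points, which settle on the attractor.
            rng = np.random.default_rng(seed)
            x = np.zeros(2)
            pts = np.empty((n, 2))
            for i in range(n):
                A, b = maps[rng.choice(len(maps), p=probs)]
                x = A @ x + b
                pts[i] = x
            return pts

        # The three half-scale maps of the Sierpinski triangle.
        half = np.eye(2) * 0.5
        maps = [(half, np.array([0.00, 0.0])),
                (half, np.array([0.50, 0.0])),
                (half, np.array([0.25, 0.5]))]
        points = ifs_attractor(maps, probs=[1/3, 1/3, 1/3])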

    Data Encryption and Hashing Schemes for Multimedia Protection

    There are millions of people using social networking sites like Facebook, Google+, and YouTube every day across the world to share photos and other digital media. Unfortunately, people sometimes publish content that does not belong to them. As a result, there is an increasing demand for quality software capable of providing maximum protection for copyrighted material. In addition, confidential content such as medical images and patient records requires a high level of security so that it can be protected from unintended disclosure when transferred over the Internet. On the other hand, decreasing the size of an image without significant loss in quality is always highly desirable; hence the need for efficient compression algorithms. This thesis introduces a robust method for image compression in the shearlet domain. Motivated by the superior performance of the Discrete Shearlet Transform (DST) over the Discrete Wavelet Transform (DWT) in encoding the directional information in images, we propose a DST-based compression algorithm that not only provides better quality in terms of image approximation and compression ratio, but also increases the security of images via the Advanced Encryption Standard. Experimental results on a slew of medical images illustrate the improved image quality of the proposed approximation approach in comparison to the DWT, and also demonstrate its robustness against a variety of tests, including randomness, entropy, key sensitivity, and input sensitivity. We also present a 3D mesh hashing technique using spectral graph theory. The main idea is to partition a 3D model into sub-meshes, generate the Laplace-Beltrami matrix of each sub-mesh, and apply eigen-decomposition. This, in turn, is followed by hashing each sub-mesh using Tsallis entropy. Experimental results on benchmark 3D models demonstrate the effectiveness of the proposed hashing scheme.
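
    The hashing pipeline ends by summarizing each sub-mesh's spectrum with Tsallis entropy. A small sketch of that final step follows; the Tsallis formula is standard, but the histogram binning of the Laplace-Beltrami eigenvalues is an assumption made for illustration, not necessarily the thesis's exact construction.

        import numpy as np

        def tsallis_entropy(p, q=2.0):
            # Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1); recovers
            # Shannon entropy in the limit q -> 1.
            p = np.asarray(p, dtype=float)
            p = p[p > 0]
            return (1.0 - np.sum(p ** q)) / (q - 1.0)

        def submesh_feature(eigenvalues, q=2.0, bins=32):
            # Turn a sub-mesh's Laplace-Beltrami spectrum into a histogram
            # distribution and summarize it with Tsallis entropy.
            hist, _ = np.histogram(eigenvalues, bins=bins)
            return tsallis_entropy(hist / hist.sum(), q)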