
    Image compression with anisotropic diffusion

    Compression is an important field of digital image processing where well-engineered methods with high performance exist. Partial differential equations (PDEs), however, have not been explored much in this context so far. In our paper we introduce a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion. Although this anisotropic diffusion equation with a diffusion tensor was originally proposed for image denoising, we show that it outperforms many other PDEs when sparse scattered data must be interpolated. To exploit this property for image compression, we consider an adaptive triangulation method for removing less significant pixels from the image. The remaining points serve as scattered interpolation data for the diffusion process. They can be coded in a compact way that reflects the B-tree structure of the triangulation. We supplement the coding step with a number of amendments such as error threshold adaptation, diffusion-based point selection, and specific quantisation strategies. Our experiments illustrate the usefulness of each of these modifications. They demonstrate that for high compression rates, our PDE-based approach not only gives far better results than the widely used JPEG standard, but can even come close to the quality of the highly optimised JPEG2000 codec.
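    The diffusion-based interpolation at the heart of this approach can be illustrated with a much simpler stand-in: homogeneous (linear) diffusion rather than the paper's edge-enhancing anisotropic diffusion, filling in unknown pixels by repeatedly averaging their neighbours while the kept pixels stay fixed. The function name and the Jacobi-style iteration below are illustrative, not from the paper:

```python
import numpy as np

def diffusion_inpaint(image, mask, iterations=500):
    """Fill unknown pixels by iterative homogeneous diffusion.

    image: 2-D array with valid values where mask is True.
    mask:  True at known (kept) pixels, False at pixels to reconstruct.
    """
    # Start the unknown pixels at the mean of the known data.
    u = np.where(mask, image, image[mask].mean()).astype(float)
    for _ in range(iterations):
        # Replace each pixel by the average of its 4 neighbours
        # (edge padding gives reflecting boundary conditions).
        p = np.pad(u, 1, mode="edge")
        u_new = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        # Re-impose the known scattered data after every step.
        u = np.where(mask, image, u_new)
    return u

# Known pixels: only the two endpoints of a 1-pixel-high strip.
img = np.zeros((1, 9)); img[0, 8] = 8.0
mask = np.zeros_like(img, dtype=bool); mask[0, 0] = mask[0, 8] = True
print(np.round(diffusion_inpaint(img, mask), 1)[0])  # roughly the linear ramp 0..8
```

    With only the endpoints kept, diffusion converges to the linear interpolant between them; the paper's anisotropic variant additionally steers the smoothing along image edges, which is what makes it suitable for real images.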

    Position Based Coding Scheme and Huffman Coding in JPEG2000: An Experimental Analysis

    Abstract: The paper compares the novel method of position based coding, recently introduced by the authors, with Huffman coding results. The results show that the Position Based Coding Scheme (PBCS) is superior in terms of image compression ratio and PSNR. In PBCS, coding is performed by identifying the unique elements and reducing redundancies. The results of JPEG2000 image compression with Huffman coding and of JPEG2000 based on PBCS are then compared. The results show that PBCS achieves a better compression ratio with higher PSNR and better image quality. The study, which can be considered a logical extension of the image transformation matrix, applies statistical tools to obtain the novel coding scheme as a direct extension of wavelet-based image compression. The coding scheme can greatly economise bandwidth without compromising picture quality; it is invariant to the existing compression standards and applies to lossy as well as lossless compression, which offers the possibility of wide-ranging applications.
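    For context, plain Huffman coding (the baseline PBCS is compared against) assigns shorter bit strings to more frequent symbols by greedily merging the two least frequent subtrees. A minimal sketch, not tied to the JPEG2000 implementation discussed in the abstract:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols occurring in `data`."""
    freq = Counter(data)
    # Heap entries are (frequency, tiebreak, tree); a tree is either
    # a symbol or a (left, right) pair of subtrees.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                     # degenerate single-symbol input
        return {heap[0][2]: "0"}
    i = len(heap)
    while len(heap) > 1:                   # merge the two rarest subtrees
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (t1, t2)))
        i += 1
    codes = {}
    def walk(tree, prefix=""):             # read codes off the tree
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2])
    return codes

codes = huffman_codes("abracadabra")
bits = sum(len(codes[s]) for s in "abracadabra")
print(codes, bits)  # 'a' (5 occurrences) gets a 1-bit code; 23 bits total
```

    The total of 23 bits is optimal for these symbol frequencies; schemes like PBCS aim to beat such entropy coders by exploiting structure beyond per-symbol frequencies.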

    Virtually Lossless Compression of Astrophysical Images

    We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a consistent bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which only moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on users' requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results on lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Finally, the rationale of virtually lossless compression, that is, a noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established in the astronomers' community.
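    The near-lossless principle described here (bounding the maximum absolute error per pixel rather than a global mean square error) can be sketched with a one-dimensional previous-sample DPCM predictor and a uniform residual quantiser of step 2·delta + 1. The paper's scheme uses a causal spatial predictor and noise-driven quantisation, so this is only a toy illustration with invented names:

```python
import numpy as np

def near_lossless_dpcm(signal, delta):
    """Encode a 1-D integer signal with a previous-sample predictor.

    Quantising the prediction residual with step 2*delta + 1 guarantees
    |x - x_reconstructed| <= delta at every sample.
    """
    step = 2 * delta + 1
    residuals, recon = [], []
    prev = 0                               # decoder starts from the same state
    for x in signal:
        e = int(x) - prev                  # prediction residual
        q = int(np.sign(e)) * ((abs(e) + delta) // step)  # quantised residual
        residuals.append(q)                # this is what would be entropy-coded
        prev = prev + q * step             # decoder-side reconstruction
        recon.append(prev)
    return residuals, recon

sig = [10, 12, 15, 40, 41, 39]
res, rec = near_lossless_dpcm(sig, delta=1)
print(rec, max(abs(a - b) for a, b in zip(sig, rec)))  # max error is 1
```

    Setting delta = 0 makes the scheme exactly lossless; driving delta from an estimate of the background noise, as the paper does, is what makes the result "virtually lossless".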

    Algorithms and Architectures for Secure Embedded Multimedia Systems

    Embedded multimedia systems provide real-time video support for applications in entertainment (mobile phones, internet video websites), defense (video surveillance and tracking) and the public domain (telemedicine, remote and distance learning, traffic monitoring and management). With the widespread deployment of such real-time embedded systems, there has been increasing concern over the security and authentication of the multimedia data concerned. While several (software) algorithms and hardware architectures have been proposed in the research literature to support multimedia security, they fail to address embedded applications whose performance specifications impose tighter constraints on computational power and available hardware resources. The goals of this dissertation research are twofold: 1. To develop novel algorithms for joint video compression and encryption. The proposed algorithms reduce the computational requirements of multimedia encryption; we propose an approach that uses the compression parameters, instead of the compressed bitstream, for video encryption. 2. Hardware acceleration of the proposed algorithms on reconfigurable computing platforms such as FPGAs and on VLSI circuits. We use signal processing knowledge to make the algorithms suitable for hardware optimisation and reduce the critical path of the circuits using hardware-specific optimisations. The proposed algorithms ensure a considerable level of security for low-power embedded systems such as portable video players and surveillance cameras. These schemes incur zero or negligible compression loss and preserve the desired properties of the compressed bitstream in the encrypted bitstream, ensuring secure and scalable transmission of videos over heterogeneous networks. They also support indexing, search and retrieval in secure multimedia digital libraries. This property is crucial not only for police and armed forces retrieving information about a suspect from a large video database of surveillance feeds, but also extremely helpful for data centers (such as those used by YouTube, AOL and Metacafe) in reducing the computational cost of search and retrieval of desired videos.

    Wavelet-based image compression for mobile applications.

    The transmission of digital colour images is rapidly becoming popular on mobile telephones, Personal Digital Assistant (PDA) technology and other wireless-based image services. However, transmitting digital colour images via mobile devices is badly affected by low air bandwidth. Advances in communication channels (for example, 3G networks) go some way towards addressing this problem, but the rapid increase in traffic and the demand for ever better image quality mean that effective data compression techniques are essential for transmitting and storing digital images. The main objective of this thesis is to offer a novel image compression technique that can help to overcome the bandwidth problem. This thesis has investigated and implemented three different wavelet-based compression schemes with a focus on a suitable compression method for mobile applications. The first algorithm is a dual wavelet compression algorithm, a modified conventional wavelet compression method. The algorithm uses different wavelet filters to decompose the luminance and chrominance components separately. In addition, different levels of decomposition can also be applied to each component separately. The second algorithm is a segmented wavelet-based method, which segments an image into its smooth and non-smooth parts; different wavelet filters are then applied to the segmented parts of the image. Finally, the third algorithm is the hybrid wavelet-based compression system (HWCS), where the subject of interest is cropped and then compressed using a wavelet-based method. The detail of the background is reduced by averaging it, and the background is sent separately from the compressed subject of interest. The final image is reconstructed by replacing the averaged background image pixels with the compressed cropped image. For each algorithm, the experimental results presented in this thesis clearly demonstrate that encoder output can be effectively reduced while maintaining acceptable visual image quality, particularly when compared to a conventional wavelet-based compression scheme.
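    As a rough illustration of the wavelet compression pipeline such schemes build on, the sketch below applies a one-level 2-D Haar transform, zeroes small detail coefficients, and inverts the transform. The thesis applies different wavelet filters per component and per image region, so none of this code reflects its specific algorithms:

```python
import numpy as np

def haar2d(block):
    """One-level 2-D Haar transform of an even-sized array."""
    # Horizontal pass: pairwise averages (approximation) and differences (detail).
    a = (block[:, ::2] + block[:, 1::2]) / 2
    d = (block[:, ::2] - block[:, 1::2]) / 2
    step1 = np.hstack([a, d])
    # Vertical pass on the intermediate result.
    a2 = (step1[::2, :] + step1[1::2, :]) / 2
    d2 = (step1[::2, :] - step1[1::2, :]) / 2
    return np.vstack([a2, d2])

def ihaar2d(coeffs):
    """Inverse of haar2d: undo the vertical pass, then the horizontal pass."""
    n = coeffs.shape[0] // 2
    a2, d2 = coeffs[:n, :], coeffs[n:, :]
    step1 = np.empty((2 * n, coeffs.shape[1]))
    step1[::2, :], step1[1::2, :] = a2 + d2, a2 - d2
    m = coeffs.shape[1] // 2
    a, d = step1[:, :m], step1[:, m:]
    out = np.empty_like(step1)
    out[:, ::2], out[:, 1::2] = a + d, a - d
    return out

img = np.arange(16.0).reshape(4, 4)
c = haar2d(img)
c[np.abs(c) < 1.0] = 0.0          # discard small detail coefficients (lossy step)
print(np.round(ihaar2d(c), 2))    # close to img: only small details were dropped
```

    Without the thresholding step the transform is perfectly invertible; compression comes from the detail subbands being sparse and cheap to entropy-code after thresholding or quantisation.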

    Resource-Constrained Low-Complexity Video Coding for Wireless Transmission


    SPIHT image coding: analysis, improvements and applications.

    Image compression plays an important role in image storage and transmission. In popular Internet applications and mobile communications, image coding is required to be not only efficient but also scalable. Recent wavelet techniques provide a way to achieve efficient and scalable image coding. SPIHT (set partitioning in hierarchical trees) is such an algorithm based on the wavelet transform. This thesis analyses and improves the SPIHT algorithm. The preliminary part of the thesis investigates two-dimensional multi-resolution decomposition for image coding using the wavelet transform, which is reviewed and analysed systematically. The wavelet transform is implemented using filter banks, and z-domain proofs are given for the key implementation steps. A scheme of wavelet transform for arbitrarily sized images is proposed. The statistical properties of the wavelet coefficients (the output of the wavelet transform) are explored for natural images. The energy in the transform domain is localised and highly concentrated in the low-resolution subband. The wavelet coefficients are DC-biased, and the gravity centre of most octave-segmented value sections (which correspond to the binary bit-planes) is offset from the geometrical centre by approximately one eighth of the section range. The intra-subband correlation coefficients are the largest, followed by the inter-level correlation coefficients, while the inter-subband correlation coefficients on the same resolution level are trivial. These statistical properties explain the success of the SPIHT algorithm and lead to further improvements. The subsequent parts of the thesis examine the SPIHT algorithm. The concepts of successive approximation quantisation and ordered bit-plane coding are highlighted. The procedure of SPIHT image coding is demonstrated with a simple example. A solution for arbitrarily sized images is proposed. Seven measures are proposed to improve the SPIHT algorithm. Three DC-level shifting schemes are discussed, and the one subtracting the geometrical centre in the image domain is selected in the thesis. Virtual trees are introduced to hold more wavelet coefficients in each of the initial sets. A scheme is proposed to reduce redundancy in the coding bit-stream by omitting predictable symbols. The quantisation of wavelet coefficients is offset by one eighth from the geometrical centre. A pre-processing technique is proposed to speed up the judgement of the significance of trees, and a smoothing is imposed on the magnitude of the wavelet coefficients during the pre-processing for lossy image coding. The optimisation of arithmetic coding is also discussed. Experimental results show that these improvements to SPIHT yield a significant performance gain. The running time is reduced by up to a half. The PSNR (peak signal-to-noise ratio) is improved substantially at very low bit rates, by up to 12 dB in the extreme case. Moderate improvements are also made at high bit rates. The SPIHT algorithm is also applied to lossless image coding, and various wavelet transforms are evaluated for lossless SPIHT image coding. Experimental results show that, among the transforms used, the interpolating transform (4, 4) and the S+P transform (2+2, 2) are the best for natural images, the interpolating transform (4, 2) is the best for CT images, and the bi-orthogonal transform (9, 7) is always the worst. Content-based lossless coding of a CT head image is presented in the thesis, using segmentation and SPIHT. Although the performance gain is limited in the experiments, it shows the potential advantage of content-based image coding.
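    The successive approximation quantisation and ordered bit-plane coding that SPIHT relies on can be sketched independently of the tree structures: each pass halves the threshold, flags newly significant coefficients, and refines already-significant ones to the centre of the correct half-interval. This omits SPIHT's set partitioning entirely, and the function name is hypothetical:

```python
import math

def bitplane_approximate(coeffs, passes):
    """Successive-approximation quantisation of a coefficient list.

    Returns the reconstruction after `passes` bit-planes; the error of each
    significant coefficient is bounded by half the final threshold.
    """
    # Initial threshold: largest power of two not exceeding the peak magnitude.
    T = 2 ** int(math.floor(math.log2(max(abs(c) for c in coeffs))))
    significant = {}                     # index -> (sign, reconstructed magnitude)
    for _ in range(passes):
        for i, c in enumerate(coeffs):
            if i not in significant and abs(c) >= T:
                # Sorting pass: newly significant, reconstruct at centre of [T, 2T).
                significant[i] = (1 if c >= 0 else -1, 1.5 * T)
            elif i in significant:
                # Refinement pass: move to the centre of the correct half-interval.
                s, m = significant[i]
                m += T / 2 if abs(c) >= m else -T / 2
                significant[i] = (s, m)
        T /= 2                           # next, less significant bit-plane
    return [significant.get(i, (0, 0.0))[0] * significant.get(i, (0, 0.0))[1]
            for i in range(len(coeffs))]

print(bitplane_approximate([63, -34, 10, 3], passes=3))  # [60.0, -36.0, 12.0, 0.0]
```

    Stopping after any pass yields a valid (embedded) approximation, which is what makes this style of coding rate-scalable; SPIHT's contribution is ordering these significance decisions cheaply via zerotrees.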