
    Perceptual lossless medical image coding

    A novel perceptually lossless coder is presented for the compression of medical images. The proposed coder is built on the JPEG 2000 coding framework; its heart is a visual pruning function, embedded with an advanced human vision model, which identifies and removes visually insignificant/irrelevant information. The proposed coder offers the advantages of simplicity and modularity with bit-stream compliance. Current results have shown superior compression ratio gains over those of its information-lossless counterparts without any visible distortion. In addition, a case study involving 31 medical experts has shown that no perceivable difference of statistical significance exists between the original images and the images compressed by the proposed coder.
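
    To make the pruning idea concrete, the following is a minimal sketch (not the authors' implementation) of visual pruning before entropy coding: wavelet detail coefficients whose magnitude falls below a per-level visibility threshold are treated as perceptually insignificant and zeroed. The wavelet choice, decomposition depth and threshold values below are illustrative assumptions, not the vision model of the paper.

    # Hypothetical visual-pruning sketch; thresholds are illustrative placeholders.
    import numpy as np
    import pywt

    def prune_visually_insignificant(image, wavelet="bior4.4", levels=3, thresholds=None):
        """Zero DWT detail coefficients below an assumed per-level visibility threshold."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
        if thresholds is None:
            thresholds = [4.0 * (k + 1) for k in range(levels)]  # made-up values, coarse to fine
        pruned = [coeffs[0]]  # keep the approximation band untouched
        for k, detail in enumerate(coeffs[1:]):
            t = thresholds[k]
            pruned.append(tuple(np.where(np.abs(d) < t, 0.0, d) for d in detail))
        return pywt.waverec2(pruned, wavelet)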

    A human visual system based image coder

    Over the years, society has changed considerably due to technological advances, and digital images have become part and parcel of our everyday lives. Irrespective of the application (e.g., digital cameras) or service (information sharing, e.g., YouTube; archiving/storage), there is a need for high image quality at high compression ratios. Hence, considerable effort has been invested in the area of image compression. Traditional image compression systems exploit the statistical redundancies inherent in the image data. However, the development and adaptation of vision models, which take into account the properties of the human visual system (HVS), into picture coders have since shown promising results. The objective of the thesis is to implement a vision model in two different ways in the JPEG2000 coding system: (a) a Perceptual Colour Distortion Measure (PCDM) for colour images in the encoding stage, and (b) a Perceptual Post Filtering (PPF) algorithm for colour images in the decoding stage. Both implementations are embedded into the JPEG2000 coder. The vision model exploits the contrast sensitivity, inter-orientation masking and intra-band masking properties of the HVS. Extensive calibration work has been undertaken to fine-tune the 42 model parameters of the PCDM and the Just-Noticeable-Difference thresholds of the PPF for colour images. Subjective evaluation of the PCDM-based coder has shown perceived quality improvement over the JPEG2000 benchmarks with the MSE (mean squared error) and CVIS criteria. For the PPF-adapted JPEG2000 decoder, performance evaluation has also shown promising results against the JPEG2000 benchmarks. Based on subjective evaluation, when both the PCDM and the PPF are used in the JPEG2000 coding system, the overall perceived image quality is superior to that of the stand-alone JPEG2000 with the PCDM.
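
    As a rough illustration of how a subband distortion measure of this kind can replace MSE, the sketch below pools contrast-sensitivity-weighted subband errors with a Minkowski exponent. The weights, exponent and wavelet are illustrative assumptions and do not correspond to the 42 calibrated parameters of the PCDM.

    import numpy as np
    import pywt

    def perceptual_distortion(original, coded, wavelet="bior4.4", levels=3, beta=2.4):
        """Pool sensitivity-weighted subband errors; all weights here are illustrative only."""
        ref = pywt.wavedec2(original.astype(float), wavelet, level=levels)
        test = pywt.wavedec2(coded.astype(float), wavelet, level=levels)
        # Assumed weights: coarser (lower-frequency) bands are weighted more heavily.
        weights = [1.0] + [0.9 ** k for k in range(1, levels + 1)]
        total = weights[0] * np.sum(np.abs(ref[0] - test[0]) ** beta)
        for k, (r, t) in enumerate(zip(ref[1:], test[1:]), start=1):
            for rb, tb in zip(r, t):
                total += weights[k] * np.sum(np.abs(rb - tb) ** beta)
        return total ** (1.0 / beta)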

    ROI Based Quality Access Control of Compressed Color Image using DWT via Lifting

    A Region-of-Interest (ROI) in an image or video signal contains important information and may be used for access control at various quality levels using multiresolution analysis (MRA). This paper proposes a novel quality access control method for compressed color images by modulating the coefficients of the ROI at various levels. The data modulation causes visual degradation in the original image and plays the key role in access control through a reversible process. The modulation information, in the form of a secret key, is embedded in the non-ROI part of the chrominance blue (Cb) channel of the color image using quantization index modulation (QIM). Lifting-based DWT, rather than conventional DWT, is used to decompose the original image in order to achieve two-fold advantages, namely (1) better flexibility and lower loss in image quality due to QIM, and (2) better decoding reliability, which leads to better access control. Only authorized users with full knowledge of the secret key can restore the full quality of the ROI. Simulation results duly support these claims.
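
    A minimal sketch of the QIM step that hides the key bits in non-ROI coefficients is given below; the step size delta and the choice of which coefficients to mark are assumptions for illustration, not the configuration used in the paper.

    import numpy as np

    def qim_embed(coeffs, bits, delta=8.0):
        """Embed one bit per coefficient using two dithered quantizers (binary QIM)."""
        c = np.asarray(coeffs, dtype=float).copy()
        b = np.asarray(bits, dtype=float)
        dither = b * delta / 2.0
        c[:b.size] = np.round((c[:b.size] - dither) / delta) * delta + dither
        return c

    def qim_extract(coeffs, n_bits, delta=8.0):
        """Recover each bit by picking the quantizer (dither 0 or delta/2) nearest the coefficient."""
        c = np.asarray(coeffs, dtype=float)[:n_bits]
        d0 = np.abs(c - np.round(c / delta) * delta)
        d1 = np.abs(c - (np.round((c - delta / 2) / delta) * delta + delta / 2))
        return (d1 < d0).astype(int)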

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence", in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
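
    To make the block-based representation concrete, here is a heavily simplified sketch of fractal block coding on a greyscale image: each range block is matched against spatially downsampled domain blocks under an affine intensity map (scale s, offset o) fitted by least squares, and decoding iterates the stored maps from an arbitrary starting image. The block size, the exhaustive search and the absence of block isometries are simplifications for illustration only.

    import numpy as np

    def encode_fractal(img, r=4):
        """Exhaustive-search fractal coder: fit 2r x 2r domain blocks onto r x r range blocks.
        Assumes the image dimensions are multiples of r and at least 2r."""
        h, w = img.shape
        code = []
        for y in range(0, h, r):
            for x in range(0, w, r):
                rng = img[y:y+r, x:x+r].astype(float)
                best = None
                for dy in range(0, h - 2 * r + 1, r):
                    for dx in range(0, w - 2 * r + 1, r):
                        dom = img[dy:dy+2*r, dx:dx+2*r].astype(float)
                        dom = dom.reshape(r, 2, r, 2).mean(axis=(1, 3))   # 2x spatial contraction
                        d, g = dom.ravel(), rng.ravel()                   # fit rng ~ s*dom + o
                        var = d.var()
                        s = 0.0 if var == 0 else np.cov(d, g, bias=True)[0, 1] / var
                        s = np.clip(s, -1.0, 1.0)                         # keep the map contractive
                        o = g.mean() - s * d.mean()
                        err = np.sum((s * dom + o - rng) ** 2)
                        if best is None or err < best[0]:
                            best = (err, dy, dx, s, o)
                code.append((y, x) + best[1:])
        return code

    def decode_fractal(code, shape, r=4, iters=8):
        """Iterate the stored affine maps from a flat grey image until it converges."""
        img = np.full(shape, 128.0)
        for _ in range(iters):
            nxt = np.empty_like(img)
            for y, x, dy, dx, s, o in code:
                dom = img[dy:dy+2*r, dx:dx+2*r].reshape(r, 2, r, 2).mean(axis=(1, 3))
                nxt[y:y+r, x:x+r] = s * dom + o
            img = nxt
        return img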

    Towards Optimal Copyright Protection Using Neural Networks Based Digital Image Watermarking

    In the field of digital watermarking, digital image watermarking for copyright protection has attracted a lot of attention in the research community. Digital watermarking comprises various techniques for protecting digital content. Among these techniques, the Discrete Wavelet Transform (DWT) provides higher image imperceptibility and robustness. Over the years, researchers have designed watermarking techniques with robustness in mind, so that the watermark resists image processing attacks. Furthermore, a good watermarking technique requires a trade-off between robustness, image quality (imperceptibility) and capacity. In this paper, we present an extensive literature review of existing DWT techniques and of those combined with other methods such as Neural Networks. In addition, we discuss the contribution of Neural Networks to copyright protection. Finally, we identify the research gaps in current watermarking schemes, so that optimal techniques can more easily be obtained that keep the watermark robust to attacks while maintaining imperceptibility, thereby enhancing copyright protection.
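
    As a generic reference point for the DWT schemes surveyed, the sketch below embeds a keyed pseudo-random watermark additively into a level-2 detail subband and detects it by correlation. The subband choice, strength alpha and integer seed key are illustrative assumptions; the neural-network component discussed above is not modelled here.

    import numpy as np
    import pywt

    def embed_watermark(image, key, alpha=2.0, wavelet="haar"):
        """Add a keyed pseudo-random pattern to the level-2 horizontal detail band
        (assumes image dimensions are multiples of four)."""
        cA1, (cH1, cV1, cD1) = pywt.dwt2(image.astype(float), wavelet)
        cA2, (cH2, cV2, cD2) = pywt.dwt2(cA1, wavelet)
        w = np.random.default_rng(key).standard_normal(cH2.shape)
        cA1_marked = pywt.idwt2((cA2, (cH2 + alpha * w, cV2, cD2)), wavelet)
        return pywt.idwt2((cA1_marked, (cH1, cV1, cD1)), wavelet), w

    def detect_watermark(image, w, wavelet="haar"):
        """Correlate the suspect band with the known pattern; larger values suggest the mark is present."""
        cA1, _ = pywt.dwt2(image.astype(float), wavelet)
        _, (cH2, _, _) = pywt.dwt2(cA1, wavelet)
        return float(np.sum(cH2 * w) / w.size)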

    Wavelet-Based Audio Embedding & Audio/Video Compression

    With the decline in military spending, the United States relies heavily on stateside support. Communications have never been more important, and high-quality audio and video capabilities are a must. Watermarking, traditionally used for copyright protection, is used here in a new and exciting way: an efficient wavelet-based watermarking technique embeds audio information into a video signal. Several highly effective compression techniques are then applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, first-difference coding, and Huffman coding. To demonstrate the potential of this audio-embedding audio/video compression system, an audio signal is embedded into a video signal and the combined signal is compressed. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted without error.
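
    A minimal sketch of the embedding idea under stated assumptions: the frame is split by a reversible integer Haar lifting step and the audio payload is hidden in the least significant bits of the detail band. The lifting variant, bit allocation and payload format are illustrative choices, not the report's exact scheme.

    import numpy as np

    def haar_lifting_rows(frame):
        """One-level integer Haar lifting along rows: returns (approx, detail), fully reversible."""
        f = frame.astype(np.int64)
        even, odd = f[:, ::2], f[:, 1::2]
        detail = odd - even                    # predict step
        approx = even + (detail >> 1)          # update step
        return approx, detail

    def inverse_haar_lifting_rows(approx, detail):
        """Undo the lifting steps exactly and re-interleave the columns."""
        even = approx - (detail >> 1)
        odd = detail + even
        out = np.empty((approx.shape[0], approx.shape[1] * 2), dtype=np.int64)
        out[:, ::2], out[:, 1::2] = even, odd
        return out

    def embed_audio_bits(detail, audio_bytes):
        """Hide the audio payload in the LSBs of the detail band (capacity permitting)."""
        bits = np.unpackbits(np.frombuffer(audio_bytes, dtype=np.uint8)).astype(np.int64)
        flat = detail.ravel().copy()
        assert bits.size <= flat.size, "detail band too small for this audio payload"
        flat[:bits.size] = (flat[:bits.size] & ~np.int64(1)) | bits
        return flat.reshape(detail.shape)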

    A DWT based perceptual video coding framework: concepts, issues and techniques

    The work in this thesis explores DWT-based video coding through the introduction of a novel DWT (Discrete Wavelet Transform) / MC (Motion Compensation) / DPCM (Differential Pulse Code Modulation) video coding framework, which adopts EBCOT as the coding engine for both the intra- and the inter-frame coder. An adaptive switching mechanism between the frame/field coding modes is investigated for this coding framework. The Low-Band-Shift (LBS) method is employed for MC in the DWT domain. The LBS-based MC is shown to provide a consistent improvement in the Peak Signal-to-Noise Ratio (PSNR) of the coded video over simple Wavelet Tree (WT) based MC. Adaptive Arithmetic Coding (AAC) is adopted to code the motion information. The context set of the Adaptive Binary Arithmetic Coding (ABAC) for the inter-frame data is redesigned based on statistical analysis. To further improve the perceived picture quality, a Perceptual Distortion Measure (PDM) based on a human vision model is used in the EBCOT of the intra-frame coder. A visibility assessment of the quantization error of the various subbands in the DWT domain is performed through subjective tests. In summary, these findings address the issues arising from the proposed perceptual video coding framework. They include: a working DWT/MC/DPCM video coding framework with superior coding efficiency on sequences with translational or head-and-shoulder motion; an adaptive switching mechanism between frame and field coding modes; an effective LBS-based MC scheme in the DWT domain; a methodology for the context design for entropy coding of the inter-frame data; a PDM which replaces the MSE inside the EBCOT coding engine for the intra-frame coder and improves the perceived quality of intra-frames; and a visibility assessment of the quantization errors in the DWT domain.
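
    As a simplified illustration of motion compensation in the wavelet domain, the sketch below runs a full-search SAD block match on the level-1 low-pass bands of the current and reference frames. The block size, search radius and wavelet are assumptions; the Low-Band-Shift method described above would additionally build reference transforms for every spatial shift to counter the shift-variance of the DWT, a step omitted here for brevity.

    import numpy as np
    import pywt

    def motion_search_lowband(cur_frame, ref_frame, block=8, radius=4, wavelet="haar"):
        """Full-search SAD block matching on the level-1 approximation (low-pass) bands."""
        cur, _ = pywt.dwt2(cur_frame.astype(float), wavelet)
        ref, _ = pywt.dwt2(ref_frame.astype(float), wavelet)
        h, w = cur.shape
        vectors = {}
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                target = cur[y:y+block, x:x+block]
                best = (np.inf, 0, 0)
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - block and 0 <= xx <= w - block:
                            sad = np.abs(ref[yy:yy+block, xx:xx+block] - target).sum()
                            if sad < best[0]:
                                best = (sad, dy, dx)
                vectors[(y, x)] = (best[1], best[2])
        return vectors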