133 research outputs found

    Image coding using wavelet transform and adaptive block truncation coding

    This thesis presents a new image coding scheme based on the wavelet transform and adaptive block truncation coding. Images are first pre-processed by the wavelet transform and then coded by adaptive block truncation coding. Algorithms for both monochrome and color images are proposed and studied experimentally. The adaptive block truncation coding is also modified to achieve better performance. For coding monochrome images in the bit-rate region between 0.8 and 1.2 bits/pixel, the performance of the new scheme is comparable to that of subband coding and of other wavelet-based image coding methods, while requiring less computation. The new scheme also gives a good reconstruction of a color image at a bit rate of 1.0 bit/pixel. A comparison between the new scheme and the original adaptive block truncation coding is also given, along with a discussion of the effects of the filter and of the number of decomposition levels used to implement the wavelet transform.
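
    The abstract does not spell out the coding step itself. As a rough illustration, the sketch below (plain NumPy; the function names are mine, and the thesis's wavelet pre-processing and adaptive threshold selection are not reproduced) shows the classic two-level, moment-preserving block truncation coding of a single block that such a scheme builds on.

        import numpy as np

        def btc_encode_block(block):
            # Classic BTC: keep a 1-bit map plus two levels chosen so that the
            # block mean and standard deviation are preserved.
            n = block.size
            mean, std = block.mean(), block.std()
            bitmap = block >= mean
            q = int(bitmap.sum())
            if q == 0 or q == n:          # flat block: one level is enough
                return bitmap, mean, mean
            low = mean - std * np.sqrt(q / (n - q))
            high = mean + std * np.sqrt((n - q) / q)
            return bitmap, low, high

        def btc_decode_block(bitmap, low, high):
            # Rebuild the block from the bitmap and the two levels.
            return np.where(bitmap, high, low)

        # Toy usage on one 4x4 block of arbitrary test data.
        rng = np.random.default_rng(0)
        block = rng.integers(0, 256, size=(4, 4)).astype(float)
        bitmap, low, high = btc_encode_block(block)
        print(btc_decode_block(bitmap, low, high))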

    Data analysis for image transmitted using Discrete Wavelet Transform and Vector Quantization compression

    In this paper we study the effect of channel noise on images compressed with vector quantization and the discrete wavelet transform. The objective is to analyze and understand how noise corrupts the transmitted data through a series of tests, such as dividing the indices into levels according to the discrete wavelet transform decomposition and dividing each level into frames of bits. The collected information will help us propose solutions that make the received image more resistant to channel noise while still benefiting from the good representation obtained by combining vector quantization with the discrete wavelet transform.
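
    As a minimal sketch of the kind of test described above, the NumPy snippet below (all names and parameters are illustrative, and a binary symmetric channel is assumed rather than the paper's exact frame-and-level division) flips random bits in the transmitted VQ indices and measures the extra distortion caused by the channel alone.

        import numpy as np

        def flip_bits(indices, n_bits, ber, rng):
            # Flip every bit of every codeword index independently with probability ber.
            noisy = indices.copy()
            for b in range(n_bits):
                mask = rng.random(indices.shape) < ber
                noisy ^= mask.astype(indices.dtype) << b
            return noisy

        rng = np.random.default_rng(1)
        codebook = rng.normal(size=(256, 16))        # 256 codewords of 4x4 blocks (8-bit indices)
        indices = rng.integers(0, 256, size=1000)    # indices produced by some VQ encoder
        noisy = flip_bits(indices, n_bits=8, ber=1e-2, rng=rng)

        # Compare the blocks decoded from clean and corrupted indices.
        extra_mse = np.mean((codebook[indices] - codebook[noisy]) ** 2)
        print(np.mean(indices != noisy), extra_mse)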

    Image compression techniques using vector quantization


    Digital image compression


    SAR IMAGE COMPRESSION USING ADAPTIVE DIFFERENTIAL EVOLUTION AND PATTERN SEARCH BASED K-MEANS VECTOR QUANTIZATION

    A novel Vector Quantization (VQ) technique for encoding a bi-orthogonal wavelet decomposed image using a hybrid Adaptive Differential Evolution (ADE) and Pattern Search optimization algorithm (hADE-PS) is proposed. ADE is a modified version of Differential Evolution (DE) in which the mutation operation is made adaptive based on the ascending/descending objective (fitness) value; it was tested on twelve numerical benchmark functions, where it outperformed the Genetic Algorithm (GA), ordinary DE and FA. ADE is a global optimizer that explores the global search space, while PS is a local optimizer that exploits a local search space, so ADE is hybridized with PS. In the proposed VQ codebook, 62.5% of the codewords are assigned to and optimized for the approximation coefficients, and the remaining 37.5% are divided equally among the horizontal, vertical and diagonal coefficients. The superiority of the proposed hADE-PS-optimized vector quantization over DE is demonstrated. The proposed technique is compared with DE-based VQ, ADE-based quantization and the standard LBG algorithm. Results show a higher Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), indicating better reconstruction.
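
    As a rough illustration of the codeword allocation described above, the sketch below (plain NumPy; a simple k-means/LBG-style update stands in for the paper's hADE-PS optimizer, and the function names are mine) splits a codebook budget 62.5%/37.5% between the approximation and detail subbands and trains a sub-codebook on its own training vectors.

        import numpy as np

        def allocate_codewords(total):
            # 62.5% of the budget for the approximation subband, the remaining
            # 37.5% split equally over horizontal, vertical and diagonal details.
            approx = int(round(total * 0.625))
            detail = (total - approx) // 3
            return {"LL": approx, "LH": detail, "HL": detail, "HH": detail}

        def train_codebook(vectors, k, iters=20, seed=0):
            # Plain k-means stand-in for the hADE-PS-optimized codebook search.
            rng = np.random.default_rng(seed)
            codebook = vectors[rng.choice(len(vectors), k, replace=False)]
            for _ in range(iters):
                dist = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                labels = dist.argmin(1)
                for j in range(k):            # move codewords to cell centroids
                    members = vectors[labels == j]
                    if len(members):
                        codebook[j] = members.mean(0)
            return codebook

        print(allocate_codewords(256))        # {'LL': 160, 'LH': 32, 'HL': 32, 'HH': 32}
        vecs = np.random.default_rng(2).normal(size=(500, 16))
        print(train_codebook(vecs, k=32).shape)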

    Compression of an ECG Signal Using Mixed Transforms

    The electrocardiogram (ECG) is an important physiological signal for cardiac disease diagnosis. Modern ECG monitoring devices generate vast amounts of data that require huge storage capacity. To decrease storage costs, or to make ECG signals suitable for transmission over common communication channels, the ECG data volume must be reduced, so an effective data compression method is required. This paper presents an efficient technique for the compression of ECG signals in which different transforms are combined. First, the 1-D ECG data were segmented and aligned into a 2-D data array; a 2-D mixed transform was then applied to compress the ECG data in this 2-D form. The compression algorithms were implemented and tested using the multiwavelet, wavelet and slantlet transforms to form the proposed mixed-transform method, and vector quantization was then applied to the mixed-transform coefficients. Selected records from the MIT/BIH arrhythmia database were tested, and the performance of the proposed methods was analyzed and evaluated using MATLAB. Simulation results showed that the proposed methods give a high compression ratio (CR) for ECG signals compared with other available methods; for example, compressing one record (record 100) yielded a CR of 24.4 with a percent root-mean-square difference (PRD) of 2.56%.
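
    The 1-D-to-2-D step mentioned above can be illustrated with a minimal sketch (plain NumPy; the paper's actual segmentation and alignment are more elaborate, whereas fixed-length cutting with zero padding is assumed here for simplicity).

        import numpy as np

        def ecg_to_2d(signal, row_len):
            # Cut the 1-D record into fixed-length segments and stack them as
            # rows, so a 2-D transform can exploit beat-to-beat similarity.
            n_rows = int(np.ceil(len(signal) / row_len))
            padded = np.zeros(n_rows * row_len)
            padded[:len(signal)] = signal
            return padded.reshape(n_rows, row_len)

        # Toy signal standing in for an MIT/BIH record sampled at 360 Hz.
        t = np.arange(0, 10, 1 / 360)
        ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.default_rng(3).normal(size=t.size)
        print(ecg_to_2d(ecg, row_len=360).shape)     # (10, 360): one ~1 s segment per row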

    A State Table SPIHT Approach for Modified Curvelet-based Medical Image Compression

    Medical imaging plays a significant role in clinical practice, but storing and transferring a large volume of images can be complex and inefficient. This paper presents a new compression technique that combines the fast discrete curvelet transform (FDCvT) with a state table set partitioning in hierarchical trees (STS) encoding scheme. The curvelet transform is an extension of the wavelet transform that represents data based on scale and position. The medical image is first decomposed with the FDCvT algorithm. The FDCvT produces symmetrical values for the detail coefficients, and these coefficients are modified to improve the efficiency of the algorithm. The curvelet coefficients are then encoded using STS and differential pulse-code modulation (DPCM): the coarse coefficients, which contain most of the energy, are encoded with DPCM, while the finest and modified detail coefficients are encoded with STS. A variety of medical modalities, including computed tomography (CT), positron emission tomography (PET) and magnetic resonance imaging (MRI), are used to verify the performance of the proposed technique. Several quality metrics, including peak signal-to-noise ratio (PSNR), compression ratio (CR) and structural similarity index (SSIM), are used to evaluate the compression results, and the computation times for encoding (ET) and decoding (DT) are measured. The experimental results show that the PET image achieves the highest PSNR and CR values, the CT image gives a high-quality reconstruction with an SSIM of 0.96 and the fastest ET of 0.13 seconds, and the MRI image has the shortest DT of 0.23 seconds.
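
    Only the DPCM stage applied to the coarse coefficients is easy to sketch from the abstract; the curvelet transform and the STS coding of the detail coefficients are not reproduced here. The snippet below (plain NumPy; the quantization step size is an assumption) shows closed-loop first-order DPCM with a uniform residual quantizer.

        import numpy as np

        def dpcm_encode(coeffs, step=0.5):
            # Predict each coefficient by the previous reconstruction and
            # transmit the uniformly quantized prediction residual.
            symbols = np.empty(coeffs.size, dtype=int)
            prev = 0.0
            for i, c in enumerate(coeffs.ravel()):
                symbols[i] = int(round((c - prev) / step))
                prev += symbols[i] * step     # track the decoder's reconstruction
            return symbols

        def dpcm_decode(symbols, step=0.5):
            return np.cumsum(symbols * step)  # undo the first-order prediction

        coarse = np.array([102.3, 104.1, 103.8, 99.5, 100.2])
        sym = dpcm_encode(coarse)
        print(sym, dpcm_decode(sym))          # reconstruction is within step/2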

    Image Compression Techniques: A Survey in Lossless and Lossy algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth, which explains the many efforts currently being invested in data compression. The primary goal of this work is to develop techniques for coding information sources such as speech, images and video that reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression, where the goal is to represent an image in the fewest bits without losing the essential information content. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part of the information, which can be reproduced without loss from other information contained in the image. Irrelevant information is the part whose details lie beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information is the part that is neither redundant nor irrelevant. Humans usually observe decompressed images, so their fidelity is subject to the capabilities and limitations of the human visual system. This paper provides a survey of various image compression techniques, their limitations and compression rates, and highlights current research in medical image compression.
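
    Since nearly every study in this listing reports results with a compression ratio and PSNR, a minimal sketch of the two metrics may be useful (plain NumPy; the 8-bit peak value and the toy data are assumptions).

        import numpy as np

        def compression_ratio(original_bits, compressed_bits):
            # CR = size of the original representation / size of the compressed one.
            return original_bits / compressed_bits

        def psnr(original, reconstructed, peak=255.0):
            # Peak signal-to-noise ratio in dB for 8-bit images.
            mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(4)
        img = rng.integers(0, 256, size=(64, 64))
        noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
        print(psnr(img, noisy), compression_ratio(8, 1))   # 8 bpp coded at 1 bpp -> CR of 8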

    Dimension reduction of image and audio space

    The reduction of data necessary for storage or transmission is a desirable goal in the digital video and audio domain. Compression schemes strive to reduce the amount of storage space or bandwidth needed to keep or move the data. Data reduction can be accomplished by removing or recoding visually or audibly unnecessary data, thus aiding the compression phase of the data processing. The characterization and identification of data that can be successfully removed or reduced is the purpose of this work. New philosophy, theory and methods for data processing are presented towards the goal of data reduction. The philosophy and theory developed in this work establish a foundation for high-speed data reduction suitable for multimedia applications. The developed methods encompass motion detection and edge detection as features of the systems. The philosophy of energy-flow analysis in video processing enables the consideration of noise in digital video data, and research into noise versus motion leads to an efficient method of identifying motion in a sequence. Research into the underlying statistical properties of vector quantization provides insight into its performance characteristics and leads to successful improvements in application. Three theorems are developed and proved that establish the statistical distributions and probability densities of various metrics of the vector quantization process. From these properties, an efficient algorithm design is developed and tested, and the performance improvements in both time and quality are established through algorithm analysis and empirical testing; the empirical results are presented.
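
    The noise-versus-motion idea can be illustrated with a minimal sketch (plain NumPy; a known sensor-noise standard deviation and a simple frame-differencing rule are assumed, not the dissertation's energy-flow method).

        import numpy as np

        def motion_mask(prev_frame, curr_frame, noise_sigma, k=3.0):
            # Flag pixels whose frame-to-frame change exceeds what sensor noise
            # alone would explain; the difference of two independent noisy
            # samples has standard deviation sqrt(2) * noise_sigma.
            diff = np.abs(curr_frame - prev_frame)
            return diff > k * np.sqrt(2.0) * noise_sigma

        rng = np.random.default_rng(5)
        scene = np.full((32, 32), 128.0)
        prev = scene + rng.normal(0, 2.0, size=scene.shape)
        curr = scene.copy()
        curr[10:20, 10:20] += 40.0                   # a moving object appears
        curr += rng.normal(0, 2.0, size=scene.shape)
        print(motion_mask(prev, curr, noise_sigma=2.0).sum())   # ~100 pixels flagged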