179 research outputs found

    Analysis and Comparison of Digital Image Compression Algorithms Combined with Huffman Encoding

    SPIHT (Set Partitioning In Hierarchical Trees) has quickly become one of the most computationally effective image compression tools among competing algorithms, because it improves operating efficiency, reduces complexity, and is simple to implement in both software and hardware. In this paper, a modified version of the original SPIHT algorithm, based on a Set Partitioning in Row/Column-wise (SPIR) rule, is proposed and compared with the EZW method. This rule is easier to implement than BP-SPIHT (block-based pass-parallel SPIHT) and alternative compression techniques. The algorithm is applied to the wavelet-decomposed image, followed by verification of the row/column-wise coefficient values. The output bitstream of the SPIR encoder, combined with Huffman encoding, yields a simple and effective method
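The Huffman stage applied to the SPIR bitstream can be illustrated with a generic Huffman coder. The sketch below is plain Python, not the paper's implementation: it builds a prefix-free code table from symbol frequencies using a min-heap, which is the standard construction.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table for a sequence of symbols.

    Returns a dict mapping each symbol to its bit string.
    """
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie_breaker, tree), where tree is a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

def huffman_encode(symbols):
    """Encode a symbol sequence; returns (bit string, code table)."""
    codes = huffman_codes(symbols)
    return "".join(codes[s] for s in symbols), codes
```

Frequent symbols (such as the zero coefficients that dominate a wavelet-decomposed image) receive the shortest codes, which is why Huffman coding pairs well with the SPIR output.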

    Balancing Compression and Encryption of Satellite Imagery

    With the rapid developments in remote sensing technologies and services, there is a necessity for combined compression and encryption of satellite imagery. Onboard satellite compression minimizes the storage and communication bandwidth requirements of high-data-rate satellite applications, while encryption secures these resources and prevents illegal use of sensitive image information. In this paper, we propose an approach to the challenges that arise in the highly dynamic satellite-based networked environment. This approach combines compression algorithms (Huffman and SPIHT) and encryption algorithms (RC4, Blowfish, and AES) into three complementary modes: (1) secure lossless compression, (2) secure lossy compression, and (3) secure hybrid compression. Extensive experiments on a dataset of 126 satellite images showed that our approach outperforms traditional and state-of-the-art approaches, saving approximately 53% of computational resources. A further notable feature is that the three modes mirror real deployment conditions, each addressing the problem of limited computing and communication resources in a different way
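A rough sketch of a "secure lossless compression" pipeline in the spirit of the abstract: compress first (DEFLATE standing in for the Huffman stage), then encrypt with RC4, one of the ciphers the paper evaluates. The mode name, pipeline order, and function interfaces are assumptions based on the abstract, and RC4 is insecure by modern standards; it appears here only because the paper lists it.

```python
import zlib

def rc4_keystream(key):
    """Generate the RC4 keystream for a byte-string key (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):          # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    while True:                   # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(key, data):
    """XOR data with the RC4 keystream (same call encrypts and decrypts)."""
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)

def secure_lossless(image_bytes, key):
    """Secure lossless mode: compress, then encrypt the result."""
    return rc4(key, zlib.compress(image_bytes, 9))

def decode(blob, key):
    """Invert the pipeline: decrypt, then decompress."""
    return zlib.decompress(rc4(key, blob))
```

Compress-before-encrypt ordering matters: ciphertext has near-maximal entropy, so compressing after encryption would gain almost nothing.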

    Survey of Hybrid Image Compression Techniques

    Compression reduces the size of data while maintaining the quality of the information it contains. This paper presents a survey of research papers discussing improvements to various hybrid compression techniques over the last decade. A hybrid compression technique combines the best properties of each group of methods, as is done in the JPEG compression method. Such a technique combines lossy and lossless compression to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression yields high-quality data reconstruction, as the data can later be decompressed with the same content as before compression. Discussion of the state of, and open issues in, ongoing hybrid compression development indicates opportunities for further research to improve the performance of image compression methods
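The lossy-then-lossless pairing the survey describes can be sketched minimally: a lossy stage (uniform quantization, a simple stand-in for JPEG's DCT quantization) followed by a lossless entropy stage (DEFLATE via zlib, standing in for Huffman coding). The quantization step size of 8 is an arbitrary illustrative choice.

```python
import zlib

def hybrid_compress(pixels, step=8):
    """Lossy stage: uniform quantization of 8-bit pixel values.
    Lossless stage: DEFLATE on the quantized bytes."""
    quantized = bytes(p // step for p in pixels)
    return zlib.compress(quantized, 9)

def hybrid_decompress(blob, step=8):
    """Dequantize to the centre of each quantization bin."""
    return [q * step + step // 2 for q in zlib.decompress(blob)]
```

Quantization discards the low-order bits (the lossy, ratio-boosting step), and the entropy coder then removes the remaining statistical redundancy without further loss, mirroring the division of labour described above.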

    Significant medical image compression techniques: a review

    Telemedicine applications allow the patient and doctor to communicate with each other through network services. Several medical image compression techniques have been suggested by researchers in past years. This review paper compares the algorithms and their performance by analysing three factors that influence the choice of compression algorithm: image quality, compression ratio, and compression speed. Previous research has shown a need for effective algorithms for medical imaging without data loss, which is why lossless compression is used for medical records. Lossless compression, however, achieves only a limited compression ratio. The way to obtain an optimal compression ratio is to segment the image into region of interest (ROI) and non-ROI zones, so that the power and time needed can be minimised by operating on the smaller ROI at full fidelity. Recently, several researchers have attempted to create hybrid compression algorithms by integrating different compression techniques to increase overall efficiency

    Compression of MRI brain images based on automatic extraction of tumor region

    In the compression of medical images, region of interest (ROI) based techniques are promising, as they can achieve high compression ratios while maintaining the quality of the region of diagnostic importance, the ROI, when the image is reconstructed. In this article, we propose a set-up for compression of brain magnetic resonance imaging (MRI) images based on automatic extraction of the tumor. Our approach first separates the tumor, the ROI in our case, from the brain image using support vector machine (SVM) classification and a region extraction step. The tumor region (ROI) is then compressed using arithmetic coding, a lossless compression technique. The non-tumorous region, the non-region of interest (NROI), is compressed using a lossy technique formed by a combination of discrete wavelet transform (DWT), set partitioning in hierarchical trees (SPIHT), and arithmetic coding (AC). Classification performance parameters such as Dice coefficient, sensitivity, positive predictive value, and accuracy are tabulated. For compression, we report performance parameters such as mean square error and peak signal-to-noise ratio for a given set of bits per pixel (bpp) values. We found that the compression scheme considered in our set-up gives promising results compared to other schemes
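The ROI/NROI split above can be sketched as follows, given a flat pixel list and a boolean tumor mask (which the paper obtains via SVM segmentation). In this illustration zlib stands in for the paper's arithmetic coder on the ROI and for the DWT+SPIHT+AC pipeline on the NROI; the interfaces and the step size are assumptions.

```python
import zlib

def compress_roi(pixels, mask, step=16):
    """Split an image by a tumor mask: ROI pixels are kept exact
    (lossless), non-ROI pixels are coarsely quantized first (lossy)."""
    roi = bytes(p for p, m in zip(pixels, mask) if m)
    nroi = bytes(p // step for p, m in zip(pixels, mask) if not m)
    return zlib.compress(roi, 9), zlib.compress(nroi, 9)

def decompress_roi(roi_blob, nroi_blob, mask, step=16):
    """Reassemble the image: exact ROI values, dequantized NROI values."""
    roi = iter(zlib.decompress(roi_blob))
    nroi = iter(zlib.decompress(nroi_blob))
    return [next(roi) if m else next(nroi) * step + step // 2
            for m in mask]
```

The mask must be available at both ends (or transmitted alongside the two streams), since it determines how the pixels are interleaved on reconstruction.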

    Ultrafast and Efficient Scalable Image Compression Algorithm

    Wavelet-based image compression algorithms have good performance and produce a rate-scalable bitstream that can be decoded efficiently at several bit rates. Unfortunately, the discrete wavelet transform (DWT) has relatively high computational complexity. On the other hand, the discrete cosine transform (DCT) has low complexity and excellent compaction properties, but it is non-local, which necessitates implementing it as a block-based transform, leading to the well-known blocking artifacts at the edges of the DCT blocks. This paper proposes a very fast, rate-scalable algorithm that exploits the low complexity of both the DCT and the set-partitioning technique used by wavelet-based algorithms. Like JPEG, the proposed algorithm first transforms the image using a block-based DCT. It then rearranges the DCT coefficients into a wavelet-like structure. Finally, the rearranged image is coded using a modified version of the SPECK algorithm, one of the best-known wavelet-based algorithms. The modified SPECK consumes slightly less memory and has slightly lower complexity and slightly better performance than the original SPECK. The experimental results demonstrated that the proposed algorithm has competitive performance and high processing speed. Consequently, it has the best performance-to-complexity ratio among current rate-scalable algorithms
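The rearrangement step can be sketched as follows: gathering coefficient (u, v) of every B×B DCT block into a single contiguous "subband" places all DC terms in the top-left band, mimicking a wavelet pyramid so that a set-partitioning coder can exploit the cross-band structure. This is an illustrative reading of the abstract, not the paper's exact mapping, and the function name and interface are assumptions.

```python
def rearrange(coeffs, B=8):
    """Regroup block-DCT coefficients into a wavelet-like layout.

    coeffs: 2-D list of coefficients from B x B block transforms,
    still stored block-by-block in place. Coefficient (u, v) of every
    block is gathered into one subband at position (u, v) of a B x B
    grid of subbands, so all DC terms form the top-left band.
    """
    H, W = len(coeffs), len(coeffs[0])
    bh, bw = H // B, W // B  # number of blocks vertically / horizontally
    out = [[0] * W for _ in range(H)]
    for by in range(bh):
        for bx in range(bw):
            for u in range(B):
                for v in range(B):
                    out[u * bh + by][v * bw + bx] = coeffs[by * B + u][bx * B + v]
    return out
```

The transform itself is untouched; only the memory layout changes, which is why the rearrangement adds essentially no computational cost to the DCT front end.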