
    Hyperspectral Image Compression: Adapting SPIHT and EZW to Anisotropic 3-D Wavelet Coding

    Hyperspectral images present specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully on several hyperspectral space missions. This paper focuses on optimizing a full wavelet compression system for hyperspectral images; each step of the compression algorithm is studied and optimized. First, an algorithm is defined to find the 3-D wavelet decomposition that is optimal in a rate-distortion sense. It is then shown that a specific fixed decomposition achieves almost the same performance while being preferable in terms of complexity, and that it significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, each defining a relationship between coefficients, are compared, and two efficient zerotree coding methods (EZW and SPIHT) are adapted to the near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six scenes with different statistical properties.
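
    As an illustration of the anisotropic decomposition discussed above, the minimal sketch below (using PyWavelets) decomposes the spectral axis more deeply than the two spatial axes. The wavelet choice, level counts, and axis ordering are illustrative assumptions, not the paper's exact near-optimal configuration.

```python
import numpy as np
import pywt

def anisotropic_dwt3d(cube, wavelet="bior4.4", spectral_levels=4, spatial_levels=2):
    """Sketch of a fixed anisotropic 3-D DWT: the spectral axis (axis 0) is
    decomposed more deeply than the two spatial axes, unlike an isotropic
    decomposition that treats all three axes identically."""
    # 1-D multilevel DWT along the spectral axis.
    spectral_bands = pywt.wavedec(cube, wavelet, level=spectral_levels, axis=0)
    # 2-D multilevel DWT on the spatial axes of every spectral subband.
    return [pywt.wavedec2(band, wavelet, level=spatial_levels, axes=(1, 2))
            for band in spectral_bands]

# Usage on a synthetic cube of 224 bands of 64 x 64 pixels.
cube = np.random.randn(224, 64, 64)
coeffs = anisotropic_dwt3d(cube)
```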

    Automatic epilepsy detection using fractal dimensions segmentation and GP-SVM classification

    Objective: The most important part of signal processing for classification is feature extraction, a mapping from the original input electroencephalographic (EEG) data space to a new feature space with the greatest class separability. Features are not only the most important but also the most difficult part of the classification process, as they define the input data and determine classification quality; an ideal set of features would make the classification problem trivial. This article presents novel methods for feature extraction and automatic epileptic seizure classification that combine machine learning with genetic evolution algorithms. Methods: Classification is performed on EEG data representing electrical brain activity. First, the signal is preprocessed with digital filtering and adaptive segmentation, using fractal dimension as the only segmentation measure. Next, a novel method using genetic programming (GP) combined with a support vector machine (SVM), with the confusion matrix as the fitness-function weight, extracts feature vectors compressed into a lower-dimensional space and classifies each epoch as ictal or interictal. Results: The GP-SVM method improves the discriminatory performance of the classifier while simultaneously reducing feature dimensionality. Members of the GP tree structure represent the features themselves, and their number is decided automatically by the compression function introduced in this paper. The method improves the overall performance of SVM classification by dramatically reducing the size of the input feature vector. Conclusion: According to the results, the accuracy of this algorithm is very high and comparable, or even superior, to other automatic detection algorithms. Combined with its efficiency, the algorithm is suitable for real-time epilepsy detection applications. The classification results show high sensitivity and specificity, except for generalized tonic-clonic seizures (GTCS). The next steps are to optimize the compression stage and the final SVM evaluation stage, and to obtain more GTCS data in order to improve the overall classification score for GTCS.
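
    The abstract does not name the fractal-dimension estimator that drives the adaptive segmentation; a common choice for EEG is the Higuchi method, sketched below under that assumption (k_max is an illustrative parameter).

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal, a measure
    commonly used to drive adaptive segmentation of EEG recordings."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_lk, log_inv_k = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)               # subsampled curve x[m::k]
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()   # curve length at scale k
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi normalization
            lengths.append(dist * norm / k)
        log_lk.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    # Curve length scales as L(k) ~ k**(-D); the fitted slope recovers D.
    slope, _ = np.polyfit(log_inv_k, log_lk, 1)
    return slope

# Sanity check: a random walk has a fractal dimension near 1.5.
rng = np.random.default_rng(0)
print(higuchi_fd(np.cumsum(rng.standard_normal(2048))))
```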

    Image Compression Using SPIHT with Modified Spatial Orientation Trees

    A new way of reordering the spatial orientation tree of SPIHT to improve compression efficiency for monochrome and color images is proposed. The reordering ensures that the SPIHT algorithm codes more significant information in the initial bits. For monochrome images, the lists of insignificant pixels and insignificant sets are initialized with fewer coefficients than in conventional SPIHT. For color images, an altered parent-offspring relationship and an extra level of wavelet decomposition on the chrominance planes are used. Compared with conventional schemes, a PSNR improvement of 32.06% was achieved at 0.01 bpp for monochrome images and of 19.76% at 0.05 bpp for color images.
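
    For reference, the conventional parent-offspring relationship that the proposed reordering modifies can be sketched as follows; coordinates are (row, column) indices into the wavelet-coefficient array, and the special cases in the coarsest LL band are omitted for brevity.

```python
def offspring(i, j, rows, cols):
    """Conventional SPIHT spatial orientation tree: a coefficient at (i, j)
    has four children at (2i, 2j), (2i, 2j+1), (2i+1, 2j) and (2i+1, 2j+1)
    in the next finer scale (LL-band special cases omitted)."""
    if 2 * i + 1 >= rows or 2 * j + 1 >= cols:
        return []  # finest scale: no offspring
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
```

    Conventional SPIHT seeds its lists from the coarsest subband using this mapping; the paper's contribution is to reorder the tree (and, for color images, alter this parent-offspring relation) so that significant coefficients are visited earlier.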

    Wavelet-Based Embedded Rate Scalable Still Image Coders: A Review

    Embedded scalable image coding algorithms based on the wavelet transform have recently received considerable attention in academia and in industry, in terms of both coding algorithms and standards activity. In addition to providing very good coding performance, an embedded coder has the property that its bit stream can be truncated at any point and still decode to a reasonably good image. In this paper we present some state-of-the-art wavelet-based embedded rate scalable still image coders, and we also present the JPEG2000 still image compression standard.
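
    The truncation property can be illustrated with a toy bit-plane scheme (a sketch of the general idea, not any specific coder's bitstream syntax): coefficients are sent most-significant bit plane first, so decoding any prefix of the stream yields a progressively refined approximation.

```python
import numpy as np

def encode_bitplanes(coeffs, n_planes=8):
    """Split integer coefficients into signs plus bit planes, MSB first."""
    sign = np.sign(coeffs)
    mag = np.abs(coeffs).astype(int)
    planes = [(mag >> p) & 1 for p in range(n_planes - 1, -1, -1)]
    return sign, planes

def decode_prefix(sign, planes, k, n_planes=8):
    """Reconstruct from only the first k bit planes (a truncated stream)."""
    mag = np.zeros_like(sign, dtype=int)
    for plane in planes[:k]:
        mag = (mag << 1) | plane
    mag <<= n_planes - k  # restore the original magnitude scale
    return sign * mag

coeffs = np.array([87, -34, 5, -120])
sign, planes = encode_bitplanes(coeffs)
print(decode_prefix(sign, planes, 3))  # coarse approximation: [64 -32 0 -96]
print(decode_prefix(sign, planes, 8))  # exact reconstruction
```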

    Contemporary Affirmation of SPIHT Improvements in Image Coding

    Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Since its introduction in 1996, SPIHT has attracted a great deal of interest. It is considerably simpler and more efficient than many existing compression methods: it is a fully embedded codec, provides good image quality and high PSNR, is well suited to progressive image transmission, combines efficiently with error protection, and can provide information on demand. Still, it has some drawbacks that need to be removed for it to be used to best effect, and since its development it has undergone many modifications to its original form. This document presents a survey of improvements to SPIHT in areas such as speed, redundancy, quality, error resilience, complexity, compression ratio, and memory requirements.

    Ultrafast and Efficient Scalable Image Compression Algorithm

    Wavelet-based image compression algorithms have good performance and produce a rate scalable bitstream that can be decoded efficiently at several bit rates. Unfortunately, the discrete wavelet transform (DWT) has relatively high computational complexity. The discrete cosine transform (DCT), on the other hand, has low complexity and excellent compaction properties, but it is non-local, which necessitates implementing it as a block-based transform and leads to the well-known blocking artifacts at the edges of the DCT blocks. This paper proposes a very fast, rate scalable algorithm that exploits the low complexity of both the DCT and the set-partitioning technique used by wavelet-based algorithms. Like JPEG, the proposed algorithm first transforms the image using a block-based DCT. It then rearranges the DCT coefficients into a wavelet-like structure. Finally, the rearranged image is coded using a modified version of the SPECK algorithm, one of the best-known wavelet-based algorithms. The modified SPECK consumes slightly less memory and has slightly lower complexity and slightly better performance than the original SPECK. The experimental results demonstrate that the proposed algorithm has competitive performance and high processing speed; consequently, it has the best performance-to-complexity ratio among current rate scalable algorithms.
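
    The rearrangement step can be illustrated as follows: after the block DCT, coefficients sharing the same frequency index are gathered from every block into a single plane, giving a subband-like layout that a set-partitioning coder such as SPECK can traverse. This uniform grouping is a simplification for illustration; the paper's dyadic, wavelet-like rearrangement is a refinement of the same idea.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_to_subbands(img, B=8):
    """Apply a B x B block DCT, then regroup coefficient (u, v) of every
    block into one plane, producing a subband-like coefficient layout."""
    h, w = img.shape
    nby, nbx = h // B, w // B
    out = np.empty((h, w), dtype=float)
    for by in range(nby):
        for bx in range(nbx):
            block = img[by * B:(by + 1) * B, bx * B:(bx + 1) * B]
            c = dctn(block.astype(float), norm="ortho")  # 2-D DCT-II
            for u in range(B):
                for v in range(B):
                    # Coefficient (u, v) of block (by, bx) lands at
                    # position (by, bx) inside "subband" (u, v).
                    out[u * nby + by, v * nbx + bx] = c[u, v]
    return out

subbands = block_dct_to_subbands(np.random.rand(64, 64))
```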