    Adaptive edge-based prediction for lossless image compression

    Many lossless image compression methods have been proposed, with established results that are hard to surpass. However, some aspects can still be improved, and this research focuses on the two-phase prediction-encoding method, studying each phase separately and suggesting new techniques. In the prediction module, the proposed Edge-Based Predictor (EBP) and Least-Squares Edge-Based Predictor (LS-EBP) emphasize image edges and make predictions accordingly. EBP is a gradient-based nonlinear adaptive predictor that switches between prediction rules according to a few threshold parameters, determined automatically by a pre-analysis procedure that makes a first pass over the image. LS-EBP uses the same parameters but optimizes the prediction at each edge location identified by the pre-analysis, applying the least-squares approach only at edge points. For the encoding module, a novel method inspired by the Burrows-Wheeler Transform (BWT) is suggested, which performs better than applying the BWT directly to the images. We also present a context-based adaptive error modeling and encoding scheme. Coupled with the above prediction schemes, the result is the best-known compression performance among compression schemes of the same time and space complexity.
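    The abstract does not spell out EBP's exact prediction rules, so the sketch below is only illustrative: a gradient-switched predictor in the spirit of CALIC's GAP, which picks the left or upper neighbor near sharp edges and blends neighbors in smooth regions. The threshold here is a fixed constant, whereas EBP/LS-EBP derive such parameters automatically in the pre-analysis pass.

        import numpy as np

        def gradient_edge_predict(img, t_sharp=80):
            # Gradient-switched prediction (illustrative, not the paper's EBP).
            # t_sharp is a fixed example threshold; EBP sets such parameters
            # automatically during its pre-analysis first pass.
            p = img.astype(np.int32)
            h, w = p.shape
            pred = p.copy()  # border pixels left as-is for simplicity
            for y in range(2, h):
                for x in range(2, w - 1):
                    W, WW = p[y, x - 1], p[y, x - 2]
                    N, NN = p[y - 1, x], p[y - 2, x]
                    NW, NE = p[y - 1, x - 1], p[y - 1, x + 1]
                    d_h = abs(W - WW) + abs(N - NW) + abs(N - NE)
                    d_v = abs(W - NW) + abs(N - NN)
                    if d_v - d_h > t_sharp:      # sharp horizontal edge
                        pred[y, x] = W
                    elif d_h - d_v > t_sharp:    # sharp vertical edge
                        pred[y, x] = N
                    else:                        # smooth region: blend neighbors
                        pred[y, x] = (W + N) // 2 + (NE - NW) // 4
            return pred

        img = np.random.randint(0, 256, (64, 64))
        residual = img - gradient_edge_predict(img)  # fed to the entropy coder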

    Model for Estimation of Bounds in Digital Coding of Seabed Images

    This paper proposes a novel model for estimating bounds in digital coding of images. Entropy coding is exploited to measure the useful information content of the data. The bit rate achieved by reversible compression, obtained through a rate-distortion theory approach, accounts for both the contribution of the observation noise and the intrinsic information of a hypothetical noise-free image. Assuming a Laplacian probability density function for the quantizer input signal, SQNR gains are calculated for an image predictive coding system with a non-adaptive quantizer under white and correlated noise, respectively. The proposed model is evaluated on seabed images, but it can be applied to any signal with a Laplacian distribution.
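    The rate and SQNR calculations rest on the Laplacian assumption; for reference, the zero-mean Laplacian density, its variance, and its differential entropy (standard results, not formulas quoted from the paper) are

        p(x) = \frac{1}{2b}\, e^{-|x|/b}, \qquad \sigma^2 = 2b^2, \qquad h(X) = \log_2(2be) = \log_2\!\left(\sqrt{2}\, e\, \sigma\right) \ \text{bits},

    with SQNR defined as usual as \mathrm{SQNR} = 10 \log_{10}\!\left(\sigma_x^2 / \sigma_q^2\right) dB, where \sigma_q^2 is the quantization error variance.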

    Lossless Image Compression Exploiting a Streaming Model for Execution on Multicore Computers

    Image and video coding play a critical role in today's multimedia systems, ranging from entertainment to specialized applications such as telemedicine. Usually they are hand-customized for every target architecture in order to meet performance requirements, an approach that is neither portable nor scalable. With the advent of multicores, new challenges emerged for programmers, related both to efficient utilization of the additional resources and to scalable performance. For image and video processing applications, the streaming model of computation has proven effective in tackling these challenges. In this paper, we report our efforts to improve the execution performance of CBPC, our compute-intensive lossless image compression algorithm described in [1]. The algorithm is based on highly adaptive and predictive modeling and outperforms many other methods in compression efficiency, although at increased complexity. We employ a high-level performance optimization approach that exploits the streaming model for scalability and portability. We achieve this by detecting the computationally demanding parts of the algorithm and implementing them in StreamIt, an architecture-independent stream language whose goal is to improve programming productivity and parallelization efficiency by exposing the parallelism and communication pattern. We also developed an interface that enables the integration and hosting of streaming kernels in a host application written in a general-purpose language.
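    StreamIt expresses computations as filters composed into pipelines and split-joins; the abstract does not show the actual kernels, so the following Python sketch only illustrates the split-join pattern applied to a row-independent prediction step, with a hypothetical stand-in for the CBPC kernel:

        from multiprocessing import Pool

        import numpy as np

        def predict_rows(block):
            # Stand-in kernel: left-neighbor prediction, independent per row,
            # so row blocks can be processed in parallel without communication.
            residual = block.copy()
            residual[:, 1:] -= block[:, :-1]
            return residual

        def streamed_predict(img, workers=4):
            # Split: scatter row blocks to workers; join: reassemble in order.
            blocks = np.array_split(img.astype(np.int32), workers, axis=0)
            with Pool(workers) as pool:
                return np.vstack(pool.map(predict_rows, blocks))

        if __name__ == "__main__":
            img = np.random.randint(0, 256, (512, 512))
            residual = streamed_predict(img)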

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews state-of-the-art DVC architectures, focusing on their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
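    For reference, the Slepian–Wolf theorem states that two correlated sources X and Y can be encoded separately and still decoded jointly without loss, provided the rates satisfy

        R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y),

    and the Wyner–Ziv theorem extends this to lossy coding of X with side information Y available only at the decoder. In DVC these results justify shifting the burden of exploiting inter-frame correlation from the encoder to the decoder, which suits the resource-constrained cameras of a WVSN.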

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    The field of lossless image compression studies the various ways to represent image data in the most compact and efficient manner possible while still allowing the image to be reproduced without any loss. One of the most effective strategies in lossless compression is entropy reduction through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless compression of continuous-tone images. The exclusive-or operator is simply and reversibly applied to continuous-tone images to extract the differences between neighboring pixels, and its implementation introduces no data expansion. Traditional as well as novel prediction methods are included to create the inputs for the exclusive-or-based decorrelation filter. The filter's output is then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase because it does not require the storage of code tables or probabilities and because it is lower in complexity than popular alternatives such as Huffman or arithmetic coding. The first modification of the Lempel-Ziv-Welch coder is that image data can be read in a sequence that is linear, 2-dimensional, or an adaptive combination of both. The second modification is that the coder can maintain multiple, dynamically chosen dictionaries. Experiments indicate that the exclusive-or-based decorrelation filter, combined with the modified Lempel-Ziv-Welch coder, provides compression comparable to algorithms that represent the current standard in lossless compression: the proposed algorithm falls below the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm by 23%, below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm by 19%, and below the Portable Network Graphics implementation of the Deflate algorithm by 7%, but exceeds the Zip implementation of Deflate by 24%. In sum, the proposed algorithm uses the exclusive-or operator in the modeling phase and modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low-complexity, reversible, and dynamic method of lossless image compression.
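    A key property of the exclusive-or filter is that it is exactly reversible and never expands the data. The sketch below shows only the simplest variant, XOR-ing each pixel with its left neighbor; the thesis additionally drives the filter with traditional and novel prediction methods, which are not reproduced here.

        import numpy as np

        def xor_decorrelate(img):
            # XOR each pixel with its left neighbor; first column kept as-is.
            out = img.copy()
            out[:, 1:] ^= img[:, :-1]
            return out

        def xor_restore(filtered):
            # Inverse: running XOR across each row restores the original.
            out = filtered.copy()
            for x in range(1, out.shape[1]):
                out[:, x] ^= out[:, x - 1]
            return out

        img = np.random.randint(0, 256, (4, 8), dtype=np.uint8)
        assert np.array_equal(img, xor_restore(xor_decorrelate(img)))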

    A High-Performance Lossless Compression Scheme for EEG Signals Using Wavelet Transform and Neural Network Predictors

    The development of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provides timely and meaningful solutions to exponentially growing patient data volumes and the associated analysis requirements. Among 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, hence the need for an efficient compression system. This paper presents a new, efficient, high-performance lossless EEG compression scheme using wavelet transforms and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, the Lempel-Ziv-arithmetic encoder, and a new context-based error modeling scheme is also investigated to improve compression efficiency. A compression ratio of 2.99 (a compression efficiency of 67%) is achieved with the proposed scheme with reduced encoding time, providing diagnostic reliability for lossless transmission as well as recovery of EEG signals in telemedicine applications.
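    The quoted 67% follows directly from the compression ratio, assuming compression efficiency is defined in the usual way as the fractional size reduction:

        \text{efficiency} = \left(1 - \frac{1}{\mathrm{CR}}\right) \times 100\% = \left(1 - \frac{1}{2.99}\right) \times 100\% \approx 67\%.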

    High-Performance Lossless Compression of Hyperspectral Remote Sensing Scenes Based on Spectral Decorrelation

    The capacity of the downlink channel is a major bottleneck for applications based on remote sensing hyperspectral imagery (HSI). Data compression is an essential tool to maximize the number of HSI scenes that can be retrieved on the ground. At the same time, energy and hardware constraints of spaceborne devices impose limitations on the complexity of practical compression algorithms. To avoid any distortion in the analysis of the HSI data, only lossless compression is considered in this study. This work aims at finding the most advantageous compression-complexity trade-off within the state of the art in HSI compression. To do so, a novel comparison of the most competitive spectral decorrelation approaches combined with the best-performing low-complexity compressors of the state of the art is presented. Compression performance and execution time results are obtained for a set of 47 HSI scenes produced by 14 different sensors in real remote sensing missions. Assuming only a limited amount of energy is available, the obtained data suggest that the FAPEC algorithm yields the best trade-off. Compared to the CCSDS 123.0-B-2 standard, FAPEC is 5.0 times faster and its compressed data rates are on average within 16% of the standard's. In scenarios where energy constraints can be relaxed, CCSDS 123.0-B-2 yields the best average compression results of all evaluated methods.
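    Spectral decorrelation exploits the strong correlation between adjacent bands of an HSI cube. As a minimal illustration only (production codecs such as CCSDS 123.0-B-2 use adaptive linear prediction over several previous bands, not plain differencing):

        import numpy as np

        def spectral_decorrelate(cube):
            # cube: (bands, rows, cols); subtract the previous band from each
            # band. Band 0 is left intact as the reference.
            residual = cube.astype(np.int32).copy()
            residual[1:] -= cube[:-1].astype(np.int32)
            return residual

        def spectral_restore(residual):
            # Inverse: cumulative sum along the band axis.
            return np.cumsum(residual, axis=0, dtype=np.int64).astype(np.int32)

        cube = np.random.randint(0, 4096, (14, 64, 64))  # synthetic 12-bit cube
        assert np.array_equal(cube, spectral_restore(spectral_decorrelate(cube)))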