
    Row-Centric Lossless Compression of Markov Images

    Motivated by the question of whether the recently introduced Reduced Cutset Coding (RCC) offers rate-complexity performance benefits over conventional context-based conditional coding for sources with two-dimensional Markov structure, this paper compares several row-centric coding strategies that vary in the amount of conditioning as well as whether a model or an empirical table is used in the encoding of blocks of rows. The conclusion is that, at least for sources exhibiting low-order correlations, 1-sided model-based conditional coding is superior to the method of RCC for a given constraint on complexity, and conventional context-based conditional coding is nearly as good as the 1-sided model-based coding. Comment: submitted to ISIT 201
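    As a toy illustration of the 1-sided conditional coding compared above (this is not the paper's code; the binary test image and the left-neighbour-only context are assumptions for demonstration), the sketch below estimates the achievable rate as the empirical conditional entropy of a pixel given its left neighbour.

```python
# Hypothetical sketch, not the paper's implementation: estimate the rate of
# 1-sided conditional coding of a binary image from empirical context counts.
import numpy as np

def one_sided_conditional_rate(img):
    """Empirical bits/pixel when each pixel is coded conditioned only on its
    left neighbour (a 1-sided context)."""
    img = np.asarray(img, dtype=np.uint8)
    counts = np.zeros((2, 2), dtype=np.float64)          # counts[left, current]
    for row in img:
        for left, cur in zip(row[:-1], row[1:]):
            counts[left, cur] += 1
    joint = counts / counts.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        cond = counts / counts.sum(axis=1, keepdims=True)  # P(current | left)
        bits = -np.nansum(joint * np.log2(cond))           # H(current | left)
    return bits

if __name__ == "__main__":
    # Toy binary image: each row is a first-order Markov chain with flip prob. 0.1.
    rng = np.random.default_rng(0)
    img = np.zeros((64, 64), dtype=np.uint8)
    img[:, 0] = rng.integers(0, 2, 64)
    for x in range(1, 64):
        flip = rng.random(64) < 0.1
        img[:, x] = np.where(flip, 1 - img[:, x - 1], img[:, x - 1])
    print(f"approx. rate: {one_sided_conditional_rate(img):.3f} bits/pixel")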

    Empirical analysis of BWT-based lossless image compression

    The Burrows-Wheeler Transformation (BWT) is a text transformation algorithm originally designed to improve the coherence in text data. This coherence can be exploited by compression algorithms such as run-length encoding or arithmetic coding. However, there is still a debate on its performance on images. Motivated by a theoretical analysis of the performance of BWT and MTF, we perform a detailed empirical study on the role of MTF in compressing images with the BWT. This research studies the compression performance of BWT on digital images using different predictors and context partitions. The major interest of the research is in finding efficient ways to make BWT suitable for lossless image compression.

    This research studied three different approaches to improving the compression of image data with BWT. First, the idea of preprocessing the image data before sending it to the BWT compression scheme is studied by using different mapping and prediction schemes. Second, different variations of MTF were investigated to see which one works best for image compression with BWT. Third, the concept of context partitioning was applied to the BWT output before it is forwarded to the next stage of the compression scheme.

    For lossless image compression, this thesis proposes the removal of the MTF stage from the BWT compression pipeline and the use of a context partitioning method. The compression performance is further improved by applying the MED predictor to the image data, along with an 8-bit mapping of the prediction residuals, before the data are processed by BWT.

    This thesis proposes two schemes for BWT-based image coding, namely BLIC and BLICx, the latter being based on the context-ordering property of the BWT. Our methods outperformed other text compression algorithms such as PPM, GZIP, direct BWT, and WinZip in compressing images. Final results showed that our methods performed better than state-of-the-art lossless image compression algorithms, such as JPEG-LS, JPEG2000, CALIC, EDP and PPAM, on the natural images
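    As a rough, hedged sketch of the proposed pipeline (this is not the BLIC/BLICx code; the toy image and the naive O(n^2 log n) BWT are illustrative only, and the context-partitioning and entropy-coding stages are omitted), the following Python fragment applies MED prediction, maps the residuals to 8 bits, and runs a forward BWT over the result.

```python
# Hedged sketch of the preprocessing described above: MED prediction,
# 8-bit mapping of residuals, then a (naive, demo-only) Burrows-Wheeler transform.
import numpy as np

def med_predict(img):
    """JPEG-LS style MED predictor; returns prediction residuals mod 256."""
    img = np.asarray(img, dtype=np.int32)
    res = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            a = img[y, x - 1] if x > 0 else 0                 # left
            b = img[y - 1, x] if y > 0 else 0                 # above
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0   # upper-left
            if c >= max(a, b):
                pred = min(a, b)
            elif c <= min(a, b):
                pred = max(a, b)
            else:
                pred = a + b - c
            res[y, x] = (img[y, x] - pred) % 256              # 8-bit mapping
    return res.astype(np.uint8)

def bwt(data, sentinel=b"\x00"):
    """Naive forward Burrows-Wheeler transform of a byte string (demo only)."""
    s = bytes(data) + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(r[-1] for r in rotations)

if __name__ == "__main__":
    img = np.tile(np.arange(16, dtype=np.uint8), (16, 1))     # toy gradient image
    residuals = med_predict(img)
    transformed = bwt(residuals.tobytes())
    print(len(transformed), "bytes after BWT of mapped residuals")
```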

    Lossless image compression exploiting the streaming model for execution on multicore computers

    Image and video coding play a critical role in present multimedia systems, ranging from entertainment to specialized applications such as telemedicine. Usually, they are hand-customized for every intended architecture in order to meet performance requirements. This approach is neither portable nor scalable. With the advent of multicores, new challenges emerged for programmers, related both to efficient utilization of the additional resources and to scalable performance. For image and video processing applications, the streaming model of computation has proved effective in tackling these challenges. In this paper, we report our efforts to improve the execution performance of CBPC, our compute-intensive lossless image compression algorithm described in [1]. The algorithm is based on highly adaptive and predictive modeling, outperforming many other methods in compression efficiency, although with increased complexity. We employ a high-level performance optimization approach that exploits the streaming model for scalability and portability. We achieve this by detecting computationally demanding parts of the algorithm and implementing them in StreamIt, an architecture-independent stream language whose goal is to improve programming productivity and parallelization efficiency by exposing the parallelism and communication pattern. We developed an interface that enables the integration and hosting of streaming kernels in a host application developed in a general-purpose language.
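    The StreamIt kernels themselves are not reproduced here; as a loose, hedged analogy of the split/join streaming idea in Python (the kernel, the block partitioning and the worker count are illustrative assumptions, not the CBPC prediction model), the sketch below fans independent row blocks out to worker processes and joins the results.

```python
# Not the authors' StreamIt code: a minimal Python analogy of a split/join
# stream graph, running a stand-in prediction kernel over row blocks in parallel.
import numpy as np
from multiprocessing import Pool

def predict_block(block):
    """Stand-in for a compute-intensive prediction kernel: left-neighbour
    prediction with residuals wrapped to 8 bits."""
    block = block.astype(np.int32)
    residuals = np.empty_like(block)
    residuals[:, 0] = block[:, 0]
    residuals[:, 1:] = (block[:, 1:] - block[:, :-1]) % 256
    return residuals.astype(np.uint8)

def streamed_prediction(img, n_workers=4):
    """Split the image into row blocks, fan them out to workers, join results."""
    blocks = np.array_split(img, n_workers, axis=0)
    with Pool(n_workers) as pool:
        return np.vstack(pool.map(predict_block, blocks))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
    res = streamed_prediction(img)
    print(res.shape, res.dtype)
```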

    A family of stereoscopic image compression algorithms using wavelet transforms

    With the standardization of JPEG-2000, wavelet-based image and video compression technologies are gradually replacing the popular DCT-based methods. In parallel to this, recent developments in autostereoscopic display technology are now threatening to revolutionize the way in which consumers are used to enjoying traditional 2D-display-based electronic media such as television, computers and movies. However, due to the two-fold bandwidth/storage requirement of stereoscopic imaging, an essential requirement of a stereo imaging system is efficient data compression. In this thesis, seven wavelet-based stereo image compression algorithms are proposed to take advantage of the higher data compaction capability and better flexibility of wavelets.

    In the proposed CODEC I, block-based disparity estimation/compensation (DE/DC) is performed in the pixel domain. However, this results in an inefficiency when the DWT is applied to the whole predictive error image produced by the DE process, because of the artificial block boundaries between error blocks in the predictive error image. To overcome this problem, in the remaining proposed CODECs, DE/DC is performed in the wavelet domain. Due to the multiresolution nature of the wavelet domain, two methods of disparity estimation and compensation have been proposed. The first method performs DE/DC in each subband of the lowest/coarsest resolution level and then propagates the disparity vectors obtained to the corresponding subbands of higher/finer resolution; note that DE is not performed in every subband because of the high overhead bits that could be required for coding the disparity vectors of all subbands. This method is used in CODEC II. In the second method, DE/DC is performed in the wavelet-block domain, which enables disparity estimation to be performed in all subbands simultaneously without increasing the overhead bits required for coding the disparity vectors. This method is used by CODEC III. However, performing disparity estimation/compensation in all subbands would result in a significant improvement of CODEC III. To further improve the performance of CODEC III, a pioneering wavelet-block search technique is implemented in CODEC IV. This technique enables the right/predicted image to be reconstructed at the decoder end without the need to transmit the disparity vectors. In the proposed CODEC V, pioneering block search is performed in all subbands of the DWT decomposition, which results in an improvement of its performance. Further, CODECs IV and V are able to operate at very low bit rates (< 0.15 bpp). In CODEC VI and CODEC VII, Overlapped Block Disparity Compensation (OBDC) is used with and without the need to code disparity vectors. Our experimental results showed that no significant coding gains could be obtained for these CODECs over CODECs IV and V.

    All CODECs proposed in this thesis are wavelet-based stereo image coding algorithms that maximise the flexibility and benefits offered by wavelet transform technology when applied to stereo imaging. In addition, the use of a baseline-JPEG coding architecture would enable the easy adaptation of the proposed algorithms within systems originally built for DCT-based coding. This is an important feature during an era in which DCT-based technology is only slowly being phased out to give way to DWT-based compression technology.

    In addition, this thesis proposes a stereo image coding algorithm that uses JPEG-2000 technology as the basic compression engine. The proposed CODEC, named RASTER, is a rate-scalable stereo image CODEC with a unique ability to preserve image quality at binocular depth boundaries, an important requirement in the design of a stereo image CODEC. The experimental results have shown that the proposed CODEC is able to achieve PSNR gains of up to 3.7 dB compared to directly transmitting the right frame using JPEG-2000
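    As a concrete illustration of the block-based DE/DC step described above, the following Python sketch (not the thesis code; the block size, search range and toy image pair are assumptions) estimates a per-block horizontal disparity map by exhaustive SAD search; applying the same search to wavelet subband coefficients instead of pixels corresponds to the wavelet-domain variants.

```python
# Illustrative sketch only: block-based horizontal disparity estimation by
# exhaustive SAD search, predicting the right view from the left view.
import numpy as np

def estimate_disparity(left, right, block=8, max_disp=32):
    """Return a per-block horizontal disparity map (right predicted from left)."""
    h, w = right.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = right[y:y + block, x:x + block].astype(np.int32)
            best, best_d = None, 0
            for d in range(0, min(max_disp, x) + 1):          # search to the left
                cand = left[y:y + block, x - d:x - d + block].astype(np.int32)
                sad = np.abs(target - cand).sum()             # sum of absolute differences
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    left = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    right = np.roll(left, 5, axis=1)      # toy pair: right view shifted by 5 pixels
    print(estimate_disparity(left, right))
```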

    Estimating the Algorithmic Complexity of Stock Markets

    Randomness and regularities in Finance are usually treated in probabilistic terms. In this paper, we develop a completely different approach, using a non-probabilistic framework based on the algorithmic information theory initially developed by Kolmogorov (1965). We present some elements of this theory and show why it is particularly relevant to Finance, and potentially to other sub-fields of Economics as well. We develop a generic method to estimate the Kolmogorov complexity of numeric series. This approach is based on an iterative "regularity erasing procedure" implemented to use lossless compression algorithms on financial data. Examples are provided with both simulated and real-world financial time series. The contributions of this article are twofold. The first is methodological: we show that some structural regularities, invisible to classical statistical tests, can be detected by this algorithmic method. The second consists of illustrations on the daily Dow Jones Index suggesting that, beyond several well-known regularities, hidden structure in this index may remain to be identified
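    A minimal sketch in the spirit of this compression-based approach (it is not the authors' exact regularity-erasing procedure; the binning scheme and the use of zlib are assumptions) is to discretise log-returns into a small alphabet and compare the compressed size of the observed sequence with that of a shuffled copy: a noticeably smaller ratio for the original hints at structure that the shuffle destroys.

```python
# Hedged sketch: lossless compression as a proxy for the Kolmogorov complexity
# of a discretised return series, compared against a shuffled baseline.
import zlib
import numpy as np

def compression_ratio(symbols):
    """Compressed size / raw size of a 1-byte-per-symbol sequence."""
    raw = bytes(int(s) for s in symbols)
    return len(zlib.compress(raw, 9)) / len(raw)

def discretised_returns(prices, n_bins=8):
    """First regularity-erasing step: log-returns, then quantile binning."""
    returns = np.diff(np.log(prices))
    edges = np.quantile(returns, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(returns, edges)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    prices = np.exp(np.cumsum(rng.normal(0, 0.01, 5000)))   # toy random-walk prices
    symbols = discretised_returns(prices)
    shuffled = rng.permutation(symbols)
    print("series :", compression_ratio(symbols))
    print("shuffle:", compression_ratio(shuffled))
```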

    Compression of Spectral Images
