
    Rate-control algorithms for non-embedded wavelet-based image coding

    During the last decade there has been increasing interest in the design of very fast wavelet image encoders focused on specific applications, such as interactive real-time image and video systems running on power-constrained devices like digital cameras and mobile phones, where coding delay and/or the available computing resources (working memory and processing power) are critical for proper operation. In order to reduce complexity, most of these fast wavelet image encoders are non-(SNR)-embedded and, as a consequence, precise rate control is not supported. In this work, we propose some simple rate-control algorithms for this kind of encoder and analyze their impact to determine whether, despite their inclusion, the overall encoder remains competitive with popular embedded encoders like SPIHT and JPEG2000. We focus on the non-embedded LTW encoder and show that, even with the additional complexity introduced by the rate-control algorithm, LTW remains competitive with SPIHT and JPEG2000 in terms of R/D performance, coding delay and memory consumption. © Springer Science+Business Media, LLC 2011. This work was funded by the Spanish Ministry of Education and Science under grant DPI2007-66796-C03-03.
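    The paper's specific rate-control algorithms are not reproduced here, but the underlying mechanism they enable, namely choosing a quantization coarseness that meets a bit budget before a single non-embedded coding pass, can be sketched. The following is a minimal, hypothetical Python/NumPy illustration: fit_quantizer_to_budget bisects a uniform quantization step until a zeroth-order entropy estimate of the quantized coefficients fits the target budget. The function names, the entropy-based rate model and the toy Laplacian subband are assumptions made for illustration; this is not the LTW rate-control method.

```python
import numpy as np

def estimate_rate_bits(coeffs, q):
    """Rough rate model: zeroth-order entropy (bits) of uniformly quantized coefficients."""
    symbols = np.round(coeffs / q).astype(np.int64)
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) * coeffs.size

def fit_quantizer_to_budget(coeffs, target_bits, q_lo=0.1, q_hi=1024.0, iters=30):
    """Bisect the quantization step until the estimated rate meets the bit budget."""
    for _ in range(iters):
        q = 0.5 * (q_lo + q_hi)
        if estimate_rate_bits(coeffs, q) > target_bits:
            q_lo = q   # over budget: quantize more coarsely
        else:
            q_hi = q   # under budget: a finer step may still fit
    return q_hi        # q_hi always satisfies the budget

# Toy usage: a 64x64 "subband" of Laplacian coefficients and a 0.5 bpp budget.
rng = np.random.default_rng(0)
band = rng.laplace(scale=8.0, size=(64, 64)).ravel()
q = fit_quantizer_to_budget(band, target_bits=0.5 * band.size)
print(f"step {q:.2f} -> {estimate_rate_bits(band, q) / band.size:.3f} bpp")
```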

    Zerotree design for image compression: toward weighted universal zerotree coding

    We consider the problem of optimal, data-dependent zerotree design for use in weighted universal zerotree codes for image compression. A weighted universal zerotree code (WUZC) is a data compression system that replaces the single, data-independent zerotree of Said and Pearlman (see IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 243-250, 1996) with an optimal collection of zerotrees for good image coding performance across a wide variety of possible sources. We describe the weighted universal zerotree encoding and design algorithms but focus primarily on the problem of optimal, data-dependent zerotree design. We demonstrate the performance of the proposed algorithm by comparing, at a variety of target rates, the performance of a Said-Pearlman style code using the standard zerotree to the performance of the same code using a zerotree designed with our algorithm. The comparison is made without entropy coding. The proposed zerotree design algorithm achieves, on a collection of combined text and gray-scale images, up to 4 dB of performance improvement over a Said-Pearlman zerotree.
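    As background to the data-dependent design problem, the sketch below illustrates the standard zerotree significance test that Said-Pearlman-style coders rely on: a coefficient is a zerotree root when it and every descendant across scales fall below the current threshold. This is a hypothetical NumPy toy (the level-list layout, function names and threshold are assumptions), not the weighted universal zerotree design algorithm proposed in the paper.

```python
import numpy as np

def is_zerotree_root(levels, lvl, i, j, threshold):
    """True if coefficient (i, j) at level `lvl` and all of its descendants are insignificant."""
    if abs(levels[lvl][i, j]) >= threshold:
        return False
    if lvl + 1 == len(levels):          # finest level: no children
        return True
    return all(is_zerotree_root(levels, lvl + 1, 2 * i + di, 2 * j + dj, threshold)
               for di in (0, 1) for dj in (0, 1))

def classify_roots(levels, threshold):
    """Label each coarsest-level coefficient: SIG (significant), ZTR (zerotree root) or IZ (isolated zero)."""
    labels = {}
    top = levels[0]
    for i in range(top.shape[0]):
        for j in range(top.shape[1]):
            if abs(top[i, j]) >= threshold:
                labels[(i, j)] = "SIG"
            elif is_zerotree_root(levels, 0, i, j, threshold):
                labels[(i, j)] = "ZTR"
            else:
                labels[(i, j)] = "IZ"
    return labels

# Toy 3-level coefficient tree: 2x2 roots, 4x4 children, 8x8 grandchildren.
rng = np.random.default_rng(1)
levels = [rng.laplace(scale=s, size=(n, n)) for s, n in [(32, 2), (8, 4), (2, 8)]]
print(classify_roots(levels, threshold=16))
```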

    Exploiting Prior Knowledge in Compressed Sensing Wireless ECG Systems

    Recent results in telecardiology show that compressed sensing (CS) is a promising tool for lowering energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of that attained by state-of-the-art wavelet-based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support among the wavelet coefficients of consecutive ECG segments. Experimental results using the MIT-BIH Arrhythmia Database show that the proposed algorithms obtain significant gains in compression rate and reconstruction quality over current CS-based methods. Comment: Accepted for publication in the IEEE Journal of Biomedical and Health Informatics.
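    One generic way such prior knowledge can enter a CS reconstruction is to penalize coefficients on the previously recovered support less than the rest. The sketch below is a hypothetical NumPy illustration of that idea using weighted iterative soft thresholding (ISTA); the function name, the weighting scheme and the toy sensing setup are assumptions and do not reproduce the reconstruction algorithms proposed in the paper.

```python
import numpy as np

def ista_with_support_prior(y, A, prior_support, lam=0.05, w_prior=0.2, iters=300):
    """ISTA for min_x 0.5*||y - A x||^2 + sum_i lam_i*|x_i|, where coefficients that
    were active in the previous segment (prior_support) get a smaller penalty."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data-fit gradient
    lam_i = np.full(n, lam)
    lam_i[prior_support] *= w_prior      # trust the previous support: penalize it less
    x = np.zeros(n)
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam_i / L, 0.0)  # weighted soft threshold
    return x

# Toy usage: a sparse coefficient vector whose support largely repeats across segments.
rng = np.random.default_rng(2)
n, m, k = 256, 100, 12
support = rng.choice(n, k, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
y = A @ x_true
x_hat = ista_with_support_prior(y, A, prior_support=support[: int(0.8 * k)])
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```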

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.

    Regularity scalable image coding based on wavelet singularity detection

    In this paper, we propose an adaptive algorithm for scalable wavelet image coding that is based on a general feature of images: their regularity. In pattern recognition and computer vision, the regularity of an image is estimated from the oriented wavelet coefficients and quantified by Lipschitz exponents. To estimate the Lipschitz exponents, evaluating the interscale evolution of the wavelet transform modulus sum (WTMS) over the directional cone of influence has been proven a better approach than tracing the wavelet transform modulus maxima (WTMM), because the irregular sampling nature of the WTMM complicates the reconstruction process. Moreover, examples have shown that the WTMM representation cannot uniquely characterize a signal, which implies that reconstructing a signal from its WTMM may not be consistently stable; the WTMM approach also requires considerably more computation. Therefore, we use the WTMS approach to estimate the regularity of images from the separable wavelet transform coefficients. Since we are not concerned with the localization issue, we allow decimation when evaluating the interscale evolution. Once the regularity is estimated, this information is used in our proposed adaptive regularity scalable wavelet image coding algorithm. The algorithm can be embedded into any wavelet image coder, so it is compatible with existing scalable coding techniques, such as resolution scalable and signal-to-noise ratio (SNR) scalable coding, without changing the bitstream format, while providing more scalability levels with higher peak signal-to-noise ratios (PSNRs) and lower bit rates. Compared with other feature-based scalable wavelet coding algorithms, the proposed algorithm performs better in terms of visual perception, computational complexity and coding efficiency.
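    The interscale-decay relation behind the WTMS approach can be illustrated with a small toy. In the hypothetical NumPy sketch below, an undecimated, L1-normalized Haar detail is computed at several dyadic scales, the modulus is summed over a cone of influence around a point, and the slope of the log-sum across scales is fitted. For this particular kernel, normalization and cone the sum grows roughly like 2**(j*(alpha+1)), so slope - 1 gives a crude estimate of the local Lipschitz exponent; the kernel, cone width and offset are assumptions of the toy, not the paper's WTMS estimator.

```python
import numpy as np

def haar_detail(x, s):
    """Undecimated, L1-normalized Haar detail coefficients of a 1-D signal at scale s."""
    kernel = np.concatenate([np.ones(s), -np.ones(s)]) / (2 * s)
    return np.convolve(x, kernel, mode="same")

def regularity_estimate(x, pos, levels=(2, 3, 4, 5)):
    """Fit the interscale slope of the log wavelet transform modulus sum over a cone of
    influence around `pos`; slope - 1 is a crude estimate of the Lipschitz exponent."""
    logs = []
    for j in levels:
        s = 2 ** j
        w = haar_detail(x, s)
        cone = np.abs(w[max(0, pos - s): pos + s + 1])   # cone of influence at this scale
        logs.append(np.log2(cone.sum() + 1e-12))
    slope, _ = np.polyfit(levels, logs, 1)
    return slope - 1.0

# Toy usage: a step edge (alpha close to 0) and a ramp kink (alpha close to 1).
t = np.arange(1024)
step = (t >= 512).astype(float)
ramp = np.maximum(0.0, (t - 512) / 256.0)
print("step edge:", round(regularity_estimate(step, 512), 2))
print("ramp kink:", round(regularity_estimate(ramp, 512), 2))
```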

    Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding

    Hyperspectral images have specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully in several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images, with each step of the compression algorithm studied and optimized. First, an algorithm is defined to find the optimal 3-D wavelet decomposition in a rate-distortion sense. It is then shown that a specific fixed decomposition achieves almost the same performance while being preferable in terms of complexity, and that this decomposition significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, creating relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition using the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
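    The structural idea of an anisotropic decomposition, decomposing the spectral axis more deeply than the spatial axes rather than treating all three axes in lockstep, can be sketched with a toy separable Haar transform. The NumPy code below is a hypothetical illustration only: the Haar filters, the 4+2 level split and the function names are assumptions, not the rate-distortion-optimized or fixed decomposition studied in the paper.

```python
import numpy as np

def haar_step(data, axis):
    """One orthonormal Haar analysis step along one axis: returns (lowpass, highpass)."""
    data = np.moveaxis(data, axis, 0)
    even, odd = data[0::2], data[1::2]
    lo = (even + odd) / np.sqrt(2.0)
    hi = (even - odd) / np.sqrt(2.0)
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def spatial_step(a):
    """One 2-D level on the spatial axes (1, 2): returns LL and the (LH, HL, HH) details."""
    lo_r, hi_r = haar_step(a, axis=1)
    ll, lh = haar_step(lo_r, axis=2)
    hl, hh = haar_step(hi_r, axis=2)
    return ll, (lh, hl, hh)

def anisotropic_decompose(cube, spectral_levels=4, spatial_levels=2):
    """Toy anisotropic 3-D DWT for a (bands, rows, cols) cube: the spectral axis is
    decomposed more deeply than the spatial axes, unlike the lockstep isotropic schedule."""
    subbands, approx = [], cube
    for _ in range(spectral_levels):              # deep decomposition along wavelengths
        approx, detail = haar_step(approx, axis=0)
        subbands.append(("spectral detail", detail))
    for _ in range(spatial_levels):               # shallower spatial decomposition
        approx, details = spatial_step(approx)
        subbands.append(("spatial details", details))
    subbands.append(("approximation", approx))
    return subbands

# Toy usage: a 64-band, 128x128 hyperspectral cube.
cube = np.random.default_rng(3).normal(size=(64, 128, 128))
for label, coeffs in anisotropic_decompose(cube):
    shape = coeffs.shape if hasattr(coeffs, "shape") else [c.shape for c in coeffs]
    print(label, shape)
```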

    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
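    A simple way to obtain an embedded description with vector quantizers is a multistage (residual) structure: decoding only the first-stage index gives a coarse reproduction, and each further stage refines it. The NumPy sketch below is a hypothetical illustration of that idea with two k-means codebooks; it is not the fixed- and variable-rate design algorithms introduced in the paper.

```python
import numpy as np

def kmeans(data, k, iters=25, seed=0):
    """Plain Lloyd's algorithm; returns a (k, dim) codebook."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        assign = dist.argmin(axis=1)
        for c in range(k):
            members = data[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook

def encode(v, codebook):
    """Index of the nearest codeword."""
    return int(np.linalg.norm(codebook - v, axis=1).argmin())

# Two-stage residual VQ: the stage-1 index alone is a low-resolution description,
# stage-1 plus stage-2 refines it, so the bit string is embedded.
rng = np.random.default_rng(4)
train = rng.normal(size=(4000, 4))                       # toy 4-D source vectors
cb1 = kmeans(train, 16)                                  # 4 bits per vector (resolution 1)
residuals = train - cb1[[encode(v, cb1) for v in train]]
cb2 = kmeans(residuals, 16)                              # +4 bits per vector (resolution 2)

x = rng.normal(size=4)
i1 = encode(x, cb1)
i2 = encode(x - cb1[i1], cb2)
print("coarse error:", np.linalg.norm(x - cb1[i1]))              # decode the prefix only
print("refined error:", np.linalg.norm(x - (cb1[i1] + cb2[i2]))) # decode the full string
```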