13,445 research outputs found

    Rate-control algorithms for non-embedded wavelet-based image coding

    During the last decade, there has been increasing interest in the design of very fast wavelet image encoders aimed at specific applications such as interactive real-time image and video systems running on power-constrained devices (digital cameras, mobile phones), where coding delay and the available computing resources (working memory and processing power) are critical for proper operation. To reduce complexity, most of these fast wavelet image encoders are non-(SNR)-embedded and, as a consequence, do not support precise rate control. In this work, we propose simple rate-control algorithms for this kind of encoder and analyze their impact to determine whether, despite their inclusion, the overall encoder remains competitive with popular embedded encoders such as SPIHT and JPEG2000. We focus on the non-embedded LTW encoder, showing that even with the complexity added by the rate-control algorithms, LTW remains competitive with SPIHT and JPEG2000 in terms of R/D performance, coding delay and memory consumption. © Springer Science+Business Media, LLC 2011. This work was funded by the Spanish Ministry of Education and Science under grant DPI2007-66796-C03-03.
    Lopez Granado, OM.; Onofre Martinez-Rach, M.; Pinol Peral, P.; Oliver Gil, JS.; Perez Malumbres, MJ. (2012). Rate-control algorithms for non-embedded wavelet-based image coding. Journal of Signal Processing Systems, 68(2), 203-216. https://doi.org/10.1007/s11265-011-0598-6
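    The abstract does not spell out the algorithms themselves, but the general shape of rate control for a non-embedded coder can be illustrated. Below is a minimal sketch, assuming a single uniform quantization step and a crude, invented bit-cost model (encode_size is a hypothetical stand-in, not taken from the paper): bisect on the step size until the estimated output fits the bit budget.

```python
import numpy as np

def encode_size(coeffs, q):
    """Crude hypothetical bit-cost model (invented for this sketch):
    count coefficients that survive uniform quantization and charge
    a fixed number of bits per significant coefficient."""
    significant = np.count_nonzero(np.round(coeffs / q))
    return significant * 6          # 6 bits/coeff is an arbitrary stand-in

def rate_control(coeffs, target_bits, q_lo=0.1, q_hi=1024.0, iters=25):
    """Bisection on the quantization step: a larger step spends fewer
    bits, so raise the lower bound while over budget."""
    for _ in range(iters):
        q = 0.5 * (q_lo + q_hi)
        if encode_size(coeffs, q) > target_bits:
            q_lo = q                # over budget -> quantize more coarsely
        else:
            q_hi = q                # under budget -> refine
    return q_hi                     # at or under the target by construction

# toy usage: wavelet coefficients are roughly Laplacian-distributed
coeffs = np.random.laplace(scale=10.0, size=(512, 512))
q = rate_control(coeffs, target_bits=0.5 * coeffs.size)   # ~0.5 bpp budget
```

    In a real encoder, the cost model would be the actual size of a trial encoding, or a fitted rate-quantization curve that trades accuracy for speed.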

    On the design of fast and efficient wavelet image coders with reduced memory usage

    Image compression is of great importance in multimedia systems and applications because it drastically reduces bandwidth requirements for transmission and memory requirements for storage. Although earlier standards for image compression were based on the Discrete Cosine Transform (DCT), a more recently developed mathematical technique, the Discrete Wavelet Transform (DWT), has been found to be more efficient for image coding. Despite improvements in compression efficiency, wavelet image coders significantly increase memory usage and complexity compared with DCT-based coders. A major reason for the high memory requirements is that the usual algorithm to compute the wavelet transform requires the entire image to be in memory. Although some proposals reduce memory usage, they present problems that hinder their implementation. In addition, some wavelet image coders, like SPIHT (which has become a benchmark for wavelet coding), always need to hold the entire image in memory. Regarding complexity, SPIHT can be considered quite complex because it performs bit-plane coding with multiple image scans. The wavelet-based JPEG 2000 standard is more complex still, because it improves coding efficiency through time-consuming methods, such as an iterative optimization algorithm based on the Lagrange multiplier method and high-order context modeling. In this thesis, we aim to reduce memory usage and complexity in wavelet-based image coding while preserving compression efficiency. To this end, a run-length encoder and a tree-based wavelet encoder are proposed. In addition, a new algorithm to efficiently compute the wavelet transform is presented. This algorithm achieves low memory consumption using line-by-line processing, and it employs recursion to automatically determine the order in which the wavelet transform is computed, solving synchronization problems that had not been tackled by previous proposals. The proposed encode…
    Oliver Gil, JS. (2006). On the design of fast and efficient wavelet image coders with reduced memory usage [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1826
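    As an illustration of the line-by-line idea, here is a minimal sketch of one horizontal level of the LeGall 5/3 lifting transform applied row by row, so only the current scanline is ever buffered. This is not the thesis's algorithm: the recursive scheduling of vertical filtering across decomposition levels, which is the thesis's actual contribution, is not reproduced, and even-length rows are assumed.

```python
import numpy as np

def lift_53(row):
    """One horizontal level of the LeGall 5/3 lifting transform
    (float version, symmetric boundary extension, even-length input)."""
    x = np.asarray(row, dtype=np.float64)
    assert x.size % 2 == 0, "sketch assumes even-length rows"
    even, odd = x[0::2], x[1::2]
    # predict: detail = odd sample minus the mean of its even neighbours
    right = np.append(even[1:], even[-1])
    detail = odd - 0.5 * (even + right)
    # update: approximation = even sample plus a quarter of adjacent details
    left = np.insert(detail[:-1], 0, detail[0])
    approx = even + 0.25 * (left + detail)
    return approx, detail

def horizontal_dwt_streaming(rows):
    """Consume an image one row at a time: only the current row is held
    in memory, which is the point of line-based processing."""
    for row in rows:            # 'rows' can be any iterator over scanlines
        yield lift_53(row)
```

    A full line-based 2-D transform also needs a small line buffer for the vertical filtering at each level; ordering those buffers across levels is the synchronization problem the thesis addresses with recursion.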

    Map online system using internet-based image catalogue

    Digital maps carry geodata, such as coordinates, that is essential in topographic and thematic maps. This geodata is particularly meaningful in the military field. Because the maps carry this information, the image files are very large: the bigger the image, the more storage is required to hold it, and the longer it takes to load. These conditions make raw map images unsuitable for an image-catalogue approach in an Internet environment. With compression techniques, the image size can be reduced while the quality of the image remains largely intact. This report focuses on image compression using wavelet technology, which compares favourably with other image compression techniques in use today. The compressed images are served by a system called Map Online, which uses an Internet-based image-catalogue approach. The system allows users to buy maps online and to download the maps they have bought, as well as to search for maps using several meaningful keywords. This system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realize the organization's vision.

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have seen unprecedented expansion in recent years. Consumers can now benefit from hardware and software that was considered state-of-the-art only a few years ago. The advantages offered by digital technologies are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but in the analogue domain each subsequent copy suffered an inherent loss in quality, which naturally limited multiple copying of video material. With digital technology, this barrier disappears: it is possible to make as many copies as desired without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system, compliant with the recommendations drawn up by the EBU, for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated, leading to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust yet invisible mark. The combination of these methods led to a major improvement, but the system was still not robust to several important geometrical attacks. To reach this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. Using the reference watermark and techniques specific to image registration, the system determines the parameters of the attack and reverts it; once the attack is reverted, the main watermark is recovered. The final result is a high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks. BBC Research & Development
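    A minimal sketch of the additive spread-spectrum idea described above, assuming embedding in one detail subband of a 2-level Haar decomposition (the wavelet, the subband choice, and the parameters alpha and key are illustrative, not the thesis's): each bit is spread over a block of coefficients with a key-seeded pseudo-noise sequence and detected blind by correlation. The error correction, visual models, and registration watermark the thesis adds are omitted.

```python
import numpy as np
import pywt

def embed(image, bits, key=42, alpha=2.0):
    """Additive spread spectrum in one wavelet detail subband:
    c' = c + alpha * (2b - 1) * pn, with a key-seeded PN sequence."""
    coeffs = pywt.wavedec2(np.asarray(image, float), 'haar', level=2)
    cH, cV, cD = coeffs[1]                    # coarsest detail subbands
    flat = cH.reshape(-1)
    chips = flat.size // len(bits)            # coefficients spent per bit
    rng = np.random.default_rng(key)
    for i, b in enumerate(bits):
        pn = rng.choice([-1.0, 1.0], size=chips)
        flat[i*chips:(i+1)*chips] += alpha * (2*b - 1) * pn
    coeffs[1] = (flat.reshape(cH.shape), cV, cD)
    return pywt.waverec2(coeffs, 'haar')

def detect(image, nbits, key=42):
    """Blind detection: regenerate the PN sequences and take the sign
    of the correlation (the host image acts as noise)."""
    coeffs = pywt.wavedec2(np.asarray(image, float), 'haar', level=2)
    flat = coeffs[1][0].reshape(-1)
    chips = flat.size // nbits
    rng = np.random.default_rng(key)
    return [1 if flat[i*chips:(i+1)*chips] @ rng.choice([-1.0, 1.0], size=chips) > 0
            else 0
            for i in range(nbits)]
```

    The sign decision works because the host coefficients correlate with the PN sequence only as zero-mean noise, while the embedded term contributes roughly alpha times the chip count; longer chip sequences therefore buy robustness at the cost of capacity.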

    Regularity scalable image coding based on wavelet singularity detection

    In this paper, we propose an adaptive algorithm for scalable wavelet image coding based on a general feature of images: regularity. In pattern recognition and computer vision, the regularity of an image is estimated from the oriented wavelet coefficients and quantified by Lipschitz exponents. To estimate the Lipschitz exponents, evaluating the interscale evolution of the wavelet transform modulus sum (WTMS) over the directional cone of influence has proven to be a better approach than tracing the wavelet transform modulus maxima (WTMM). The irregular sampling nature of the WTMM complicates the reconstruction process; moreover, examples have been found showing that the WTMM representation cannot uniquely characterize a signal, which implies that reconstruction of a signal from its WTMM may not be consistently stable. Furthermore, the WTMM approach requires much more computational effort. We therefore use the WTMS approach to estimate the regularity of images from the separable wavelet transform coefficients. Since we are not concerned with the localization issue, we allow decimation when evaluating the interscale evolution. Once the regularity has been estimated, this information is used in our proposed adaptive regularity-scalable wavelet image coding algorithm. The algorithm can be embedded into any wavelet image coder, so it is compatible with existing scalable coding techniques, such as resolution-scalable and signal-to-noise-ratio (SNR) scalable coding, without changing the bitstream format, while providing more scalability levels with higher peak signal-to-noise ratios (PSNRs) and lower bit rates. Compared with other feature-based wavelet scalable coding algorithms, the proposed algorithm outperforms them in terms of visual perception, computational complexity and coding efficiency.
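    For intuition, here is a toy 1-D version of the WTMS decay estimate: a single global Lipschitz exponent fitted from the interscale growth of modulus sums of an undecimated transform. The paper's method instead evaluates the sums over directional cones of influence of a separable 2-D transform and allows decimation; also, the +1/2 offset below assumes the L2-normalized continuous-transform bound, and the exact offset depends on the transform's normalization, so treat the result as indicative only.

```python
import numpy as np
import pywt

def global_lipschitz(signal, wavelet='db2', levels=4):
    """Fit a global Lipschitz exponent from WTMS interscale decay:
    under |Wf(2^j, x)| <= A * 2^(j*(alpha + 0.5)), the slope of
    log2(modulus sum) versus scale index j is alpha + 0.5."""
    x = np.asarray(signal, dtype=np.float64)
    # undecimated transform -> one coefficient per sample at every scale,
    # so modulus sums are comparable (length must be a multiple of 2**levels)
    coeffs = pywt.swt(x, wavelet, level=levels)
    # pywt lists scales coarsest-first; reverse so j runs 1..levels (fine->coarse)
    sums = [np.abs(d).sum() for _, d in reversed(coeffs)]
    j = np.arange(1, levels + 1)
    slope = np.polyfit(j, np.log2(sums), 1)[0]
    return slope - 0.5

# toy check: a cusp |t - 0.5|**0.6 is Lipschitz-0.6 at t = 0.5
t = np.linspace(0.0, 1.0, 1024)
print(global_lipschitz(np.abs(t - 0.5) ** 0.6))   # near 0.6, modulo normalization
```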

    Hyperspectral image compression : adapting SPIHT and EZW to Anisotropic 3-D Wavelet Coding

    Hyperspectral images present specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully on several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images; each step of the compression algorithm is studied and optimized. First, an algorithm is defined to find the optimal 3-D wavelet decomposition in a rate-distortion sense. It is then shown that a specific fixed decomposition achieves almost the same performance while being preferable in terms of complexity, and that this decomposition significantly improves on the classical isotropic decomposition. One of its most useful properties is that it permits the use of zerotree algorithms. Various tree structures, which create relationships between coefficients, are compared. Two efficient zerotree compression methods (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six areas presenting different statistical properties.
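    The anisotropic idea (decompose the spectral axis more deeply than the spatial axes, rather than splitting all three axes in lockstep) can be sketched as follows. The decomposition below is a generic hybrid, not the specific fixed structure the paper arrives at; the wavelet choice and level counts are illustrative.

```python
import numpy as np
import pywt

def anisotropic_3d_dwt(cube, wavelet='db4', spectral_levels=4, spatial_levels=3):
    """Hybrid (anisotropic) decomposition of a (bands, rows, cols) cube:
    a deep 1-D transform along the spectral axis first, then an
    independent 2-D transform on the spatial axes of each spectral
    subband -- unlike an isotropic 3-D transform, which decomposes
    all three axes together."""
    spectral_bands = pywt.wavedec(cube, wavelet, level=spectral_levels, axis=0)
    return [pywt.wavedec2(band, wavelet, level=spatial_levels, axes=(-2, -1))
            for band in spectral_bands]

cube = np.random.rand(128, 64, 64)      # toy stand-in for a hyperspectral scene
coeffs = anisotropic_3d_dwt(cube)
```

    Because every spatial subband tree hangs off a spectral subband, parent-child relationships for zerotree coding can be defined within each spectral band, which is what makes this kind of fixed decomposition compatible with EZW- and SPIHT-style coders.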

    Enabling error-resilient Internet broadcasting using motion compensated spatial partitioning and packet FEC for the Dirac video codec

    Video transmission over wireless or wired networks requires protection from channel errors, since compressed video bitstreams are very sensitive to transmission errors because of the use of predictive coding and variable-length coding. In this paper, a simple, low-complexity and patent-free error-resilient coding scheme is proposed. It is based on the idea of applying spatial partitioning to the motion-compensated residual frame without employing transform coefficient coding. The proposed scheme is intended for the open-source Dirac video codec, to enable the codec to be used for Internet broadcasting. By partitioning the wavelet transform coefficients of the motion-compensated residual frame into groups and independently processing each group with arithmetic coding and Forward Error Correction (FEC), robustness to transmission errors over a packet-erasure wired network is achieved. Using Rate-Compatible Punctured Convolutional (RCPC) codes and Turbo Codes (TC) as the FEC, the proposed technique degrades gracefully in perceptual quality at packet loss rates of up to 30%, and its PSNR performance is much better than that of conventional data-partitioning-only methods. Simulation results show that the multiple partitioning of wavelet coefficients in Dirac can achieve up to 8 dB PSNR gain over the existing un-partitioned method.
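    A minimal sketch of the partitioning-plus-FEC idea, with two deliberate simplifications: the arithmetic coding stage is skipped, and a single XOR parity packet stands in for the RCPC/Turbo codes used in the paper (it can recover one lost group per parity packet).

```python
import numpy as np

def partition(coeffs, n_groups):
    """Interleave the quantized residual-frame wavelet coefficients into
    independent groups: a lost packet then damages only one group."""
    flat = np.asarray(coeffs).reshape(-1)
    assert flat.size % n_groups == 0, "sketch assumes equal-sized groups"
    return [flat[g::n_groups].copy() for g in range(n_groups)]

def xor_parity(groups):
    """Toy erasure code: one XOR parity packet recovers any single lost
    group (a stand-in for the RCPC/Turbo codes used in the paper)."""
    parity = np.zeros_like(groups[0])
    for g in groups:
        parity ^= g
    return parity

def recover(groups, parity, lost):
    """Rebuild the erased group by XOR-ing the parity with the survivors."""
    rebuilt = parity.copy()
    for i, g in enumerate(groups):
        if i != lost:
            rebuilt ^= g
    return rebuilt

residual = np.random.randint(-32, 32, size=(64, 64), dtype=np.int16)  # toy quantized frame
groups = partition(residual, 4)
assert np.array_equal(recover(groups, xor_parity(groups), lost=2), groups[2])
```

    The interleaved split mirrors the graceful-degradation goal: losing one packet removes a scattered subset of coefficients rather than a contiguous image region, and any surviving parity lets the receiver rebuild it exactly.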