
    Hybrid Approaches to Image Coding: A Review

    Today's digital world is focused heavily on storage space and speed. With the growing demand for better bandwidth utilization, efficient image data compression techniques have emerged as an important factor for image data transmission and storage. To date, different approaches to image compression have been developed, such as classical predictive coding, popular transform coding, and vector quantization. Several second-generation, segmentation-based coding schemes are also gaining popularity. Practical, efficient compression systems based on hybrid coding, which combines the advantages of different traditional image coding methods, have also been developed over the years. In this paper, different hybrid approaches to image compression are discussed. Hybrid coding of images, in this context, means combining two or more traditional approaches so as to enhance the individual methods and achieve better-quality reconstructed images at higher compression ratios. Literature on hybrid techniques of image coding over the past years is also reviewed. An attempt is made to highlight the neuro-wavelet approach for enhancing coding efficiency.
    Comment: 7 pages, 3 figures
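    As a hedged illustration (not taken from the paper), the sketch below shows the kind of hybrid pipeline the review discusses: a wavelet transform to decorrelate the image followed by coarse quantization of the detail subbands. A neuro-wavelet coder would replace the fixed quantizer with a neural stage; the function names and the step size here are our own placeholders.

```python
# Minimal sketch of a hybrid transform + quantization pipeline (illustrative only).
import numpy as np
import pywt  # PyWavelets

def hybrid_encode(image, wavelet="haar", step=16):
    """Decorrelate with a 2-D DWT, then uniformly quantize the detail subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    quantize = lambda band: np.round(band / step).astype(np.int32)
    return cA, tuple(quantize(b) for b in (cH, cV, cD)), step

def hybrid_decode(cA, q_details, step, wavelet="haar"):
    """Dequantize the detail subbands and invert the DWT."""
    details = tuple(band.astype(float) * step for band in q_details)
    return pywt.idwt2((cA, details), wavelet)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64))   # stand-in for a real image
    cA, q, step = hybrid_encode(img)
    rec = hybrid_decode(cA, q, step)
    print("max reconstruction error:", np.abs(rec - img).max())
```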

    A Survey: Various Techniques of Image Compression

    This paper addresses various image compression techniques and, by analyzing them, presents a survey of existing research papers. We examine the different existing methods of image compression. Compressing an image is significantly different from compressing binary raw data, so different techniques are required. The natural questions are how an image should be compressed and which type of technique should be used. For this purpose, two basic classes of methods are introduced, namely lossless and lossy image compression techniques. More recent techniques extend these basic methods, and in some areas neural networks and genetic algorithms are used for image compression.
    Keywords: Image Compression; Lossless; Lossy; Redundancy; Benefits of Compression.
    Comment: 5 pages
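    To make the lossless/lossy distinction concrete, here is a small sketch of our own (not from the survey): run-length encoding recovers the data exactly, while uniform quantization only recovers an approximation.

```python
# Illustrative contrast between a lossless and a lossy step (our example).
import numpy as np

def rle_encode(values):
    """Lossless run-length encoding: the original values are exactly recoverable."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def quantize(values, step=8):
    """Lossy uniform quantization: recovery is only approximate."""
    return np.round(np.asarray(values) / step) * step

row = [12, 12, 12, 200, 200, 40]
print(rle_encode(row))   # [[12, 3], [200, 2], [40, 1]]
print(quantize(row))     # [ 16.  16.  16. 200. 200.  40.]
```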

    Wavelet Video Coding Algorithm Based on Energy Weighted Significance Probability Balancing Tree

    This work presents a 3-D wavelet video coding algorithm. By analyzing the contribution of each biorthogonal wavelet basis to the reconstructed signal's energy, we weight each wavelet subband according to its basis energy. Based on the distribution of the weighted coefficients, we further discuss a 3-D wavelet tree structure named the significance probability balancing tree, which places coefficients with similar probabilities of being significant on the same layer. It is implemented using a hybrid of a spatial orientation tree and a temporal-domain block tree. Subsequently, a novel 3-D wavelet video coding algorithm is proposed based on the energy-weighted significance probability balancing tree. Experimental results illustrate that our algorithm consistently achieves good reconstruction quality for different classes of video sequences. Compared with the asymmetric 3-D orientation tree, the average peak signal-to-noise ratio (PSNR) gains of our algorithm are 1.24 dB, 2.54 dB and 2.57 dB for the luminance (Y) and chrominance (U, V) components, respectively. Compared with the temporal-spatial orientation tree algorithm, our algorithm gains 0.38 dB, 2.92 dB and 2.39 dB higher PSNR for the Y, U and V components, respectively. In addition, the proposed algorithm requires a lower computation cost than the above two algorithms.
    Comment: 17 pages, 2 figures, submission to Multimedia Tools and Applications
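    A hedged sketch of the two quantities the abstract leans on: scaling a subband's coefficients by an energy weight before significance testing, and the PSNR used to report the gains. The paper derives its weights from the biorthogonal basis energies; the weight below is a placeholder.

```python
# Illustrative only: subband weighting and the PSNR metric quoted in the abstract.
import numpy as np

def weight_subband(coeffs, energy_weight):
    """Scale coefficients so that significance tests reflect reconstruction energy."""
    return coeffs * np.sqrt(energy_weight)

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

band = np.random.randn(16, 16)
print(weight_subband(band, energy_weight=2.0).std() / band.std())  # ~sqrt(2)

ref = np.random.randint(0, 256, (32, 32))
noisy = np.clip(ref + np.random.randint(-3, 4, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 2), "dB")
```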

    Time Complexity Analysis of Binary Space Partitioning Scheme for Image Compression

    Segmentation-based image coding methods provide high compression ratios compared with traditional image coding approaches, such as transform and subband coding, for low bit-rate compression applications. In this paper, a segmentation-based image coding method, namely the Binary Space Partition (BSP) scheme, which divides the desired image using a recursive procedure for coding, is presented. The BSP approach partitions the desired image recursively by bisecting lines, selected from a collection of discrete candidate lines, in a hierarchical manner. This partitioning procedure generates a binary tree, which is referred to as the BSP-tree representation of the desired image. The algorithm is computationally complex and has a high execution time. The time complexity of the BSP scheme is explored in this work.
    Comment: 5 pages, 5 figures, 2 tables, International Journal of Engineering and Innovative Technology; ISSN: 2277-3754
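    A minimal, hypothetical sketch of the recursive partitioning described above: each region is split by the candidate cut (restricted here to horizontal or vertical lines) that minimizes the summed variance of the two halves, yielding a binary tree. The exhaustive search over candidate lines at every node is what drives up the execution time; the reviewed BSP coder searches a richer set of lines than this sketch does.

```python
# Illustrative sketch of recursive binary space partitioning (axis-aligned cuts only).
import numpy as np

def bsp_tree(block, depth, max_depth=3):
    """Split a block by the axis-aligned cut that minimizes the summed variance."""
    h, w = block.shape
    if depth >= max_depth or min(h, w) < 2:
        return {"leaf": float(block.mean())}
    candidates = []
    for r in range(1, h):      # horizontal cuts
        cost = block[:r].var() * r * w + block[r:].var() * (h - r) * w
        candidates.append((cost, ("h", r)))
    for c in range(1, w):      # vertical cuts
        cost = block[:, :c].var() * h * c + block[:, c:].var() * h * (w - c)
        candidates.append((cost, ("v", c)))
    _, (axis, pos) = min(candidates)
    a, b = (block[:pos], block[pos:]) if axis == "h" else (block[:, :pos], block[:, pos:])
    return {"cut": (axis, pos),
            "children": (bsp_tree(a, depth + 1, max_depth),
                         bsp_tree(b, depth + 1, max_depth))}

img = np.random.randint(0, 256, (16, 16)).astype(float)
tree = bsp_tree(img, depth=0)
print(tree["cut"])
```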

    Image compression overview

    Compression plays a significant role in data storage and transmission. General data compression has to be lossless: we must be able to recover the original data exactly (1:1) from the compressed file. Multimedia data (images, video, sound, ...) are a special case, where lossy compression can be used: the main goal is not to recover the data exactly but only to keep them visually similar. This article is about image compression, so we will be interested only in images. To the human eye there is little difference if we recover an RGB color with values [150, 140, 138] instead of the original [151, 140, 137]. The magnitude of such differences determines the loss rate of the compression: a bigger difference usually means a smaller file, but also worse image quality and noticeable deviations from the original image. We want to cover compression techniques mainly from the last decade. Many of them are variations of existing ones; only some of them use new principles.
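    To make the overview's RGB example concrete, here is a small sketch (the color values are taken from the text above; the error metrics are standard) showing how small the per-channel error is that a lossy coder trades against file size.

```python
# Per-channel error for the overview's own example colors.
import numpy as np

original      = np.array([151, 140, 137], dtype=float)
reconstructed = np.array([150, 140, 138], dtype=float)

abs_error = np.abs(original - reconstructed)
mse = np.mean((original - reconstructed) ** 2)
print("per-channel |error|:", abs_error)   # [1. 0. 1.]
print("mean squared error :", mse)         # ~0.67 -- visually negligible
```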

    A Non-Blind Watermarking Scheme for Gray Scale Images in Discrete Wavelet Transform Domain using Two Subbands

    Digital watermarking is the process of hiding a digital pattern directly within digital content. Digital watermarking techniques are used to address digital rights management, protect information and conceal secrets. An invisible, non-blind watermarking approach for gray-scale images is proposed in this paper. The host image is decomposed into 3 levels using the Discrete Wavelet Transform. Based on the parent-child relationship between the wavelet coefficients, the Set Partitioning in Hierarchical Trees (SPIHT) compression algorithm is applied to the LH3, LH2, HL3 and HL2 subbands to identify the significant coefficients. The most significant coefficients of the LH2 and HL2 bands are selected to embed a binary watermark image. The selected significant coefficients are modulated using the Noise Visibility Function, which is considered to give the best embedding strength for ensuring imperceptibility. The approach is tested against various image processing attacks such as the addition of noise, filtering, cropping, JPEG compression, histogram equalization and contrast adjustment. The experimental results reveal the high effectiveness of the method.
    Comment: 9 pages, 7 figures
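    A hedged sketch of the embedding idea only: SPIHT significance selection and the Noise Visibility Function are simplified here to magnitude ranking and a fixed strength, and names such as `embed_bits` are ours, not the paper's.

```python
# Simplified illustration: embed watermark bits into the largest-magnitude detail
# coefficients of a 2-level DWT. The paper selects coefficients via SPIHT and
# modulates the strength with a Noise Visibility Function; both are stubbed here.
import numpy as np
import pywt

def embed_bits(image, bits, strength=8.0, wavelet="haar"):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=2)
    cA2, (cH2, cV2, cD2), level1 = coeffs
    flat = cH2.ravel()                                     # work in one detail band
    idx = np.argsort(np.abs(flat))[::-1][:len(bits)]       # "significant" coefficients
    flat[idx] += strength * (2 * np.asarray(bits) - 1)     # additive +/- strength
    return pywt.waverec2([cA2, (cH2, cV2, cD2), level1], wavelet)

img = np.random.randint(0, 256, (64, 64))
marked = embed_bits(img, bits=[1, 0, 1, 1])
print("embedding distortion (max abs):", np.abs(marked - img).max())
```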

    Exploiting Errors for Efficiency: A Survey from Circuits to Algorithms

    When a computational task tolerates a relaxation of its specification, or when an algorithm tolerates the effects of noise in its execution, hardware, programming languages, and system software can trade deviations from correct behavior for lower resource usage. We present, for the first time, a synthesis of research results on computing systems that only make as many errors as their users can tolerate, drawn from the disciplines of computer-aided design of circuits, digital system design, computer architecture, programming languages, operating systems, and information theory. Rather than over-provisioning resources at each layer to avoid errors, it can be more efficient to exploit the masking of errors occurring at one layer, which prevents them from propagating to a higher layer. We survey tradeoffs for individual layers of computing systems, from the circuit level to the operating system level, and illustrate the potential benefits of end-to-end approaches using two illustrative examples. To tie together the survey, we present a consistent formalization of terminology across the layers, which does not significantly deviate from the terminology traditionally used by research communities in their layer of focus.
    Comment: 35 pages
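    As a hedged, concrete instance of the survey's theme (our example, not from the paper): storing data at reduced precision trades a small, bounded error for lower resource usage.

```python
# Toy accuracy-for-resources tradeoff: keep data in float16 instead of float64
# and measure both the storage saving and the introduced error.
import numpy as np

exact = 1.0 + np.random.rand(10_000)        # nominally "correct" float64 values
approx = exact.astype(np.float16)           # 4x smaller representation

print("bytes:", exact.nbytes, "->", approx.nbytes)
print("max relative error:",
      np.max(np.abs(approx.astype(np.float64) - exact) / exact))
```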

    Recent Advance in Content-based Image Retrieval: A Literature Survey

    The explosive increase and ubiquitous accessibility of visual data on the Web have led to a prosperity of research activity in image search and retrieval. Because they ignore visual content as a ranking clue, methods that apply text search techniques to visual retrieval may suffer from inconsistency between the text words and the visual content. Content-based image retrieval (CBIR), which makes use of representations of visual content to identify relevant images, has attracted sustained attention over the recent two decades. The problem is challenging due to the intention gap and the semantic gap. Numerous techniques have been developed for content-based image retrieval in the last decade. The purpose of this paper is to categorize and evaluate the algorithms proposed during the period from 2003 to 2016. We conclude with several promising directions for future research.
    Comment: 22 pages
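    A hedged, minimal sketch of the content-based pipeline the survey covers: represent each image by an intensity histogram and rank the database by distance to the query. Real CBIR systems use far richer features (e.g., local descriptors or learned embeddings); everything below is our simplification.

```python
# Tiny content-based retrieval sketch: histogram features + L1 ranking.
import numpy as np

def histogram_feature(image, bins=16):
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()                  # normalize so image sizes are comparable

def rank_by_similarity(query, database):
    q = histogram_feature(query)
    dists = [np.abs(q - histogram_feature(img)).sum() for img in database]
    return np.argsort(dists)                  # most similar first

# Synthetic "database" of images with distinct brightness levels.
db = [np.clip(np.random.normal(loc=40 * i + 20, scale=10, size=(32, 32)), 0, 255)
      for i in range(5)]
query = np.clip(db[2] + np.random.normal(0, 3, (32, 32)), 0, 255)   # noisy copy of image 2
print(rank_by_similarity(query, db))          # image 2 should rank first
```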

    Robust Coding of Encrypted Images via Structural Matrix

    The robust coding of natural images and the effective compression of encrypted images have been studied individually in recent years. However, little work has been done on the robust coding of encrypted images, and the existing results in these two research areas cannot be combined directly. This is because the robust coding of natural images relies on eliminating spatial correlations using sparse transforms such as the discrete wavelet transform (DWT), which is ineffective for encrypted images due to the weak correlation between encrypted pixels. Moreover, the compression of encrypted images always generates code streams of differing significance. If one or more of these streams are lost, the quality of the reconstructed images may drop substantially or decoding errors may occur, which violates the goal of robust coding of encrypted images. In this work, we design a robust coder, based on compressive sensing with a structurally random matrix, for encrypted images over packet transmission networks. The proposed coder can be applied in the scenario where Alice needs a semi-trusted channel provider, Charlie, to encode and transmit the encrypted image to Bob. In particular, Alice first encrypts an image using a global random permutation and then sends the encrypted image to Charlie, who samples it using a structural matrix. Through an imperfect channel with packet loss, Bob receives the compressive measurements and reconstructs the original image by joint decryption and decoding. Experimental results show that the proposed coder can be considered an efficient multiple description coder whose descriptions are robust against packet loss.
    Comment: 10 pages, 11 figures
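    A hedged sketch of the encrypt-then-sample flow described above, showing only permutation encryption and random measurement: the joint decryption/decoding step needs a sparse-recovery solver and is left out, and a plain Gaussian matrix stands in for the structurally random matrix.

```python
# Illustrative flow: Alice permutes (encrypts) the image, Charlie takes compressive
# measurements with a random matrix, Bob would recover the image with a CS solver
# (not implemented here).
import numpy as np

def encrypt(image, key_rng):
    """Global random permutation of pixels; the permutation acts as the key."""
    flat = image.ravel().astype(float)
    perm = key_rng.permutation(flat.size)
    return flat[perm], perm

def measure(encrypted, ratio=0.5, sensing_rng=None):
    """Compressive measurements y = Phi * x_encrypted with a random Phi."""
    m = int(ratio * encrypted.size)
    rng = sensing_rng or np.random.default_rng()
    phi = rng.standard_normal((m, encrypted.size))
    return phi @ encrypted, phi

image = np.random.randint(0, 256, (16, 16))
x_enc, perm_key = encrypt(image, np.random.default_rng(42))
y, phi = measure(x_enc, ratio=0.5, sensing_rng=np.random.default_rng(0))
print("pixels:", image.size, "-> measurements:", y.size)
```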

    Robust Video Watermarking using Multi-Band Wavelet Transform

    This paper addresses copyright protection as a major security demand in digital marketplaces. Two watermarking techniques, one working in the frequency domain and one in the spatial domain, are proposed and compared for compressed and uncompressed video, with the intention of showing the advantages and possible weaknesses of each. A robust video watermarking method is presented that embeds data into specific bands of the wavelet domain using a motion estimation approach. The algorithm uses the HL and LH bands to add the watermark, where the motion in these bands does not affect the quality of the extracted watermark if the video is subjected to different types of malicious attacks. The watermark is embedded additively, using a random Gaussian distribution, into the video sequences. The method is tested on different types of video (a compressed DVD-quality movie and an uncompressed digital camera movie). The proposed watermarking method in the frequency domain shows strong robustness against attacks such as frame dropping, frame filtering and lossy compression. The experimental results indicate that the similarity measures before and after certain attacks are much closer to each other in the frequency domain than in the spatial domain.
    Comment: International Journal of Computer Science Issues (IJCSI), Volume 6, Issue 1, pp. 44-49, November 200
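    A hedged sketch of the additive Gaussian embedding rule described above, applied to the detail bands of a single frame's wavelet decomposition: motion estimation and the detection side are omitted, and the strength `alpha` and seed are placeholders of ours.

```python
# Additive Gaussian watermark in the detail wavelet bands of a single frame
# (illustrative only; the paper embeds across frames using motion estimation).
import numpy as np
import pywt

def embed_frame(frame, alpha=2.0, seed=7, wavelet="haar"):
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), wavelet)
    rng = np.random.default_rng(seed)                 # seed acts as the watermark key
    cH = cH + alpha * rng.standard_normal(cH.shape)   # one detail band (e.g. HL)
    cV = cV + alpha * rng.standard_normal(cV.shape)   # the other detail band (e.g. LH)
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

frame = np.random.randint(0, 256, (64, 64))
marked = embed_frame(frame)
print("mean absolute change per pixel:", np.abs(marked - frame).mean())
```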