
    Mathematical transforms and image compression: A review

    It is well known that images, widely used in a variety of computer, scientific and engineering applications, are difficult to store and transmit because of their size. One possible solution is to use an efficient digital image compression technique, in which an image is viewed as a matrix and the operations are performed on that matrix. All contemporary digital image compression systems use various mathematical transforms for compression. Compression performance is closely related to the performance of these mathematical transforms in terms of energy compaction and spatial frequency isolation, achieved by exploiting the inter-pixel redundancies present in the image data. This paper presents a comprehensive literature survey and discusses the pros and cons of various transform-based image compression models
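
    The energy-compaction behaviour that underlies transform-based compression can be illustrated with a short, self-contained sketch (not taken from the survey): a smooth 8x8 block is transformed with the 2D DCT, only the few largest-magnitude coefficients are kept, and the block is reconstructed with little error. Function and parameter names are illustrative, and SciPy's dctn/idctn are assumed to be available.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block, keep=10):
            """Keep only the `keep` largest-magnitude 2D DCT coefficients of a block."""
            coeffs = dctn(block, norm='ortho')            # orthonormal 2D DCT-II
            thresh = np.sort(np.abs(coeffs).ravel())[-keep]
            mask = np.abs(coeffs) >= thresh               # retain the dominant coefficients
            return idctn(coeffs * mask, norm='ortho')     # reconstruct from what is left

        block = np.outer(np.linspace(0, 255, 8), np.ones(8))   # smooth synthetic 8x8 block
        approx = compress_block(block, keep=10)
        print(np.max(np.abs(block - approx)))                  # small error despite dropping most coefficients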

    Alpha Channel Fragile Watermarking for Color Image Integrity Protection

    This paper presents a fragile watermarking algorithm for the protection of the integrity of color images with alpha channel. The system is able to identify modified areas with very high probability, even with small color or transparency changes. The main characteristic of the algorithm is the embedding of the watermark by modifying the alpha channel, leaving the color channels untouched and introducing a very small error with respect to the host image. As a consequence, the resulting watermarked images have a very high peak signal-to-noise ratio. The security of the algorithm is based on a secret key defining the embedding space in which the watermark is inserted by means of the Karhunen–Loève transform (KLT) and a genetic algorithm (GA). Its high sensitivity to modifications is shown, proving the security of the whole system
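
    The paper's embedding space is defined by a secret key, the KLT and a genetic algorithm; the sketch below is only a much simplified illustration of the general alpha-channel idea, in which a keyed checksum of every 8x8 colour block is written into the alpha LSBs so that the colour channels stay untouched and tampered blocks can be localised. The SHA-256 checksum, block size and function names are assumptions made for illustration, not the authors' scheme.

        import hashlib
        import numpy as np

        def _block_bits(rgb_block, key):
            """64-bit keyed checksum of one 8x8 colour block, as an 8x8 array of 0/1."""
            digest = hashlib.sha256(key + rgb_block.tobytes()).digest()
            return np.unpackbits(np.frombuffer(digest, np.uint8))[:64].reshape(8, 8)

        def embed_fragile(rgba, key=b'secret'):
            """rgba: uint8 HxWx4 image. Writes per-block checksums into the alpha LSBs."""
            out = rgba.copy()
            h, w, _ = rgba.shape
            for y in range(0, h - 7, 8):
                for x in range(0, w - 7, 8):
                    bits = _block_bits(rgba[y:y+8, x:x+8, :3], key)
                    alpha = out[y:y+8, x:x+8, 3]
                    out[y:y+8, x:x+8, 3] = (alpha & 0xFE) | bits   # at most a +/-1 change, in alpha only
            return out

        def tampered_blocks(rgba, key=b'secret'):
            """Return (y, x) origins of blocks whose colour data no longer matches the stored checksum."""
            h, w, _ = rgba.shape
            return [(y, x)
                    for y in range(0, h - 7, 8) for x in range(0, w - 7, 8)
                    if not np.array_equal(rgba[y:y+8, x:x+8, 3] & 1,
                                          _block_bits(rgba[y:y+8, x:x+8, :3], key))]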

    Image processing algorithms employing two-dimensional Karhunen-Loeve Transform

    In the fields of image processing and pattern recognition there is an important problem of acquiring, gathering, storing and processing large volumes of data. The most frequently used way of reducing these data is compression, which in many cases also speeds up further computations. One of the most frequently employed approaches is image handling by means of Principal Component Analysis and the Karhunen-Loeve Transform, which are well-known statistical tools used in many areas of applied science. Their main property is the ability to reduce the volume of data required for an optimal representation while preserving its specific characteristics. The paper presents selected image processing algorithms, such as compression, scrambling (coding) and information embedding (steganography), and their realizations employing the two-dimensional Karhunen-Loeve Transform (2DKLT), which is superior to the standard one-dimensional KLT since it represents images with respect to their spatial properties. The principles of the KLT and 2DKLT, as well as sample implementations and experiments performed on standard benchmark datasets, are presented. The results show that the 2DKLT employed in the above applications gives clear advantages over certain standard algorithms, such as the DCT, FFT and wavelets
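
    A minimal sketch of the two-dimensional KLT idea described above (not the authors' implementation): estimate row and column covariance matrices from a training set of equally sized blocks, keep the leading eigenvectors of each, and represent every block by a small k x k matrix of projection coefficients. The block shape and the parameter k are illustrative assumptions.

        import numpy as np

        def two_d_klt(blocks, k=4):
            """Project each block onto the k dominant row and column eigenvectors."""
            X = np.asarray(blocks, dtype=float)
            mean = X.mean(axis=0)
            X = X - mean                                    # remove the mean block
            n = X.shape[0]
            row_cov = sum(b @ b.T for b in X) / n           # rows x rows covariance
            col_cov = sum(b.T @ b for b in X) / n           # cols x cols covariance
            _, U = np.linalg.eigh(row_cov)                  # eigenvalues in ascending order
            _, V = np.linalg.eigh(col_cov)
            U, V = U[:, ::-1][:, :k], V[:, ::-1][:, :k]     # keep the k dominant directions
            coeffs = np.einsum('ri,nrc,cj->nij', U, X, V)   # k x k coefficients per block
            approx = np.einsum('ri,nij,cj->nrc', U, coeffs, V) + mean
            return coeffs, approx

        blocks = np.random.rand(100, 8, 8)                  # stand-in data; real use: image sub-blocks
        coeffs, approx = two_d_klt(blocks, k=4)
        print(coeffs.shape, np.mean((blocks - approx) ** 2))   # 16 coefficients per block instead of 64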

    Copyright Authentication By Using Karhunen-Loeve Transform

    Authentication is one of the important subjects in the field of security, and many techniques for improving authentication have appeared during the last three decades. This paper presents copyright authentication of digital images using the Karhunen-Loeve transform, where watermark images of four different sizes are embedded in the least significant bits of the cover image. The experiments show that the Karhunen-Loeve transform is very useful for improving authentication: the watermark image appears clearly only when all eigenvalues are used to retrieve it from the resulting image. The effect of the number of eigenvalues on the robustness of the authentication was also studied, revealing a directly proportional relationship between the number of eigenvalues used and the quality of the authentication
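
    The abstract combines two ingredients, least-significant-bit embedding of a watermark and eigenvalue-limited KLT reconstruction; the sketch below shows each in isolation, under simplifying assumptions, and is not the paper's exact scheme.

        import numpy as np

        def embed_lsb(cover, watermark_bits):
            """Place a binary watermark in the least significant bits of a uint8 cover image."""
            return (cover & 0xFE) | (watermark_bits & 1)

        def extract_lsb(stego):
            return stego & 1

        def klt_truncate(img, k):
            """Reconstruct an image from its k leading KLT (PCA) components; using fewer
            eigenvalues yields a visibly degraded watermark, as the abstract describes."""
            X = img.astype(float)
            mean = X.mean(axis=0)
            cov = np.cov(X - mean, rowvar=False)       # column covariance
            _, V = np.linalg.eigh(cov)                 # eigenvalues in ascending order
            V = V[:, ::-1][:, :k]                      # k dominant eigenvectors
            return (X - mean) @ V @ V.T + mean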

    A Novel Hybrid Linear Predictive Coding – Discrete Cosine Transform based Compression

    Image compression is a type of data compression applied to digital images, to reduce the cost of their storage and transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic compression methods. Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs, in applications where a minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless. One useful technique for lossy compression is the discrete cosine transform (DCT), which helps separate the image into parts (or spectral sub-bands) of differing importance with respect to the image's visual quality. The DCT is similar to the discrete Fourier transform in the sense that it transforms a signal or image from the spatial domain to the frequency domain. This paper proposes a hybrid lossy compression technique using Linear Predictive Coding (LPC) and the Discrete Cosine Transform (DCT) to provide superior compression ratios
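
    As a rough illustration of how a predictive stage and a DCT stage can be chained (the paper's actual hybrid may differ in predictor order, block structure and quantiser), the toy sketch below predicts each pixel from its left neighbour, then quantises the 2D DCT of the prediction residual; SciPy's dctn/idctn are assumed to be available.

        import numpy as np
        from scipy.fft import dctn, idctn

        def lpc_dct_encode(img, q=16):
            """Left-neighbour (first-order) prediction followed by a quantised 2D DCT of the residual."""
            x = img.astype(float)
            pred = np.zeros_like(x)
            pred[:, 1:] = x[:, :-1]                       # predict each pixel from its left neighbour
            return np.round(dctn(x - pred, norm='ortho') / q)

        def lpc_dct_decode(coeffs, q=16):
            """Invert the DCT, then undo the predictor column by column. (A real codec would predict
            from reconstructed pixels to avoid the quantisation drift of this open-loop toy.)"""
            residual = idctn(coeffs * q, norm='ortho')
            out = np.zeros_like(residual)
            out[:, 0] = residual[:, 0]
            for j in range(1, residual.shape[1]):
                out[:, j] = out[:, j - 1] + residual[:, j]
            return out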

    Zerotree design for image compression: toward weighted universal zerotree coding

    We consider the problem of optimal, data-dependent zerotree design for use in weighted universal zerotree codes for image compression. A weighted universal zerotree code (WUZC) is a data compression system that replaces the single, data-independent zerotree of Said and Pearlman (see IEEE Transactions on Circuits and Systems for Video Technology, vol.6, no.3, p.243-50, 1996) with an optimal collection of zerotrees for good image coding performance across a wide variety of possible sources. We describe the weighted universal zerotree encoding and design algorithms but focus primarily on the problem of optimal, data-dependent zerotree design. We demonstrate the performance of the proposed algorithm by comparing, at a variety of target rates, the performance of a Said-Pearlman style code using the standard zerotree to the performance of the same code using a zerotree designed with our algorithm. The comparison is made without entropy coding. The proposed zerotree design algorithm achieves, on a collection of combined text and gray-scale images, up to 4 dB performance improvement over a Said-Pearlman zerotree
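
    The primitive behind any zerotree coder is the significance test: a coefficient is a zerotree root at threshold T when it and all of its descendants across the finer scales are insignificant. The recursive check below is a generic EZW/SPIHT-style sketch, not the data-dependent zerotree design proposed in the paper; the pyramid layout (one array per scale, coarse to fine, with 2x2 children per coefficient) is an assumption for illustration.

        import numpy as np

        def is_zerotree_root(pyramid, level, y, x, T):
            """True if coefficient (level, y, x) and all of its descendants have magnitude below T."""
            if abs(pyramid[level][y, x]) >= T:
                return False
            if level + 1 == len(pyramid):                 # finest scale: no children
                return True
            return all(is_zerotree_root(pyramid, level + 1, 2*y + dy, 2*x + dx, T)
                       for dy in (0, 1) for dx in (0, 1))

        # toy 3-scale pyramid of wavelet coefficients (coarse -> fine), coarser scales larger in magnitude
        pyramid = [np.random.randn(2 ** (l + 1), 2 ** (l + 1)) * s for l, s in enumerate((8, 4, 1))]
        print(is_zerotree_root(pyramid, 0, 0, 0, T=16))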

    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research. These are the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle: that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently and another is added to the gamut by this work. The tree structured vector quantizer presented here is on-line and self structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best that have been published to date. An extension of the two dimensional wavelet transform to encompass the time dimension is straightforward and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results
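
    For reference, one level of the conventional separable 2D wavelet transform (shown here with Haar filters) splits an image into an approximation band and three orientation-selective detail bands; the thesis argues that this degree of selectivity is insufficient and develops a more orientation-selective transform. The sketch below shows only the conventional baseline and assumes even image dimensions; subband naming conventions vary.

        import numpy as np

        def haar2d_level(img):
            """One level of the separable 2D Haar transform: returns LL (approximation)
            and the LH, HL, HH detail subbands."""
            x = img.astype(float)
            lo = (x[:, ::2] + x[:, 1::2]) / np.sqrt(2)    # horizontal average
            hi = (x[:, ::2] - x[:, 1::2]) / np.sqrt(2)    # horizontal difference
            LL = (lo[::2] + lo[1::2]) / np.sqrt(2)
            LH = (lo[::2] - lo[1::2]) / np.sqrt(2)        # responds to horizontal edges
            HL = (hi[::2] + hi[1::2]) / np.sqrt(2)        # responds to vertical edges
            HH = (hi[::2] - hi[1::2]) / np.sqrt(2)        # responds to diagonal structure
            return LL, LH, HL, HH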

    Novel sparse OBC based distributed arithmetic architecture for matrix transforms

    The inner product (IP) forms the basis of a number of signal processing algorithms and applications such as transforms, filters and communication systems. Distributed arithmetic (DA) provides an effective methodology to implement the IP of vectors and matrices using a simple combination of memory elements, adders and shifters instead of lumped multipliers. This bit-level rearrangement results in much higher computational efficiency and yields compact designs highly suited to high-performance, resource-constrained applications. Offset binary coding (OBC) is an effective technique to further optimize DA, allowing the memory requirements to be reduced by a factor of two with minimal additional computational complexity. This makes OBC-DA attractive for applications that are both resource and memory constrained. In addition, sparse matrix factorization techniques can be exploited to further reduce the size of the DA ROMs. In this paper, the design and implementation of a novel OBC-based DA is demonstrated using a generic architecture for implementing discrete orthogonal transforms (DOTs). Implementation is performed on the Xilinx Virtex-II Pro field programmable gate array (FPGA), and a detailed comparison between conventional and OBC-based DA is presented to highlight the trade-offs in various design metrics, including performance, area and power
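
    A plain (non-OBC) distributed-arithmetic inner product can be sketched in a few lines: precompute a 2^K-entry table of partial sums of the fixed coefficients, then walk the K inputs one bit-plane at a time, replacing multiplications with table look-ups, shifts and additions. The OBC variant described in the paper further halves the table to 2^(K-1) entries; the unsigned-integer software sketch below illustrates only the basic principle and is not the hardware architecture of the paper.

        import numpy as np

        def da_inner_product(A, x, bits=8):
            """Inner product sum(A[k] * x[k]) via distributed arithmetic, for unsigned `bits`-bit inputs."""
            K = len(A)
            lut = np.array([sum(A[k] for k in range(K) if (idx >> k) & 1)
                            for idx in range(1 << K)])      # all 2^K partial sums of the coefficients
            acc = 0
            for b in range(bits):                           # one bit-plane per step
                idx = sum(((x[k] >> b) & 1) << k for k in range(K))
                acc += int(lut[idx]) << b                   # table look-up, shift and add
            return acc

        A = [3, 5, 7, 2]
        x = [10, 20, 30, 40]
        assert da_inner_product(A, x) == sum(a * v for a, v in zip(A, x))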