
    Robust Object-Based Watermarking Using SURF Feature Matching and DFT Domain

    Get PDF
    In this paper we propose a robust object-based watermarking method in which the watermark is embedded into the middle-frequency band of the Discrete Fourier Transform (DFT) magnitude of a selected object region, together with the Speeded Up Robust Features (SURF) algorithm to allow correct watermark detection even if the watermarked image has been distorted. To recognize the selected object region after geometric distortions, the SURF features are estimated during the embedding process and stored for use during detection. In the detection stage, the SURF features of the distorted image are estimated and matched with the stored ones. From the matching result, the SURF features are used to compute the affine-transformation parameters, and the object region is recovered. The quality of the watermarked image is measured using the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM) and the Visual Information Fidelity (VIF). The experimental results show that the proposed method provides robustness against several geometric distortions, signal processing operations and combined distortions. The receiver operating characteristic (ROC) curves also show the desirable detection performance of the proposed method. A comparison with previously reported methods based on different techniques is also provided.
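
    A minimal sketch, not taken from the paper, of the detection-stage geometry recovery described above: SURF keypoints of the distorted image are matched against the keypoints stored at embedding time, an affine transform is estimated from the matches, and the object region is warped back before watermark detection in the DFT domain. The function name, the Hessian threshold and the use of OpenCV's contrib SURF module are illustrative assumptions.

        # Hypothetical sketch: recover a watermarked object region after geometric
        # distortion by matching SURF features and estimating an affine transform.
        # Requires opencv-contrib-python built with the nonfree SURF module.
        import cv2
        import numpy as np

        def recover_region(original_region, distorted_image):
            surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
            kp1, des1 = surf.detectAndCompute(original_region, None)  # stored at embedding
            kp2, des2 = surf.detectAndCompute(distorted_image, None)  # computed at detection

            # Match descriptors and keep the strongest correspondences.
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
            src = np.float32([kp1[m.queryIdx].pt for m in matches])
            dst = np.float32([kp2[m.trainIdx].pt for m in matches])

            # Estimate the affine-transformation parameters (RANSAC rejects outliers)
            # and warp the distorted image back into the original region's frame.
            A, _ = cv2.estimateAffine2D(dst, src, method=cv2.RANSAC)
            h, w = original_region.shape[:2]
            return cv2.warpAffine(distorted_image, A, (w, h))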

    A Generative Model of Natural Texture Surrogates

    Full text link
    Natural images can be viewed as patchworks of different textures, where the local image statistics are roughly stationary within a small neighborhood but otherwise vary from region to region. In order to model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to image patches of 64x64 pixels in a large database of natural images, so that each image patch is described by 655 texture parameters which specify certain statistics, such as variances and covariances of wavelet coefficients or coefficient magnitudes within that patch. To model the statistics of these texture parameters, we then developed suitable nonlinear transformations of the parameters that allowed us to fit their joint statistics with a multivariate Gaussian distribution. We find that the first 200 principal components contain more than 99% of the variance and are sufficient to generate textures that are perceptually extremely close to those generated with all 655 components. We demonstrate the usefulness of the model in several ways: (1) We sample ensembles of texture patches that can be directly compared to samples of patches from the natural image database and can to a high degree reproduce their perceptual appearance. (2) We further developed an image compression algorithm which generates surprisingly accurate images at bit rates as low as 0.14 bits/pixel. Finally, (3) we demonstrate how our approach can be used for an efficient and objective evaluation of samples generated with probabilistic models of natural images. Comment: 34 pages, 9 figures
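
    As an illustration of the modelling step described above (not the authors' code), the sketch below fits a PCA retaining 99% of the variance to a matrix of transformed texture parameters, fits a multivariate Gaussian in the reduced space, and samples new parameter vectors that could be passed to Portilla-Simoncelli synthesis. The random placeholder matrix and its size are assumptions.

        # Minimal sketch, assuming `params` is an (n_patches, 655) array of
        # already nonlinearly transformed texture parameters, one row per patch.
        import numpy as np
        from sklearn.decomposition import PCA

        params = np.random.randn(5000, 655)  # placeholder for the real parameter matrix

        # Keep enough principal components to explain 99% of the variance
        # (the paper reports that roughly 200 components suffice).
        pca = PCA(n_components=0.99)
        z = pca.fit_transform(params)

        # Fit a multivariate Gaussian in the reduced space and draw new samples.
        mean = z.mean(axis=0)
        cov = np.cov(z, rowvar=False)
        z_new = np.random.multivariate_normal(mean, cov, size=10)

        # Map the samples back to 655-dimensional parameter vectors, which would
        # then be fed to the Portilla-Simoncelli texture synthesis step.
        params_new = pca.inverse_transform(z_new)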

    Image Compression Techniques Comparative Analysis using SVD-WDR and SVD-WDR with Principal Component Analysis

    Get PDF
    Image processing is the technique of processing digital information stored in the form of pixels. Image compression reduces the size of an image without compromising its quality, and compression techniques can be classified into lossy and lossless. In this research work, a technique combining SVD-WDR with PCA is proposed for lossy image compression. The PCA algorithm is applied to select the extracted pixels from the image. The proposed and existing algorithms are implemented and simulated in MATLAB, and the analysis shows that the proposed technique performs well in terms of PSNR, MSE, SSIM and compression rate. In the proposed technique the image is first compressed by the WDR technique and then the wavelet transform is applied to it. After extracting features with the wavelet transform, patches are created and sorted with a decision tree in order to perform compression. The decision tree sorts the patches in NRL order: the root node has the maximum weight, the left node has less weight than the root node, and the right node has the minimum weight. In this way the patches are sorted in descending order of weight (information), so the leaf nodes hold the least information. To compress the image, the leaf nodes with the least information are discarded before reconstruction, and the inverse wavelet transform is then applied to decompress the image. When the PCA technique is applied with the decision tree classifier, features that are not required are removed from the image efficiently, which increases the compression ratio.
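
    The sketch below illustrates only the "discard the least informative patches" idea from the abstract, not the full SVD-WDR/PCA pipeline: patches of the wavelet detail bands are ranked by variance as a proxy for information, and the lowest-ranked fraction is zeroed before reconstruction. The patch size, wavelet choice and discard ratio are illustrative assumptions.

        # Hypothetical sketch of discarding the least informative patches of the
        # wavelet detail bands before reconstructing the image.
        import numpy as np
        import pywt

        def prune_low_info_patches(image, discard_frac=0.25, patch=8):
            cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
            for band in (cH, cV, cD):
                h, w = band.shape
                # Per-patch variance serves as a proxy for information content.
                scores = {}
                for i in range(0, h - patch + 1, patch):
                    for j in range(0, w - patch + 1, patch):
                        scores[(i, j)] = band[i:i+patch, j:j+patch].var()
                # Zero out the patches carrying the least information.
                n_drop = int(len(scores) * discard_frac)
                for (i, j), _ in sorted(scores.items(), key=lambda kv: kv[1])[:n_drop]:
                    band[i:i+patch, j:j+patch] = 0.0
            return pywt.idwt2((cA, (cH, cV, cD)), 'haar')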

    Cellular neural networks, Navier-Stokes equation and microarray image reconstruction

    Get PDF
    Copyright © 2011 IEEE. Although the last decade has witnessed a great deal of improvement in microarray technology, major developments are still needed in all the main stages of this technology, including image processing. Some hardware implementations of microarray image processing have been proposed in the literature and proved to be promising alternatives to the currently available software systems. However, the main drawback of those approaches is that they do not address quantification of the gene spot in a realistic way, i.e., without any assumption about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier–Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons are carried out, in terms of objective criteria, between our approach and other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner within a remarkably efficient time.
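
    The snippet below is not the cellular-neural-network implementation from the paper; it uses OpenCV's Navier-Stokes based inpainting (cv2.INPAINT_NS) to illustrate the same underlying idea of reconstructing the background signal under a gene spot from its surroundings without assuming a parametric surface. The function name, inpainting radius and the spot_mask input are assumptions.

        # Estimate the background under a gene spot by Navier-Stokes inpainting.
        import cv2
        import numpy as np

        def estimate_spot_background(gray_block, spot_mask):
            """gray_block: 8-bit grayscale sub-image around one gene spot.
               spot_mask : 8-bit mask, nonzero where the spot lies."""
            # Inpaint the masked spot region; the result approximates the local
            # background surface without any assumption about the image model.
            background = cv2.inpaint(gray_block, spot_mask, inpaintRadius=3,
                                     flags=cv2.INPAINT_NS)
            # Background-corrected spot intensity (clipped at zero by cv2.subtract).
            corrected = cv2.subtract(gray_block, background)
            return background, corrected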

    Contrast Enhancement for JPEG Images in the Compressed Domain

    Get PDF
    With the increase in digitization, there has been great demand for data storage along with effective techniques for data computation. Since a lot of data is transferred over the internet in the form of images, data storage is a prime concern, which calls for image compression that does not lose the important details of the image. Digital image compression finds applications in various fields such as medicine, automation, defense and photography, which also require that the produced image be visually pleasing, with sharp and clear details. The latter is achieved by a pre-processing technique called image enhancement. This research project addresses contrast enhancement of color images, where each R-G-B color channel is analyzed separately in the Y-Cb-Cr space, in the compressed domain. The Discrete Cosine Transform (DCT) is used as the compressed domain, and the analysis is carried out on the block coefficients of the DCT with a block size of 8x8. Each DCT block contains one DC coefficient and 63 AC coefficients. The DCT coefficients are analyzed on the basis of their statistical behaviour: the DC coefficient of each block follows a Gaussian distribution, while the AC coefficients follow a Laplacian distribution. The DC coefficient, being the mean value of the block, is observed to affect the illumination of the image, whereas the remaining 63 AC coefficients of the block affect its contrast. This thesis investigates a novel method for enhancing image contrast based on the statistical behaviour of the block DCT coefficients. Furthermore, we use the coefficient of variation (Cv) to arrive at a DC scaling factor that modifies the original DC coefficient of each block, and we evaluate an AC scaling factor by band analysis of each block based on its contrast and entropy bands. The proposed work analyses both the DC coefficient and the 63 AC coefficients of each block separately.
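
    A minimal sketch of the block-DCT mechanics described above, applied to the luminance channel. The specific DC and AC scaling rules derived from the coefficient of variation and the contrast/entropy band analysis are developed in the thesis; dc_scale and ac_scale below are placeholder factors that only show where such scaling would be applied.

        # Block-wise DCT contrast manipulation on an 8-bit luminance channel.
        import numpy as np
        import cv2

        def enhance_blocks(y_channel, dc_scale=1.05, ac_scale=1.2, B=8):
            y = y_channel.astype(np.float32)
            h, w = y.shape
            out = y.copy()
            for i in range(0, h - h % B, B):
                for j in range(0, w - w % B, B):
                    block = cv2.dct(y[i:i+B, j:j+B])
                    block[0, 0] *= dc_scale    # DC coefficient: illumination
                    block[0, 1:] *= ac_scale   # 63 AC coefficients: contrast
                    block[1:, :] *= ac_scale
                    out[i:i+B, j:j+B] = cv2.idct(block)
            return np.clip(out, 0, 255).astype(np.uint8)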

    LIDAR data classification and compression

    Get PDF
    Airborne Laser Detection and Ranging (LIDAR) data has a wide range of applications in agriculture, archaeology, biology, geology, meteorology, the military, transportation, etc. LIDAR acquisition consumes hundreds of gigabytes in a typical day, and the amount of data collected will continue to grow as sensors improve in resolution and functionality. LIDAR data classification and compression are therefore very important for managing, visualizing, analyzing and using this huge amount of data. Among existing LIDAR data classification schemes, supervised learning has been used and can achieve up to 96% accuracy. However, some of the features used are not readily available, and training data is also not always available in practice. In existing LIDAR data compression schemes, the compressed size can be 5%-23% of the original size, but that can still be on the order of gigabytes, which is impractical for many applications. The objectives of this dissertation are (1) to develop LIDAR classification schemes that can classify airborne LIDAR data more accurately without some of the features or training data that existing work requires, and (2) to explore lossy compression schemes that can compress LIDAR data at a much higher compression rate than is currently available. We first investigate two independent ways to classify LIDAR data depending on the availability of training data: when training data is available, we use supervised machine learning techniques such as the support vector machine (SVM); when training data is not readily available, we develop an unsupervised classification method that classifies LIDAR data as well as supervised methods. Experimental results show that the accuracy of our classification results is over 99%. We then present two new lossy LIDAR data compression methods and compare their performance. The first is wavelet based, while the second is geometry based. Our new geometry-based compression is a geometry and statistics driven LIDAR point-cloud compression method which combines application knowledge and scene content to enable fast transmission from the sensor platform while preserving the geometric properties of objects within a scene. The new algorithm is based on the idea of compression by classification. It exploits the simplicity of the unique height function as well as the local spatial coherence and linearity of aerial LIDAR data, and it can automatically compress the data to the desired level of detail defined by the user. Either of the two developed classification methods can be used to automatically detect regions that are not locally linear, such as vegetation or trees. In those regions, local statistical descriptions, such as the mean, variance and expectation, are stored to efficiently represent the region and restore the geometry in the decompression phase. The new geometry-based compression schemes for building and ground data compress efficiently and significantly reduce the file size, while retaining a good fit for scalable "zoom in" requirements. Experimental results show that, compared with existing lossy LIDAR compression work, our proposed approach achieves a two orders of magnitude lower bit rate at the same quality, making it feasible for applications that were not practical before. The ability to store information in a database and query it efficiently becomes possible with the proposed highly efficient compression scheme. Includes bibliographical references (pages 106-116).
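
    A hypothetical sketch of the "compression by classification" idea for regions detected as not locally linear (e.g., vegetation): instead of keeping every return, only a few per-cell height statistics are stored on a coarse grid and used to restore the geometry at decompression. The cell size and the particular statistics kept are illustrative assumptions, not the dissertation's exact scheme.

        # Summarize vegetation-classified LIDAR returns by per-cell statistics.
        import numpy as np

        def summarize_vegetation_cells(points, cell=5.0):
            """points: (N, 3) array of x, y, z for returns classified as vegetation."""
            ij = np.floor(points[:, :2] / cell).astype(np.int64)
            summaries = {}
            for key in map(tuple, np.unique(ij, axis=0)):
                z = points[(ij == key).all(axis=1), 2]
                # Store a handful of statistics per cell rather than the raw points.
                summaries[key] = (z.mean(), z.var(), z.min(), z.max(), len(z))
            return summaries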