442 research outputs found

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Although a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools to process and analyze the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
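    As a minimal illustration of the graph-signal view of an image patch (not code from the article itself), the sketch below builds a 4-connected similarity graph over a small grayscale patch and projects the patch onto the eigenvectors of the graph Laplacian, i.e., computes its graph Fourier transform; the edge-weight parameter `sigma` is an arbitrary illustrative choice.

```python
import numpy as np

def patch_gft(patch, sigma=10.0):
    """Interpret a grayscale patch as a graph signal and compute its graph
    Fourier transform (GFT) on a 4-connected similarity graph. `sigma`
    controls how fast edge weights decay with intensity difference; it is
    an illustrative choice, not a value from the article."""
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):          # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    wgt = np.exp(-((patch[r, c] - patch[rr, cc]) ** 2) / (2 * sigma ** 2))
                    W[idx(r, c), idx(rr, cc)] = wgt
                    W[idx(rr, cc), idx(r, c)] = wgt
    L = np.diag(W.sum(axis=1)) - W                    # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(L)                  # graph frequencies and GFT basis
    signal = patch.astype(float).ravel()
    return evals, evecs.T @ signal                    # graph spectrum of the patch

# A smooth patch concentrates its energy in the low graph frequencies.
patch = np.tile(np.arange(8, dtype=float), (8, 1))
freqs, spectrum = patch_gft(patch)
print(spectrum[:5])
```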

    Implementation and performance study of image data hiding/watermarking schemes

    Two data hiding/watermarking techniques for grayscale and color images are presented. One of them is DCT-based; the other uses the DFT to embed data. Both methods were implemented in software using C/C++, and the complete listings of these programs are included. A comprehensive reliability analysis was performed on both schemes, subjecting watermarked images to JPEG, SPIHT, and MPEG-2 compression. In addition, the pictures were examined by exposing them to common signal processing operations such as image resizing, rotation, histogram equalization and stretching, random, uniform, and Gaussian noise addition, brightness and contrast variations, gamma correction, image sharpening and softening, edge enhancement, manipulation of the number of bits per channel, and others. The methods were compared with each other. It is shown that the DCT-based method is more robust and hence better suited for watermarking purposes. The DFT-based scheme is less robust but, owing to its higher capacity, is well suited for data hiding purposes.
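    The abstract does not spell out the embedding rule, so the following is only a generic sketch of one common DCT-domain strategy: a bit is hidden by enforcing an order relation between two mid-frequency coefficients of an 8x8 block. The coefficient positions and the strength ALPHA are hypothetical choices, not parameters from the thesis.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Illustrative mid-frequency coefficient pair and embedding strength;
# these are assumptions, not values taken from the thesis.
POS_A, POS_B, ALPHA = (3, 1), (1, 3), 5.0

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block, bit):
    """Embed one bit into an 8x8 block by ordering two mid-frequency
    DCT coefficients (bit=1 -> A > B, bit=0 -> A < B)."""
    c = dct2(block.astype(float))
    a, b = c[POS_A], c[POS_B]
    if bit and a <= b:
        c[POS_A], c[POS_B] = b + ALPHA, a
    elif not bit and a >= b:
        c[POS_A], c[POS_B] = b, a + ALPHA
    return idct2(c)

def extract_bit(block):
    c = dct2(block.astype(float))
    return int(c[POS_A] > c[POS_B])

block = np.random.randint(0, 256, (8, 8))
assert extract_bit(embed_bit(block, 1)) == 1
```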

    Robust Image Watermarking Using QR Factorization In Wavelet Domain

    A robust blind image watermarking algorithm in the wavelet transform (WT) domain, based on QR factorization and quantization index modulation (QIM), is presented for the legal protection of digital images. The host image is decomposed into wavelet subbands, and the approximation subband is then QR-factorized. Each secret watermark bit is embedded into the R factor of the QR factorization using QIM. Experimental results show that the proposed algorithm preserves high perceptual quality and withstands JPEG compression and other image processing attacks. A comparative analysis demonstrates that the proposed scheme achieves better imperceptibility and robustness than previously reported watermarking algorithms.
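    Quantization index modulation itself is a standard building block; the minimal sketch below (with an illustrative quantization step, not the paper's parameters) embeds a bit by snapping a coefficient onto one of two interleaved quantizer lattices and extracts it by checking which lattice the received value is closer to.

```python
import numpy as np

DELTA = 12.0  # quantization step; an illustrative value, not taken from the paper

def qim_embed(coeff, bit):
    """Quantize a coefficient onto the lattice associated with `bit`:
    bit 0 -> multiples of DELTA, bit 1 -> multiples shifted by DELTA/2."""
    offset = DELTA / 2 if bit else 0.0
    return np.round((coeff - offset) / DELTA) * DELTA + offset

def qim_extract(coeff):
    """Decide which lattice the (possibly attacked) coefficient is closer to."""
    d0 = abs(coeff - qim_embed(coeff, 0))
    d1 = abs(coeff - qim_embed(coeff, 1))
    return int(d1 < d0)

c = 137.4
marked = qim_embed(c, 1)
assert qim_extract(marked + 2.0) == 1   # survives perturbations smaller than DELTA/4
```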

    Side-Information For Steganography Design And Detection

    Today, the most secure steganographic schemes for digital images embed secret messages while minimizing a distortion function that describes the local complexity of the content. Distortion functions are heuristically designed to predict the modeling error, in other words, how difficult it would be to detect a single change to the original image in any given area. This dissertation investigates how both the design and the detection of such content-adaptive schemes can be improved with the use of side-information. We distinguish two types of side-information, public and private. Public side-information is available to the sender and, at least in part, to anybody else who can observe the communication. Content complexity is a typical example of public side-information: while it is commonly used for steganography, it can also be used for detection, and in this work we propose a modification to rich-model-style feature sets in both the spatial and JPEG domains that informs such feature sets of the content complexity. Private side-information is available only to the sender. Its previous use in steganography was very successful but limited to steganography in JPEG images, and the constructions were based on heuristics with little theoretical foundation. This work remedies that deficiency by introducing a scheme that generalizes the previous approach to an arbitrary domain. We also put forward a theoretical investigation of how to incorporate side-information based on a model of images. Finally, we propose to use a novel type of side-information in the form of multiple exposures for JPEG steganography.
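    As a toy illustration of what a content-adaptive distortion function does (not any of the schemes actually studied in the dissertation), the sketch below assigns each pixel an embedding cost that is inversely related to its local variance, so that changes are steered into textured regions; practical cost functions such as HILL or S-UNIWARD are considerably more elaborate.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def embedding_costs(image, win=3, eps=1e-3):
    """Toy content-adaptive cost map: smooth areas (low local variance) get a
    high embedding cost, textured areas a low one. A real distortion function
    would be far more sophisticated; this only conveys the principle."""
    img = image.astype(float)
    mean = uniform_filter(img, win)
    var = uniform_filter(img ** 2, win) - mean ** 2
    return 1.0 / (np.sqrt(np.maximum(var, 0)) + eps)

image = np.random.randint(0, 256, (64, 64))
rho = embedding_costs(image)
print(rho.min(), rho.max())   # low cost in busy regions, high cost in flat ones
```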

    Development of Some Spatial-domain Preprocessing and Post-processing Algorithms for Better 2-D Up-scaling

    Image super-resolution has attracted great interest in recent years and is used extensively in applications such as video streaming, multimedia, internet technologies, consumer electronics, and the display and printing industries. Image super-resolution is the process of increasing the resolution of a given image without losing its integrity; its most common application is to provide a better visual effect after resizing a digital image for display or printing. One way of improving image resolution is through 2-D interpolation. An up-scaled image should retain all the image details with minimal blurring for good visual quality. The literature contains many efficient 2-D interpolation schemes that preserve image details well in the up-scaled image, particularly in regions with edges and fine detail. Nevertheless, these existing interpolation schemes still introduce blurring in the up-scaled images because of the high-frequency (HF) degradation that occurs during up-sampling. Hence, there is scope to further improve their performance through spatial-domain pre-processing, post-processing, and composite algorithms, and this thesis develops several efficient yet simple such schemes to restore the HF content of up-scaled images for various online and off-line applications. The efficient and widely used Lanczos-3 interpolation is taken as the baseline for this performance improvement.

    The pre-processing algorithms developed in this thesis are summarized first; the term pre-processing refers to processing the low-resolution input image prior to up-scaling. The proposed pre-processing algorithms are: the Laplacian-of-Laplacian based global pre-processing (LLGP) scheme; hybrid global pre-processing (HGP); iterative Laplacian-of-Laplacian based global pre-processing (ILLGP); unsharp-masking based pre-processing (UMP); iterative unsharp masking (IUM); and an error-based up-sampling (EU) scheme. LLGP, HGP, and ILLGP are spatial-domain pre-processing algorithms based on 4th-, 6th- and 8th-order derivatives that alleviate non-uniform blurring in up-scaled images: they obtain the HF extract of an image through higher-order derivatives and perform precise sharpening of the low-resolution image to counteract the blurring of its 2-D up-sampled counterpart. In the UMP scheme, a blurred version of the low-resolution image is subtracted from the original to obtain the HF extract; a weighted version of this extract is superimposed on the original image to produce a sharpened image prior to up-scaling, countering blurring effectively. IUM uses several iterations to generate an unsharp mask that contains very-high-frequency (VHF) components; the VHF extract results from decomposing the signal into sub-bands using an analysis filter bank. Since the degradation of the VHF components is greatest, restoring them yields much better restoration performance. EU is another pre-processing scheme in which the HF degradation caused by up-scaling is extracted as a prediction error that contains the lost high-frequency components. When this error is superimposed on the low-resolution image prior to up-sampling, blurring in the up-scaled image is considerably reduced.

    The post-processing algorithms developed in this thesis are summarized next; the term post-processing refers to processing the high-resolution up-scaled image. The proposed post-processing algorithms are: the local adaptive Laplacian (LAL); the fuzzy weighted Laplacian (FWL); and the Legendre functional link artificial neural network (LFLANN). LAL is a non-fuzzy, local scheme: local regions of the up-scaled image with high variance are sharpened more than regions with moderate or low variance by a local adaptive Laplacian kernel whose weights vary with the normalized local variance, giving a greater degree of HF enhancement to high-variance regions and thereby countering non-uniform blurring. The FWL post-processing scheme introduces a higher degree of non-linearity to further improve on LAL: being a fuzzy mapping scheme, FWL is highly non-linear and resolves the blurring problem more effectively than LAL, which employs a linear mapping. The LFLANN-based post-processing scheme minimizes a cost function so as to reduce blurring in the 2-D up-scaled image; Legendre polynomials are used for functional expansion of the input pattern vector and provide a high degree of non-linearity, so a single-layer LFLANN architecture can replace multiple layers while still reducing the cost function effectively for better restoration performance. Its single-layer architecture reduces the computational complexity, making it suitable for various real-time applications.

    The stand-alone pre-processing and post-processing schemes can be improved further by combining them into composite schemes. Two spatial-domain composite schemes, CS-I and CS-II, are proposed to tackle non-uniform blurring in up-scaled images: CS-I combines the global iterative Laplacian (GIL) pre-processing scheme with the LAL post-processing scheme, while the more highly non-linear CS-II combines ILLGP with the fuzzy weighted Laplacian post-processing scheme for better performance than the stand-alone schemes. Overall, the proposed ILLGP, IUM, FWL, LFLANN, and CS-II algorithms are found to be the best in their respective categories for effectively reducing blurring in up-scaled images.
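    As a simplified sketch of the unsharp-masking style of pre-processing described above (with illustrative blur and weight parameters, and cubic-spline zoom standing in for the Lanczos-3 interpolation used in the thesis), the low-resolution image is sharpened before up-scaling so that the interpolation starts from stronger high-frequency content:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def ump_upscale(lr_image, scale=2, weight=0.8, blur_sigma=1.0):
    """Unsharp-masking pre-processing followed by 2-D up-scaling.
    The HF extract is the difference between the image and its blurred
    version; it is weighted and added back before interpolation.
    `weight` and `blur_sigma` are illustrative values, and cubic-spline
    zoom stands in for the Lanczos-3 interpolation used in the thesis."""
    img = lr_image.astype(float)
    hf = img - gaussian_filter(img, blur_sigma)       # high-frequency extract
    sharpened = np.clip(img + weight * hf, 0, 255)    # pre-sharpened LR image
    return zoom(sharpened, scale, order=3)            # 2-D up-scaling

lr = np.random.randint(0, 256, (32, 32))
hr = ump_upscale(lr)
print(hr.shape)   # (64, 64)
```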

    Perceptual Video Hashing for Content Identification and Authentication

    Perceptual hashing has been widely used to identify similar content for video copy detection. It has also been adopted to detect malicious manipulations for video authentication. However, targeting both applications with a single system using the same hash would be highly desirable, as this saves storage space and reduces computational complexity. This paper proposes a perceptual video hashing system for both content identification and authentication. The objective is to design a hash extraction technique that withstands signal processing operations on the one hand and detects malicious attacks on the other. The proposed system relies on a new signal calibration technique for extracting the hash using the discrete cosine transform (DCT) and the discrete sine transform (DST). The calibration consists of determining the number of samples, called the normalizing shift, by which a digital signal must be shifted so that the shifted version matches a certain pattern according to its DCT/DST coefficients. The rationale behind the calibration idea is that the normalizing shift resists signal processing operations while remaining sensitive to local tampering (i.e., replacing a small portion of the signal with a different one). While the same hash serves both applications, two different similarity measures are proposed for video identification and authentication, respectively. Through extensive experiments with various types of video distortions and manipulations, the proposed system is shown to outperform related state-of-the-art video hashing techniques in terms of identification and authentication, with the additional ability to locate tampered regions.
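    The abstract does not specify the exact DCT/DST pattern that the normalizing shift must match, so the sketch below keeps only the calibration idea: it searches over circular shifts of a 1-D signal and picks the shift whose DCT concentrates the most energy in its first few coefficients, a hypothetical criterion rather than the authors' rule.

```python
import numpy as np
from scipy.fftpack import dct

def normalizing_shift(signal, k=4):
    """Return the circular shift that maximizes the energy of the first `k`
    DCT coefficients. The criterion is a stand-in for the DCT/DST pattern
    used in the paper; only the overall calibration idea is retained."""
    best_shift, best_score = 0, -np.inf
    for s in range(len(signal)):
        coeffs = dct(np.roll(signal, s), norm='ortho')
        score = np.sum(coeffs[:k] ** 2)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

x = np.sin(np.linspace(0, 4 * np.pi, 64))
print(normalizing_shift(x))   # stable under mild filtering of x, but changes
                              # if a segment of x is replaced (local tampering)
```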

    Digital Video Watermarking for Copyright Labelling

    The use of multimedia content on the internet continues to grow, especially digital video. Forgery, fraud, and piracy of video content cause problems because of the ample supply of resources for sharing content. Copyright is therefore crucial for digital video in order to prevent manipulation by irresponsible parties. There are many ways to label a video with copyright information; one of them is digital watermarking, which is used to prevent illegal replication or exploitation of digital content, to protect digital content, and to prevent illegal manipulation of multimedia. Several methods, namely the Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), and Discrete Fourier Transform (DFT), are compared for video copyright labelling in terms of imperceptibility and robustness after several manipulations are applied to the watermarked video. In terms of imperceptibility, the DWT method yields a PSNR of 45.62435 dB, the DCT method 45.89422 dB, and the DFT method 45.77747 dB, for an average PSNR of 45.76535 dB across the three methods. This means the watermarked video looks similar to the original; it can therefore be concluded from the experiments that the videos watermarked with the DWT, DCT, and DFT methods are still of good quality and satisfy imperceptibility. In terms of robustness, the mean NC is 0.63974 for the DWT method, 0.755839 for the DCT method, and 0.745442 for the DFT method, indicating that the watermarks extracted by the three methods match the original watermark; in other words, the watermarks of all three methods can be extracted well even when attacks are applied to them. From the imperceptibility and robustness tests of the DWT, DCT, and DFT methods, the DCT method can be said to be better than the DWT and DFT methods because of its higher PSNR and NC performance.
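    For reference, the two quality measures quoted above, PSNR for imperceptibility and normalized correlation (NC) for robustness, can be computed per frame roughly as follows; averaging over video frames and the specific watermark format are omitted.

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames of equal size."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def nc(watermark, extracted):
    """Normalized correlation between the embedded and the extracted watermark."""
    w = watermark.astype(float).ravel()
    e = extracted.astype(float).ravel()
    return float(np.dot(w, e) / (np.linalg.norm(w) * np.linalg.norm(e) + 1e-12))

frame = np.random.randint(0, 256, (64, 64))
marked = np.clip(frame + np.random.normal(0, 5, frame.shape), 0, 255)
print(psnr(frame, marked), nc(frame, frame))   # high PSNR for a mild distortion; NC of 1.0 for identical marks
```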