454 research outputs found

    Source Camera Identification using Non-decimated Wavelet Transform

    Source camera identification of digital images can be performed by matching the sensor pattern noise (SPN) of the images against a camera's reference signature. This paper presents a non-decimated wavelet based source camera identification method for digital images. The proposed algorithm applies a non-decimated wavelet transform to the input image and splits the image into its wavelet sub-bands. The coefficients within the resulting high-frequency wavelet sub-bands are filtered to extract the SPN of the image. Cross-correlation of the image SPN with the camera reference SPN signature is then used to identify the most likely source device of the image. Experimental results were generated using images from ten cameras to identify each image's source camera. The results show that the proposed technique outperforms state-of-the-art wavelet based source camera identification methods.
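
    The described pipeline has three steps: a stationary (non-decimated) wavelet decomposition, denoising of the high-frequency sub-bands to isolate the noise residual, and correlation against per-camera reference signatures. A minimal sketch follows, assuming PyWavelets and a Wiener filter as the sub-band denoiser; the helper names, the db8 wavelet, and the single decomposition level are illustrative choices, not the paper's exact configuration.

    ```python
    # Hedged sketch of the described pipeline, assuming PyWavelets and a Wiener
    # filter as the sub-band denoiser (illustrative choices, not the paper's
    # exact configuration).
    import numpy as np
    import pywt
    from scipy.signal import wiener

    def extract_spn(image, wavelet="db8"):
        """Estimate the sensor pattern noise of a 2-D grayscale image."""
        h, w = image.shape
        image = image[:h - h % 2, :w - w % 2].astype(np.float64)  # swt2 needs even sides
        # Stationary (non-decimated) wavelet transform: the sub-bands keep the
        # full image resolution because nothing is downsampled.
        cA, (cH, cV, cD) = pywt.swt2(image, wavelet, level=1)[0]
        # Wiener-filter each high-frequency sub-band; the part the filter
        # removes is kept as that band's noise residual.
        residuals = tuple(band - wiener(band, mysize=3) for band in (cH, cV, cD))
        # Reconstruct from the residuals only, zeroing the low-frequency band.
        return pywt.iswt2([(np.zeros_like(cA), residuals)], wavelet)

    def ncc(a, b):
        """Normalized cross-correlation of two equal-sized SPN arrays."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def identify_camera(image, reference_spns):
        """Pick the camera whose reference SPN correlates best with the image SPN."""
        spn = extract_spn(image)
        return max(reference_spns, key=lambda cam: ncc(spn, reference_spns[cam]))
    ```

    Here reference_spns is assumed to map camera names to signatures precomputed at the same crop size, for example by averaging the SPNs of many images taken with each camera.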

    Bilateral filter in image processing

    The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing across edges. It has been shown to be an effective image denoising technique and can also be applied to blocking artifact reduction. An important issue in applying the bilateral filter is the selection of its parameters, which affect the results significantly. Another research interest is accelerating its computation. This thesis makes three main contributions. The first is an empirical study of optimal bilateral filter parameter selection for image denoising. I propose an extension of the bilateral filter, the multiresolution bilateral filter, in which bilateral filtering is applied to the low-frequency sub-bands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. The second contribution is a spatially adaptive method to reduce compression artifacts. To avoid over-smoothing texture regions and to effectively eliminate blocking and ringing artifacts, texture regions and block-boundary discontinuities are first detected; these are then used to control/adapt the spatial and intensity parameters of the bilateral filter. Test results show that the adaptive method improves the quality of restored images significantly more than the standard bilateral filter. The third contribution is an improvement of the fast bilateral filter, in which I use a combination of multiple windows to approximate the Gaussian filter more precisely.
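
    A minimal sketch of the multiresolution idea described above, assuming PyWavelets for the filter bank and OpenCV's bilateral filter: the approximation (low-frequency) sub-band is bilateral filtered while the detail sub-bands are soft-thresholded, then the image is reconstructed. The wavelet, level, and parameter values are placeholders, not the thesis's tuned or adaptive settings.

    ```python
    # Sketch of a multiresolution bilateral filter: bilateral filtering on the
    # low-frequency band, wavelet thresholding on the detail bands. Parameter
    # values are illustrative assumptions.
    import numpy as np
    import pywt
    import cv2

    def mr_bilateral_denoise(image, wavelet="db4", levels=2,
                             sigma_color=25.0, sigma_space=5.0, thresh=10.0):
        """Denoise a 2-D grayscale image inside a wavelet filter bank."""
        coeffs = pywt.wavedec2(image.astype(np.float32), wavelet, level=levels)
        # Bilateral-filter the low-frequency (approximation) sub-band.
        approx = np.ascontiguousarray(coeffs[0], dtype=np.float32)
        coeffs[0] = cv2.bilateralFilter(approx, d=5, sigmaColor=sigma_color,
                                        sigmaSpace=sigma_space)
        # Soft-threshold the high-frequency detail sub-bands.
        coeffs[1:] = [tuple(pywt.threshold(b, thresh, mode="soft") for b in bands)
                      for bands in coeffs[1:]]
        return pywt.waverec2(coeffs, wavelet)
    ```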

    Texture and artifact decomposition for improving generalization in deep-learning-based deepfake detection

    The harmful use of DeepFake technology poses a significant threat to public welfare, precipitating a crisis in public opinion. Existing detection methods, predominantly relying on convolutional neural networks and deep learning paradigms, focus on achieving high in-domain recognition accuracy across many forgery techniques. However, overlooking the intricate interplay between textures and artifacts results in compromised performance across diverse forgery scenarios. This paper introduces a framework, the Texture and Artifact Detector (TAD), to mitigate the limited generalization that stems from the mutual neglect of textures and artifacts. Specifically, our approach examines the similarities among disparate forged datasets, discerning synthetic content from the consistency of textures and the presence of artifacts. Furthermore, we use a model ensemble learning strategy to judiciously aggregate the texture disparities and artifact patterns inherent in various forgery types, thereby improving the model's generalization ability. Our comprehensive experimental analysis, encompassing extensive intra-dataset and cross-dataset validations along with evaluations on both video sequences and individual frames, confirms the effectiveness of TAD. Results on four benchmark datasets highlight the significant impact of jointly considering texture and artifact information, leading to a marked improvement in detection capability.
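
    The abstract only names the strategy (two sources of evidence aggregated by ensemble learning), so the following PyTorch sketch is a loose illustration, not the paper's TAD architecture: two backbone branches, one intended for texture-consistency cues and one for artifact cues, whose real/fake logits are combined with a learnable weight. The backbones, input size, and fusion rule are all assumptions.

    ```python
    # Loose illustration of a two-evidence ensemble detector; NOT the paper's
    # TAD architecture. Backbones and the fusion rule are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class TwoBranchDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.texture_branch = resnet18(num_classes=2)   # texture-consistency expert
            self.artifact_branch = resnet18(num_classes=2)  # forgery-artifact expert
            self.alpha = nn.Parameter(torch.tensor(0.0))    # learnable mixing weight

        def forward(self, x):
            w = torch.sigmoid(self.alpha)
            # Weighted average of the two experts' real/fake logits.
            return w * self.texture_branch(x) + (1 - w) * self.artifact_branch(x)

    logits = TwoBranchDetector()(torch.randn(2, 3, 224, 224))  # -> shape (2, 2)
    ```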

    Edge-enhancing image smoothing.

    Xu, Yi. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 62-69). Abstracts in English and Chinese.
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Organization --- p.4
    Chapter 2 --- Background and Motivation --- p.7
    Chapter 2.1 --- 1D Mondrian Smoothing --- p.9
    Chapter 2.2 --- 2D Formulation --- p.13
    Chapter 3 --- Solver --- p.16
    Chapter 3.1 --- More Analysis --- p.20
    Chapter 4 --- Edge Extraction --- p.26
    Chapter 4.1 --- Related Work --- p.26
    Chapter 4.2 --- Method and Results --- p.28
    Chapter 4.3 --- Summary --- p.32
    Chapter 5 --- Image Abstraction and Pencil Sketching --- p.35
    Chapter 5.1 --- Related Work --- p.35
    Chapter 5.2 --- Method and Results --- p.36
    Chapter 5.3 --- Summary --- p.40
    Chapter 6 --- Clip-Art Compression Artifact Removal --- p.41
    Chapter 6.1 --- Related Work --- p.41
    Chapter 6.2 --- Method and Results --- p.43
    Chapter 6.3 --- Summary --- p.46
    Chapter 7 --- Layer-Based Contrast Manipulation --- p.49
    Chapter 7.1 --- Related Work --- p.49
    Chapter 7.2 --- Method and Results --- p.50
    Chapter 7.2.1 --- Edge Adjustment --- p.51
    Chapter 7.2.2 --- Detail Magnification --- p.54
    Chapter 7.2.3 --- Tone Mapping --- p.55
    Chapter 7.3 --- Summary --- p.56
    Chapter 8 --- Conclusion and Discussion --- p.59
    Bibliography --- p.62

    Color image quality measures and retrieval

    The focus of this dissertation is mainly on color images, especially images with lossy compression. Issues related to color quantization, color correction, color image retrieval, and color image quality evaluation are addressed. A no-reference color image quality index is proposed. A novel color correction method applied to low bit-rate JPEG images is developed. A novel method for content-based image retrieval based upon combined feature vectors of shape, texture, and color similarities is suggested. In addition, an image-specific color reduction method is introduced, which allows a 24-bit JPEG image to be shown on an 8-bit color monitor with a 256-color display. The reduction in download and decode time mainly comes from a smart encoder incorporating the proposed color reduction method after the color space conversion stage.

    To summarize, the methods that have been developed fall into two categories: visual representation and image quality measure. Three algorithms are designed for visual representation: (1) an image-based visual representation for color correction on low bit-rate JPEG images; previous studies on color correction mainly address color calibration among devices, with little attention paid to compressed images, whose color distortion is evident at low JPEG bit rates; here, a lookup table algorithm is designed based on the loss of PSNR at different compression ratios; (2) a feature-based representation for content-based image retrieval, a concatenated vector of color, shape, and texture features from a region of interest (ROI); and (3) an image-specific 256-color (8-bit) reproduction for color reduction from 16 million colors (24 bits); by inserting the proposed color reduction method into a JPEG encoder, the image size, and hence the transmission time, is further reduced, and the corresponding decoder needs less time to decode.

    Three algorithms are designed for image quality measure (IQM): (1) a referenced IQM based upon an image representation in very low dimension; previous IQMs operate in high-dimensional domains, including the spatial and frequency domains; here, a low-dimensional IQM based on random projection is designed that preserves the accuracy of the high-dimensional IQM; (2) a no-reference image blur metric; based on the edge gradient, the degree of image blur can be measured; and (3) a no-reference color IQM based upon colorfulness, contrast, and sharpness.
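
    As a rough illustration of the random-projection idea behind the low-dimensional referenced IQM, the sketch below projects both images through the same random Gaussian matrix and compares them in the reduced space. The output dimension, fixed seed, and plain Euclidean distance are assumptions; a practical version would project patch-wise rather than flattening the whole image at once.

    ```python
    # Rough illustration of a random-projection IQM; dimensions, seed handling,
    # and the distance metric are assumptions, not the dissertation's design.
    import numpy as np

    def rp_iqm(reference, distorted, k=128, seed=0):
        """Distance between two equal-sized images after random projection."""
        x = reference.ravel().astype(np.float64)
        y = distorted.ravel().astype(np.float64)
        rng = np.random.default_rng(seed)
        # A Gaussian projection approximately preserves pairwise distances
        # (Johnson-Lindenstrauss), so quality differences survive it.
        P = rng.standard_normal((k, x.size)) / np.sqrt(k)
        return float(np.linalg.norm(P @ x - P @ y))
    ```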

    Super-resolving Compressed Images via Parallel and Series Integration of Artifact Reduction and Resolution Enhancement

    In this paper, we propose a novel compressed image super-resolution (CISR) framework based on parallel and series integration of artifact removal and resolution enhancement. Based on maximum a posteriori inference for estimating a clean low-resolution (LR) input image and a clean high-resolution (HR) output image from down-sampled and compressed observations, we have designed a CISR architecture consisting of two deep neural network modules: the artifact reduction module (ARM) and the resolution enhancement module (REM). ARM and REM work in parallel, with both taking the compressed LR image as input, while they also work in series, with REM taking the output of ARM as one of its inputs and ARM taking the output of REM as its other input. A unique property of our CISR system is that a single trained model is able to super-resolve LR images compressed by different methods to various qualities. This is achieved by exploiting the capacity of deep neural networks for handling image degradations, together with the parallel and series connections between ARM and REM, which reduce the dependency on specific degradations. ARM and REM are trained simultaneously by the deep unfolding technique. Experiments are conducted on a mixture of JPEG and WebP compressed images without a priori knowledge of the compression type or compression factor. Visual and quantitative comparisons demonstrate the superiority of our method over state-of-the-art super-resolution methods. Code link: https://github.com/luohongming/CISR_PS
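
    The parallel/series wiring can be made concrete with a toy unfolded network. The sketch below is an assumption-laden stand-in (tiny conv stacks, bicubic up/down-sampling, three stages), not the authors' ARM/REM modules: at every stage both modules receive the compressed LR image (parallel path), while ARM also consumes the downsampled output of REM and REM consumes the upsampled output of ARM (series path).

    ```python
    # Toy sketch of the parallel-plus-series ARM/REM wiring across unfolded
    # stages; module internals and stage count are placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def small_cnn(in_ch, out_ch, width=32):
        return nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(width, out_ch, 3, padding=1))

    class CISRSketch(nn.Module):
        def __init__(self, scale=2, stages=3):
            super().__init__()
            self.scale = scale
            # Each module sees two inputs: the compressed LR image (parallel
            # path) and the other module's previous output (series path).
            self.arms = nn.ModuleList(small_cnn(2, 1) for _ in range(stages))
            self.rems = nn.ModuleList(small_cnn(2, 1) for _ in range(stages))

        def forward(self, lr):  # lr: (N, 1, h, w) compressed low-res image
            up = lambda t: F.interpolate(t, scale_factor=self.scale,
                                         mode="bicubic", align_corners=False)
            hr = up(lr)  # initial HR estimate
            for arm, rem in zip(self.arms, self.rems):
                hr_down = F.interpolate(hr, size=lr.shape[-2:],
                                        mode="bicubic", align_corners=False)
                clean_lr = arm(torch.cat([lr, hr_down], dim=1))     # ARM: LR + REM feedback
                hr = rem(torch.cat([up(lr), up(clean_lr)], dim=1))  # REM: LR + ARM output
            return hr

    sr = CISRSketch()(torch.randn(1, 1, 64, 64))  # -> (1, 1, 128, 128)
    ```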