
    Camera model identification based on the generalized noise model in natural images

    The goal of this paper is to design a statistical test for the camera model identification problem. The approach is based on a generalized noise model developed by following the image processing pipeline of a digital camera. More specifically, the model is obtained by starting from the heteroscedastic noise model, which describes the linear relation between the expectation and variance of a RAW pixel, and taking into account the non-linear effect of gamma correction. The generalized noise model characterizes a natural image in TIFF or JPEG format more accurately. The present paper extends our previous work on camera model identification from RAW images based on the heteroscedastic noise model. The parameters specified in the generalized noise model are used as a camera fingerprint to identify camera models. The camera model identification problem is cast in the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the Likelihood Ratio Test is presented and its statistical performance is theoretically established. In practice, when the model parameters are unknown, two Generalized Likelihood Ratio Tests are designed to deal with this difficulty, so that they can meet a prescribed false alarm probability while ensuring high detection performance. Numerical results on simulated images and real natural JPEG images highlight the relevance of the proposed approach.
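    The noise model described above can be sketched numerically: pixel variance is linear in the expectation for RAW data, and gamma correction then makes the relation non-linear. The snippet below simulates that pipeline and recovers the heteroscedastic parameters (a, b) from the RAW stage with a least-squares fit. All parameter values, the binning scheme, and the fit itself are illustrative assumptions, not the paper's estimation procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical camera parameters (assumptions, not taken from the paper)
    a, b = 0.04, 0.002      # heteroscedastic law: Var(raw) = a * mu + b
    gamma = 2.2             # gamma-correction exponent

    mu = rng.uniform(0.05, 0.9, size=100_000)          # scene intensities in [0, 1]
    raw = mu + rng.normal(0.0, np.sqrt(a * mu + b))    # RAW pixels: linear mean/variance law
    tiff = np.clip(raw, 0, 1) ** (1 / gamma)           # non-linear gamma correction (TIFF/JPEG stage)

    # Recover (a, b) from RAW data: bin pixels by intensity, then regress
    # the per-bin sample variance against the per-bin sample mean.
    bins = np.linspace(0.05, 0.9, 40)
    idx = np.digitize(mu, bins)
    means = np.array([raw[idx == k].mean() for k in range(1, len(bins))])
    varis = np.array([raw[idx == k].var() for k in range(1, len(bins))])
    A = np.vstack([means, np.ones_like(means)]).T
    a_hat, b_hat = np.linalg.lstsq(A, varis, rcond=None)[0]
    print(a_hat, b_hat)    # recovered estimates, close to (a, b)
    ```

    After gamma correction the mean/variance relation of `tiff` is no longer linear, which is why the generalized model is needed for TIFF or JPEG images.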

    Individual camera device identification from JPEG images

    The goal of this paper is to investigate the problem of source camera device identification for natural images in JPEG format. We propose an improved signal-dependent noise model describing the statistical distribution of pixels in a JPEG image. The model relies on the heteroscedastic noise parameters, which relate the variance of a pixel's noise to its expectation and are considered as a unique fingerprint. It is also shown that the non-linear response of pixels can be captured by characterizing the linear relation between those heteroscedastic parameters, which are then used to identify the source camera device. The identification problem is cast within the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the Likelihood Ratio Test (LRT) is presented and its performance is theoretically established; the statistical performance of the LRT serves as an upper bound on the detection power. In practice, when the nuisance parameters are unknown, two generalized LRTs based on estimates of those parameters are established. Numerical results on simulated data and real natural images highlight the relevance of the proposed approach. While these results provide a first positive proof of concept, the method still needs to be extended for a meaningful comparison with PRNU-based approaches, which benefit from years of experience.
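    In the ideal, known-parameter setting, the LRT amounts to comparing the log-likelihood of the observed pixels under each candidate device's fingerprint. The sketch below uses made-up fingerprint values and a Gaussian heteroscedastic noise assumption to illustrate the decision rule; the paper's generalized LRTs additionally estimate the unknown nuisance parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two hypothetical devices with different heteroscedastic fingerprints (a, b)
    # (illustrative values only; the paper estimates these from JPEG images)
    cam0 = (0.03, 0.001)
    cam1 = (0.05, 0.003)

    def loglik(raw, mu, params):
        """Gaussian log-likelihood of pixels whose noise variance follows a*mu + b."""
        a, b = params
        var = a * mu + b
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (raw - mu) ** 2 / var)

    # Simulate an image captured by cam1
    mu = rng.uniform(0.1, 0.8, size=50_000)
    raw = mu + rng.normal(0.0, np.sqrt(cam1[0] * mu + cam1[1]))

    # Likelihood Ratio Test in the ideal context where all parameters are known
    lr = loglik(raw, mu, cam1) - loglik(raw, mu, cam0)
    print("decide cam1" if lr > 0 else "decide cam0")
    ```

    Thresholding `lr` at a value other than zero is what lets the test meet a prescribed false alarm probability.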

    Convolutional Deblurring for Natural Imaging

    Full text link
    In this paper, we propose a novel design for image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Despite numerous deconvolution methods that blindly estimate blur in either inclusive or exclusive form, they remain practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceived image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for the Gaussian and Laplacian models that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing.
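    The idea of a one-shot deblurring kernel built from even-derivative FIR filters can be sketched in 1-D: a first-order truncation of the inverse Gaussian blur is the identity minus a scaled second-derivative (Laplacian) filter, which boosts the PSF's frequency fall-off. This is a minimal sketch under a known-sigma Gaussian PSF assumption, not the paper's full kernel synthesis or blind PSF estimation:

    ```python
    import numpy as np

    sigma = 1.0   # assumed-known Gaussian PSF width (the paper estimates PSF statistics blindly)

    def gaussian_kernel(sigma, radius=4):
        """Normalized 1-D Gaussian blur kernel."""
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        return k / k.sum()

    # Second-derivative (Laplacian) FIR filter: the lowest even-derivative term
    lap = np.array([1.0, -2.0, 1.0])

    signal = np.zeros(64)
    signal[20:44] = 1.0                                  # sharp step edges
    blurred = np.convolve(signal, gaussian_kernel(sigma), mode="same")

    # One-shot correction: deblurred = (I - (sigma^2 / 2) * Laplacian) applied to the blurry input
    deblurred = blurred - 0.5 * sigma**2 * np.convolve(blurred, lap, mode="same")

    # The corrected signal should be closer to the original sharp step
    err_blur = np.sum((blurred - signal) ** 2)
    err_deblur = np.sum((deblurred - signal) ** 2)
    print(err_deblur < err_blur)
    ```

    In the frequency domain this works because the correction filter's response, roughly 1 + (sigma * omega)^2 / 2, partially cancels the Gaussian's exp(-(sigma * omega)^2 / 2) fall-off; higher even-derivative terms would cancel further orders.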

    Camera model identification based on DCT coefficient statistics

    The goal of this paper is to design a statistical test for the camera model identification problem from JPEG images. The approach relies on a camera fingerprint extracted in the Discrete Cosine Transform (DCT) domain, based on the state-of-the-art model of DCT coefficients. The camera model identification problem is cast in the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the Likelihood Ratio Test is presented and its performance is theoretically established. For practical use, two Generalized Likelihood Ratio Tests are designed to deal with unknown model parameters, so that they can meet a prescribed false alarm probability while ensuring high detection performance. Numerical results on simulated and real JPEG images highlight the relevance of the proposed approach.
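    The DCT-domain fingerprint idea can be illustrated by computing JPEG-style 8x8 block DCT coefficients and summarizing each AC frequency with a simple per-frequency statistic. The Laplacian-scale summary below is a deliberately crude stand-in for the richer state-of-the-art DCT-coefficient model the paper relies on, and the random input is a stand-in for a natural image:

    ```python
    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II basis, as used in JPEG's 8x8 block transform."""
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2)
        return C * np.sqrt(2 / n)

    rng = np.random.default_rng(2)
    img = rng.normal(size=(64, 64))                    # stand-in for natural image content

    C = dct_matrix()
    blocks = img.reshape(8, 8, 8, 8).swapaxes(1, 2)    # 8x8 grid of 8x8 pixel blocks
    coeffs = C @ blocks @ C.T                          # per-block 2-D DCT

    # Crude per-frequency fingerprint: maximum-likelihood Laplacian scale of each
    # coefficient across all blocks (mean absolute value)
    scale = np.abs(coeffs).mean(axis=(0, 1))           # 8x8 map of scale estimates
    print(scale.shape)
    ```

    A hypothesis test would then compare such per-frequency statistics against each candidate camera model's fingerprint rather than inspecting them directly.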

    Image blur estimation based on the average cone of ratio in the wavelet domain

    In this paper, we propose a new algorithm for objective blur estimation using wavelet decomposition. The central idea of our method is to estimate blur as a function of the center of gravity of the average cone ratio (ACR) histogram. The key properties of the ACR are twofold: it is powerful in estimating local edge regularity, and it is nearly insensitive to noise. We use these properties to estimate the blurriness of an image irrespective of the noise level. In particular, the center of gravity of the ACR histogram serves as a blur metric. The method is applicable both when a reference image is available and when there is none. The results demonstrate consistent performance of the proposed metric for a wide class of natural images and over a wide range of out-of-focus blurriness. Moreover, the proposed method shows remarkable insensitivity to noise compared to other wavelet-domain methods.
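    The underlying principle, that inter-scale ratios of wavelet detail coefficients measure local edge regularity and grow as edges get smoother, can be sketched with a 1-D Haar decomposition. This is a simplified, ACR-inspired ratio statistic of my own construction, not the paper's actual average-cone-of-ratio histogram:

    ```python
    import numpy as np

    def haar_details(x, level):
        """1-D Haar wavelet detail coefficients at a given dyadic level."""
        for _ in range(level - 1):
            x = (x[0::2] + x[1::2]) / np.sqrt(2)       # approximation (low-pass) step
        return (x[0::2] - x[1::2]) / np.sqrt(2)        # detail at the requested level

    def cone_ratio_metric(signal):
        """Mean coarse-to-fine detail ratio (a crude, ACR-like blur statistic).
        Larger values indicate smoother, i.e. blurrier, edges."""
        d1 = haar_details(signal, 1)
        d2 = haar_details(signal, 2)
        fine = np.abs(d1[0::2]) + np.abs(d1[1::2])     # fine details under each coarse one
        mask = np.abs(d2) > 1e-6                       # keep only edge locations
        ratio = np.abs(d2[mask]) / (fine[mask] + 1e-6)
        return ratio.mean()

    step = np.zeros(256)
    step[129:] = 1.0                                   # sharp edge (odd offset so level-1 detail is nonzero)
    kernel = np.ones(9) / 9                            # box blur as a simple out-of-focus model (assumption)
    blurred = np.convolve(step, kernel, mode="same")

    print(cone_ratio_metric(step), cone_ratio_metric(blurred))
    ```

    Because blur smooths edges, the blurred signal's coarse-scale details dominate its fine-scale ones, so the metric is larger for the blurred edge than for the sharp one; noise, by contrast, affects fine and coarse scales in a way that largely cancels in such ratios.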