
    Boosting Image Forgery Detection using Resampling Features and Copy-move analysis

    Realistic image forgeries involve a combination of splicing, resampling, cloning, region removal and other methods. While resampling detection algorithms are effective in detecting splicing and resampling, copy-move detection algorithms excel in detecting cloning and region removal. In this paper, we combine these complementary approaches in a way that boosts the overall accuracy of image manipulation detection. We use the copy-move detection method as a pre-filtering step and pass those images that are classified as untampered to a deep learning based resampling detection framework. Experimental results on various datasets, including the 2017 NIST Nimble Challenge Evaluation dataset comprising nearly 10,000 pristine and tampered images, show a consistent increase of 8%-10% in detection rates when the copy-move algorithm is combined with different resampling detection algorithms.
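
    A minimal Python sketch of the two-stage pipeline described in this abstract. The callables `copy_move_detect` and `resampling_score` and the 0.5 decision threshold are hypothetical stand-ins for the paper's detectors, not their actual implementations.

```python
def detect_forgery(image, copy_move_detect, resampling_score, threshold=0.5):
    """Two-stage image manipulation detection (sketch).

    copy_move_detect(image) -> bool : stand-in for the copy-move detector
    resampling_score(image) -> float: stand-in for the deep-learning
                                      resampling-feature detector
    """
    # Stage 1: the copy-move detector acts as a pre-filter and directly
    # flags cloning and region-removal forgeries.
    if copy_move_detect(image):
        return True
    # Stage 2: images the pre-filter classifies as untampered are passed to
    # the resampling-based detector, which is sensitive to splicing and
    # resampling artifacts.
    return resampling_score(image) > threshold
```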

    On the auxiliary particle filter

    In this article we study asymptotic properties of weighted samples produced by the auxiliary particle filter (APF) proposed by Pitt and Shephard (1999). Besides establishing a central limit theorem (CLT) for smoothed particle estimates, we also derive bounds on the Lp error and bias of the same for a finite particle sample size. By examining the recursive formula for the asymptotic variance of the CLT, we identify first-stage importance weights for which the increase of asymptotic variance at a single iteration of the algorithm is minimal. In the light of these findings, we discuss and demonstrate on several examples how the APF algorithm can be improved.
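
    A NumPy sketch of one APF iteration, using the common choice of first-stage weights built from the likelihood evaluated at a point prediction of the next state. The callables `propagate`, `loglik` and `predict` are user-supplied model components assumed for illustration; this is not the weight choice derived in the paper.

```python
import numpy as np

def apf_step(rng, particles, log_weights, y, propagate, loglik, predict):
    """One auxiliary particle filter step (sketch).

    particles   : (N, d) array approximating p(x_{t-1} | y_{1:t-1})
    log_weights : (N,) unnormalised log importance weights
    propagate   : propagate(rng, x_prev) samples x_t from the transition
    loglik      : loglik(y, x) = log p(y_t | x_t)
    predict     : predict(x_prev) is a point prediction of x_t (e.g. the
                  transition mean) used to form the first-stage weights
    """
    n = len(particles)
    # First-stage weights: previous weights times an approximation of the
    # predictive likelihood, evaluated at the point prediction.
    stage1 = log_weights + np.array([loglik(y, predict(x)) for x in particles])
    probs = np.exp(stage1 - stage1.max())
    probs /= probs.sum()
    # Resample ancestor indices according to the first-stage weights ...
    idx = rng.choice(n, size=n, p=probs)
    # ... then propagate the selected particles through the transition.
    new_particles = np.array([propagate(rng, particles[i]) for i in idx])
    # Second-stage weights correct for the first-stage approximation.
    new_log_weights = np.array([
        loglik(y, new_particles[k]) - loglik(y, predict(particles[idx[k]]))
        for k in range(n)
    ])
    return new_particles, new_log_weights
```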

    Some thoughts on the use of InSAR data to constrain models of surface deformation: Noise structure and data downsampling

    Repeat-pass Interferometric Synthetic Aperture Radar (InSAR) provides spatially dense maps of surface deformation with potentially tens of millions of data points. Here we estimate the actual covariance structure of noise in InSAR data. We compare the results for several independent interferograms with a large ensemble of GPS observations of tropospheric delay and discuss how the common approaches used during processing of InSAR data affect the inferred covariance structure. Motivated by computational concerns associated with numerical modeling of deformation sources, we then combine the data-covariance information with the inherent resolution of an assumed source model to develop an efficient algorithm for spatially variable data resampling (or averaging). We illustrate these technical developments with two earthquake scenarios at different ends of the earthquake magnitude spectrum. For the larger events, our goal is to invert for the coseismic fault slip distribution. For smaller events, we infer the hypocenter location and moment. We compare the results of inversions using several different resampling algorithms, and we assess the importance of using the full noise covariance matrix.
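
    The paper's downsampling is driven by the noise covariance and the resolution of an assumed source model. The sketch below instead uses a simpler variance-driven quadtree, a common alternative, purely to illustrate the general idea of spatially variable resampling; the function name, thresholds and NaN handling are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def quadtree_downsample(data, var_threshold, min_size=4, origin=(0, 0)):
    """Spatially variable averaging of a 2-D InSAR displacement patch (sketch).

    A cell is subdivided while the scatter of its pixels exceeds
    var_threshold; otherwise it is replaced by a single averaged sample.
    Returns a list of (row, col, mean_displacement) samples.
    """
    rows, cols = data.shape
    r0, c0 = origin
    valid = data[~np.isnan(data)]
    if valid.size == 0:
        return []
    if rows <= min_size or cols <= min_size or np.var(valid) < var_threshold:
        # Keep one averaged sample at the cell centre.
        return [(r0 + rows / 2.0, c0 + cols / 2.0, float(np.mean(valid)))]
    # Otherwise split the cell into four quadrants and recurse.
    rh, ch = rows // 2, cols // 2
    samples = []
    samples += quadtree_downsample(data[:rh, :ch], var_threshold, min_size, (r0, c0))
    samples += quadtree_downsample(data[:rh, ch:], var_threshold, min_size, (r0, c0 + ch))
    samples += quadtree_downsample(data[rh:, :ch], var_threshold, min_size, (r0 + rh, c0))
    samples += quadtree_downsample(data[rh:, ch:], var_threshold, min_size, (r0 + rh, c0 + ch))
    return samples
```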

    Non-parametric statistical thresholding for sparse magnetoencephalography source reconstructions.

    Uncovering brain activity from magnetoencephalography (MEG) data requires solving an ill-posed inverse problem, greatly confounded by noise, interference, and correlated sources. Sparse reconstruction algorithms, such as Champagne, show great promise in that they provide focal brain activations robust to these confounds. In this paper, we address the technical considerations of statistically thresholding brain images obtained from sparse reconstruction algorithms. The source power distribution of sparse algorithms makes this class of algorithms ill-suited to "conventional" techniques. We propose two non-parametric resampling methods hypothesized to be compatible with sparse algorithms. The first adapts the maximal statistic procedure to sparse reconstruction results, and the second departs from the maximal statistic, putting forth a less stringent procedure that protects against spurious peaks. Simulated MEG data and three real data sets are utilized to demonstrate the efficacy of the proposed methods. Two sparse algorithms, Champagne and generalized minimum-current estimation (G-MCE), are compared to two non-sparse algorithms, a variant of minimum-norm estimation (sLORETA) and an adaptive beamformer. The results, in general, demonstrate that the already sparse images obtained from Champagne and G-MCE are further thresholded by both proposed statistical thresholding procedures. While non-sparse algorithms are thresholded by the maximal statistic procedure, they are not made sparse. The work presented here is one of the first attempts to address the problem of statistically thresholding sparse reconstructions, and aims to improve upon this already advantageous and powerful class of algorithms.
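
    A short sketch of a generic maximal statistic resampling threshold, the first of the two procedures the abstract refers to. The callable `resample_null`, which must produce a surrogate statistic image under the null hypothesis (e.g. from sign-flipped or permuted epochs), is an assumed stand-in; the paper's adaptation to sparse reconstructions and its second, less stringent procedure are not reproduced here.

```python
import numpy as np

def maximal_statistic_threshold(rng, stat_image, resample_null,
                                n_resamples=1000, alpha=0.05):
    """Family-wise error threshold via the maximal statistic procedure (sketch).

    stat_image    : 1-D array of voxel statistics from the source reconstruction
    resample_null : resample_null(rng) returns a surrogate statistic image
                    generated under the null hypothesis
    """
    # Build the null distribution of the image-wise maximum statistic.
    max_null = np.array([np.max(resample_null(rng)) for _ in range(n_resamples)])
    # The corrected threshold is the (1 - alpha) quantile of the maxima.
    threshold = np.quantile(max_null, 1.0 - alpha)
    # Voxels below the threshold are zeroed out.
    thresholded = np.where(stat_image >= threshold, stat_image, 0.0)
    return thresholded, threshold
```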

    On the energy leakage of discrete wavelet transform

    The energy leakage is an inherent deficiency of the discrete wavelet transform (DWT) which is often ignored by researchers and practitioners. In this paper, a systematic investigation into the energy leakage is reported. The DWT is briefly introduced first, and then the energy leakage phenomenon is described using a numerical example as an illustration and its effect on the DWT results is discussed. Focusing on the Daubechies wavelet functions, the band overlap between the quadrature mirror analysis filters was studied, and the results reveal that there is an unavoidable tradeoff between the degree of band overlap and the time resolution of the DWT. The dependency of the energy leakage on the wavelet function order was studied using a criterion defined to evaluate the severity of the leakage. In addition, a method based on a resampling technique was proposed to mitigate the effects of the energy leakage. The effectiveness of the proposed method has been validated by numerical simulation and experimental studies.
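
    A PyWavelets sketch of the kind of leakage experiment the abstract describes: a pure tone placed near a decomposition band edge is decomposed with Daubechies wavelets of increasing order, and the share of energy falling outside the tone's nominal band is inspected. The tone frequency, decomposition level and wavelet orders are illustrative choices, not the paper's numerical example or its severity criterion.

```python
import numpy as np
import pywt  # PyWavelets

fs = 1024.0                                  # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 130.0 * t)       # tone just above the 128 Hz edge
                                             # of the level-2 detail band

for order in (2, 10, 20):                    # Daubechies db2, db10, db20
    coeffs = pywt.wavedec(signal, f"db{order}", level=4)
    energy = np.array([np.sum(c ** 2) for c in coeffs])
    ratios = energy / energy.sum()           # share of energy per band
    # Energy appearing outside the band that nominally contains 130 Hz is
    # "leakage"; sharper (higher-order) filters reduce the band overlap at
    # the cost of longer filters, i.e. poorer time resolution.
    print(f"db{order}: band energy ratios = {np.round(ratios, 3)}")
```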

    Resampled Priors for Variational Autoencoders

    We propose Learned Accept/Reject Sampling (LARS), a method for constructing richer priors using rejection sampling with a learned acceptance function. This work is motivated by recent analyses of the VAE objective, which pointed out that commonly used simple priors can lead to underfitting. As the distribution induced by LARS involves an intractable normalizing constant, we show how to estimate it and its gradients efficiently. We demonstrate that LARS priors improve VAE performance on several standard datasets, both when they are learned jointly with the rest of the model and when they are fitted to a pretrained model. Finally, we show that LARS can be combined with existing methods for defining flexible priors for an additional boost in performance.
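
    A NumPy sketch of the accept/reject mechanism and of a Monte Carlo estimate of the intractable normalizer mentioned in the abstract. The callables `proposal_sample`, `accept_prob` and `log_proposal`, the truncation length, and the plain Monte Carlo estimator are illustrative assumptions; gradient estimation and the exact truncated density used in the paper are not covered.

```python
import numpy as np

def sample_resampled_prior(rng, proposal_sample, accept_prob, max_tries=100):
    """Draw one sample from p(z) proportional to pi(z) * a(z) (sketch).

    proposal_sample(rng) -> z : draws z from the simple proposal pi,
                                e.g. a standard normal
    accept_prob(z) -> [0, 1]  : learned acceptance function a(z)
                                (a small network in the paper; any callable here)
    """
    for _ in range(max_tries):
        z = proposal_sample(rng)
        if rng.uniform() < accept_prob(z):
            return z
    # Truncation: after max_tries rejections, keep the last proposal so the
    # sampling cost stays bounded.
    return z

def log_density_estimate(z, log_proposal, accept_prob, proposal_sample, rng,
                         n_mc=1024):
    """Monte Carlo estimate of log p(z) = log pi(z) + log a(z) - log Z,
    where Z = E_pi[a(z)] is the intractable normalising constant."""
    zs = [proposal_sample(rng) for _ in range(n_mc)]
    z_hat = np.mean([accept_prob(x) for x in zs])    # estimate of Z
    return log_proposal(z) + np.log(accept_prob(z)) - np.log(z_hat)
```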