
    Fragile watermarking for image authentication using dyadic Walsh ordering

    Digital images are among the most frequently manipulated media, a trend driven by the ease of manipulation with rapidly evolving image-editing software. Watermarking offers an active authentication mechanism for images that addresses this problem. One of the most popular methods is Singular Value Decomposition (SVD), which offers good imperceptibility and detection capability. Nevertheless, SVD has high computational complexity and uses only the singular matrix S, discarding the two orthogonal matrices. This paper proposes using the Walsh matrix with dyadic ordering to generate a new S matrix without the orthogonal matrices. Experimental results show that the proposed method reduces computation time by 22% and 13% compared to the SVD-based method and a similar Hadamard-matrix-based method, respectively. This work can serve as a reference for speeding up the computation time of watermarking methods without compromising imperceptibility or authentication capability.
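    As a rough illustration of the trade-off, the sketch below builds a Walsh matrix in dyadic (Paley) order, assuming it can be obtained by bit-reversing the row indices of the natural-order Hadamard matrix, and uses the transform's diagonal as a cheap stand-in for the SVD's S matrix. The block size, normalization, and the way the new S matrix is formed are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np
from scipy.linalg import hadamard

def dyadic_walsh(n):
    """Walsh matrix in dyadic (Paley) order, assuming it equals the
    natural-order Hadamard matrix with bit-reversed row indices."""
    H = hadamard(n)                      # natural (Hadamard) ordering, entries +/-1
    bits = int(np.log2(n))
    rev = [int(format(i, f'0{bits}b')[::-1], 2) for i in range(n)]
    return H[rev]

# Hypothetical 8x8 image block with values in [0, 255].
rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)

# Full SVD: typical SVD watermarking embeds into S and discards U, V.
U, S, Vt = np.linalg.svd(block)

# Walsh-based surrogate: the diagonal of the (normalized) Walsh transform
# plays the role of S, avoiding the cost of computing U and V.
W = dyadic_walsh(8)
S_walsh = np.abs(np.diag(W @ block @ W.T)) / 8.0   # W^{-1} = W.T / 8
```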

    An Evaluation of Popular Copy-Move Forgery Detection Approaches

    A copy-move forgery is created by copying and pasting content within the same image, and potentially post-processing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of algorithms have been proposed, each focusing on different types of post-processed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various post-processing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets; we achieve this by casting existing algorithms into a common pipeline. We examined the 15 most prominent feature sets, analyzing detection performance on a per-image and a per-pixel basis. We also created a challenging real-world copy-move dataset and a software framework for systematic image manipulation. Experiments show that the keypoint-based features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA, and Zernike features, perform very well: these feature sets exhibit the best robustness against various noise sources and downsampling while reliably identifying the copied regions.

    Comment: Main paper: 14 pages, supplemental material: 12 pages; main paper appeared in IEEE Transactions on Information Forensics and Security
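    To make the keypoint-based branch of such a pipeline concrete, here is a minimal sketch using OpenCV's SIFT, matching an image's descriptors against themselves to surface duplicated regions. The filename, ratio threshold, and minimum-distance filter are illustrative assumptions; the evaluated algorithms add filtering, outlier removal, and affine-transformation estimation steps not shown here.

```python
import cv2
import numpy as np

def sift_copy_move_matches(gray, ratio=0.6, min_dist=10):
    """Match SIFT descriptors within one image; well-separated pairs of
    similar keypoints are copy-move candidates."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # k=3: the best hit is the keypoint itself, so the ratio test
    # compares the 2nd-best against the 3rd-best match.
    knn = matcher.knnMatch(desc, desc, k=3)
    pairs = []
    for m in knn:
        if len(m) < 3:
            continue
        _self, best, second = m
        if best.distance < ratio * second.distance:
            p1 = np.array(kps[best.queryIdx].pt)
            p2 = np.array(kps[best.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_dist:  # suppress near-duplicates
                pairs.append((tuple(p1), tuple(p2)))
    return pairs

img = cv2.imread('suspect.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
candidates = sift_copy_move_matches(img)
print(f'{len(candidates)} candidate copy-move keypoint pairs')
```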

    e-Counterfeit: a mobile-server platform for document counterfeit detection

    This paper presents a novel application to detect counterfeit identity documents forged by a scan-and-print operation. Texture analysis approaches are proposed to extract validation features from the security background that is usually printed in documents such as IDs or banknotes. The main contribution of this work is the end-to-end mobile-server architecture, which provides a service for non-expert users and can therefore be used in several scenarios. The system also provides a crowdsourcing mode so that labeled images can be gathered, generating databases for incremental training of the algorithms.

    Comment: 6 pages, 5 figures
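    The abstract does not name its texture descriptors; as one common choice for this kind of validation feature, the sketch below computes a uniform local binary pattern (LBP) histogram of a security-background patch and compares it against a genuine reference with a chi-squared distance. All function names and parameters here are illustrative assumptions, not the paper's method.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def texture_signature(patch, P=8, R=1):
    """Uniform-LBP histogram of a background patch; a common texture
    descriptor (the paper's actual features may differ)."""
    lbp = local_binary_pattern(patch, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi2(a, b, eps=1e-12):
    """Chi-squared distance between two normalized histograms."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

# Hypothetical usage: a small chi2(texture_signature(query_patch),
# texture_signature(genuine_patch)) suggests an authentic security print.
```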

    Use of multiple singular value decompositions to analyze complex intracellular calcium ion signals

    We compare calcium ion (Ca2+) signaling between two exposures; the data are present as movies or, more prosaically, time series of images. This paper describes novel uses of singular value decompositions (SVD) and weighted versions of them (WSVD) to extract the signals from such movies in a way that is semi-automatic and tuned closely to the actual data and their many complexities. These complexities include the following. First, the images themselves are of no interest: all interest focuses on the behavior of individual cells across time, so the cells need to be segmented in an automated manner. Second, the cells themselves have 100+ pixels, and thus form 100+ curves measured over time, so data compression is required to extract the features of these curves. Third, some of the pixels in some of the cells are subject to image saturation due to bit-depth limits, and this saturation needs to be accounted for if one is to normalize the images in a reasonably unbiased manner. Finally, the Ca2+ signals have oscillations or waves that vary with time, and these signals need to be extracted. Thus, our aim is to show how to use multiple weighted and standard singular value decompositions to detect, extract, and clarify the Ca2+ signals. Our signal extraction methods then lead to simple although finely focused statistical methods to compare Ca2+ signals across experimental conditions.

    Comment: Published at http://dx.doi.org/10.1214/09-AOAS253 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
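    A minimal sketch of the weighted-SVD idea, under the assumption that saturation is handled by downweighting affected pixel rows before the decomposition: the leading right singular vector of a (pixels x time) matrix then serves as the cell's temporal signal. The synthetic data and the weighting rule are illustrative, not the paper's actual segmentation and weighting procedure.

```python
import numpy as np

def leading_signal(X, w):
    """Weighted SVD of a (pixels x time) matrix: scale each pixel row by
    sqrt(w) so saturated pixels contribute less, then take the leading
    right singular vector as the cell's temporal Ca2+ signal."""
    Xw = np.sqrt(w)[:, None] * X
    U, s, Vt = np.linalg.svd(Xw, full_matrices=False)
    return s[0] * Vt[0]                              # dominant time course

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
wave = np.sin(2 * np.pi * 0.5 * t) ** 2              # oscillating Ca2+-like wave
X = np.outer(rng.uniform(0.5, 1.5, 120), wave)       # 120 pixels sharing the wave
X = np.clip(X + 0.1 * rng.normal(size=X.shape), 0, 1.2)  # noise, saturation at 1.2
w = 1.0 - (X >= 1.2).mean(axis=1)                    # downweight saturated pixels
signal = leading_signal(X, w)
```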