13 research outputs found

    Three layer authentications with a spiral block mapping to prove authenticity in medical images

    Digital medical images can be manipulated by unauthorized persons owing to advanced communication technology, so verifying the integrity and authenticity of a medical image has become an important issue. This paper proposed a self-embedding watermark using a spiral block mapping for tamper detection and restoration. Block-based coding with a block size of 3x3 was applied to perform the self-embedding watermark with two authentication bits and seven recovery bits. The authentication bits are obtained from a set of conditions between the sub-block and the block image, together with the parity bits of each sub-block. The authentication bits and the recovery bits are embedded in the least significant bits, with the recovery bits placed into different sub-blocks according to the proposed spiral block mapping. The watermarked images were tested under various tampering attacks such as blurring, unsharp masking, copy-move, mosaic, noise, removal, and sharpening. The experimental results show that the scheme achieved a PSNR of about 51.29 dB and an SSIM of about 0.994 on the watermarked image, with a tamper localization accuracy of 93.8%. In addition, the proposed scheme does not require external information to generate the recovery bits, and it was able to recover the tampered image with a PSNR of 40.45 dB and an SSIM of 0.994.
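    The per-block least-significant-bit embedding the abstract describes can be sketched in a few lines; the pixel values, bit layout, and helper names below are illustrative assumptions, not the paper's exact scheme:

    ```python
    import numpy as np

    def embed_lsb_bits(block, bits):
        """Embed watermark bits into the least significant bit of each pixel.

        `block` is a flat uint8 array; `bits` holds 0/1 values (the paper
        uses 2 authentication bits + 7 recovery bits per 3x3 block).
        """
        out = block.copy()
        for i, b in enumerate(bits):
            out[i] = (out[i] & 0xFE) | b  # clear the LSB, then set it to the bit
        return out

    def extract_lsb_bits(block, n):
        """Recover the first n embedded bits from the block's LSB plane."""
        return [int(p & 1) for p in block[:n]]

    block = np.array([52, 55, 61, 59, 79, 61, 76, 61, 73], dtype=np.uint8)
    bits = [1, 0, 1, 1, 0, 0, 1, 0, 1]  # 2 auth + 7 recovery bits (hypothetical)
    wm = embed_lsb_bits(block, bits)
    ```

    Because only the LSB plane changes, each pixel moves by at most one gray level, which is why such schemes keep the watermarked-image PSNR above 50 dB.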

    AuSR1: Authentication and self-recovery using a new image inpainting technique with LSB shifting in fragile image watermarking

    With the rapid development of multimedia technology, editing and manipulating digital images has become more accessible than ever. This paper proposed a color image authentication scheme based on blind fragile image watermarking for tamper detection and self-recovery, named AuSR1. AuSR1 divides each channel of the cover image into non-overlapping blocks of 2 × 2 pixels. The authentication data is embedded into the original block location, while the recovery data is embedded into a location distant from the original, determined by the block mapping algorithm. The watermark data is then embedded into the 2 LSBs to achieve high quality of the recovered image under tampering attacks. In addition, a permutation algorithm is applied to ensure the security of the watermark data. AuSR1 utilizes a three-layer authentication algorithm to achieve a high detection rate. The experimental results show that the scheme produced a PSNR of 45.57 dB and an SSIM of 0.9972 for the watermarked images. Furthermore, AuSR1 detected the tampered areas of the images with a high precision of 0.9943, and the recovered image achieved a PSNR of 27.64 dB and an SSIM of 0.9339 at a 50% tampering rate.
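    The distant-location embedding described above can be sketched as a keyed permutation over block indices; the key, block count, and shuffle below are illustrative assumptions, not AuSR1's actual mapping:

    ```python
    import random

    def block_mapping(n_blocks, key):
        """Derive a keyed permutation mapping each block to a partner block.

        A hypothetical stand-in for a block mapping algorithm: recovery data
        for block i is stored in block perm[i], so tampering that destroys a
        block is unlikely to also destroy its own recovery copy.
        """
        rng = random.Random(key)          # keyed, hence reproducible
        perm = list(range(n_blocks))
        rng.shuffle(perm)
        return perm

    perm = block_mapping(16, key=42)
    ```

    A production scheme would additionally enforce a minimum spatial distance between a block and its partner; a bare shuffle, as here, does not guarantee that.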

    A dual watermarking scheme for identity protection

    A novel dual watermarking scheme with potential applications in identity protection, media integrity maintenance, and copyright protection in both electronic and printed media is presented. The proposed watermarking scheme uses the owner's signature and fingerprint as watermarks, through which the ownership and validity of the media can be proven and kept intact. To begin with, the proposed watermarking scheme is implemented on continuous-tone/greyscale images, and later extended to images produced via multitoning, an advanced version of halftoning-based printing. The proposed watermark embedding is robust and imperceptible. Experimental simulations and evaluations of the proposed method show excellent results from both objective and subjective viewpoints.

    Statistical Tools for Digital Image Forensics

    A digitally altered image, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic image. The tampering, however, may disturb some underlying statistical properties of the image. Under this assumption, we propose five techniques that quantify and detect statistical perturbations found in different forms of tampered images: (1) re-sampled images (e.g., scaled or rotated); (2) manipulated color filter array interpolated images; (3) double JPEG compressed images; (4) images with duplicated regions; and (5) images with inconsistent noise patterns. These techniques work in the absence of any embedded watermarks or signatures. For each technique we develop the theoretical foundation, show its effectiveness on credible forgeries, and analyze its sensitivity and robustness to simple counter-attacks.
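    Technique (4), detecting duplicated regions, can be illustrated with a toy block-hashing pass; the image, block size, and exact-match criterion are assumptions for illustration, whereas real detectors match on robust features (e.g., DCT coefficients) so that duplicates survive recompression:

    ```python
    import numpy as np

    def find_duplicate_blocks(img, bs=4):
        """Flag exactly-matching blocks as copy-move candidates.

        Hashes every bs x bs block of a grayscale image and reports pairs
        of top-left coordinates whose blocks are byte-identical.
        """
        seen, pairs = {}, []
        h, w = img.shape
        for y in range(0, h - bs + 1, bs):
            for x in range(0, w - bs + 1, bs):
                key = img[y:y+bs, x:x+bs].tobytes()
                if key in seen:
                    pairs.append((seen[key], (y, x)))
                else:
                    seen[key] = (y, x)
        return pairs

    # Synthetic example: a distinct-valued image with one region copied.
    img = np.arange(256, dtype=np.uint8).reshape(16, 16)
    img[8:12, 8:12] = img[0:4, 0:4]       # simulate a copy-move forgery
    matches = find_duplicate_blocks(img)
    ```

    On this synthetic input the only collision is between the source region at (0, 0) and its copy at (8, 8).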

    Um framework para processamento paralelo de algoritmos de aumento de resolução de vídeos (A framework for parallel processing of video upscaling algorithms)

    Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2013. The magnification of visual signals consists of changing the size of an image or a video to larger spatial dimensions using digital signal processing techniques. This upscaling is usually done with interpolation methods, which tend to introduce distortions in the enlarged images. Such distortions occur because the interpolated image is reconstructed using only the samples of the smaller original image, which are insufficient for exact signal reconstruction and therefore generate aliasing effects. Interpolation techniques only approximate the non-sampled coefficients of the signal, producing unsatisfactory results for many applications, which then require other techniques to estimate the missing coefficients more accurately. To improve the accuracy of the estimated image with respect to the original, super-resolution techniques are used to reconstruct the non-sampled coefficients. Generally, these techniques enhance the enlarged image by using information from other low- or high-resolution images to estimate the information missing from the image being upscaled. Super-resolution is a computationally intensive process, in which the complexity of the algorithms is generally exponential in time as a function of the block size or magnification factor. When these techniques are applied to video, the super-resolution algorithm must therefore be extremely fast; however, the most computationally efficient algorithms are not always those that produce the best visual results. This work proposes a framework to improve the performance of various super-resolution algorithms through selective-processing and parallel-processing strategies. The dissertation examines the properties of the results produced by super-resolution algorithms and by interpolation techniques, and from these properties derives a criterion for classifying the regions in which the two approaches produce visually equivalent results, regardless of the upscaling method used. In those regions of equivalence, an interpolation algorithm, which is much faster than the computationally complex super-resolution algorithms, is used; in the remaining regions, super-resolution is applied. This reduces processing time without degrading the visual quality of the upscaled video. In addition, the work proposes a strategy for dividing the data among different tasks so that the upscaling operation is performed in parallel. An interesting property of the proposed model is that it decouples the load-distribution abstraction from the upscaling function: different super-resolution methods can exploit the framework's resources without their algorithms having to be modified to obtain parallelism. This makes the framework portable, scalable, and reusable by different super-resolution methods.
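    The selective-processing idea, routing "easy" regions to cheap interpolation and the rest to super-resolution, can be sketched with a per-block classifier; the variance criterion, threshold, and block size below are hypothetical stand-ins for the dissertation's actual equivalence criterion:

    ```python
    import numpy as np

    def classify_blocks(img, bs=8, var_thresh=25.0):
        """Split an image into blocks and pick an upscaling path per block.

        Low-variance (smooth) blocks are assumed visually equivalent under
        interpolation and super-resolution, so they go to the fast path;
        high-variance blocks go to the expensive super-resolution path.
        The two lists can then be handed to parallel workers.
        """
        interp, sr = [], []
        h, w = img.shape
        for y in range(0, h - bs + 1, bs):
            for x in range(0, w - bs + 1, bs):
                block = img[y:y+bs, x:x+bs].astype(np.float64)
                (interp if block.var() <= var_thresh else sr).append((y, x))
        return interp, sr

    # Synthetic frame: flat left half, textured right half.
    img = np.zeros((16, 16), dtype=np.uint8)
    img[:, 8:] = np.random.default_rng(0).integers(0, 256, (16, 8), dtype=np.uint8)
    smooth, detailed = classify_blocks(img)
    ```

    Because the classification itself is per-block and stateless, it also maps naturally onto the data-division strategy the dissertation describes: each worker can classify and upscale its own slice of blocks independently.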

    Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints

    Digital imaging has experienced tremendous growth in recent decades, and digital images have been used in a growing number of applications. With the increasing popularity of imaging devices and the availability of low-cost image editing software, the integrity of image content can no longer be taken for granted. A number of forensic and provenance questions often arise, including how an image was generated; where an image came from; and what has been done to the image since its creation, by whom, when, and how. This thesis presents two different sets of techniques to address the problem, via intrinsic and extrinsic fingerprints. The first part of this thesis introduces a new methodology based on intrinsic fingerprints for forensic analysis of digital images. The proposed method is motivated by the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on the final output data. We present methods to identify these intrinsic fingerprints via component forensic analysis, and demonstrate that these traces can serve as useful features for such forensic applications as building a robust device identifier and identifying potential technology infringement or licensing. Building upon component forensics, we develop a general authentication and provenance framework to reconstruct the processing history of digital images. We model post-device processing as a manipulation filter and estimate its coefficients using a linear time-invariant approximation. The absence of in-device fingerprints, the presence of new post-device fingerprints, or any inconsistencies in the estimated fingerprints across different regions of the test image all suggest that the image is not a direct device output and has possibly undergone some kind of processing, such as content tampering or steganographic embedding, after device capture. While component forensics is widely applicable in a number of scenarios, it has performance limitations. To understand its fundamental limits, we develop a new theoretical framework based on estimation and pattern classification theories, and define formal notions of forensic identifiability and classifiability of components. We show that the proposed framework provides a solid foundation for studying information forensics and helps design optimal input patterns that improve parameter estimation accuracy via semi non-intrusive forensics. The final part of the thesis investigates a complementary extrinsic approach via image hashing that can be used for content-based image authentication and other media security applications. We show that the proposed hashing algorithm is robust to common signal processing operations, and present a systematic evaluation of the security of the image hash against estimation and forgery attacks.
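    The idea of estimating a manipulation filter's coefficients under a linear time-invariant approximation can be illustrated in one dimension with least squares; the signal length, filter taps, and setup below are illustrative assumptions, not the thesis's 2-D formulation:

    ```python
    import numpy as np

    def estimate_fir(x, y, taps=3):
        """Estimate an FIR approximation of a manipulation filter.

        Sets up y[n] ≈ sum_k h[k] x[n-k] as a least-squares problem,
        given an input signal x and its processed output y.
        """
        # Column k holds x delayed by k samples; drop rows with wrapped values.
        X = np.column_stack([np.roll(x, k) for k in range(taps)])
        X = X[taps - 1:]
        h, *_ = np.linalg.lstsq(X, y[taps - 1:], rcond=None)
        return h

    rng = np.random.default_rng(1)
    x = rng.standard_normal(256)
    true_h = np.array([0.5, 0.3, 0.2])    # a hypothetical smoothing filter
    y = np.convolve(x, true_h)[:len(x)]   # the "tampered" output signal
    est = estimate_fir(x, y, taps=3)
    ```

    In the forensic setting the input is not directly available, so inconsistencies in the estimated coefficients across image regions, rather than the coefficients themselves, are what signal processing after capture.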

    Source identification in image forensics

    Source identification is one of the most important tasks in digital image forensics. In fact, the ability to reliably associate an image with its acquisition device may be crucial both during investigations and before a court of law. For example, one may be interested in proving that a certain photo was taken by his/her camera, in order to claim intellectual property. Conversely, law enforcement agencies may be interested in tracing back the origin of some images, because the images themselves violate the law (e.g., do not respect privacy laws), or because they point to subjects involved in unlawful and dangerous activities (such as terrorism or child pornography). More generally, proving beyond reasonable doubt that a photo was taken by a given camera may be an important element for decisions in court. The key assumption of forensic source identification is that acquisition devices leave traces in the acquired content, and that instances of these traces are specific to the respective device (or class of devices). These traces constitute the so-called device fingerprint, a name that stems from the forensic value of human fingerprints. Motivated by the importance of source identification in the digital image forensics community and the need for reliable techniques based on device fingerprints, the work developed in this Ph.D. thesis addresses different levels of source identification, using both feature-based and PRNU-based approaches for model and device identification. In addition, it is shown that counter-forensic methods can easily attack machine learning techniques for image forgery detection. For model identification, hand-crafted local features and deep-learning features are analyzed for the basic two-class classification problem, and comparisons under the limited-knowledge and blind scenarios are presented. Finally, an application of camera model identification to various iris sensor models is conducted. A blind technique for device source identification based on the PRNU approach is also proposed: using the correlation between single-image sensor noise residuals, a two-step blind source clustering is performed. In the first step, correlation clustering combined with an ensemble method produces an initial partition, which is then refined in the second step by means of a Bayesian approach. Experimental results show that this proposal outperforms state-of-the-art techniques and still gives acceptable performance on images downloaded from Facebook.
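    The correlation between single-image sensor noise residuals that drives the clustering can be sketched as follows; the box-filter denoiser, synthetic fingerprint, and flat scenes are illustrative assumptions (PRNU work typically uses wavelet-based denoising on real photos):

    ```python
    import numpy as np

    def noise_residual(img, k=3):
        """Approximate the sensor noise residual: image minus a denoised copy."""
        img = img.astype(np.float64)
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        denoised = np.zeros_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                denoised[y, x] = padded[y:y+k, x:x+k].mean()  # crude box denoiser
        return img - denoised

    def ncc(a, b):
        """Normalized cross-correlation between two residuals."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    rng = np.random.default_rng(7)
    prnu = rng.standard_normal((32, 32))      # stand-in device fingerprint
    scene1 = np.full((32, 32), 120.0)         # flat scenes isolate the noise
    scene2 = np.full((32, 32), 80.0)
    img_a = scene1 + 2.0 * prnu               # two images, same "camera"
    img_b = scene2 + 2.0 * prnu
    img_c = scene1 + 2.0 * rng.standard_normal((32, 32))  # different camera
    same = ncc(noise_residual(img_a), noise_residual(img_b))
    diff = ncc(noise_residual(img_a), noise_residual(img_c))
    ```

    The clustering step then amounts to grouping images whose pairwise residual correlations are high, which is what the correlation-clustering-plus-ensemble first step formalizes.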

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.