    Counter-forensics of SIFT-based copy-move detection by means of keypoint classification

    Copy-move forgeries are very common image manipulations that are often carried out with malicious intent. Among the techniques devised by the image forensics community, those relying on scale-invariant feature transform (SIFT) features are the most effective. In this paper, we approach the copy-move scenario from the perspective of an attacker whose goal is to remove such features. The attacks conceived so far against SIFT-based forensic techniques implicitly assume that all SIFT keypoints have similar properties. On the contrary, we base our attacking strategy on the observation that it is possible to classify them into different typologies, and that attacks tailored to each specific SIFT class improve performance in terms of removal rate and visual quality. To validate these ideas, we propose a SIFT classification scheme based on the grayscale histogram of the neighborhood of each SIFT keypoint. Once the classification is performed, we attack the different classes by means of class-specific methods. Our experiments lead to three interesting results: (1) there is a significant advantage in using SIFT classification, (2) the classification-based attack is robust against different SIFT implementations, and (3) we are able to impair a state-of-the-art SIFT-based copy-move detector in realistic cases.
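    As an illustration of the classification step described above, the following is a minimal Python sketch that groups SIFT keypoints by the grayscale histogram of their neighborhood using OpenCV and scikit-learn; the neighborhood radius, histogram bins and number of classes are placeholder assumptions, and a plain k-means clustering stands in for the paper's actual classification rule.

        # Sketch: group SIFT keypoints by the grayscale histogram of their
        # neighborhood, as a first step towards class-specific removal attacks.
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        def neighborhood_histograms(gray, keypoints, radius=8, bins=16):
            """One normalized grayscale histogram per keypoint neighborhood."""
            h, w = gray.shape
            feats = []
            for kp in keypoints:
                x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
                patch = gray[max(0, y - radius):min(h, y + radius + 1),
                             max(0, x - radius):min(w, x + radius + 1)]
                hist, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
                feats.append(hist)
            return np.array(feats)

        gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input image
        keypoints = cv2.SIFT_create().detect(gray, None)
        features = neighborhood_histograms(gray, keypoints)
        classes = KMeans(n_clusters=3, n_init=10).fit_predict(features)
        # each class would then be attacked with its own removal method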

    Image watermarking schemes, watermarking schemes joint with compression, and data-hiding schemes

    In this manuscript we address data hiding in images and videos, specifically robust watermarking for images, robust watermarking jointly with compression, and finally non-robust data hiding. The first part of the manuscript deals with high-rate robust watermarking. After briefly recalling the concept of informed watermarking, we study the two major watermarking families: trellis-based watermarking and quantization-based watermarking. We propose, firstly, to reduce the computational complexity of trellis-based watermarking with a rotation-based embedding and, secondly, to introduce trellis-based quantization into a quantization-based watermarking system. The second part of the manuscript addresses the problem of watermarking jointly with a JPEG2000 or H.264 compression step. The quantization step and the watermarking step are performed simultaneously, so that the two do not work against each other. Watermarking in JPEG2000 is achieved by using the trellis quantization from Part 2 of the standard. Watermarking in H.264 is performed on the fly, after the quantization stage, by choosing the best prediction through the rate-distortion optimization process. We also propose to integrate a Tardos code to build a traitor-tracing application. The last part of the manuscript describes different mechanisms for hiding color information in a grayscale image. We propose two approaches based on hiding a color palette in its index image. The first approach relies on the optimization of an energy function to obtain a decomposition of the color image that allows easy embedding. The second approach consists of quickly obtaining a color palette of larger size and then embedding it in a reversible way.
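    As an illustration of the quantization-based watermarking family mentioned above, the following is a minimal sketch of quantization index modulation (QIM) embedding and detection in Python; the step size and host coefficients are arbitrary examples, and this is not the thesis' own trellis-based construction.

        # Sketch: QIM watermarking - each coefficient is quantized to one of two
        # interleaved lattices depending on the bit to embed.
        import numpy as np

        def qim_embed(x, bits, delta=4.0):
            """Embed one bit per coefficient by quantizing to an offset lattice."""
            offsets = np.where(bits == 0, 0.0, delta / 2.0)
            return np.round((x - offsets) / delta) * delta + offsets

        def qim_detect(y, delta=4.0):
            """Recover bits by finding the closer of the two lattices."""
            d0 = np.abs(y - np.round(y / delta) * delta)
            d1 = np.abs(y - (np.round((y - delta / 2) / delta) * delta + delta / 2))
            return (d1 < d0).astype(int)

        host = np.random.randn(8) * 10.0                  # arbitrary host coefficients
        message = np.random.randint(0, 2, size=8)
        marked = qim_embed(host, message)
        assert np.array_equal(qim_detect(marked), message)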

    Copyright Protection of 3D Digitized Artistic Sculptures by Adding Unique Local Inconspicuous Errors by Sculptors

    In recent years, digitization of cultural heritage objects for the purpose of creating virtual museums has become increasingly popular. Moreover, cultural institutions use modern digitization methods to create three-dimensional (3D) models of objects of historical significance to form digital libraries and archives. This research proposes a method for protecting these 3D models from abuse while making them available on the Internet. The proposed method was applied to a sculpture, an object of cultural heritage. It is based on digitizing the sculpture after it has been altered by adding local clay details proposed by the sculptor, and on sharing on the Internet the 3D model obtained by digitizing the sculpture with this built-in error. The clay details embedded in the sculpture are asymmetrical and discreet so as to be unnoticeable to an average observer. The original sculpture was also digitized and its 3D model created. The two 3D models were compared and the geometry deviation was measured to verify that the embedded error is invisible to an average observer and that the watermark can be extracted. The proposed method protects the digitized image of the artwork while preserving its visual experience, which other methods cannot guarantee.
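    The geometry-deviation measurement between the two scans can be sketched as a nearest-neighbour distance computation between point clouds; the file names below are hypothetical, and a full comparison would normally rely on dedicated mesh-inspection software.

        # Sketch: per-vertex deviation of the altered scan from the original scan.
        import numpy as np
        from scipy.spatial import cKDTree

        original = np.loadtxt("original_scan.xyz")   # N x 3 vertex positions (hypothetical file)
        altered = np.loadtxt("altered_scan.xyz")     # M x 3 vertex positions (hypothetical file)

        deviation, _ = cKDTree(original).query(altered)
        print(f"mean deviation: {deviation.mean():.4f}, max deviation: {deviation.max():.4f}")
        # localised peaks in `deviation` correspond to the sculptor's embedded details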

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred communication means for most users, it has also become an integral part of most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities that relate to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as a base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show through experiments on two well-known datasets (Weizmann, MuHAVi) that it achieves a remarkable improvement in classification accuracy. © 2011 IEEE
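    The robustness argument can be illustrated with a toy example: with a single gross outlier in the data, a heavy-tailed Student's t density penalizes the outlier far less than a Gaussian fitted to the same data. The snippet below uses SciPy; the degrees of freedom and the data are arbitrary and not related to the paper's experiments.

        # Sketch: Gaussian vs. Student's t log-likelihood on data with one outlier.
        import numpy as np
        from scipy import stats

        data = np.concatenate([np.random.normal(0.0, 1.0, 100), [25.0]])  # one gross outlier
        mu, sigma = data.mean(), data.std()

        ll_gauss = stats.norm(mu, sigma).logpdf(data).sum()
        ll_t = stats.t(df=3, loc=np.median(data), scale=sigma).logpdf(data).sum()
        print(f"Gaussian log-likelihood:    {ll_gauss:.1f}")
        print(f"Student's t log-likelihood: {ll_t:.1f}")   # heavy tails absorb the outlier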

    3D printing-as-a-service for collaborative engineering

    3D printing or Additive Manufacturing (AM) are used as umbrella terms for a variety of technologies that manufacture or create a physical object from a digital model. Commonly, these technologies create the objects by adding, fusing or melting a raw material in a layer-wise fashion. Apart from the 3D printer itself, no specialised tools are required to create almost any shape or form imaginable and designable. The possibilities of these technologies are plentiful and cover the ability to manufacture almost any object rapidly, locally and cost-efficiently without wasted resources and material. Objects can be created in specific forms that fulfil their function perfectly, without consideration of the assembly process. To further advance the availability and applicability of 3D printing, this thesis identifies the problems that currently exist and attempts to solve them. During the 3D printing process, data (i.e., files) must be converted from their original representation, e.g., a CAD file, to the machine instructions for a specific 3D printer. During this conversion, some information is lost and other information is added; traceability is lacking in 3D printing. The actual 3D printing can require a long period of time to complete, during which errors can occur. In 3D printing, these errors are often neither recoverable nor reversible, which results in wasted material and time. In addition to the lack of closed-loop control systems for 3D printers, careful planning and preparation are required to avoid these costly misprints. 3D printers are usually located remotely from users, due to health and safety considerations, special placement requirements, or simply for comfort. Remotely placed equipment is impractical to monitor in person; however, such monitoring is essential, especially considering the proneness of 3D printing to errors and the implications described previously. Utilisation of 3D printers is an issue, especially with expensive 3D printers. As there are a number of differing 3D printing technologies available, having the required 3D printer at hand might be problematic. 3D printers are equipped with a variety of interfaces, depending on the make and model. These differing interfaces, in both hardware and software, hinder the integration of different 3D printers into consistent systems. There exists no proper and complete ontology, resource description schema or mechanism that covers all the different 3D printing technologies. Such a resource description mechanism is essential for automated scheduling in services or systems. In 3D printing services, the selection and matching of appropriate and suitable 3D printers is essential, as not all 3D printing technologies can process all materials or create certain object features, such as thin walls or hollow forms. The need for companies to sell digital models for AM will increase in scenarios where replacement or customised parts are 3D printed by consumers at home or in local manufacturing centres. Furthermore, requirements to safeguard these digital models will increase to avoid a repetition of the problems experienced by the music industry, e.g., with Napster. Replication and ‘theft’ of these models are uncontrollable in the current situation. In a service-oriented deployment, or in scenarios where utilisation is high, estimations of the 3D printing time must be available. Common 3D printing time estimations are inaccurate, which hinders the application of scheduling.
There is no complete and comprehensive consensus on how to understand the complexity of an object, especially in the domain of AM. Such an understanding is required both to support the design of objects for AM and to match appropriate manufacturing resources to certain objects. Quality in AM and FDM has been incompletely researched. Quality in general increases with the maturity of the technology; however, research on the quality achievable with consumer-grade 3D printers is lacking. Furthermore, cost-sensitive measurement methods for quality assessment remain to be developed. This thesis presents the structured design and implementation of a 3D printing service with associated contributions that provide solutions to particular problems present in the AM domain. The 3D printing service is the overarching component of this thesis and provides the platform for the other contributions, with the intention of establishing an online, cloud-based 3D printing service for use in end-user and professional settings with a focus on collaboration and cooperation.
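    As an illustration of the resource-description and matching problem raised above, the following sketch matches a print job to printers by technology, material and build volume; the fields and example printers are hypothetical and far simpler than the resource description mechanism the thesis calls for.

        # Sketch: capability-based matching of print jobs to 3D printers.
        from dataclasses import dataclass, field

        @dataclass
        class Printer:
            name: str
            technology: str                      # e.g. "FDM", "SLA", "SLS"
            materials: set = field(default_factory=set)
            build_volume_mm: tuple = (200, 200, 200)

        @dataclass
        class Job:
            technology: str
            material: str
            bbox_mm: tuple

        def suitable(printer: Printer, job: Job) -> bool:
            return (printer.technology == job.technology
                    and job.material in printer.materials
                    and all(j <= p for j, p in zip(job.bbox_mm, printer.build_volume_mm)))

        printers = [Printer("fdm-printer-1", "FDM", {"PLA", "ABS"}, (223, 223, 205)),
                    Printer("sla-printer-1", "SLA", {"resin"}, (145, 145, 175))]
        job = Job("FDM", "PLA", (100, 80, 40))
        print([p.name for p in printers if suitable(p, job)])    # ['fdm-printer-1']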

    Digital Watermarking for Verification of Perception-based Integrity of Audio Data

    In certain application fields, digital audio recordings contain sensitive content. Examples are historical archival material in public archives that preserve our cultural heritage, or digital evidence in the context of law enforcement and civil proceedings. Because of the powerful capabilities of modern editing tools for multimedia, such material is vulnerable to doctoring of the content and forgery of its origin with malicious intent. Inadvertent data modification and mistaken origin can also be caused by human error. Hence, the credibility and provenance of such audio content, in terms of an unadulterated and genuine state, and confidence about its origin are critical factors. To address this issue, this PhD thesis proposes a mechanism for verifying the integrity and authenticity of digital sound recordings. It is designed and implemented to be insensitive to common post-processing operations of the audio data that influence the subjective acoustic perception only marginally (if at all). Examples of such operations include lossy compression that maintains a high sound quality of the audio media, or lossless format conversions. The objective is to avoid the de facto false alarms that would be expected from standard crypto-based authentication protocols in the presence of such legitimate post-processing. To achieve this, a feasible combination of digital watermarking and audio-specific hashing techniques is investigated. First, a suitable secret-key-dependent audio hashing algorithm is developed. It incorporates and enhances so-called audio fingerprinting technology from the state of the art in content-based audio identification. The presented algorithm (denoted as the "rMAC" message authentication code) allows "perception-based" verification of integrity; that is, integrity breaches are classified as such only once they become audible. As another objective, this rMAC is embedded and stored silently inside the audio media by means of audio watermarking technology. This approach allows maintaining the authentication code across the above-mentioned admissible post-processing operations and making it available for integrity verification at a later date. For this, an existing secret-key-dependent audio watermarking algorithm is used and enhanced in this thesis work. To some extent, the dependency of the rMAC and of the watermarking processing on a secret key also allows the origin of a protected audio file to be authenticated. To elaborate on this security aspect, this work also estimates the brute-force effort required of an adversary attacking the combined rMAC-watermarking approach. The experimental results show that the proposed method provides good distinction and classification performance for authentic versus doctored audio content. It also allows the temporal localization of audible data modifications within a protected audio file. The experimental evaluation finally provides recommendations for technical configuration settings of the combined watermarking-hashing approach. Beyond the main topic of perception-based data integrity and authenticity for audio, this PhD work provides new general findings in the fields of audio fingerprinting and digital watermarking. The main contributions of this PhD were published and presented mainly at conferences on multimedia security. These publications have been cited by a number of other authors and hence had some impact on their work.
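    The perception-based hashing idea can be illustrated with a toy example: a hash built from the signs of band-energy differences changes little under small, inaudible perturbations, so two recordings can be compared with a Hamming-distance threshold instead of exact equality. This is a generic robust-hash sketch, not the keyed rMAC construction of the thesis; frame, hop and band counts are arbitrary.

        # Sketch: a simple perception-oriented audio hash and its comparison.
        import numpy as np

        def audio_hash(signal, frame=2048, hop=1024, bands=16):
            frames = [signal[i:i + frame] for i in range(0, len(signal) - frame, hop)]
            spectra = np.abs(np.fft.rfft(np.array(frames) * np.hanning(frame), axis=1))
            edges = np.linspace(0, spectra.shape[1], bands + 1, dtype=int)
            energy = np.array([[s[a:b].sum() for a, b in zip(edges[:-1], edges[1:])]
                               for s in spectra])
            return (np.diff(energy, axis=1) > 0).astype(np.uint8)   # one bit per band pair

        def hamming_fraction(h1, h2):
            n = min(len(h1), len(h2))
            return float(np.mean(h1[:n] != h2[:n]))

        sr = 16000
        t = np.arange(2 * sr) / sr
        clean = np.sin(2 * np.pi * 440 * t)
        noisy = clean + 0.01 * np.random.randn(len(t))               # mild perturbation
        print(hamming_fraction(audio_hash(clean), audio_hash(noisy)))  # small -> "same" content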