    The application of low-altitude near-infrared aerial photography for detecting clandestine burials using a UAV and low-cost unmodified digital camera

    Aerial photography and remote sensing have been carried out in the past from numerous platforms, utilizing imaging from across the electromagnetic (EM) spectrum to gain information about the Earth. These techniques have also proved effective for locating mass graves and single clandestine graves created by perpetrators concealing homicide victims. Conventional platforms for aerial photography and remote sensing are costly, however, and are therefore usually overlooked by police investigators in favour of more contemporary geophysical methods for locating burials. Recent advances in technology have seen the development of small Unmanned Aerial Vehicles (UAVs) that can perform aerial photography at low altitude while being controlled remotely from the ground. This development has introduced low-cost approaches to detecting surface features, commonly utilised in archaeology for their accuracy in detecting anomalies, particularly when using near-infrared (NIR) photography. NIR aerial images have been shown to expose cropmarks of historical value that are unnoticeable in conventional colour photography, which derives from the visible region of the EM spectrum. However, little attempt has been made to investigate the use of NIR photography to detect clandestine graves from low-cost aerial platforms such as UAVs. This paper adopts a low-cost, non-invasive approach to detecting clandestine graves through the implementation of a small UAV and an unmodified GoPro camera fitted with a near-infrared filter. The results presented here demonstrate the suitability of UAVs as an aerial photographic platform in the forensic archaeological field, and highlight the advantage of NIR photography as a technique for discriminating recent graves from their surroundings.
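
    As a rough illustration of why NIR imagery helps here: disturbed soil and stressed vegetation over a recent burial reflect NIR differently from their surroundings, which a simple normalized-difference index can surface. The sketch below is a generic remote-sensing illustration, not the paper's own processing; the band inputs, the NDVI-style index, and the two-sigma anomaly threshold are all assumptions.

        import numpy as np

        def normalized_difference(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
            """Compute a simple NDVI-style index from co-registered NIR and red bands.

            Healthy vegetation reflects strongly in NIR, so disturbed soil over a
            recent burial tends to produce locally depressed index values.
            """
            nir = nir.astype(np.float64)
            red = red.astype(np.float64)
            denom = nir + red
            denom[denom == 0] = 1e-9  # avoid division by zero in dark regions
            return (nir - red) / denom

        def flag_anomalies(index: np.ndarray, k: float = 2.0) -> np.ndarray:
            """Flag pixels whose index deviates more than k standard deviations
            from the scene mean -- candidate ground disturbances."""
            mu, sigma = index.mean(), index.std()
            return np.abs(index - mu) > k * sigma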

    Resiliency Assessment and Enhancement of Intrinsic Fingerprinting

    Intrinsic fingerprinting is a class of digital forensic technology that can detect traces left in digital multimedia data in order to reveal its processing history and determine its integrity. Many existing intrinsic fingerprinting schemes have implicitly assumed favorable operating conditions whose validity may become uncertain in reality. In order to establish intrinsic fingerprinting as a credible approach to digital multimedia authentication, it is important to understand and enhance its resiliency under unfavorable scenarios. This dissertation addresses various resiliency aspects that can appear in a broad range of intrinsic fingerprints. The first aspect concerns intrinsic fingerprints that are designed to identify a particular component in the processing chain. Such fingerprints are potentially subject to changes due to input content variations and/or post-processing, and it is desirable to ensure their identifiability in such situations. Taking an image-based intrinsic fingerprinting technique for source camera model identification as a representative example, our investigations reveal that the fingerprints have a substantial dependency on image content. Such dependency limits the achievable identification accuracy, which is penalized by a mismatch between training and testing image content. To mitigate such a mismatch, we propose schemes to incorporate image content into training image selection, significantly improving the identification performance. We also consider the effect of post-processing on intrinsic fingerprints, and study source camera identification based on imaging noise extracted from low-bit-rate compressed videos. While such compression reduces the fingerprint quality, we exploit different compression levels within the same video to achieve more efficient and accurate identification. The second aspect of resiliency addresses anti-forensics, namely, adversarial actions that intentionally manipulate intrinsic fingerprints. We investigate the cost-effectiveness of anti-forensic operations that counteract color interpolation identification. Our analysis pinpoints the inherent vulnerabilities of color interpolation identification, and motivates countermeasures and refined anti-forensic strategies. We also study the anti-forensics of an emerging space-time localization technique for digital recordings based on electrical network frequency analysis. Detection schemes against anti-forensic operations are devised under a mathematical framework. For both problems, game-theoretic approaches are employed to characterize the interplay between forensic analysts and adversaries and to derive optimal strategies. The third aspect concerns the resilient and robust representation of intrinsic fingerprints for multiple forensic identification tasks. We propose using the empirical frequency response as a generic type of intrinsic fingerprint that can facilitate the identification of various linear shift-invariant (LSI) and non-LSI operations.
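
    As a sketch of the last idea, the empirical frequency response of an unknown operation can be estimated from input/output pairs by averaging the ratio of their spectral magnitudes. The dissertation does not publish its implementation; this minimal version, with its eps regularizer and plain magnitude ratio, is an assumed formulation.

        import numpy as np

        def empirical_frequency_response(inputs, outputs, eps=1e-8):
            """Estimate the empirical magnitude frequency response of an unknown
            operation from (input, output) image pairs.

            For an LSI operation y = h * x, |Y(f)| / |X(f)| approximates |H(f)|;
            averaging over many pairs suppresses content dependency. For non-LSI
            operations the same statistic still yields a discriminative signature.
            """
            acc = None
            for x, y in zip(inputs, outputs):
                X = np.fft.fft2(x.astype(np.float64))
                Y = np.fft.fft2(y.astype(np.float64))
                ratio = np.abs(Y) / (np.abs(X) + eps)
                acc = ratio if acc is None else acc + ratio
            return acc / len(inputs)

    The averaged response can then be matched against reference responses of candidate operations (e.g. by correlation) to identify which processing was applied.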

    Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints

    Digital imaging has experienced tremendous growth in recent decades, and digital images have been used in a growing number of applications. With the increasing popularity of imaging devices and the availability of low-cost image editing software, the integrity of image content can no longer be taken for granted. A number of forensic and provenance questions often arise, including how an image was generated; where an image came from; and what has been done to the image since its creation, by whom, when, and how. This thesis presents two different sets of techniques to address these questions via intrinsic and extrinsic fingerprints. The first part of this thesis introduces a new methodology based on intrinsic fingerprints for forensic analysis of digital images. The proposed method is motivated by the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on the final output data. We present methods to identify these intrinsic fingerprints via component forensic analysis, and demonstrate that these traces can serve as useful features for such forensic applications as building a robust device identifier and identifying potential technology infringement or licensing issues. Building upon component forensics, we develop a general authentication and provenance framework to reconstruct the processing history of digital images. We model post-device processing as a manipulation filter and estimate its coefficients using a linear time-invariant approximation. Absence of in-device fingerprints, presence of new post-device fingerprints, or any inconsistencies in the estimated fingerprints across different regions of the test image all suggest that the image is not a direct device output and has possibly undergone some kind of processing, such as content tampering or steganographic embedding, after device capture. While component forensics is widely applicable in a number of scenarios, it has performance limitations. To understand the fundamental limits of component forensics, we develop a new theoretical framework based on estimation and pattern classification theories, and define formal notions of forensic identifiability and classifiability of components. We show that the proposed framework provides a solid foundation for studying information forensics and helps design optimal input patterns that improve parameter estimation accuracy via semi non-intrusive forensics. The final part of the thesis investigates a complementary extrinsic approach via image hashing that can be used for content-based image authentication and other media security applications. We show that the proposed hashing algorithm is robust to common signal processing operations, and we present a systematic evaluation of the security of the image hash against estimation and forgery attacks.
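
    A minimal sketch of how a manipulation filter's coefficients might be estimated under a linear time-invariant (shift-invariant in 2-D) approximation: each output pixel is expressed as a linear combination of its input neighbourhood, and the filter taps are recovered by least squares. The k x k window and the dense per-pixel system are illustrative choices, not the thesis's exact estimator.

        import numpy as np

        def estimate_lsi_filter(x: np.ndarray, y: np.ndarray, k: int = 3) -> np.ndarray:
            """Least-squares estimate of a k x k linear shift-invariant filter h
            such that y is approximately x filtered by h (2-D cross-correlation).

            Each row of A holds a k x k input neighbourhood; b holds the matching
            output pixel. Solving min ||A h - b||^2 recovers the filter taps."""
            r = k // 2
            A, b = [], []
            for i in range(r, x.shape[0] - r):
                for j in range(r, x.shape[1] - r):
                    A.append(x[i - r:i + r + 1, j - r:j + r + 1].ravel())
                    b.append(y[i, j])
            A = np.asarray(A, dtype=np.float64)
            b = np.asarray(b, dtype=np.float64)
            h, *_ = np.linalg.lstsq(A, b, rcond=None)
            return h.reshape(k, k)

    Inconsistencies between filters estimated on different regions of the same image would then point to local processing such as tampering, in line with the framework described above.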

    Color-decoupled photo response non-uniformity for digital image forensics

    The last few years have seen the use of photo response non-uniformity noise (PRNU), a unique fingerprint of imaging sensors, in various digital forensic applications such as source device identification, content integrity verification and authentication. However, the use of a colour filter array for capturing only one of the three colour components per pixel introduces colour interpolation noise, and existing methods for extracting PRNU provide no effective means of addressing this issue. Because the artificial colours obtained through the colour interpolation process are not directly acquired from the scene by physical hardware, we expect the PRNU extracted from the physical components, which are free from interpolation noise, to be more reliable than that from the artificial channels, which carry interpolation noise. Based on this assumption we propose a Colour-Decoupled PRNU (CD-PRNU) extraction method, which first decomposes each colour channel into four sub-images and then extracts the PRNU noise from each sub-image. The PRNU noise patterns of the sub-images are then assembled to obtain the CD-PRNU. This new method prevents the interpolation noise from propagating into the physical components, thus improving the accuracy of device identification and image content integrity verification.
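
    A minimal sketch of the decomposition described above, assuming a 2x2 CFA lattice: each colour channel is split into its four lattice sub-images, a noise residual is extracted from each sub-image separately, and the residuals are reassembled. The Gaussian denoiser here stands in for the wavelet-based denoiser typically used in PRNU extraction, to keep the sketch self-contained.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def noise_residual(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
            """Noise residual = image minus its denoised version. A Gaussian
            filter substitutes for the usual wavelet-based denoiser."""
            img = img.astype(np.float64)
            return img - gaussian_filter(img, sigma)

        def cd_prnu_channel(channel: np.ndarray) -> np.ndarray:
            """Colour-decoupled PRNU for one colour channel: split into the four
            2x2 CFA lattice sub-images, extract a residual from each sub-image
            separately, then reassemble, so interpolation noise in artificial
            pixels cannot propagate into the physically captured ones."""
            out = np.zeros_like(channel, dtype=np.float64)
            for di in (0, 1):
                for dj in (0, 1):
                    sub = channel[di::2, dj::2]
                    out[di::2, dj::2] = noise_residual(sub)
            return out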

    Fingerprint recognition: A study on image enhancement and minutiae extraction

    Fingerprints are a reliable means of identifying individuals, and fingerprint recognition is one of the oldest forms of biometric identification. However, obtaining a good fingerprint image is not always easy, so the fingerprint image must be preprocessed before matching. The objective of this project is to produce a better, enhanced fingerprint image. We have studied the factors involved in obtaining a high-performance feature-point detection algorithm, such as image quality, segmentation, image enhancement and feature detection. Commonly used features for improving fingerprint image quality are Fourier spectrum energy, Gabor filter energy and local orientation. Accurate segmentation of fingerprint ridges from a noisy background is necessary, and for efficient enhancement and feature extraction the segmented features must be free of noise. A preprocessing pipeline consisting of orientation field estimation, ridge frequency estimation, Gabor filtering, segmentation and enhancement is performed. The resulting image is passed to a thinning algorithm and subsequent minutiae extraction. The methodology of image preprocessing and minutiae extraction is discussed. Simulations are performed in the MATLAB environment to evaluate the performance of the implemented algorithms, and results and observations on the fingerprint images are presented at the end.
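
    For the final minutiae extraction step, a standard approach (not necessarily the exact one used in this project) is the crossing-number method on the thinned ridge map: counting binary transitions around each ridge pixel distinguishes ridge endings from bifurcations. A minimal sketch:

        import numpy as np

        # 8-neighbourhood offsets in circular order around a pixel.
        NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                      (1, 1), (1, 0), (1, -1), (0, -1)]

        def crossing_number_minutiae(skeleton: np.ndarray):
            """Extract minutiae from a thinned (one-pixel-wide) binary ridge map
            using the crossing-number method: CN = 1 marks a ridge ending,
            CN = 3 marks a bifurcation."""
            endings, bifurcations = [], []
            h, w = skeleton.shape
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    if not skeleton[i, j]:
                        continue
                    p = [int(skeleton[i + di, j + dj]) for di, dj in NEIGHBOURS]
                    # CN = half the number of 0/1 transitions around the pixel.
                    cn = sum(abs(p[k] - p[(k + 1) % 8]) for k in range(8)) // 2
                    if cn == 1:
                        endings.append((i, j))
                    elif cn == 3:
                        bifurcations.append((i, j))
            return endings, bifurcations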

    Exposing image forgery by detecting traces of feather operation

    Powerful digital image editing tools make it very easy to produce a convincing image forgery. The feather operation is often necessary when tampering with an image via a copy-paste operation, because it helps the boundary of the pasted object blend smoothly and unobtrusively with its surroundings. We propose a blind technique capable of detecting traces of the feather operation in order to expose image forgeries. We model the feather operation, under which pixels in a feathered region exhibit similarity in their gradient phase angle and feather radius. An effective scheme is designed to estimate each feathered-region pixel's gradient phase angle and feather radius, and each pixel's similarity to its neighbouring pixels is defined and used to distinguish feathered pixels from un-feathered pixels. A degree of image credibility is also defined, which is more informative for assessing the authenticity of an image than a simple yes/no decision. Results of experiments on several forgeries demonstrate the effectiveness of the technique.
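
    A minimal sketch of the gradient-phase-angle cue, assuming greyscale input: Sobel derivatives give a per-pixel phase angle, and the fraction of neighbours sharing nearly the same angle serves as a feathering indicator. The Sobel operator, the 4-neighbour comparison and the tolerance value are illustrative; the paper's exact estimators for phase angle and feather radius are not reproduced here.

        import numpy as np
        from scipy.ndimage import sobel

        def gradient_phase_angle(gray: np.ndarray) -> np.ndarray:
            """Per-pixel gradient phase angle (radians) from Sobel derivatives.
            In a feathered region the alpha-blending ramp dominates the local
            gradient, so neighbouring pixels share nearly the same angle."""
            gray = gray.astype(np.float64)
            gx = sobel(gray, axis=1)
            gy = sobel(gray, axis=0)
            return np.arctan2(gy, gx)

        def phase_similarity(theta: np.ndarray, tol: float = 0.1) -> np.ndarray:
            """Fraction of 4-neighbours whose phase angle lies within tol radians
            of the centre pixel -- high values flag feathering candidates."""
            sim = np.zeros_like(theta)
            for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                diff = np.abs(theta - np.roll(theta, shift, axis=axis))
                diff = np.minimum(diff, 2 * np.pi - diff)  # circular distance
                sim += (diff < tol)
            return sim / 4.0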

    Managing heterogeneous cues in social contexts. A holistic approach for social interactions analysis

    Social interaction refers to any reciprocal action between two or more individuals in which information is shared without any mediating technology. Such interaction is a significant part of an individual's socialization and of the skills acquired over a lifetime, and it is of interest to several disciplines (sociology, psychology, medicine, etc.). In the context of testing and observational studies, multiple mechanisms are used to study these interactions, such as questionnaires, direct observation and analysis of events by human operators, or a posteriori observation and analysis of recorded events by specialists (psychologists, sociologists, doctors, etc.). However, such mechanisms are expensive in terms of processing time, require a high level of attention to analyse several cues simultaneously, depend on the operator (subjectivity of the analysis), and can only target one facet of the interaction at a time. To address these issues, the social interaction analysis process should be automated, bridging the gap between human-based and machine-based social interaction analysis. We therefore propose a holistic approach that integrates multimodal heterogeneous cues and contextual information (complementary "exogenous" data) dynamically and optionally, according to their availability. Such an approach allows several "signals" to be analysed in parallel (where humans can focus on only one). This analysis can be further enriched with data related to the context of the scene (location, date, type of music, event description, etc.) or to the individuals (name, age, gender, data extracted from their social networks, etc.). The contextual information enriches the modelling of the extracted metadata and gives it a more "semantic" dimension. Managing this heterogeneity is an essential step in implementing a holistic approach. Automating "in vivo" capture and observation using non-intrusive devices, without predefined scenarios, raises issues related to (i) privacy and security; (ii) data heterogeneity; and (iii) data volume. Hence, within the holistic approach we propose (1) a privacy-preserving comprehensive data model that decouples metadata extraction from the social interaction analysis methods; (2) a geometric, non-intrusive eye-contact detection method; and (3) a deep model for classifying French meals that extracts information from video content. The proposed approach manages heterogeneous cues coming from different modalities as multi-layer sources (visual signals, voice signals, contextual information) at different time scales and with different combinations between layers (cues represented as time series). The approach is designed to operate without intrusive devices, in order to capture real behaviours and achieve naturalistic observation. We have deployed the proposed approach on the OVALIE platform, which aims to study eating behaviours in different real-life contexts and is located at University Toulouse-Jean Jaurès, France.
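
    The abstract does not detail the geometric eye-contact test, but a minimal version of the idea can be sketched as follows, assuming each person's 3-D head position and a unit gaze direction vector are available from upstream detectors; the tolerance cone and all inputs are illustrative, not the thesis's parameters.

        import numpy as np

        def eye_contact(pos_a, gaze_a, pos_b, gaze_b, max_angle_deg=10.0):
            """Declare mutual eye contact when each person's gaze ray points at
            the other person's head within a tolerance cone. Positions are 3-D
            points; gaze_a and gaze_b are unit direction vectors."""
            def aligned(src, gaze, target):
                to_target = np.asarray(target, float) - np.asarray(src, float)
                to_target /= np.linalg.norm(to_target)
                cos = np.clip(np.dot(np.asarray(gaze, float), to_target), -1.0, 1.0)
                return np.degrees(np.arccos(cos)) <= max_angle_deg
            return aligned(pos_a, gaze_a, pos_b) and aligned(pos_b, gaze_b, pos_a)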

    Mitigation of H.264 and H.265 Video Compression for Reliable PRNU Estimation

    The photo-response non-uniformity (PRNU) is a distinctive image sensor characteristic, and an imaging device inadvertently introduces its sensor's PRNU into all media it captures. The PRNU can therefore be regarded as a camera fingerprint and used for source attribution. The imaging pipeline in a camera, however, involves various processing steps that are detrimental to PRNU estimation. In the context of photographic images these challenges have been successfully addressed, and the method for estimating a sensor's PRNU pattern is well established. Various additional challenges related to the generation of videos, however, remain largely untackled. With this perspective, this work introduces methods to mitigate the disruptive effects of the widely deployed H.264 and H.265 video compression standards on PRNU estimation. Our approach involves an intervention in the decoding process to eliminate the in-loop deblocking filter applied at the decoder to reduce blockiness. It also utilizes decoding parameters to develop a weighting scheme that adjusts the contribution of video frames to the PRNU estimation process at the macroblock level. Results obtained on videos captured by 28 cameras show that our approach increases the PRNU matching metric by up to more than a factor of five over the conventional estimation method tailored for photos.
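
    A minimal sketch of the kind of weighting scheme described above, assuming per-pixel quantization-parameter (QP) maps upsampled from macroblock granularity: per-frame noise residuals are combined with the standard maximum-likelihood-style PRNU estimator, down-weighting heavily quantized blocks. The exponential QP weighting and the qp_scale constant are illustrative assumptions, not the paper's derived scheme.

        import numpy as np

        def weighted_prnu(residuals, frames, qp_maps, qp_scale=6.0):
            """Aggregate per-frame noise residuals into a PRNU estimate, weighting
            each pixel by its block's quantization parameter (QP). Heavier
            quantization destroys more sensor noise, so high-QP macroblocks
            contribute less."""
            num = np.zeros_like(residuals[0], dtype=np.float64)
            den = np.zeros_like(residuals[0], dtype=np.float64)
            for w_res, frame, qp in zip(residuals, frames, qp_maps):
                weight = np.exp(-np.asarray(qp, dtype=np.float64) / qp_scale)
                # ML-style estimator K = sum(w * W * I) / sum(w * I^2),
                # where W is the residual and I the frame luminance.
                num += weight * w_res * frame
                den += weight * frame.astype(np.float64) ** 2
            return num / np.maximum(den, 1e-9)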