
    A New Forensic Video Database for Source Smartphone Identification: Description and Analysis

    In recent years, the field of digital imaging has made significant progress, so that today every smartphone has a built-in camera that can record high-quality video freely and without restriction. At the same time, rapidly growing internet technology has contributed significantly to the widespread use of digital video via web-based multimedia systems and mobile smartphone applications such as YouTube, Facebook, Twitter, WhatsApp, etc. However, as the recording and distribution of digital video has become so affordable, security issues have become threatening and have spread worldwide. One of these issues is identifying the source camera of a video. There are new challenges that should be addressed in this area. One of them is individual source camera identification (ISCI), which focuses on identifying each device regardless of its model. A first step towards solving these problems is a suitable video database recorded by modern smartphone devices, which can also be used for the deep learning methods that are growing rapidly in the field of source camera identification. In this paper, a smartphone video database named the Qatar University Forensic Video Database (QUFVD) is introduced. The QUFVD includes 6000 videos from 20 modern smartphones representing five brands; each brand has two models, and each model has two identical smartphone devices. This database is suitable for evaluating different techniques, such as deep learning methods, for video source smartphone identification and verification. To evaluate the QUFVD, a series of experiments to identify source cameras using a deep learning technique is conducted. The results show that improvements are essential for the ISCI scenario on video.
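
    The abstract specifies the database hierarchy (five brands, two models per brand, two identical devices per model) but not its on-disk layout. The sketch below shows one plausible way to index such a database and split it per device for an ISCI experiment, so that every individual device appears in both training and test sets; the brand/model/device directory layout is an assumption for illustration, not the documented QUFVD structure.

```python
# Minimal sketch of preparing per-device labels for an ISCI experiment on a
# QUFVD-like database. The brand/model/device/*.mp4 layout is an assumption.
from collections import defaultdict
from pathlib import Path


def index_videos(root: str) -> dict[str, list[Path]]:
    """Group video files by individual device (not just camera model)."""
    videos_by_device: dict[str, list[Path]] = defaultdict(list)
    for video in Path(root).glob("*/*/*/*.mp4"):   # brand/model/device/clip.mp4
        brand, model, device = video.parts[-4:-1]
        device_id = f"{brand}-{model}-{device}"    # distinguishes identical devices of one model
        videos_by_device[device_id].append(video)
    return videos_by_device


def split_per_device(videos_by_device: dict[str, list[Path]], train_ratio: float = 0.8):
    """Hold out part of each device's clips so every device appears in train and test."""
    train, test = [], []
    for device_id, clips in videos_by_device.items():
        clips = sorted(clips)
        cut = int(len(clips) * train_ratio)
        train += [(c, device_id) for c in clips[:cut]]
        test += [(c, device_id) for c in clips[cut:]]
    return train, test
```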

    Image Source Identification Using Convolutional Neural Networks in IoT Environment

    Digital image forensics is a key branch of digital forensics that is based on the forensic analysis of image authenticity and image content. Advances in new technologies, such as smart devices, the Internet of Things (IoT), artificial images, and social networks, give forensic image analysis an increasing role in a wide range of criminal investigations. This work focuses on image source identification by analysing both the fingerprints of digital devices and images in an IoT environment. A new convolutional neural network (CNN) method is proposed to identify the source device that took an image in a social IoT environment. The experimental results show that the proposed method can effectively identify the source devices with high accuracy.
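
    The abstract names a CNN-based source identification method but gives no architectural details. The following is a minimal, hedged sketch of a generic patch-based CNN classifier for source-device identification, assuming PyTorch; the layer sizes and the 64x64 patch size are illustrative choices, not the authors' network.

```python
# Generic patch-based CNN sketch for source-device classification (illustrative only).
import torch
import torch.nn as nn


class SourceIdCNN(nn.Module):
    def __init__(self, num_devices: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.classifier = nn.Linear(128, num_devices)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) image patches; returns per-device logits
        return self.classifier(self.features(x).flatten(1))


# Example: logits = SourceIdCNN(num_devices=10)(torch.randn(8, 3, 64, 64))
```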

    PRNU-Net: a Deep Learning Approach for Source Camera Model Identification based on Videos Taken with Smartphone

    Recent advances in digital imaging have meant that every smartphone has a video camera that can record high-quality video freely and without restriction. In addition, rapidly developing Internet technology has contributed significantly to the widespread distribution of digital video via web-based multimedia systems and mobile applications such as YouTube, Facebook, Twitter, WhatsApp, etc. However, as the recording and distribution of digital video has become so affordable, security issues have become threatening and have spread worldwide. One of these issues is the identification of source cameras on videos. Generally, two common categories of methods are used in this area, namely Photo Response Non-Uniformity (PRNU) and machine learning approaches. To exploit the power of both approaches, this work adds a new PRNU-based layer to a convolutional neural network (CNN), yielding a network called PRNU-Net. To explore the new layer, the main structure of the CNN is based on MISLnet, which has been used in several source camera identification studies. The experimental results show that PRNU-Net is more successful than MISLnet, and that the PRNU extracted by the layer from low-level features, namely edges and textures, is more useful for classifying source camera models than high- and mid-level features, namely parts and objects. On average, the network improves the results on a new database by about 4
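
    The abstract does not describe how the PRNU-based layer is built. A common way to steer a CNN towards noise-like, low-level content is to prepend a fixed high-pass residual filter in front of the backbone; the sketch below illustrates that general idea only, and the simple box-blur residual shown here is an assumption, not the actual PRNU-Net layer.

```python
# Hedged sketch of a fixed residual layer placed in front of a CNN so the
# network sees high-frequency content (edges/textures) rather than the scene.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualLayer(nn.Module):
    """Subtract a local average from each pixel, keeping only high-frequency residue."""

    def __init__(self):
        super().__init__()
        kernel = torch.full((1, 1, 3, 3), 1.0 / 9.0)   # 3x3 box blur
        self.register_buffer("kernel", kernel)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale frames; residual = frame - smoothed frame
        smoothed = F.conv2d(x, self.kernel, padding=1)
        return x - smoothed


# The residual would then be fed to a conventional MISLnet-style CNN backbone
# that classifies the camera model from the noise pattern instead of the scene.
```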

    Characterizing the polycentric spatial structure of Beijing Metropolitan Region using carpooling big data

    Polycentric metropolitan regions are a high-level urbanization form characterized by dynamic layouts, fuzzy boundaries and varied patterns of human activity. Owing to the complexity of polycentricity, it can be difficult to understand their spatial structure using conventional survey data and methods alone. This poses a challenge for authorities wishing to make effective urban land use and transport policies. Fortunately, the availability of big data gives scholars an opportunity to explore complex metropolitan spatial structures, although limitations remain in terms of data use and processing, unit scale, and method. To address these limitations, we proposed a three-step method for applying carpooling big data to metropolitan analysis: first, locating the metropolitan sub-centers; second, delimiting the metropolitan sphere; and third, measuring the performance of the polycentric structure. The method was tested in the Beijing Metropolitan Region, and the results show that the polycentric metropolitan region forms a hierarchical regional center system: one primary center interacting with seven surrounding secondary centers. These metropolitan centers exert a strong attraction, which results in continuous expansion beyond the administrative boundary to reach more adjacent jurisdictions. Furthermore, the regional centers differ markedly in their human activity patterns and roles. It is necessary to consider the specific role of each sub-center when making metropolitan transport and land use policies. Compared with previous studies, the proposed method is more reliable, accurate and comprehensive in characterizing the polycentric spatial structure. The application of carpooling big data and the proposed method provides a novel perspective for research on other metropolitan regions.
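
    The abstract outlines the first step (locating sub-centers) without giving the algorithm. A simple, hedged illustration of that step is a grid-density peak search over carpooling pick-up and drop-off points, shown below; the 1 km cell size and trip threshold are assumptions, not the parameters used in the paper.

```python
# Illustrative sketch of locating candidate sub-centres from carpooling
# origin/destination points: aggregate points on a grid and keep cells whose
# trip density is a local maximum above a threshold.
import numpy as np


def find_candidate_centres(xy: np.ndarray, cell_size: float = 1000.0, min_trips: int = 500):
    """xy: (n, 2) projected coordinates (metres) of carpooling pick-up/drop-off points."""
    cells = np.floor((xy - xy.min(axis=0)) / cell_size).astype(int)
    shape = cells.max(axis=0) + 1
    density = np.zeros(shape, dtype=int)
    np.add.at(density, (cells[:, 0], cells[:, 1]), 1)      # trips per grid cell

    centres = []
    for i in range(1, shape[0] - 1):
        for j in range(1, shape[1] - 1):
            window = density[i - 1:i + 2, j - 1:j + 2]
            if density[i, j] >= min_trips and density[i, j] == window.max():
                centres.append((i, j, int(density[i, j])))  # local density peak
    return centres
```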

    Temporal Image Forensics for Picture Dating based on Machine Learning

    Temporal image forensics involves the investigation of multimedia digital forensic material related to crime, with the goal of obtaining accurate evidence concerning activity and timing to be presented in a court of law. Because of the ever-increasing complexity of crime in the digital age, forensic investigations are increasingly dependent on timing information. The simplest way to extract such forensic information would be to use the EXIF header of picture files, as it contains most of this information. However, header data can easily be removed or manipulated and hence cannot serve as evidence, so estimating the acquisition time of digital photographs has become more challenging. This PhD research proposes to use image content instead of file headers to solve this problem. In this thesis, a number of contributions are presented in the area of temporal image forensics for picture dating. Firstly, the research introduces the Northumbria Temporal Image Forensics (NTIF) picture database for temporal image forensics. Using the NTIF database, the changes in Photo Response Non-Uniformity (PRNU) as digital sensors age are highlighted, and it is concluded that PRNU is not a useful feature for picture dating. Apart from PRNU, defective pixels constitute another sensor imperfection of forensic relevance. Secondly, this thesis shows that the filter-based PRNU technique is more useful for source camera identification than deep convolutional neural networks when only a limited number of images under investigation is available to the forensic analyst. The results indicate that because the sensor pattern noise feature is location-sensitive, the performance of the CNN-based approach declines when sensor pattern noise image blocks from the same category are fed into the CNN at different locations. Thirdly, deep learning is applied to picture dating, showing promising results with performance levels of 80% to 88% depending on the digital camera used. The key findings indicate that a deep learning approach can successfully learn the temporal changes in image content, rather than the sensor pattern noise. Finally, this thesis proposes a technique to estimate the acquisition time slots of digital pictures using a set of candidate defective pixel locations in non-overlapping image blocks. The temporal behaviour of camera sensor defects in digital pictures is analyzed using a machine learning technique in which candidate defective pixels are determined from the surrounding pixel neighbourhood and two proposed local variation features. To enhance performance, virtual timescales using halves of real time slots and a combination of prediction scores across image blocks are proposed. When assessed on the NTIF image dataset, the proposed system achieves very promising results, estimating the acquisition times of digital pictures with an accuracy between 88% and 93% and exhibiting clear superiority over relevant state-of-the-art systems.
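
    The abstract mentions selecting candidate defective pixels from their neighbourhood and two "local variation features" without defining them. As a hedged illustration of the underlying idea, the sketch below flags pixels that deviate consistently from their local neighbourhood across many pictures from the same camera; the median-based deviation score is an assumption standing in for the thesis's actual features.

```python
# Hedged sketch of scoring candidate defective pixels: a pixel is suspicious if
# it deviates strongly from its 3x3 neighbourhood across many pictures.
import numpy as np
from scipy.ndimage import median_filter


def defective_pixel_scores(images: np.ndarray) -> np.ndarray:
    """images: (n, H, W) grayscale pictures from one camera; returns an (H, W) score map."""
    deviations = np.empty_like(images, dtype=float)
    for k, img in enumerate(images):
        neighbourhood = median_filter(img.astype(float), size=3)
        deviations[k] = np.abs(img - neighbourhood)   # per-picture local deviation
    # A consistently high deviation across pictures points at a sensor defect
    # rather than scene content, so average over the whole image set.
    return deviations.mean(axis=0)


# Candidate locations: np.argwhere(defective_pixel_scores(imgs) > threshold)
```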

    Photo response non-uniformity based image forensics in the presence of challenging factors

    With the ever-increasing prevalence of digital imaging devices and the rapid development of networks, the sharing of digital images has become ubiquitous in our daily life. However, the pervasiveness of powerful image-editing tools also makes digital images an easy target for malicious manipulation. Thus, to prevent people from falling victim to fake information and to trace criminal activities, digital image forensics methods such as source camera identification, source-oriented image clustering and image forgery detection have been developed. Photo response non-uniformity (PRNU), an intrinsic sensor noise that arises from the pixels' non-uniform response to incident light, has been used as a powerful tool for imaging device fingerprinting. The forensic community has developed a vast number of PRNU-based methods in different fields of digital image forensics. However, technological advances in digital photography, the emergence of photo-sharing social networking sites, and anti-forensics attacks targeting the PRNU bring new challenges to PRNU-based image forensics. For example, the performance of existing forensic methods may deteriorate under different camera exposure parameter settings, and the efficacy of PRNU-based methods can be directly challenged by the image-editing tools of social network sites or by anti-forensics attacks. The objective of this thesis is to investigate and design effective methods to mitigate some of these challenges to PRNU-based image forensics. We found that camera exposure parameter settings, especially the camera sensitivity, commonly known as the ISO speed, can influence PRNU-based image forgery detection. Hence, we first construct the Warwick Image Forensics Dataset, which contains images taken with diverse exposure parameter settings to facilitate further studies. To address the impact of ISO speed on PRNU-based image forgery detection, an ISO speed-specific correlation prediction process is proposed, together with a content-based ISO speed inference method that supports the process even when the ISO speed information is not available. We also propose a three-step framework that allows PRNU-based source-oriented clustering methods to perform successfully on Instagram images, even though some of Instagram's built-in filters may significantly distort the PRNU. Additionally, for the binary classification of whether an image's PRNU has been attacked, we propose a generative adversarial network-based training strategy for a neural network classifier, which makes the classifier generalize better to images subjected to previously unseen attacks. The proposed methods are evaluated on public benchmarking datasets and on our Warwick Image Forensics Dataset, which is released to the public as well. The experimental results validate the effectiveness of the methods proposed in this thesis.
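
    For readers unfamiliar with the PRNU pipeline this thesis builds on, the sketch below shows its standard form: estimate a camera fingerprint as the average noise residual of many images, then match a query image's residual against it with normalised correlation. A Gaussian filter stands in here for the wavelet denoiser typically used in practice, so this is a simplified sketch rather than the thesis's exact method.

```python
# Minimal sketch of the standard PRNU fingerprinting and matching pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter


def noise_residual(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    img = img.astype(float)
    return img - gaussian_filter(img, sigma)       # residual = image - denoised image


def estimate_fingerprint(images: list[np.ndarray]) -> np.ndarray:
    residuals = [noise_residual(i) for i in images]
    return np.mean(residuals, axis=0)              # PRNU estimate for one camera


def correlation(query: np.ndarray, fingerprint: np.ndarray) -> float:
    a = noise_residual(query).ravel()
    b = fingerprint.ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


# A query image is attributed to the camera whose fingerprint yields the highest
# correlation, usually compared against a decision threshold.
```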

    Design of a platform for trajectory data analysis: a case study on navigation data

    In recent years there has been a growing availability of trajectory data, i.e. data on the movements of objects of various kinds, coming from countless sources such as smartphones and sensor devices. This process has certainly been favoured by the continuous growth of these devices, both in technological terms and in terms of usage. On the one hand, this has led to ever greater precision and accuracy of the data and, on the other, to a substantial increase in its quantity. Starting from these premises, interest has turned to the possibility of deriving knowledge from the raw data by implementing a stack of algorithms that extract meaningful information. The applications are numerous: given the variety of the data involved, it is possible to study the behaviour of people, transport vehicles, animals and natural phenomena. Consequently, the interpretation of the data varies according to the application domain and the specific knowledge to be extracted. In this thesis, a set of algorithms was designed and developed that, while applicable to any context, is specialized in the domain of commercial navigation, and in particular on a case study of data from vessels that sailed in the United States during the first three months of 2014. The data, enriched with open data and processed through various pipelines, are displayed graphically in the web platform that was developed. The results show, first of all, that the algorithms and the platform can be generalized to the various possible application domains, provided the parameters are tuned to the context in which they are applied. In addition, an effort was made to remain as independent as possible from the platform used to store the data, making migration to other platforms, whether relational or big data, almost immediate.
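
    The abstract describes a stack of trajectory-analysis algorithms without detailing them. As a hedged illustration of one typical building block for vessel data, the sketch below splits a trajectory into stopped and moving segments by thresholding the reported speed between consecutive points; the 0.5-knot threshold and the point structure are assumptions, not the thesis's actual algorithms or parameters.

```python
# Generic sketch of trajectory segmentation by motion state for AIS-like data.
from dataclasses import dataclass


@dataclass
class Point:
    t: float      # timestamp (seconds)
    speed: float  # speed over ground (knots)


def segment_by_motion(track: list[Point], stop_speed: float = 0.5):
    """Return a list of (state, points) runs, with state in {'stopped', 'moving'}."""
    segments, current, state = [], [], None
    for p in track:
        p_state = "stopped" if p.speed < stop_speed else "moving"
        if p_state != state and current:
            segments.append((state, current))
            current = []
        state = p_state
        current.append(p)
    if current:
        segments.append((state, current))
    return segments
```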