A New Forensic Video Database for Source Smartphone Identification: Description and Analysis
In recent years, digital imaging has advanced to the point that every smartphone has a built-in video camera capable of recording high-quality video freely and without restriction. At the same time, rapidly growing internet technology has contributed significantly to the widespread sharing of digital video via web-based multimedia systems and mobile smartphone applications such as YouTube, Facebook, Twitter and WhatsApp. As recording and distributing digital video has become so affordable, however, security issues have spread worldwide. One of these issues is identifying the source camera of a video, and several new challenges remain to be addressed in this area. One such challenge is individual source camera identification (ISCI), which aims to identify each physical device regardless of its model. A first step towards solving these problems is a video database recorded by modern smartphone devices, which can also serve the deep learning methods that are growing rapidly in the field of source camera identification. In this paper, a smartphone video database named the Qatar University Forensic Video Database (QUFVD) is introduced. The QUFVD includes 6000 videos from 20 modern smartphones representing five brands; each brand has two models, and each model has two identical devices. This database is suitable for evaluating techniques such as deep learning methods for video source smartphone identification and verification. To evaluate the QUFVD, a series of experiments identifying source cameras with a deep learning technique is conducted. The results show that improvements are essential for the ISCI scenario on video.
Image Source Identification Using Convolutional Neural Networks in IoT Environment
Digital image forensics is a key branch of digital forensics based on forensic analysis of image authenticity and image content. Advances in new technologies, such as smart devices, the Internet of Things (IoT), artificial images and social networks, give forensic image analysis an increasing role in a wide range of criminal investigations. This work focuses on image source identification by analysing both the fingerprints of digital devices and images in an IoT environment. A new convolutional neural network (CNN) method is proposed to identify the source devices that took an image in a social IoT environment. The experimental results show that the proposed method can effectively identify the source devices with high accuracy.
PRNU-Net: a Deep Learning Approach for Source Camera Model Identification based on Videos Taken with Smartphone
Recent advances in digital imaging mean that every smartphone has a video camera that can record high-quality video freely and without restriction. In addition, rapidly developing Internet technology has contributed significantly to the widespread distribution of digital video via web-based multimedia systems and mobile applications such as YouTube, Facebook, Twitter and WhatsApp. However, as recording and distributing digital video has become affordable, security issues have spread worldwide. One of these issues is the identification of a video's source camera. Two categories of methods are commonly used in this area: Photo Response Non-Uniformity (PRNU) and machine learning approaches. To exploit the power of both, this work adds a new PRNU-based layer to a convolutional neural network (CNN), called PRNU-Net. To explore the new layer, the main structure of the CNN is based on MISLnet, which has been used in several source camera identification studies. The experimental results show that PRNU-Net is more successful than MISLnet, and that the PRNU extracted by the layer from low-level features, namely edges and textures, is more useful for classifying source camera models than high- and mid-level features such as parts and objects. On average, the network improves the results on a new database by about 4%.
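The PRNU pipeline that PRNU-Net builds on can be sketched in miniature: a noise residual is an image minus a denoised version of itself, a camera fingerprint is the average residual over many images from one device, and a probe is matched by normalised correlation. The mean-filter denoiser and all parameters below are simplifications for illustration only, not the wavelet denoising or the network layer used in the paper.

```python
import numpy as np

def noise_residual(image, k=3):
    """Noise residual: image minus a local-mean denoised version.

    A crude stand-in for the wavelet denoising used in the PRNU literature.
    """
    h, w = image.shape
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    denoised = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            denoised[i, j] = padded[i:i + k, j:j + k].mean()
    return image.astype(float) - denoised

def estimate_fingerprint(images):
    """Average the residuals of many images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlate(a, b):
    """Normalised correlation between a probe residual and a fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A probe image is attributed to the camera whose fingerprint gives the highest correlation; in practice, thresholds on this statistic are calibrated per camera.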
Characterizing the polycentric spatial structure of Beijing Metropolitan Region using carpooling big data
Polycentric metropolitan regions are a high-level form of urbanization characterized by dynamic layouts, fuzzy boundaries and varied patterns of human activity. Owing to the complexity of polycentricity, it can be difficult to understand their spatial structure from conventional survey data and methods alone. This poses a challenge for authorities wishing to make effective urban land use and transport policies. Fortunately, the availability of big data gives scholars an opportunity to explore complex metropolitan spatial structures, but research limitations remain in terms of data use and processing, unit scale, and method. To address these limitations, we propose a three-step method for applying carpooling big data to metropolitan analysis: first, locating the metropolitan sub-centers; second, delimiting the metropolitan sphere; third, measuring the performance of the polycentric structure. The developed method was tested in the Beijing Metropolitan Region, and the results show that the polycentric metropolitan region exhibits a hierarchical regional center system: one primary center interacting with seven surrounding secondary centers. These metropolitan centers exert a strong attraction, which drives continuous expansion beyond the administrative boundary into adjacent jurisdictions. Furthermore, the role and pattern of human activity differ markedly across regional centers, so the specific role of each sub-center should be considered when making metropolitan transport and land use policies. Compared with previous studies, the proposed method is more reliable, accurate and comprehensive in characterizing the polycentric spatial structure. The application of carpooling big data and the proposed method offers a novel perspective for research on other metropolitan regions.
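Step one of the proposed method, locating sub-centers, can be illustrated with a toy density-peak search: bin trip endpoints into grid cells and keep cells whose count exceeds a threshold and all eight neighbouring cells. The grid size and count threshold below are hypothetical parameters for illustration, not values from the study.

```python
import numpy as np

def find_subcenters(points, cell=1.0, min_count=30):
    """Return grid cells that are local density peaks of trip endpoints.

    points: (n, 2) array of planar coordinates; cell and min_count are
    illustrative tuning parameters. Each peak is (center_x, center_y, count),
    sorted by count so the primary center comes first.
    """
    gx = np.floor(points[:, 0] / cell).astype(int)
    gy = np.floor(points[:, 1] / cell).astype(int)
    counts = {}
    for x, y in zip(gx, gy):
        counts[(x, y)] = counts.get((x, y), 0) + 1
    peaks = []
    for (x, y), c in counts.items():
        if c < min_count:
            continue
        neighbours = [counts.get((x + dx, y + dy), 0)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        if c >= max(neighbours, default=0):  # local density peak
            peaks.append((((x + 0.5) * cell), ((y + 0.5) * cell), c))
    return sorted(peaks, key=lambda p: -p[2])
```

The ordering by count mirrors the primary/secondary center hierarchy the study reports; the actual method works on carpooling origin-destination flows rather than raw point density.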
Incorporation of micro-level analysis in strategic urban transport modelling: with a case study of the Greater Beijing
Many developing countries and regions suffer from severe urban transport problems arising from accidents, congestion, air pollution, rising carbon intensity, and chronic under-funding of infrastructure and services. These problems make their cities among the most polluted and often the least liveable. Strategic transport modelling has been recognised as an effective approach for developing and testing policy options, especially where it is integrated with land use planning and urban design. However, in most developing-country cities strategic transport modelling has been out of reach for practical policy use because of its sophisticated data and skill requirements, which currently imply unaffordably high costs and long durations for model development. This means that strategic urban transport modelling is least available where it is needed most urgently. Meanwhile, the spread of smart data in mapping and urban activity monitoring has often been just as rapid in developing countries as in developed ones. This has triggered new approaches to micro-level analyses of transport networks, personal movements and vehicles. In the most advanced cases, the new analyses have started to influence strategic modelling.
The main hypothesis of this dissertation is that incorporating micro-level smart data and analyses in strategic urban transport modelling makes it feasible to establish a sufficiently robust strategic transport model for evidence-based policy analysis, with cost, time and skill thresholds close to affordable for developing-country cities. To test this hypothesis, a number of novel model development tasks have been carried out which contribute to the field of applied urban modelling. The new approach aims to transform the prevailing modus operandi, in which model development cannot start in earnest until extensive data collection and skills training are complete, into a situation where a sufficiently robust model can be established cheaply and quickly and then refined incrementally.
More specifically, new modelling tools have been developed as part of this dissertation that use sparse GPS taxi traces to identify slow-moving and stopping traffic hotspots, via an extended density-based spatial clustering algorithm that is tolerant of significant data noise, and to estimate congested road speeds (which used to be very costly and time-consuming to obtain, if obtainable at all). The micro-level network, congested speeds and insights into the nature of the congested traffic have been incorporated into a MEPLAN-based strategic transport model interacting with a MEPLAN-based land use and travel demand model. This means that the strategic economic, social and environmental impacts of transport interventions can be tested in a robust way, accounting for the interactions among transport, land use and background socio-technical trends. A new approach to establishing medium to long term visions for alternative travel demand management and transport investment scenarios has been tested using this model.
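The density-based clustering idea can be illustrated with a plain DBSCAN sketch, which labels dense runs of GPS fixes as clusters and isolated fixes as noise; the dissertation's extension for sparse, noisy taxi traces is not reproduced here, and the eps and min_pts values are illustrative.

```python
import math

def dbscan(points, eps=0.5, min_pts=4):
    """Minimal DBSCAN: return a cluster id per point, or -1 for noise.

    points: list of (x, y) tuples. A point is a core point if at least
    min_pts points (itself included) lie within eps of it.
    """
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        labels[i] = cluster
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:  # only core points expand the cluster
                seeds.extend(nb_j)
        cluster += 1
    return labels
```

The tolerance to noise comes from the min_pts density requirement: stray fixes never seed a cluster, so GPS jitter ends up labelled -1 rather than distorting hotspot boundaries.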
The methods and algorithms have been tested in a case study of the Greater Beijing region, which consists of the municipalities of Beijing and Tianjin together with the surrounding areas of Hebei province. The government's data regulations, which restrict overseas studies to publicly available data sources, make the case study ideal for testing the new approach. The potential of the new strategic urban transport model has been tested through a wide range of policy scenarios. The results suggest that the new approach developed in this dissertation not only makes it cheaper and faster to develop a robust model, but could also fill a gap in the lack of medium to long term perspectives on major road and metro investments over the next two decades. Such analyses could be of critical importance in improving the performance of the transport system in terms of safety, economic efficiency, air quality and carbon reduction, given the long lead times to plan and deliver transport infrastructure investments.
Temporal Image Forensics for Picture Dating based on Machine Learning
Temporal image forensics investigates digital multimedia material related to crime, with the goal of obtaining accurate evidence about activity and timing that can be presented in a court of law. Because of the ever-increasing complexity of crime in the digital age, forensic investigations depend increasingly on timing information. The simplest way to extract such information would be the EXIF header of picture files, as it contains most of it. However, header data can easily be removed or manipulated and hence cannot serve as evidence, so estimating the acquisition time of digital photographs has become more challenging.
This PhD research proposes to use image contents instead of file headers to solve this problem. The thesis presents a number of contributions in the area of temporal image forensics for picture dating. Firstly, it introduces the Northumbria Temporal Image Forensics (NTIF) picture database for temporal image forensics. Using the NTIF database, the changes in Photo Response Non-Uniformity (PRNU) as digital sensors age are highlighted, and it is concluded that PRNU is not a useful feature for picture dating. Apart from the PRNU, defective pixels constitute another sensor imperfection of forensic relevance. Secondly, this thesis shows that the filter-based PRNU technique is preferable to deep convolutional neural networks for source camera identification when only a limited number of images is available to the forensic analyst. Because the sensor pattern noise feature is location-sensitive, the performance of the CNN-based approach declines when image blocks of the same class are fed into the CNN from different sensor locations. Thirdly, deep learning is applied to picture dating, with promising performance of 80% to 88% depending on the digital camera used. The key finding is that a deep learning approach can successfully learn the temporal changes in image contents, rather than in the sensor pattern noise.
Finally, this thesis proposes a technique to estimate the acquisition time slots of digital pictures using a set of candidate defective pixel locations in non-overlapping image blocks. The temporal behaviour of camera sensor defects in digital pictures is analysed using a machine learning technique in which potential candidate defective pixels are determined from the surrounding pixel neighbourhood and two proposed local variation features. Virtual timescales, built from halves of real time slots, are combined with prediction scores over image blocks to enhance performance. Assessed on the NTIF image dataset, the proposed system achieves very promising results, estimating the acquisition times of digital pictures with an accuracy between 88% and 93% and exhibiting clear superiority over relevant state-of-the-art systems.
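The defective-pixel screening step can be sketched as follows: a pixel is a candidate defect if its value deviates strongly from its local neighbourhood in every image of a set. This is a deliberately simple stand-in for the thesis's local variation features and time-slot estimation; the threshold and neighbourhood size are illustrative assumptions.

```python
import numpy as np

def local_median(im):
    """Median of each pixel's eight neighbours (edge-replicated borders)."""
    pad = np.pad(im, 1, mode="edge")
    h, w = im.shape
    shifts = [pad[1 + di:h + 1 + di, 1 + dj:w + 1 + dj]
              for di in (-1, 0, 1) for dj in (-1, 0, 1)
              if (di, dj) != (0, 0)]
    return np.median(np.stack(shifts), axis=0)

def candidate_defects(images, thresh=10.0):
    """Return (row, col) locations that deviate from their neighbourhood
    by more than thresh in every image of the set."""
    votes = None
    for im in images:
        im = np.asarray(im, dtype=float)
        hit = np.abs(im - local_median(im)) > thresh
        votes = hit.astype(int) if votes is None else votes + hit
    return np.argwhere(votes == len(images))
```

Requiring a hit in every image is what separates stable sensor defects (stuck or hot pixels) from one-off scene detail, which rarely lands on the same pixel across a whole set.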
Photo response non-uniformity based image forensics in the presence of challenging factors
With the ever-increasing prevalence of digital imaging devices and the rapid development of networks, sharing digital images has become ubiquitous in daily life. However, the pervasiveness of powerful image-editing tools also makes digital images an easy target for malicious manipulation. Thus, to prevent people from falling victim to fake information and to trace criminal activities, digital image forensics methods such as source camera identification, source-oriented image clustering and image forgery detection have been developed.
Photo response non-uniformity (PRNU), an intrinsic sensor noise arising from the pixels' non-uniform response to incident light, has been used as a powerful tool for imaging-device fingerprinting. The forensic community has developed a vast number of PRNU-based methods across digital image forensics. However, advances in digital photography, the emergence of photo-sharing social networking sites, and anti-forensic attacks targeting the PRNU bring new challenges to PRNU-based image forensics. For example, the performance of existing forensic methods may deteriorate under different camera exposure parameter settings, and the efficacy of PRNU-based methods can be directly challenged by image-editing tools on social network sites or by anti-forensic attacks. The objective of this thesis is to investigate and design effective methods to mitigate some of these challenges to PRNU-based image forensics.
We found that camera exposure parameter settings, especially the camera sensitivity, commonly known as the ISO speed, can influence PRNU-based image forgery detection. Hence, we first construct the Warwick Image Forensics Dataset, which contains images taken with diverse exposure parameter settings, to facilitate further studies. To address the impact of ISO speed on PRNU-based image forgery detection, an ISO speed-specific correlation prediction process is proposed, with a content-based ISO speed inference method that supports the process even when the ISO speed information is not available. We also propose a three-step framework that allows PRNU-based source-oriented clustering methods to perform successfully on Instagram images, even though some of Instagram's built-in filters may significantly distort the PRNU. Additionally, for the binary classification of whether an image's PRNU has been attacked, we propose a generative adversarial network-based training strategy for a neural network-based classifier, which makes the classifier generalise better to images subject to previously unseen attacks.
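The source-oriented clustering setting can be sketched in a naive form: given per-image noise residuals, link any two whose normalised correlation exceeds a threshold, then take connected components as putative cameras. This greedy baseline and its threshold are assumptions for illustration; the thesis's three-step framework for Instagram images is considerably more involved.

```python
import numpy as np

def corr(a, b):
    """Normalised correlation between two noise residuals."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def cluster_by_source(residuals, thresh=0.1):
    """Assign a cluster id to each residual: connected components of the
    graph that links residual pairs with correlation above thresh."""
    n = len(residuals)
    labels = [-1] * n
    cur = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cur
        stack = [i]
        while stack:                 # flood-fill one component
            u = stack.pop()
            for v in range(n):
                if labels[v] == -1 and corr(residuals[u], residuals[v]) > thresh:
                    labels[v] = cur
                    stack.append(v)
        cur += 1
    return labels
```

Same-camera residuals share the fingerprint term and correlate well above zero, while different-camera pairs hover near zero, which is why a single threshold can separate them in this idealised setting.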
The proposed methods are evaluated on public benchmarking datasets and on our Warwick Image Forensics Dataset, which is also released to the public. The experimental results validate the effectiveness of the methods proposed in this thesis.
Design of a platform for trajectory data analysis: a case study on navigation data
Recent years have seen a growing availability of trajectory data, that is, data on the movements of objects of various kinds, coming from countless sources such as smartphones and sensor devices. This process has certainly been favoured by the continuous growth, both technological and in usage, of such devices, which has led on the one hand to ever greater precision and accuracy of the data and, on the other, to a substantial increase in its quantity. Starting from these premises, interest has turned to the possibility of deriving knowledge from the raw data by implementing a stack of algorithms that extract meaningful information.
The applications are manifold: given the variety of the data involved, it is possible to study the behaviour of people, transport vehicles, animals and natural phenomena. Consequently, the interpretation of the data varies with the application domain and with the specific knowledge to be extracted.
In this thesis, a set of algorithms was designed and developed which, although applicable to any context, is specialised for the domain of commercial shipping, and in particular for a case study of the vessels that sailed in the United States during the first three months of 2014. The data, enriched with open data and processed through various pipelines, are displayed graphically in the web platform that was developed.
The results showed, first, that the algorithms and the platform can be generalised to the various possible application domains, provided the parameters are tuned to the target context. In addition, an effort was made to remain as independent as possible of the platform used to store the data, making migration to other platforms, whether relational or big data, almost immediate.
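One building block such a trajectory-analysis platform typically needs, stop detection, can be sketched as follows: a stop is a run of at least a minimum number of consecutive fixes that all stay within a small radius of the run's first fix. The radius, the minimum run length, and the function name are illustrative assumptions, not the thesis's actual algorithms.

```python
import math

def detect_stops(track, radius=0.01, min_pts=5):
    """Return (start_index, end_index) pairs for stop segments in a track.

    track: list of (x, y) fixes in arbitrary planar units, in time order.
    A segment counts as a stop when >= min_pts consecutive fixes lie
    within radius of the segment's first fix.
    """
    stops, i, n = [], 0, len(track)
    while i < n:
        j = i
        while j < n and math.dist(track[i], track[j]) <= radius:
            j += 1
        if j - i >= min_pts:
            stops.append((i, j - 1))  # inclusive segment bounds
            i = j
        else:
            i += 1                    # too short to be a stop; slide on
    return stops
```

Tuning the two parameters to the context is exactly the generalisation point the thesis makes: a berthed vessel and a pedestrian at a bus stop need very different radius and duration settings.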