
    A Framework to Detect Presentation Attacks

    Biometric-based authentication systems are becoming the preferred replacement for password-based authentication systems. Among the several biometric modalities (e.g., face, eye, fingerprint), iris-based authentication is commonly used in everyday applications. In iris-based authentication systems, iris images from legitimate users are captured and certain features are extracted for matching during the authentication process. The literature suggests that iris-based authentication systems can be subject to presentation attacks, in which an attacker obtains a printed copy of the victim’s eye image and displays it in front of the authentication system to gain unauthorized access. Such attacks can also be performed by displaying static eye images on mobile devices or tablets (known as screen attacks). Because iris features do not change, once an iris feature is compromised this type of attack is hard to prevent. Existing approaches that rely on static iris features are not suitable for preventing presentation attacks. Detecting features of a live iris (liveness detection) is a promising approach, and an additional layer of security derived from the iris feature can further harden the authentication system, which existing works do not address. To address these limitations, this thesis proposes iris signature generation based on the area between the pupil and the cornea. Our approach relies on capturing iris images using near-infrared light. We train two classifiers to capture the area between the pupil and the cornea, and the iris image is then stored in the database. The approach generates a QR code from the iris; the code acts as a password (an additional layer of security) that the user is required to provide during authentication. The approach has been tested on samples obtained from a publicly available iris database, and the initial results show that it achieves low false positive and false negative rates.
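
    The sketch below illustrates the general idea only, not the thesis pipeline: segment the annular region between the pupil and the outer iris boundary in a near-infrared eye image, hash that region, and encode the digest as a QR code that can serve as the secondary secret. The segmentation here uses a simple Hough-circle heuristic in place of the two trained classifiers described above, and the helper names (iris_annulus_mask, iris_signature_qr) and use of the qrcode package are assumptions for illustration.

```python
# Minimal sketch, assuming an 8-bit grayscale NIR eye image as input.
import hashlib

import cv2
import numpy as np
import qrcode  # pip install qrcode[pil]


def iris_annulus_mask(gray: np.ndarray) -> np.ndarray:
    """Boolean mask of the region between the pupil and the outer iris boundary."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0] // 2,
        param1=100, param2=30, minRadius=10, maxRadius=gray.shape[0] // 2,
    )
    if circles is None or len(circles[0]) < 2:
        raise ValueError("could not locate pupil and iris boundaries")
    # Heuristic: smallest detected circle ~ pupil, largest ~ outer iris boundary.
    circles = sorted(circles[0], key=lambda c: c[2])
    (px, py, pr), (ix, iy, ir) = circles[0], circles[-1]
    yy, xx = np.mgrid[:gray.shape[0], :gray.shape[1]]
    inside_iris = (xx - ix) ** 2 + (yy - iy) ** 2 <= ir ** 2
    outside_pupil = (xx - px) ** 2 + (yy - py) ** 2 > pr ** 2
    return inside_iris & outside_pupil


def iris_signature_qr(gray: np.ndarray):
    """Hash the annular iris region and wrap the digest in a QR code image."""
    mask = iris_annulus_mask(gray)
    digest = hashlib.sha256(gray[mask].tobytes()).hexdigest()
    return digest, qrcode.make(digest)  # PIL image; can be saved or displayed
```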

    Establishing the digital chain of evidence in biometric systems

    Traditionally, a chain of evidence or chain of custody refers to the chronological documentation, or paper trail, showing the seizure, custody, control, transfer, analysis, and disposition of evidence, whether physical or electronic. Whether in the criminal justice system, military applications, or natural disasters, ensuring the accuracy and integrity of such chains is of paramount importance. Intentional or unintentional alteration, tampering, or fabrication of digital evidence can lead to undesirable effects. Despite the consequences at stake, we find that historically no unique protocol or standardized procedure has existed for establishing such chains; current practices rely on traditional paper trails and handwritten signatures as the foundation of chains of evidence. Copying, fabricating, or deleting electronic data is easier than ever, and establishing equivalent digital chains of evidence has become both necessary and desirable. We propose to treat a chain of digital evidence as a multi-component validation problem that ensures access control, confidentiality, integrity, and non-repudiation of origin. Our framework includes techniques from cryptography, keystroke analysis, digital watermarking, and hardware source identification, and the work contributes to many of the fields used in its formation. Related to biometric watermarking, we provide a means of watermarking iris images without significantly impacting biometric performance. Specific to hardware fingerprinting, we establish the ability to verify the source of an image captured by biometric sensing devices such as fingerprint sensors and iris cameras. Related to keystroke dynamics, we establish that user stimulus familiarity is a driver of classification performance. Finally, example applications of the framework are demonstrated with data collected in crime scene investigations, people-screening activities at ports of entry, naval maritime interdiction operations, and mass fatality incident disaster responses.
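
    As a small illustration of the integrity and non-repudiation goals mentioned above (not the dissertation's framework, which also folds in keystroke analysis, watermarking, and hardware source identification), one common way to make a digital chain of custody tamper-evident is to link each custody event to the previous one with a cryptographic hash. The names EvidenceEvent, append_event, and verify_chain are invented for this sketch.

```python
# Hash-linked chain-of-custody sketch: altering any earlier event breaks the links.
import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class EvidenceEvent:
    actor: str            # who handled the evidence
    action: str           # e.g. "seized", "transferred", "analysed"
    artifact_sha256: str  # digest of the digital evidence itself
    prev_hash: str        # digest of the previous event in the chain
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


def append_event(chain: list, actor: str, action: str, artifact: bytes) -> list:
    """Append a new custody event linked to the tail of the existing chain."""
    prev = chain[-1].digest() if chain else "genesis"
    chain.append(EvidenceEvent(actor, action,
                               hashlib.sha256(artifact).hexdigest(), prev))
    return chain


def verify_chain(chain: list) -> bool:
    """Recompute every link; a single altered event invalidates the chain."""
    return all(chain[i].prev_hash == chain[i - 1].digest()
               for i in range(1, len(chain)))
```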

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities related to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    Coherence of PRNU weighted estimations for improved source camera identification

    This paper presents a method for camera identification based on Photo Response Non-Uniformity (PRNU) pattern noise. It takes advantage of the coherence between different PRNU estimations restricted to specific image regions. The main idea rests on the following observations: different methods can be used to estimate the PRNU contribution in a given image, and the estimation does not have the same accuracy across the whole image, as a more faithful estimation is expected from flat regions. Hence, two different estimations of the reference PRNU are considered in the classification procedure, and the coherence of the similarity metric between them, evaluated in three different image regions, is used as a classification feature. More coherence is expected in the matching case, i.e., when the image has been acquired by the analysed device, than in the opposite case, where the similarity metric is essentially noise and therefore unpredictable. The presented results show that the proposed approach provides classification results comparable to, and often better than, several state-of-the-art methods, and that it is robust to the unavailability of flat-field (FF) images, to devices of the same brand or model, and to uploading/downloading from social networks.
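
    A rough sketch of the core idea follows; it is not the paper's exact pipeline (which compares two reference-PRNU estimations over three fixed regions). Here a Gaussian filter stands in for the wavelet denoiser typically used for PRNU extraction, the region grid is arbitrary, and the function names (noise_residual, regionwise_similarity, coherence) are illustrative assumptions.

```python
# Region-wise PRNU similarity and its spread as a coherence-style matching cue.
import numpy as np
from scipy.ndimage import gaussian_filter


def noise_residual(img: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Image minus its denoised version approximates the PRNU-bearing noise."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)


def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two residual patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def regionwise_similarity(residual: np.ndarray, reference_prnu: np.ndarray,
                          grid=(1, 3)) -> list:
    """Correlate residual and reference PRNU region by region on a simple grid."""
    sims = []
    for rows in np.array_split(np.arange(residual.shape[0]), grid[0]):
        for cols in np.array_split(np.arange(residual.shape[1]), grid[1]):
            r = residual[np.ix_(rows, cols)]
            k = reference_prnu[np.ix_(rows, cols)]
            sims.append(ncc(r, k))
    return sims


def coherence(sims: list) -> float:
    """Spread of region-wise similarities: low spread suggests a genuine match,
    while non-matching devices yield noise-like, fluctuating similarities."""
    return float(np.std(sims))
```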

    Temporal Image Forensics for Picture Dating based on Machine Learning

    Temporal image forensics involves the investigation of multimedia digital forensic material related to crime, with the goal of obtaining accurate evidence about activity and timing that can be presented in a court of law. Because of the ever-increasing complexity of crime in the digital age, forensic investigations depend increasingly on timing information. The simplest way to extract such information would be to use the EXIF header of picture files, as it contains most of the relevant metadata. However, these header data can easily be removed or manipulated and hence cannot serve as evidence, so estimating the acquisition time of digital photographs has become more challenging. This PhD research proposes to use image contents instead of file headers to solve this problem. In this thesis, a number of contributions are presented in the area of temporal image forensics for picture dating. Firstly, the research introduces the Northumbria Temporal Image Forensics (NTIF) picture database for temporal image forensics purposes. Using the NTIF database, the changes in Photo Response Non-Uniformity (PRNU) as digital sensors age are highlighted, and it is concluded that PRNU is not a useful feature for picture dating. Apart from PRNU, defective pixels constitute another sensor imperfection of forensic relevance. Secondly, this thesis shows that the filter-based PRNU technique is preferable to deep convolutional neural networks for source camera identification when only a limited number of images under investigation is available to the forensic analyst. The results indicate that because the sensor pattern noise feature is location-sensitive, the performance of the CNN-based approach declines when sensor pattern noise image blocks belonging to the same category are fed into the CNN at different locations. Thirdly, a deep learning technique is applied to picture dating and shows promising results, with performance levels of 80% to 88% depending on the digital camera used. The key finding is that a deep learning approach can successfully learn the temporal changes in image contents rather than the sensor pattern noise. Finally, this thesis proposes a technique to estimate the acquisition time slots of digital pictures using a set of candidate defective pixel locations in non-overlapping image blocks. The temporal behaviour of camera sensor defects in digital pictures is analysed using a machine learning technique in which potential candidate defective pixels are determined from the surrounding pixel neighbourhood and two proposed local variation features. The idea of virtual timescales using halves of real time slots, together with a combination of prediction scores across image blocks, is proposed to enhance performance. When assessed on the NTIF image dataset, the proposed system achieves very promising results, estimating the acquisition times of digital pictures with an accuracy between 88% and 93% and exhibiting clear superiority over relevant state-of-the-art systems.
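
    The following is a hedged sketch of the defective-pixel idea only, not the thesis pipeline: flag pixels that deviate strongly from their local neighbourhood as candidate hot/stuck pixels, summarise them per non-overlapping image block, and train an off-the-shelf classifier to map block features to acquisition time slots. The feature definition (a simple median-deviation test rather than the thesis's local variation features), block size, and function names are illustrative assumptions.

```python
# Candidate defective pixels -> per-block features -> time-slot classifier.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.ensemble import RandomForestClassifier


def candidate_defect_map(gray: np.ndarray, thresh: float = 30.0) -> np.ndarray:
    """Pixels far from their 3x3 median are treated as candidate defective pixels."""
    gray = gray.astype(np.float64)
    return np.abs(gray - median_filter(gray, size=3)) > thresh


def block_features(defects: np.ndarray, block: int = 64) -> np.ndarray:
    """Defect counts per non-overlapping block, flattened into one feature vector."""
    h, w = defects.shape
    counts = [defects[y:y + block, x:x + block].sum()
              for y in range(0, h - block + 1, block)
              for x in range(0, w - block + 1, block)]
    return np.asarray(counts, dtype=np.float64)


def fit_time_slot_classifier(feature_matrix: np.ndarray, slot_labels):
    """Train a generic classifier mapping block features to time-slot labels."""
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(
        feature_matrix, slot_labels)


# Usage sketch (images and time_slot_labels assumed, same image size throughout):
# X = np.stack([block_features(candidate_defect_map(img)) for img in images])
# clf = fit_time_slot_classifier(X, time_slot_labels)
# predicted_slots = clf.predict(X)
```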

    Information Forensics and Security: A quarter-century-long journey

    Information forensics and security (IFS) is an active R&D area whose goal is to ensure that people use devices, data, and intellectual property only for authorized purposes and to facilitate the gathering of solid evidence to hold perpetrators accountable. For over a quarter century, since the 1990s, the IFS research area has grown tremendously to address the societal needs of the digital information era. The IEEE Signal Processing Society (SPS) has emerged as an important hub and leader in this area, and this article celebrates some landmark technical contributions. In particular, we highlight the major technological advances made by the research community in selected focus areas over the past 25 years and present future trends.