131 research outputs found

    Discriminative Deep Feature Visualization for Explainable Face Recognition

    Full text link
    Despite the huge success of deep convolutional neural networks in face recognition (FR) tasks, current methods lack explainability for their predictions because of their "black-box" nature. In recent years, studies have been carried out to interpret the decisions of deep FR systems. However, the affinity between the input facial image and the extracted deep features has not been explored. This paper contributes to the problem of explainable face recognition by first conceiving a face reconstruction-based explanation module, which reveals the correspondence between the deep features and the facial regions. To further interpret the decision of an FR model, a novel visual saliency explanation algorithm is proposed. It provides insightful explanations by producing visual saliency maps that represent the similar and dissimilar regions between input faces. A detailed analysis of the generated visual explanations is presented to show the effectiveness of the proposed method.
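    A minimal sketch of the general idea behind similarity saliency maps (not the authors' implementation): backpropagate the cosine similarity between two deep face embeddings to the probe image's pixels and visualize the gradient magnitude. The encoder argument is assumed to be any differentiable face-embedding network.

        # Minimal sketch (assumption, not the paper's method): gradient-based saliency
        # for the cosine similarity between two face embeddings.
        import torch
        import torch.nn.functional as F

        def similarity_saliency(encoder, probe, gallery):
            """Per-pixel saliency for how `probe` drives its similarity to `gallery`."""
            probe = probe.detach().clone().requires_grad_(True)   # (1, 3, H, W)
            with torch.no_grad():
                ref = F.normalize(encoder(gallery), dim=1)        # fixed reference embedding
            emb = F.normalize(encoder(probe), dim=1)
            score = (emb * ref).sum()                             # cosine similarity
            score.backward()
            # Aggregate absolute gradients over channels -> (H, W) saliency map.
            saliency = probe.grad.abs().sum(dim=1).squeeze(0)
            return saliency / (saliency.max() + 1e-8)

    Regions with large gradient magnitude are those that most influence the match score, which is the kind of similar/dissimilar evidence such saliency maps aim to surface.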

    Face comparison in forensics: A deep dive into deep learning and likelihood ratios

    Get PDF
    This thesis explores the transformative potential of deep learning techniques in the field of forensic face recognition. It aims to address the pivotal question of how deep learning can advance this traditionally manual field, focusing on three key areas: forensic face comparison, face image quality assessment, and likelihood ratio estimation. Using a comparative analysis of open-source automated systems and forensic experts, the study finds that automated systems excel at identifying non-matches in low-quality images but lag behind experts in high-quality settings. The thesis also investigates the role of calibration methods in estimating likelihood ratios, revealing that quality score-based and feature-based calibrations are more effective than naive methods. To enhance face image quality assessment, a multi-task explainable quality network is proposed that not only gauges image quality but also identifies contributing factors. Additionally, a novel images-to-video recognition method is introduced to improve the estimation of likelihood ratios in surveillance settings. The study employs multiple datasets and software systems for its evaluations, aiming for a comprehensive analysis that can serve as a cornerstone for future research in forensic face recognition.
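    As a rough illustration of score-to-likelihood-ratio calibration (the score-only baseline that quality-aware variants build on), the sketch below fits a logistic-regression calibrator; function and variable names are assumptions, not the thesis code.

        # Minimal sketch (assumption): calibrating comparison scores into
        # log-likelihood ratios with logistic regression. Quality-aware calibration
        # would add quality measures as extra inputs; only the plain baseline is shown.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def fit_llr_calibrator(scores, labels):
            """scores: 1-D array of raw comparison scores; labels: 1 = same source, 0 = different."""
            clf = LogisticRegression()
            clf.fit(scores.reshape(-1, 1), labels)
            prior_log_odds = np.log(labels.mean() / (1.0 - labels.mean()))
            return clf, prior_log_odds

        def log_likelihood_ratios(clf, prior_log_odds, scores):
            # The classifier's posterior log-odds equal log LR + prior log-odds,
            # so subtracting the training prior recovers the log likelihood ratio.
            p = clf.predict_proba(scores.reshape(-1, 1))[:, 1]
            return np.log(p / (1.0 - p)) - prior_log_odds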

    Cybersecurity: Past, Present and Future

    Full text link
    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has given rise to new technologies and environments such as cloud computing, smart devices, the Internet of Things (IoT), and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human-AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study with considerable potential to improve the role of AI in cybersecurity. Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice

    Get PDF
    Artificial intelligence (“AI”) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create AI models too complex for people to understand, or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI. A particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as DNA mixture interpretation, facial recognition, recidivism risk assessments, and predictive policing. Despite constitutional criminal procedure protections, judges have often embraced claims that AI should remain undisclosed in court. Both champions and critics of AI, however, mistakenly assume that we inevitably face a trade-off: black box AI may be incomprehensible, but it performs more accurately. But that is not so. In this Article, we question the basis for this assumption, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error prone, and it may reflect preexisting racial and socioeconomic disparities. Unless AI is interpretable, decisionmakers like lawyers and judges who must use it will not be able to detect those underlying errors, much less understand what the AI recommendation means. Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI a black box, and given the substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should—the presumption should be in favor of glass box AI, absent strong evidence to the contrary. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.
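    To make the “glass box” notion concrete, the toy sketch below fits an inherently interpretable model whose entire decision rule can be read off as a handful of coefficients; the feature names and data are hypothetical and are not drawn from the Article.

        # Illustrative sketch only: a "glass box" model whose decision rule is fully
        # readable. Feature names and data are made up for illustration.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        feature_names = ["prior_arrests", "age_at_first_offense", "current_charge_severity"]
        X = np.array([[2, 19, 3], [0, 35, 1], [5, 17, 2], [1, 28, 1]])  # toy data
        y = np.array([1, 0, 1, 0])

        model = LogisticRegression().fit(X, y)
        # The entire decision rule is these few numbers, which a lawyer or judge
        # can inspect and challenge, unlike the internals of a black-box network.
        for name, w in zip(feature_names, model.coef_[0]):
            print(f"{name}: weight {w:+.2f}")
        print(f"intercept: {model.intercept_[0]:+.2f}")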

    Machines Like Me: A Proposal on the Admissibility of Artificially Intelligent Expert Testimony

    Get PDF
    With the rapidly expanding sophistication of artificial intelligence systems, their reliability, and their cost-effectiveness for solving problems, the current trend of admitting testimony based on artificially intelligent (AI) systems is only likely to grow. In that context, it is imperative for us to ask what rules of evidence judges today should use relating to such evidence. To answer that question, we provide an in-depth review of expert systems, machine learning systems, and neural networks. Based on that analysis, we contend that evidence from only certain types of AI systems meets the requirements for admissibility, while evidence from other systems does not. The line between admissible and inadmissible AI evidence is a function of the opaqueness of the underlying computational methodology of the AI system and the court’s ability to assess that methodology. The admission of AI evidence also requires us to navigate pitfalls, including the difficulty of explaining AI systems’ methodology and issues as to the right to confront witnesses. Based on our analysis, we offer several policy proposals that would address weaknesses or lack of clarity in the current system. First, in light of the long-standing concern that jurors would allow expertise to overcome their own assessment of the evidence and blindly agree with the “infallible” result of advanced-computing AI, we propose that jury instruction commissions, judicial panels, circuits, or other parties who draft instructions consider adopting a cautionary instruction for AI-based evidence. Such an instruction should remind jurors that the AI-based evidence is solely one part of the analysis, that the opinions so generated are only as good as the underlying analytical methodology, and that, ultimately, the decision to accept or reject the evidence, in whole or in part, should remain with the jury alone. Second, as we have concluded that the admission of AI-based evidence depends largely on the computational methodology underlying the analysis, we propose that, for AI evidence to be admissible, the underlying methodology must be transparent, because the judicial assessment of AI technology relies on the ability to understand how it functions.

    Assessing AI output in legal decision-making with nearest neighbors

    Get PDF
    Artificial intelligence (“AI”) systems are widely used to assist or automate decision-making. Although there are general metrics for the performance of AI systems, there is, as yet, no well-established gauge for assessing the quality of particular AI recommendations or decisions. This presents a serious problem in the emerging use of AI in legal applications, because the legal system aims for good performance not only in the aggregate but also in individual cases. This Article presents the concept of using nearest neighbors to assess individual AI output. This nearest neighbor analysis has the benefit of being easy for judges, lawyers, and juries to understand and apply. In addition, it is fundamentally compatible with existing AI methodologies. This Article explains how the concept could be applied to probe AI output in a number of use cases, including civil discovery, risk prediction, and forensic comparison, while also presenting its limitations.
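    A minimal sketch of the general idea (not the Article's exact procedure): retrieve the k nearest training cases to a new input so a reviewer can compare the system's output with how similar cases were decided; names and library choices are assumptions.

        # Minimal sketch (assumption): nearest-neighbor retrieval to contextualize a
        # single AI output against the most similar known cases.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def nearest_cases(train_features, train_labels, query, k=5):
            nn = NearestNeighbors(n_neighbors=k).fit(train_features)
            dist, idx = nn.kneighbors(query.reshape(1, -1))
            # Return the neighbors' known outcomes and distances for side-by-side review.
            return [(train_labels[i], d) for i, d in zip(idx[0], dist[0])]

    A prediction that disagrees with most of its nearest neighbors, or whose neighbors are all far away, signals that the individual output deserves closer scrutiny by the decision-maker.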

    An Attention-Guided Framework for Explainable Biometric Presentation Attack Detection

    Get PDF
    Despite the high performance achieved using deep learning techniques in biometric systems, the inability to rationalise the decisions reached by such approaches is a significant drawback for the usability and security requirements of many applications. For Facial Biometric Presentation Attack Detection (PAD), deep learning approaches can provide good classification results but cannot answer questions such as “Why did the system make this decision?” To overcome this limitation, an explainable deep neural architecture for Facial Biometric Presentation Attack Detection is introduced in this paper. Both visual and verbal explanations are produced, using the saliency maps from a Grad-CAM approach and the gradient from a Long Short-Term Memory (LSTM) network with a modified gate function. These explanations are also used in the proposed framework as additional information to further improve classification performance. The proposed framework utilises both spatial and temporal information to help the model focus on anomalous visual characteristics that indicate spoofing attacks. The performance of the proposed approach is evaluated on the CASIA-FA, Replay Attack, MSU-MFSD, and HKBU MARs datasets, and the results indicate the effectiveness of the proposed method for improving performance and producing usable explanations.
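    For the visual-explanation component, the paper relies on Grad-CAM saliency maps; the sketch below is a generic Grad-CAM implementation for a PyTorch CNN, not the authors' full framework (the LSTM-based verbal explanations are omitted), and model/target_layer are assumed inputs.

        # Generic Grad-CAM sketch (visual-explanation component only; assumptions noted above).
        import torch
        import torch.nn.functional as F

        def grad_cam(model, target_layer, image, class_idx):
            acts, grads = [], []
            h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
            h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
            logits = model(image)                      # image: (1, 3, H, W)
            logits[0, class_idx].backward()            # e.g. class_idx = attack class
            h1.remove(); h2.remove()
            weights = grads[0].mean(dim=(2, 3), keepdim=True)     # global-average-pooled gradients
            cam = F.relu((weights * acts[0]).sum(dim=1, keepdim=True))
            cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
            return (cam / (cam.max() + 1e-8)).squeeze()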