9 research outputs found

    The quest for interpretable and responsible artificial intelligence

    Artificial Intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in computational biology, finance, law, and robotics. However, this highly positive impact is coupled with significant challenges: how do we understand the decisions suggested by these systems so that we can trust them? How can they be held accountable for those decisions? In this short survey, we cover some of the motivations and trends in the area that attempt to address such questions. Comment: This is a slightly edited version of an article to appear in The Biochemist, Portland Press, October 201

    How Artificial Intelligence Can Protect Financial Institutions From Malware Attacks

    The objective of this study is to examine the potential of artificial intelligence (AI) to enhance the security posture of financial institutions against malware attacks. The study identifies the current trends of malware attacks in the banking sector, assesses the various forms of malware and their impact on financial institutions, and analyzes the relevant security features of AI. The findings suggest that financial institutions must implement robust cybersecurity measures to protect against various forms of malware attacks, including ransomware attacks, phishing attacks, mobile malware attacks, APTs, and insider threats. The study recommends that financial institutions invest in AI-based security systems to improve security features and automate security tasks. To ensure the reliability and security of AI systems, it is essential to incorporate relevant security features such as explainability, privacy, anomaly detection, intrusion detection, and data validation. The study highlights the importance of incorporating explainable AI (XAI) to enable users to understand the reasoning behind the AI's decisions and actions, identify potential security threats and vulnerabilities in the AI system, and ensure that the system operates ethically and transparently. It also recommends incorporating privacy-enhancing technologies (PETs) into AI systems to protect user data from unauthorized access and use. Finally, the study recommends robust security measures such as anomaly detection and intrusion detection to protect against adversarial attacks, and data validation and integrity checks to protect against data poisoning attacks. Overall, this study provides insights for decision-makers in implementing effective cybersecurity strategies to protect financial institutions from malware attacks.

    Who needs XAI in the Energy Sector? A Framework to Upgrade Black Box Explainability

    Artificial Intelligence (AI)-based methods in the energy sector challenge companies, organizations, and societies. Organizational issues include traceability, certifiability, explainability, responsibility, and efficiency. Societal challenges include ethical norms, bias, discrimination, privacy, and information security. Explainable Artificial Intelligence (XAI) can address these issues in various application areas of the energy sector, e.g., power generation forecasting, load management, and network security operations. We derive Key Topics (KTs) and Design Requirements (DRs) and develop Design Principles (DPs) for efficient XAI applications through Design Science Research (DSR). Using text mining and topic modeling, we analyze 179 scientific articles to identify our 8 KTs for XAI implementation. Based on the KTs, we derive 15 DRs and develop 18 DPs. We then discuss and evaluate our results and findings through expert surveys. We develop a Three-Forces Model as a framework for implementing efficient XAI solutions, and provide recommendations and a further research agenda.

    Contributions to energy informatics, data protection, AI-driven cybersecurity, and explainable AI

    This cumulative dissertation includes eleven papers dealing with energy informatics, privacy, artificial intelligence-enabled cybersecurity, explainable artificial intelligence, ethical artificial intelligence, and decision support. In addressing real-world challenges, the dissertation provides practical guidance, reduces complexity, presents insights from empirical data, and supports decision-making. Interdisciplinary research methods include morphological analysis, taxonomies, decision trees, and literature reviews. Practitioners, including energy utilities, data-intensive artificial intelligence service providers, cybersecurity consultants, managers, policymakers, regulators, decision-makers, and end users, can benefit from the resulting design artifacts, such as design principles, critical success factors, taxonomies, archetypes, and decision trees. These resources enable them to make informed and efficient decisions.

    Assurance of Machine Learning-Based Aerospace Systems: Towards an Overarching Properties-Driven Approach

    692M15-22-T-00012. Traditional process-based approaches to certifying aerospace digital systems are not sufficient to address the challenges associated with using Artificial Intelligence (AI) or Machine Learning (ML) techniques. To address this, agencies are evaluating an alternative Means of Compliance (MoC) called the Overarching Properties (OP). The goals of this research are to develop recommendations and assurance criteria and to explore safety risk mitigation approaches for such AI/ML-based software systems. This document outlines a novel foundation for the application of OPs to support the assurance and certification of complex aerospace digital systems consisting of AI/ML-based components. To this end, we first select the use case of a Recorder Independent Power Supply (RIPS) system. We then perform a Functional Hazard Assessment (FHA) to identify a set of hazards associated with the RIPS and design a set of appropriate requirements to mitigate those hazards.

    Artificial Intelligence as Evidence

    This article explores issues that govern the admissibility of Artificial Intelligence (“AI”) applications in civil and criminal cases, from the perspective of a federal trial judge and two computer scientists, one of whom also is an experienced attorney. It provides a detailed yet intelligible discussion of what AI is and how it works, a history of its development, and a description of the wide variety of functions that it is designed to accomplish, stressing that AI applications are ubiquitous in both the private and public sectors. Applications today include health care, education, employment-related decision-making, finance, law enforcement, and the legal profession. The article underscores the importance of determining the validity of an AI application (i.e., how accurately the AI measures, classifies, or predicts what it is designed to measure, classify, or predict), as well as its reliability (i.e., the consistency with which the AI produces accurate results when applied to the same or substantially similar circumstances), in deciding whether it should be admitted into evidence in civil and criminal cases. The article further discusses factors that can affect the validity and reliability of AI evidence, including bias of various types, “function creep,” lack of transparency and explainability, and the sufficiency of the objective testing of AI applications before they are released for public use. The article next provides an in-depth discussion of the evidentiary principles that govern whether AI evidence should be admitted in court cases, a topic which, at present, is not the subject of comprehensive analysis in decisional law. The focus of this discussion is on providing a step-by-step analysis of the most important issues, and the factors that affect decisions on whether to admit AI evidence. Finally, the article concludes with a discussion of practical suggestions intended to assist lawyers and judges as they are called upon to introduce, object to, or decide on whether to admit AI evidence.

    Digital Forensics AI: on Practicality, Optimality, and Interpretability of Digital Evidence Mining Techniques

    Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have become more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways. Their belief is that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the concept of applying AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we proposed conceptualizing the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics. The objective is to highlight the instruments that facilitate the best evidential outcomes and presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we strengthened this notion by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
