
    Scenario-based requirements elicitation for user-centric explainable AI

    Explainable Artificial Intelligence (XAI) develops technical explanation methods that make interpretable to human stakeholders why Artificial Intelligence (AI) and machine learning (ML) models produce certain predictions. However, these stakeholders' trust in AI models and explanations remains an issue, especially for domain experts, who are knowledgeable about their domain but not about the inner workings of AI. Social and user-centric XAI research argues that it is essential to understand stakeholders' requirements in order to provide explanations tailored to their needs and to enhance their trust in working with AI models. Scenario-based design and requirements elicitation can help bridge the gap between the social and operational aspects of a stakeholder early, before an information system is adopted, identifying their real problems and practices and generating user requirements. Nevertheless, the adoption of scenarios in XAI remains rarely explored, especially in the domain of fraud detection, to support experts who are about to work with AI models. We demonstrate the use of scenario-based requirements elicitation for XAI in a fraud detection context and develop scenarios derived with experts in banking fraud. We discuss how those scenarios can be adopted to identify user or expert requirements for appropriate explanations in their daily operations and in decisions on reviewing fraudulent cases in banking. The generalizability of the scenarios for further adoption is validated through a systematic literature review in the domains of XAI and visual analytics for fraud detection.

    Towards Refined Classifications Driven by SHAP Explanations

    Machine Learning (ML) models are inherently approximate; as a result, the predictions of an ML model can be wrong. In applications where errors can jeopardize a company's reputation, human experts often have to manually check the alarms raised by ML models, as wrong or delayed decisions can have a significant business impact. These experts often use interpretable ML tools to verify predictions. However, post-prediction verification is also costly. In this paper, we hypothesize that the outputs of interpretable ML tools, such as SHAP explanations, can be exploited by machine learning techniques to improve classifier performance, thereby reducing the cost of post-prediction analysis. To confirm our intuition, we conduct several experiments in which we use SHAP explanations directly as new features. In particular, on nine datasets, we first compare the performance of these "SHAP features" against traditional "base features" on binary classification tasks. Then, we add a second-step classifier relying on SHAP features, with the goal of reducing the false-positive and false-negative results of typical classifiers. We show that SHAP explanations used as SHAP features can help to improve classification performance, especially for false-negative reduction.
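
    As a minimal sketch of the core idea, the snippet below trains a base classifier, computes per-instance SHAP values, and feeds them as extra features to a second-step classifier. The synthetic dataset, the choice of gradient boosting, and the simple feature concatenation are illustrative assumptions, not the paper's exact experimental setup.

    # Sketch: reusing SHAP explanations as "SHAP features" for a
    # second-step classifier. Illustrative only; not the paper's setup.
    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Step 1: train a base classifier on the original ("base") features.
    base = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    # Binary gradient boosting has a single margin output, so shap_values
    # returns one (n_samples, n_features) array of attributions.
    explainer = shap.TreeExplainer(base)
    shap_tr = explainer.shap_values(X_tr)
    shap_te = explainer.shap_values(X_te)

    # Step 2: a second classifier trained on base features + SHAP features.
    X_tr2 = np.hstack([X_tr, shap_tr])
    X_te2 = np.hstack([X_te, shap_te])
    second = GradientBoostingClassifier(random_state=0).fit(X_tr2, y_tr)

    print("base features accuracy:", base.score(X_te, y_te))
    print("base + SHAP accuracy:  ", second.score(X_te2, y_te))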

    Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research

    This survey presents a comprehensive review of the current literature on Explainable Artificial Intelligence (XAI) methods for cyber security applications. Owing to the rapid development of Internet-connected systems and Artificial Intelligence in recent years, Artificial Intelligence, including Machine Learning and Deep Learning, has been widely applied in cyber security, including intrusion detection, malware detection, and spam filtering. However, although AI-based approaches for detecting and defending against cyber attacks and threats are more advanced and efficient than conventional signature-based and rule-based strategies, most Machine Learning-based and Deep Learning-based techniques are deployed in a "black-box" manner, meaning that security experts and customers cannot explain how such procedures reach particular conclusions. This lack of transparency and interpretability in existing Artificial Intelligence techniques decreases human users' confidence in the models used to defend against cyber attacks, especially now that cyber attacks are becoming increasingly diverse and complicated. It is therefore essential to apply XAI when building cyber security models, creating models that are more explainable while maintaining high accuracy and allowing human users to comprehend, trust, and manage the next generation of cyber defense mechanisms. Although there are papers reviewing Artificial Intelligence applications in cyber security, and a vast literature on applying XAI in fields such as healthcare, financial services, and criminal justice, there are, surprisingly, no survey articles that concentrate on XAI applications in cyber security. The motivation behind this survey is therefore to bridge that research gap by presenting a detailed and up-to-date review of XAI approaches applicable to issues in the cyber security field. Our work is the first to propose a clear roadmap for navigating the XAI literature in the context of cyber security applications.

    Decision Support Systems

    Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. A DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate human cognitive decision-making functions using artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, and logical reasoning) in order to perform decision support functions. The applications of DSS span many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting, and business management to internet search strategy. By combining knowledge bases with inference rules, DSS can offer suggestions to end users that improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses on decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields, and will also be of value to established professionals as a text for self-study or reference.
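
    As a toy illustration of the knowledge-base-plus-inference-rules pattern described above, the sketch below forward-chains a handful of rules to a fixed point and derives suggestions from initial facts. The facts and rules are invented for the example and are not taken from the book.

    # Toy forward-chaining inference over a small knowledge base,
    # in the spirit of a rule-based DSS. Facts and rules are invented.
    facts = {"fever", "cough"}

    # Each rule: (set of premises, conclusion to derive).
    rules = [
        ({"fever", "cough"}, "suspect_flu"),
        ({"suspect_flu"}, "recommend_rest"),
        ({"suspect_flu", "shortness_of_breath"}, "recommend_clinic_visit"),
    ]

    # Fire rules repeatedly until no new fact can be derived.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))
    # -> ['cough', 'fever', 'recommend_rest', 'suspect_flu']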

    Explainable sentiment analysis application for social media crisis management in retail

    Sentiment analysis techniques enable the automatic extraction of sentiment from social media data, including popular platforms such as Twitter. For retailers and marketing analysts, such methods can support the understanding of customers' attitudes towards brands, especially when handling crises that cause behavioural changes in customers, such as the COVID-19 pandemic. However, with the increasing adoption of black-box machine-learning-based techniques, transparency becomes a need for those stakeholders to understand why a given sentiment is predicted, which is rarely explored for retailers facing social media crises. This study develops an Explainable Sentiment Analysis (XSA) application for Twitter data and proposes research propositions focused on evaluating the application in a hypothetical crisis-management scenario. In particular, we evaluate, through discussions and a simulated user experiment, the XSA application's support for understanding customers' needs, as well as whether marketing analysts would trust such an application in their decision-making processes. Results illustrate that the XSA application can be effective in surfacing the most important words underlying the sentiment of individual tweets, and show its potential to foster analysts' confidence in such support.
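
    A minimal sketch of how such word-level explanations can be produced: the snippet below explains a single prediction of a small scikit-learn sentiment pipeline with a LIME text explainer. The tiny corpus and the choice of LIME are assumptions for illustration; the study's own XSA pipeline may use different components.

    # Sketch: word-level explanation of a sentiment prediction with LIME.
    # The toy corpus and LIME itself are illustrative assumptions.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "love this store, great service",
        "fast delivery and friendly staff",
        "terrible support, never shopping here again",
        "my order arrived late and damaged",
    ]
    train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

    pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipeline.fit(train_texts, train_labels)

    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    tweet = "support was terrible and my order was damaged"
    exp = explainer.explain_instance(tweet, pipeline.predict_proba,
                                     num_features=4)

    # Most influential words and their weights toward the prediction.
    for word, weight in exp.as_list():
        print(f"{word}: {weight:+.3f}")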

    A Rule of Persons, Not Machines: The Limits of Legal Automation


    eXplainable data processing

    Seminar held at the U & P U Patel Department of Computer Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science And Technology (CHARUSAT), Changa-388421, Gujarat, India, 2021. While Deep Learning has created many new opportunities, it has unfortunately also become a means for achieving ill-intentioned goals. Fake news, disinformation campaigns, and manipulated images and videos have plagued the internet, with serious consequences for our society. The myriad of information available online means that it may be difficult to distinguish between true and fake news, leading many users to unknowingly share fake news and contributing to the spread of misinformation. The use of Deep Learning to create fake images and videos has become known as deepfake. This means that there are ever more effective and realistic forms of deception on the internet, making it more difficult for internet users to distinguish reality from fiction.

    Interactive visualization of event logs for cybersecurity

    Hidden cyber threats revealed with the new visualization software EventPad.

    Government Augmented Intelligence - The Use of AI to Improve Citizen Relationship Management

    Dissertation presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Artificial Intelligence (AI) is increasingly influencing everyday life and has become a key technology for government to improve citizen relationship management (CitRM): to establish, strengthen, and boost the connection and interaction between public administration and citizens, in order to improve public service delivery and achieve effectiveness and efficiency. However, constraints such as a lack of awareness and knowledge of the potential of AI can explain the absence of deeper adoption of this technological breakthrough. Thus, the aim of this study is to provide a better understanding of the abilities and possible uses of AI, more specifically how it can contribute to government augmented intelligence to improve CitRM. To achieve this purpose, the dissertation follows a Design Science Research methodology. A theoretical framework on Artificial Intelligence and government is developed, and a systematic literature review is carried out to identify relevant applications of AI in government. On the basis of the knowledge acquired, a framework is proposed with a succinct set of recommendations providing concrete information on possible AI-based applications and solutions to be adopted in Portuguese local government, namely at the municipal level. Interviews with local government decision makers and experts in local government's relationship with citizens confirm the usefulness and relevance of the framework and its ability to provide a clear view of how AI can be used to improve CitRM. Despite a few concerns about challenges, interviewees recognised opportunities such as optimising processes, improving and modernising public services, promoting citizen engagement, and improving the relationship between citizens and municipalities. In this way, while helping public administration deal with the complexity surrounding Artificial Intelligence, the framework is expected to promote and underpin the adoption of AI solutions in government. Moreover, by adding knowledge on the use of AI in government, it is expected to stimulate further scientific research and progress on the subject.