6 research outputs found

    NOTION OF EXPLAINABLE ARTIFICIAL INTELLIGENCE - AN EMPIRICAL INVESTIGATION FROM A USER'S PERSPECTIVE

    The growing attention on artificial intelligence-based decision-making has led to research interest in the explainability and interpretability of machine learning models, algorithmic transparency, and comprehensibility. This renewed attention to explainable AI (XAI) calls for investigating end user-centric explainable AI, given the widespread adoption of AI-based systems among everyday users. This paper therefore investigates user-centric explainable AI in the context of recommendation systems. We conducted focus group interviews to collect qualitative data on the recommendation system. We asked participants about end users' comprehension of a recommended item, its probable explanation, and their opinion of making a recommendation explainable. Our findings reveal that end users want a non-technical, tailor-made explanation with on-demand supplementary information. Moreover, we observed that users would like explanations about how their personal data is used, detailed user feedback, and authentic, reliable explanations. Finally, we propose a synthesized framework that includes end users in the XAI development process.

    Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research

    The final search query for the Systematic Literature Review (SLR) was conducted on 15 July 2022. Initially, we extracted 1,707 journal and conference articles from the Scopus and Web of Science databases. After applying inclusion and exclusion criteria, 58 articles were selected for the SLR. The findings reveal four dimensions that shape an AI explanation: format (how the explanation is represented), completeness (the explanation should contain all required information, including supplementary information), accuracy (information about the accuracy of the explanation), and currency (the explanation should contain recent information). Moreover, beyond the automatically presented explanation, users can request additional information if needed. We also identified five dimensions of XAI effects: trust, transparency, understandability, usability, and fairness. In addition, we drew on the selected articles to problematize future research agendas, formulated as research questions with possible research paths. Consequently, we developed a comprehensive framework of XAI and its possible effects on user behavior.

    Towards a GDPR-Compliant Blockchain-Based COVID Vaccination Passport

    The COVID-19 pandemic has shaken the world and disrupted both work and personal life. Besides the loss of human lives and the suffering it has caused, the pandemic has hit many sectors hard economically, including the travel industry. Special arrangements, such as COVID tests before departure and on arrival and voluntary quarantine, were enforced to limit the risk of transmission. However, the hope of returning to a normal (pre-COVID) routine rests on the success of the COVID vaccination drives administered by different countries. To reopen tourism and other necessary travel, a universally accessible proof of COVID vaccination is needed that allows travelers to cross borders without hindrance. This paper presents an architectural framework for a GDPR-compliant blockchain-based COVID vaccination passport (VacciFi), taking into account relevant developments, especially in the European Union region.