
    NOTION OF EXPLAINABLE ARTIFICIAL INTELLIGENCE - AN EMPIRICAL INVESTIGATION FROM A USER'S PERSPECTIVE

    The growing attention on artificial intelligence-based decision-making has led to research interest in the explainability and interpretability of machine learning models, algorithmic transparency, and comprehensibility. This renewed attention on XAI advocates the need to investigate end user-centric explainable AI, given the widespread adoption of AI-based systems at the grassroots level. Therefore, this paper investigates user-centric explainable AI in a recommendation systems context. We conducted focus group interviews to collect qualitative data on the recommendation system. We asked participants about end users' comprehension of a recommended item, its probable explanation, and their opinion of making a recommendation explainable. Our findings reveal that end users want non-technical, tailor-made explanations with on-demand supplementary information. Moreover, we observed that users would like an explanation of personal data usage, detailed user feedback, and authentic, reliable explanations. Finally, we propose a synthesized framework that includes end users in the XAI development process.

    TO EXPLAIN OR NOT TO EXPLAIN: AN EMPIRICAL INVESTIGATION OF AI-BASED RECOMMENDATIONS ON SOCIAL MEDIA PLATFORMS

    AI-based social media recommendations have great potential to improve the user experience. However, these recommendations often do not match user interests and create an unpleasant experience for users. Moreover, because the recommendation system is a black box, it raises comprehensibility and transparency issues. This paper investigates social media recommendations from an end-user perspective. For the investigation, we used the popular social media platform Facebook and recruited regular users to conduct a qualitative analysis. We asked participants about social media content suggestions, their comprehensibility, and their explainability. Our analysis shows that users mostly require explanations when they encounter unfamiliar content and when they want assurance about their online data security. Furthermore, users require concise, non-technical explanations along with the facility of controlled information flow. In addition, we observed that explanations affect the user's perception of transparency, trust, and understandability. Finally, we outline some design implications and present a synthesized framework based on our data analysis.

    Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research

    The final search query for the Systematic Literature Review (SLR) was conducted on 15th July 2022. Initially, we extracted 1707 journal and conference articles from the Scopus and Web of Science databases. Inclusion and exclusion criteria were then applied, and 58 articles were selected for the SLR. The findings show four dimensions that shape the AI explanation: format (the representation format of the explanation), completeness (the explanation should contain all required information, including supplementary information), accuracy (information regarding the accuracy of the explanation), and currency (the explanation should contain recent information). Moreover, along with the automatic presentation of the explanation, users can request additional information if needed. We also found five dimensions of XAI effects: trust, transparency, understandability, usability, and fairness. In addition, we investigated current knowledge from the selected articles to problematize future research agendas as research questions along with possible research paths. Consequently, a comprehensive framework of XAI and its possible effects on user behavior has been developed.

    Towards a GDPR-Compliant Blockchain-Based COVID Vaccination Passport

    The COVID-19 pandemic has shaken the world and limited work and personal life activities. Besides the loss of human lives and the agony faced by humankind, the pandemic has badly hit different sectors economically, including the travel industry. Special arrangements, including COVID tests before departure and on arrival, and voluntary quarantine, were enforced to limit the risk of transmission. However, the hope of returning to a normal (pre-COVID) routine relies on the success of the current COVID vaccination drives administered by different countries. To open up for tourism and other necessary travel, a need has been recognized for a universally accessible proof of COVID vaccination that allows travelers to cross borders without hindrance. This paper presents an architectural framework for a GDPR-compliant blockchain-based COVID vaccination passport (VacciFi), whilst considering the relevant developments, especially in the European Union region.

    Deep Learning Techniques for Quantification of Tumour Necrosis in Post-neoadjuvant Chemotherapy Osteosarcoma Resection Specimens for Effective Treatment Planning

    Osteosarcoma is a high-grade malignant bone tumour for which neoadjuvant chemotherapy is a vital component of the treatment plan. Chemotherapy brings about the death of tumour tissue, and the rate of tissue death is an essential factor in deciding on further treatment. Necrosis quantification is currently done manually by visualizing tissue sections through the microscope, a crude method that can introduce significant inter-observer bias. The proposed system is an AI-based therapeutic decision-making tool that automatically calculates the quantity of such dead tissue present in a tissue specimen. We employ U-Net++ and DeepLabv3+, pre-trained deep learning algorithms, for the segmentation task. ResNet50 and ResNet101 are used as the encoders of U-Net++ and DeepLabv3+, respectively. We also synthesized a dataset of 555 patches from 37 images captured and manually annotated by experienced pathologists. Dice loss and Intersection over Union (IoU) are used as performance metrics. The training and testing IoU of U-Net++ are 91.78% and 82.64%, and its losses are 4.4% and 17.77%, respectively. The IoU and loss of DeepLabv3+ are 91.09%, 81.50%, 4.77%, and 17.8%, respectively. The results show that both models perform almost identically. With the help of this tool, necrosis segmentation can be done more accurately while requiring less work and time. The percentage of segmented regions can be used as a decision-making factor in further treatment planning.
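    The abstract names concrete architectures and metrics; the minimal Python sketch below shows how such a two-model segmentation setup could be assembled, assuming the segmentation_models_pytorch library. The patch size, binarization threshold, and example tensors are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of the U-Net++ / DeepLabv3+ setup described above
    # (assumes segmentation_models_pytorch; sizes and thresholds are illustrative).
    import torch
    import segmentation_models_pytorch as smp

    # U-Net++ with a ResNet50 encoder and DeepLabv3+ with a ResNet101 encoder,
    # each predicting a single-channel necrosis-vs-background mask.
    unetpp = smp.UnetPlusPlus(encoder_name="resnet50", encoder_weights="imagenet",
                              in_channels=3, classes=1)
    deeplab = smp.DeepLabV3Plus(encoder_name="resnet101", encoder_weights="imagenet",
                                in_channels=3, classes=1)

    dice_loss = smp.losses.DiceLoss(mode="binary")  # Dice loss, as named in the abstract

    def iou_score(logits, target, eps=1e-7):
        # Intersection over Union for a binary mask prediction.
        pred = (torch.sigmoid(logits) > 0.5).float()
        inter = (pred * target).sum()
        union = pred.sum() + target.sum() - inter
        return (inter + eps) / (union + eps)

    # Illustrative forward pass on one hypothetical 512x512 RGB patch
    # with a binary (pathologist-annotated) necrosis mask.
    x = torch.randn(1, 3, 512, 512)
    y = torch.randint(0, 2, (1, 1, 512, 512)).float()
    logits = unetpp(x)
    print(dice_loss(logits, y).item(), iou_score(logits, y).item())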

    A remote and cost-optimized voting system using blockchain and smart contract

    Traditional voting procedures are non-remote, time-consuming, and less secure. While voters believe their vote was submitted successfully, the authority does not provide evidence that the vote was counted and tallied. In most cases, the anonymity of a voter is also not assured, as the voter's details are included in the ballot papers. Many voters consider this voting system untrustworthy and manipulative, which discourages them from voting; consequently, an election loses a significant number of participants. Although the introduction of electronic voting systems (EVS) has increased efficiency, it has raised concerns over security, legitimacy, and transparency. To mitigate these problems, blockchain technology and smart contract facilities, combined with artificial intelligence (AI), have been leveraged to propose a remote voting system that makes the overall voting procedure transparent, semi-decentralized, and secure. In addition, a system that helps boost turnout in an election through an incentivization policy for voters has also been developed. Through the proposed virtual campaigning feature, the authority can generate a decent amount of revenue, which reduces the overall cost of an election. To reduce the transaction costs associated with smart contracts, this system implements a hybrid storage scheme in which only a few cardinal data items are stored on the blockchain network.
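    The hybrid storage idea can be illustrated with a short, self-contained Python sketch: only a compact digest of each ballot is anchored in a (simulated) on-chain ledger, while the full ballot record stays off-chain, and a voter can later verify that their vote was recorded. All names and data structures here are hypothetical illustrations; the paper's actual smart-contract design is not reproduced.

    # Hybrid on-chain/off-chain storage sketch (illustrative only).
    import hashlib
    import json
    import time

    off_chain_store = {}   # full ballot records, e.g. a conventional database
    on_chain_ledger = []   # compact digests only, mimicking on-chain storage

    def cast_vote(voter_token, candidate_id):
        # Record a ballot off-chain and anchor its hash in the simulated ledger.
        ballot = {"voter_token": voter_token,   # pseudonymous token, not the voter's identity
                  "candidate_id": candidate_id,
                  "timestamp": time.time()}
        digest = hashlib.sha256(json.dumps(ballot, sort_keys=True).encode()).hexdigest()
        off_chain_store[digest] = ballot        # bulky data stays off-chain
        on_chain_ledger.append(digest)          # only the hash is stored "on-chain"
        return digest                           # returned to the voter as a receipt

    def verify_vote(digest):
        # Voter-side check: the ballot exists off-chain and its hash is anchored.
        ballot = off_chain_store.get(digest)
        if ballot is None or digest not in on_chain_ledger:
            return False
        recomputed = hashlib.sha256(json.dumps(ballot, sort_keys=True).encode()).hexdigest()
        return recomputed == digest

    receipt = cast_vote("anon-7f3a", "candidate-2")
    print(verify_vote(receipt))   # True: the vote is verifiably recorded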