Trust and Explanation in Artificial Intelligence Systems: A Healthcare Application in Disease Detection and Preliminary Diagnosis
The way in which Artificial Intelligence (AI) systems reach conclusions is not always transparent to end-users, whether experts or non-experts. This raises serious concerns about the trust that people would place in such systems if they were adopted in real-life contexts. These concerns grow even stronger when individuals’ well-being is at stake, as in the case of AI technologies applied to healthcare. An emerging research area called Explainable AI (XAI) addresses this problem by providing a layer of explanation that helps end-users make sense of AI results. The overall assumption behind XAI research is that explicability can improve end-users’ trust in AI systems. Trusting AI applications may have a strong positive economic and societal impact, especially because AI increasingly demonstrates improved performance in reducing the cost of carrying out highly complex human tasks. However, the issues of over-trust and under-trust also need to be addressed. Non-expert users have been shown to often over-trust or under-trust AI systems, even when they have very little knowledge of the technical competence of the system. Over-trust can have dangerous societal consequences when trust is placed in systems of low or unclear technical competence. Meanwhile, under-trust can hinder the adoption of AI systems in our everyday lives.
This doctoral research studies the extent to which explanations and interactions can help non-expert users properly calibrate trust in AI systems, specifically AI for disease detection and preliminary diagnosis. This means reducing trust when users tend to over-trust an unreliable system and increasing trust when the system can be shown to work well. Four user studies were conducted using data collection methods that included literature review, semi-structured interviews, online surveys, and focus groups, following both qualitative and quantitative research approaches and involving medical professionals, AI experts, and non-experts (considered the primary users of the AI system). Through these four user studies, new key features of meaningful explanation were defined, concrete guidelines for designing meaningful explanations were proposed, a new tool for the quantitative measurement of trust between humans and AI was developed, and a series of reflections on the complex relationship between explanation and trust was presented.
This doctoral work makes three fundamental contributions to knowledge that can shape future research in Explainable AI in healthcare. First, it informs how to construct explanations that non-expert users can make sense of (meaningful explanations). Second, it contextualises current XAI research in healthcare, informing how explanations should be designed for AI-assisted disease detection and preliminary diagnosis systems (Explanation Design Guidelines). Third, it proposes the first validated survey instrument to measure non-expert users’ trust in AI healthcare applications. This user-friendly survey method can help future XAI researchers compare results and potentially accelerate the development of more robust XAI research. Finally, this doctoral research provides preliminary insights into the importance of the interaction modality of explanations in influencing trust. Audio-based conversational interaction has been identified as a more promising way to provide health diagnosis explanations to patients than more static, hypertext-based interactions; audio-based conversational XAI interfaces improve the ability of laypersons to appropriately calibrate trust to a greater extent than less interactive interfaces. These preliminary findings can inform and promote future research on XAI by shifting the focus of current research from explanation content design to explanation delivery and interaction design.
Aided diagnosis of structural pathologies with an expert system
Sustainability and safety are social demands for long-life buildings. Suitable inspection and maintenance of structural elements are needed to keep buildings safely in service. By analogy between structural engineering and medicine, any malfunction that causes structural damage can be called a pathology. Even the easiest evaluation tasks require expensive training periods, which may be shortened with a suitable tool. This work presents an expert system (called Doctor House, or DH) for diagnosing pathologies of structural elements in buildings. DH differs from other expert systems in that it handles uncertainty in a far simpler but still useful way, and it is capable of providing aid during the initial survey in situ, when damage should be detected at a glance. DH is a powerful tool that represents complex knowledge gathered from the literature and from experts. Knowledge codification and uncertainty treatment are the main achievements presented. Finally, DH was tested and validated during real surveys.
The rationale of development practices for expert systems : an empirical investigation
Practices of expert system development have not been widely investigated. In this paper I describe the results of case studies on the in-house deployment of small expert systems in two companies, along with a review of empirical research. The investigation focuses on the underlying rationale of the observed practices during the stages of design, field transfer, and use. The examples show the importance of integrative approaches to the technical and organizational aspects of development projects. The remaining potential for organizational turbulence is explained by inherent tensions within the rationale.
A Visual Programming Paradigm for Abstract Deep Learning Model Development
Deep learning is one of the fastest-growing technologies in computer science, with a plethora of applications. But this unprecedented growth has so far been limited to consumption by deep learning experts. The primary challenges are the steep learning curve of the programming libraries and the lack of intuitive systems enabling non-experts to consume deep learning. Towards this goal, we study the effectiveness of a no-code paradigm for designing deep learning models. In particular, a visual drag-and-drop interface is found to be more efficient than traditional programming and alternative visual programming paradigms. We conduct user studies across different expertise levels to measure the entry-level barrier and the developer load across different programming paradigms. We obtain a System Usability Scale (SUS) score of 90 and a NASA Task Load Index (TLX) score of 21 for the proposed visual programming paradigm, compared to 68 and 52, respectively, for traditional programming methods.
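For readers unfamiliar with how an SUS score such as the 90 reported above is derived, the standard scoring procedure (Brooke's ten-item, five-point Likert questionnaire) can be sketched as follows; the function name and example responses are illustrative, not taken from the study.

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered items (positively worded) contribute
    (response - 1); even-numbered items (negatively worded) contribute
    (5 - response). The summed contributions are multiplied by 2.5,
    yielding a score on a 0-100 scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent: strongly agrees with every positive item (5)
# and strongly disagrees with every negative item (1) -> maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A score of 68 is conventionally treated as the average SUS benchmark, which puts the reported 90 well above typical usability.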
Lifelong guidance policy and practice in the EU
A study on lifelong guidance (LLG) policy and practice in the EU, focusing on trends, challenges, and opportunities. Lifelong guidance aims to provide career development support for individuals of all ages, at all career stages. It includes careers information, advice, counselling, assessment of skills, and mentoring.
Artificial intelligence for multi-mission planetary operations
A brief introduction is given to an automated system called the Spacecraft Health Automated Reasoning Prototype (SHARP). SHARP is designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real-time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. Telecommunications link analysis of the Voyager II spacecraft is the initial focus for evaluation of the prototype in a real-time operations setting during the Voyager spacecraft encounter with Neptune in August 1989. The preliminary results of the SHARP project and plans for future application of the technology are discussed.