Principles of Explanatory Debugging to personalize interactive machine learning
How can end users efficiently influence the predictions that machine learning systems make on their behalf? This paper presents Explanatory Debugging, an approach in which the system explains to users how it made each of its predictions, and the user then explains any necessary corrections back to the learning system. We present the principles underlying this approach and a prototype instantiating it. An empirical evaluation shows that Explanatory Debugging increased participants' understanding of the learning system by 52% and allowed participants to correct its mistakes up to twice as efficiently as participants using a traditional learning system.
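As a concrete illustration of this explain-and-correct loop, here is a minimal sketch assuming a bag-of-words naive Bayes spam classifier; the helper names (explain, user_correction) and the direct edit of feature_log_prob_ are illustrative simplifications, not the paper's prototype.

```python
# Illustrative sketch of an Explanatory Debugging loop (not the paper's
# prototype): the system explains a prediction via per-word contributions,
# and the user "explains back" by overriding a word's importance.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["cheap pills buy now", "meeting agenda attached",
        "buy cheap meds", "project meeting notes"]
labels = np.array([1, 0, 1, 0])  # 1 = spam, 0 = ham (toy data)

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = MultinomialNB().fit(X, labels)

def explain(text):
    """Show each word's log-odds contribution toward the spam class."""
    x = vec.transform([text])
    words = vec.get_feature_names_out()
    contrib = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
    for i in x.nonzero()[1]:
        print(f"{words[i]:>10s}: {contrib[i]:+.2f}")

def user_correction(word, direction, strength=1.0):
    """User feedback: nudge a word toward spam (+1) or ham (-1).
    Editing feature_log_prob_ in place skips renormalization; this is a
    deliberate simplification for the sketch."""
    i = vec.vocabulary_[word]
    clf.feature_log_prob_[1 if direction > 0 else 0, i] += strength

explain("buy cheap meeting")
user_correction("meeting", direction=-1)  # "meeting" should signal ham
explain("buy cheap meeting")              # contribution shifts accordingly
```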
Horses for courses: Making the case for persuasive engagement in smart systems
Current thrusts in explainable AI (XAI) have focused on using interpretability or explanatory debugging as frameworks for developing explanations. We argue that for some systems a different paradigm – persuasive engagement – needs to be adopted in order to affect trust and user satisfaction. In this paper, we briefly provide an overview of current approaches to explaining smart systems and their scope of application. We then introduce the theoretical basis for persuasive engagement and show through a use case how explanations might be generated. Finally, we discuss future work that might shed more light on how best to explain different kinds of smart systems.
Harnessing AI to Power Constructivist Learning: An Evolution in Educational Methodologies
This article navigates the confluence of the age-old constructivist philosophy of education and modern Artificial Intelligence (AI) tools as a means of reconceptualizing teaching and learning methods. While constructivism champions active learning derived from personal experiences and prior knowledge, AI’s adaptive capacities align with these principles, offering personalized, dynamic, and enriching learning avenues. By leveraging AI platforms such as ChatGPT, BARD, and Microsoft Bing, educators can elevate constructivist pedagogy, fostering enhanced student engagement, self-reflective metacognition, profound conceptual change, and an enriched learning experience. The article further emphasizes the preservation of humanistic values in the integration of AI, ensuring a balanced, ethical, and inclusive educational environment. This exploration sheds light on the transformative potential of intertwining traditional educational philosophies with technological advancements, paving the way for a more responsive and effective learning paradigm.
Too much, too little, or just right? Ways explanations impact end users' mental models
Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly "debug" an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, focusing especially on how the soundness and completeness of the explanations impact the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, common in many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, reducing the likelihood that users will pay attention to such explanations at all.
LIMEADE: A General Framework for Explanation-Based Human Tuning of Opaque Machine Learners
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow humans to tune a model in response to the explanations are similarly useful. While both capabilities are well-developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, no method for tuning opaque models in response to explanations has been user-tested to date. This paper introduces LIMEADE, a general framework for tuning an arbitrary machine learning model based on an explanation of the model's prediction. We demonstrate the generality of our approach with two case studies. First, we successfully utilize LIMEADE for the human tuning of opaque image classifiers. Second, we apply our framework to a neural recommender system for scientific papers on a public website and report on a user study showing that our framework leads to significantly higher perceived user control, trust, and satisfaction. Analyzing 300 user logs from our publicly deployed website, we uncover a tradeoff between canonical greedy explanations and diverse explanations that better facilitate human tuning.
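To make the tuning idea concrete, here is a hedged sketch of what explanation-based adjustment of an opaque model can look like: the local linear surrogate follows the LIME idea the abstract mentions, while the tune() update (reweighting training instances by a flagged feature and refitting) is an assumed stand-in for illustration, not the LIMEADE algorithm itself.

```python
# Hedged sketch of explanation-based tuning of an opaque model: a LIME-style
# local linear surrogate produces the explanation; the tune() update is an
# assumed instance-reweighting heuristic, not the LIMEADE algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter
opaque = RandomForestClassifier(random_state=0).fit(X, y)

def local_explanation(x, n_samples=500, scale=0.3):
    """Fit a linear surrogate to the opaque model's predictions on
    perturbations around x; coefficients rank local feature influence."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    p = opaque.predict_proba(Z)[:, 1]
    return Ridge(alpha=1.0).fit(Z, p).coef_

def tune(feature_idx, direction, boost=3.0):
    """Act on feedback like 'feature j should matter less': down- or
    up-weight training instances where that feature is salient, then refit."""
    w = np.ones(len(X))
    salient = np.abs(X[:, feature_idx]) > 1.0
    w[salient] = boost if direction > 0 else 1.0 / boost
    return RandomForestClassifier(random_state=0).fit(X, y, sample_weight=w)

x0 = X[0]
print("before:", np.round(local_explanation(x0), 2))
opaque = tune(feature_idx=1, direction=-1)  # user: feature 1 is irrelevant
print("after: ", np.round(local_explanation(x0), 2))
```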