Assessing and Finding Faults in AI: Two Empirical Studies
With the advent of Artificial Intelligence (AI) in every sphere of life, it has become increasingly important for non-AI experts to be able to comprehend the underlying logic of AI systems, assess them, and find faults in them, particularly when they are used in high-risk scenarios such as military strategy and medical applications. Recent efforts to open the black boxes of these AI-powered systems have led to the emergence of AI explanations, and there now exist myriad explanation methods and tools that attempt to explore and explain how AI systems work. However, a key problem with such work is the lack of a process that users can follow to navigate an AI system alongside its explanations. This problem is especially evident for non-AI experts, given their lack of context and depth of knowledge of the subject. To address this challenging problem, my colleagues and I propose a new process called AAR/AI, or After-Action Review for Artificial Intelligence, that aims to bridge this gap between AI systems and non-AI experts. AAR/AI, inspired by the US Defense debriefing strategy known as the After-Action Review (AAR), is a process for understanding, analyzing, and navigating sequential decision-making environments. This thesis details two human-subjects studies my colleagues and I conducted, one qualitative and the other quantitative, to evaluate the effectiveness of AAR/AI in assessing an AI system and in identifying and localizing faults in it. The studies suggest that AAR/AI not only helps non-AI experts navigate an AI system effectively and keep their thoughts organized and logical, but also helps them identify and localize faults in it. Participants who used AAR/AI localized faults with far higher precision and recall than those who did not. I believe this is a crucial step towards building democratic and explainable AI systems, and making them accessible to a larger audience that is not yet familiar with them.
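The precision and recall comparison at the end of the abstract can be read as the standard set-overlap metrics over the faults a participant reports versus the faults actually present in the agent. A minimal sketch of that computation follows; the fault identifiers and the helper function are hypothetical illustrations, not the thesis's analysis code.

    # Hypothetical sketch: fault-localization precision and recall for one participant.
    def precision_recall(reported_faults, true_faults):
        """Both arguments are sets of fault identifiers (e.g., decision-point labels)."""
        hits = len(reported_faults & true_faults)
        precision = hits / len(reported_faults) if reported_faults else 0.0
        recall = hits / len(true_faults) if true_faults else 0.0
        return precision, recall

    # Example: the participant flags four decision points; three are among the
    # five actual faults, giving precision = 0.75 and recall = 0.6.
    reported = {"d3", "d7", "d9", "d12"}
    actual = {"d3", "d7", "d9", "d15", "d21"}
    print(precision_recall(reported, actual))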
Explanations and Processes to Enable Humans to Assess AI with Respect to Manipulable Properties
Assessing AI systems is difficult. Humans rely on AI systems in increasing ways, both visible and invisible, meaning a variety of stakeholders need a variety of assessment tools (e.g., a professional auditor, a developer, and an end user all have different needs). We posit that it is possible to provide explanations and assessment processes that enable AI non-experts observing multiple intelligent agents in sequential domains to differentiate the agents with respect to a property (e.g., quality or fairness), as well as articulate justification for their differentiation. Further, we hypothesize that if the property can be manipulated in a highly controllable fashion, then it is possible to measure the quality of an explanation and/or assessment process by its ability to expose that such manipulation has occurred. This dissertation presents our contributions in explanations, processes, and manipulations for assessment. Specifically, we present our investigations into explanations to judge fairness of a classifier, the After-Action Review for AI process to structure explanation consumption, the Ranking task for explanation evaluation, and the Mutant Agent Generation approach for introducing controllable variation. By improving explainability of AI in all these phases, we seek to empower assessors to calibrate trust in the system appropriately.
Automated Scenario Generation Environment
Report describes IST's investigation into the feasibility of automating the process of planning and scenario generation for large-scale (joint-level) simulation exercises, and the development of an architecture for that purpose.
Florida Teletraining Project: Reconfiguration of Military Courses for Video Teletraining Delivery
Describes the processes and procedures used by the Florida Teletraining Project (FTP) to reconfigure five military courses for delivery over the U.S. Army's Teletraining Network, TNET.
Humanising relational knowing: an appreciative action research study on relationship-centred practice on stroke units.
Over the past two decades, NHS stroke services in England have improved the organisation of hospital-based stroke care, leading to improved outcomes after a stroke. However, this drive for improvement has not always been informed by a holistic view of stroke recovery and rehabilitation. Stroke survivors and their carers ask for individualised, person-centred care, with less focus on the physical aspects of their recovery (Stroke Association 2013; Luker et al. 2015). Despite a plethora of national recommendations on person-centred care, there is little actual ‘know how’ on achieving this within stroke services. An appreciative action research (AAR) method was used to develop a relationship-centred care (RCC) approach within a stroke unit setting. It was a two-phase study conducted on two combined acute and rehabilitation stroke units in the south west of England over 20 months. The first phase objectives were to explore and describe participants’ meaningful relational experiences and the processes that supported them. The objective of phase two was to take the processes learnt from phase one and explore whether these could be translated to a second stroke unit. Data were generated from 17 interviews, 400 hours of observations, 10 staff discussion groups, and the researcher’s reflective diary. Initial co-analysis using sense-making with participants was part of the AAR process, with this analysis informing the subsequent phases of the AAR cycles (Cooperrider et al. 2005). Further in-depth analysis was conducted using immersion crystallisation to confirm and broaden the original themes (Borkan 1999). Data analysis was informed by relational constructionist and humanising/lifeworld-led care perspectives (McNamee and Hosking 2012; Galvin and Todres 2013). The data showed that participants (patients, relatives and staff) all valued similar relational experiences around human connections to support existential well-being. The AAR process supported changes in self, and in the culture on the stroke units, towards an increased value placed on human relationships, including colleague relationships among staff. The processes that supported human connections in practice included: i. sensitising to humanising relational knowing through appreciative noticing; ii. reflecting and sharing these experiences with others to co-create a relational discourse; iii. having the freedom to act, enabling human connections. Developing processes to support humanising relational knowing revealed the complex, experiential and constantly changing nature of this way of knowing. Open reflective and reflexive spaces, created by animation and facilitation, were important in supporting staff to maintain sensitivity towards relational knowing within an acute care context. The outcomes from this study build on existing humanising/lifeworld-led care theories by developing orientations for practice that support relational knowing, and by proposing development of the RCC model to include the humanising values of embodiment, insiderness and agency.
Intervention strategies for the management of human error
This report examines the management of human error in the cockpit. The principles probably apply to other applications in the aviation realm (e.g., air traffic control, dispatch, weather) as well as to other high-risk systems outside aviation (e.g., shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention: it is a more encompassing term that includes not only the prevention of error, but also means of keeping an error, once made, from adversely affecting system output. Such techniques include traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays that make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, and cockpit resource management. Fifteen guidelines for the design and implementation of intervention strategies are included.
How does AAR/AI Support Problem Solvers with Diverse Behaviors and Cognitive Styles?
"What’s wrong with this AI?" Explainable AI (XAI) researchers are moving beyond explaining an AI’s actions, to helping users detect an AI’s failures. However this detection may not be enough—for actionability, we often need to pinpoint which part failed. We investigate how AAR/AI, a structured assessment process, supports users with diverse behaviors and cognitive styles in the context of a fault localization task in a reinforcement learning (RL) agent.
In Study 1’s qualitative investigation, 17 participants engaged in diverse behaviors at all stages of sensemaking. They identified faults using behaviors ranging from ad hoc searching to consistent behavior akin to professional searchers'. Then, they confirmed faults using behaviors ranging from narrow pattern-matching approaches to specification-checking. Last, they reported faults using behaviors from "shrugging" to probing the space of actions the AI considered.
We also performed a secondary analysis of 65 participants from a follow-up controlled experiment (Study 2), disaggregating their data by the five GenderMag cognitive problem-solving styles. At each endpoint of four of the five cognitive style spectra, participants who used AAR/AI located significantly more faults than those who did not use AAR/AI. These endpoints include participants with low self-efficacy and those with high self-efficacy; those with task-oriented motivations and those with technology-oriented motivations; those who learn by process and those who learn by tinkering; and comprehensive information processors and selective information processors. AAR/AI also closed an inclusivity gap between risk-averse and risk-tolerant participants, further demonstrating that AAR/AI supports problem solvers with a wide diversity of behaviors and cognitive styles.
Steps toward organic church unity in Protestantism in the United States during the period 1900-1930
Thesis (M.A.)--Boston University, 1931. This item was digitized by the Internet Archive.
The 1993 Goddard Conference on Space Applications of Artificial Intelligence
This publication comprises the papers presented at the 1993 Goddard Conference on Space Applications of Artificial Intelligence, held at the NASA/Goddard Space Flight Center, Greenbelt, MD, on May 10-13, 1993. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed.