Error discovery through human-artificial intelligence collaboration

Abstract

Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 181-202).

While there has been a recent rise in increasingly effective human-AI teams in areas such as autonomous driving, manufacturing, and robotics, many catastrophic failures still occur. Understanding the causes of these errors is crucial for reducing and fixing them. One source of error is an agent's or human's limited view of the world, which means their representations are insufficient for acting safely. For example, self-driving cars may have limited sensing that prevents them from recognizing rare vehicle types, such as emergency vehicles. This thesis focuses on identifying errors that occur due to deficiencies in agent and human representations.

In the first part, we develop an approach that uses human feedback to identify agent errors that arise from the agent's limited state representation, meaning that the agent cannot observe all features of the world. Experiments show that, using our model, an agent discovers error regions and can query for human help intelligently in order to act safely in the real world.

In the second part, we focus on determining whether human errors arise from the human's flawed observation of the world or from other factors, such as noise or insufficient training. We present a generative model that approximates the human's decision-making process and show that we can infer the latent error sources from a limited amount of human demonstration data.

In the final thesis component, we tackle the setting where both an agent and a human have rich perception but, due to selective attention, each focuses on only a subset of features. When these learned policies are deployed, important features in the real world may be ignored because the simulator did not accurately model all regions of the real world. Our approach identifies scenarios in which an agent should transfer control to a human who may be better suited to act, leading to safe joint execution in the world.

by Ramya Ramakrishnan. Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
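
A minimal sketch of the first part's blind-spot querying idea, assuming a fixed learned policy and binary human corrections; the class name, the Laplace-smoothed estimate, and the query threshold are illustrative assumptions, not the thesis's actual method:

    from collections import defaultdict

    class BlindSpotAgent:
        """Queries a human in observation regions with high estimated error."""

        def __init__(self, policy, query_threshold=0.5):
            self.policy = policy                          # learned policy: obs -> action
            self.feedback = defaultdict(lambda: [0, 0])   # obs -> [errors, visits]
            self.query_threshold = query_threshold

        def record_feedback(self, obs, was_error):
            counts = self.feedback[obs]
            counts[0] += int(was_error)
            counts[1] += 1

        def error_estimate(self, obs):
            errors, visits = self.feedback[obs]
            # Laplace-smoothed chance that this observation lies in a blind spot,
            # i.e. a region where unobserved world features make the policy unsafe.
            return (errors + 1) / (visits + 2)

        def act(self, obs, ask_human):
            if self.error_estimate(obs) > self.query_threshold:
                return ask_human(obs)   # defer in suspected blind-spot regions
            return self.policy(obs)

Observations that repeatedly draw corrections accumulate high error estimates and trigger queries, while observations with consistently clean feedback are handled autonomously.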
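The second part's inference can be illustrated with a toy Bayesian comparison of two explanations for a single demonstrated action; the uniform confusion set, the Boltzmann noise model, and the prior are assumptions made for this sketch, not the thesis's generative model:

    import math

    def argmax(values):
        return max(range(len(values)), key=values.__getitem__)

    def boltzmann(q_values, action, temperature=1.0):
        # Likelihood of the action under a noisily rational human at the true state.
        weights = [math.exp(q / temperature) for q in q_values]
        return weights[action] / sum(weights)

    def posterior_misperception(q_by_state, true_state, confusable_states,
                                action, prior=0.5):
        # P(action | misperception): fraction of confusable states in which
        # the demonstrated action would have been optimal.
        p_mis = sum(action == argmax(q_by_state[s])
                    for s in confusable_states) / len(confusable_states)
        p_noise = boltzmann(q_by_state[true_state], action)
        num = prior * p_mis
        den = num + (1.0 - prior) * p_noise
        return num / den if den else 0.0

    # Example: the action is optimal in a look-alike state but poor in the true one,
    # so flawed observation is the more likely latent error source (~0.89).
    q = {"true": [2.0, 0.0], "lookalike": [0.0, 2.0]}
    print(posterior_misperception(q, "true", ["lookalike"], action=1))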
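The final part's control-transfer rule can be sketched as a simple attention-coverage test; representing attended features as sets and treating salience as given are illustrative assumptions:

    def choose_actor(salient_features, agent_attended, human_attended):
        # Transfer control when the current state exercises features the
        # (simulator-trained) agent ignores but the human's attention covers.
        missed_by_agent = salient_features - agent_attended
        if missed_by_agent and missed_by_agent <= human_attended:
            return "human"
        return "agent"

    # Example: a siren feature absent from the agent's simulator training.
    print(choose_actor({"lane", "siren"}, {"lane", "speed"}, {"lane", "siren"}))  # -> human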
