6 research outputs found

    On the Sensitivity of Reward Inference to Misspecified Human Models

    Full text link
    Inferring reward functions from human behavior is at the center of value alignment - aligning AI objectives with what we, humans, actually want. But doing so relies on models of how humans behave given their objectives. After decades of research in cognitive science, neuroscience, and behavioral economics, obtaining accurate human models remains an open research topic. This raises the question: how accurate do these models need to be in order for the reward inference to be accurate? On the one hand, if small errors in the model can lead to catastrophic errors in inference, the entire framework of reward learning seems ill-fated, as we will never have perfect models of human behavior. On the other hand, if, as our models improve, we can guarantee that reward accuracy also improves, this would show the benefit of more work on the modeling side. We study this question both theoretically and empirically. We do show that it is unfortunately possible to construct small adversarial biases in behavior that lead to arbitrarily large errors in the inferred reward. However, and arguably more importantly, we are also able to identify reasonable assumptions under which the reward inference error can be bounded linearly in the error in the human model. Finally, we verify our theoretical insights in discrete and continuous control tasks with simulated and human data. Comment: 17 pages, 12 figures
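
    A minimal, hypothetical sketch of the phenomenon described above (not the paper's code): the learner assumes a Boltzmann-rational human, but the simulated "human" mixes in a small systematic bias toward one salient option. The feature matrix, rationality coefficient beta, and bias levels eps are all illustrative assumptions; the printed error gives a rough feel for how reward-inference error grows with the size of the misspecification in the human model.

    import numpy as np

    rng = np.random.default_rng(0)
    K, D = 6, 3                              # number of options, reward-feature dimension
    features = rng.normal(size=(K, D))       # hypothetical option features
    theta_true = np.array([1.0, -0.5, 0.3])  # hypothetical true reward weights
    beta = 2.0                               # rationality coefficient assumed by the learner

    def boltzmann(theta):
        # Choice distribution p(a) proportional to exp(beta * reward(a)).
        logits = beta * features @ theta
        logits -= logits.max()
        p = np.exp(logits)
        return p / p.sum()

    def infer_theta(p_observed, steps=10000, lr=0.01):
        # Maximum-likelihood reward inference under the assumed Boltzmann human model.
        theta = np.zeros(D)
        for _ in range(steps):
            theta += lr * beta * features.T @ (p_observed - boltzmann(theta))
        return theta

    def direction_error(a, b):
        # Compare normalized reward vectors (the reward scale is not identifiable).
        return np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))

    salient = np.eye(K)[0]             # the biased human over-selects option 0
    for eps in [0.0, 0.05, 0.1, 0.2]:  # size of the bias in the human's behavior
        p_human = (1 - eps) * boltzmann(theta_true) + eps * salient
        theta_hat = infer_theta(p_human)
        print(f"bias eps={eps:.2f}  reward-direction error={direction_error(theta_hat, theta_true):.3f}")

    In this toy setup the inferred reward matches the true one when the assumed model is exact (eps = 0) and degrades as the bias grows; the paper's contribution is characterizing when that degradation stays bounded rather than blowing up.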

    Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans

    Get PDF
    We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda embedding legal knowledge and reasoning in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), leveraged as an expression of how humans communicate their goals, and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning. Comment: Forthcoming in Northwestern Journal of Technology and Intellectual Property, Volume 2

    Enriching Communication Between Humans and AI Agents

    Get PDF
    Equipping AI agents with effective, human-compatible communication capabilities is pivotal to enabling them to effectively serve and aid humans. On one hand, agents should understand humans, being able to infer intentions and extract knowledge from language utterances. On the other hand, they should also help humans understand them, conveying (un)certainties and proactively consulting humans when facing difficult situations. This dissertation presents new training and evaluation frameworks that enrich communication between humans and AI agents. These frameworks improve two capabilities of an agent: (1) the ability to learn through natural communication with humans and (2) the ability to request and interpret information from humans during task execution. Regarding the first capability, I study the possibility and challenges of training agents with noisy human ratings. Providing humans with more expressive tools for teaching agents, I propose a framework that employs descriptive language as the teaching medium. On the second capability, I introduce new benchmarks that evaluate an agent's ability to exchange information with humans to successfully perform indoor navigation tasks. On these benchmarks, I build agents that are capable of requesting rich, contextually useful information and show that they significantly outperform those without such capability. I conclude the dissertation with discussions on how to develop more sophisticated communication capabilities for agents.
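
    A minimal, hypothetical sketch of the second capability (not the dissertation's code or benchmarks): an agent that requests help from a simulated human when its own action distribution is too uncertain, subject to a budget on how often it may ask. The policy, oracle, action set, and entropy threshold below are stand-in assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    ACTIONS = ["forward", "left", "right", "stop"]

    def policy(observation):
        # Stand-in navigation policy: random logits in place of a learned model.
        logits = rng.normal(size=len(ACTIONS))
        p = np.exp(logits - logits.max())
        return p / p.sum()

    def entropy(p):
        # Uncertainty of the action distribution.
        return float(-(p * np.log(p + 1e-12)).sum())

    def human_oracle(observation):
        # Stand-in for the human's answer to a help request.
        return "left"

    def act(observation, ask_budget, entropy_threshold=1.2):
        # Ask the human when uncertain and the budget allows; otherwise act greedily.
        p = policy(observation)
        if ask_budget > 0 and entropy(p) > entropy_threshold:
            return human_oracle(observation), ask_budget - 1, True
        return ACTIONS[int(np.argmax(p))], ask_budget, False

    budget = 3
    for step in range(8):
        action, budget, asked = act(observation=step, ask_budget=budget)
        print(f"step {step}: action={action:>7}  asked_human={asked}  budget_left={budget}")

    The design choice this illustrates is the trade-off the benchmarks probe: asking is informative but costly, so the agent must decide when its own uncertainty justifies spending part of its question budget.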