When is a Prediction Knowledge?
Within Reinforcement Learning, there is a growing collection of research
which aims to express all of an agent's knowledge of the world through
predictions about sensation, behaviour, and time. This work can be seen not
only as a collection of architectural proposals, but also as the beginnings of
a theory of machine knowledge in reinforcement learning. Recent work has
expanded what can be expressed using predictions, and developed applications
which use predictions to inform decision-making on a variety of synthetic and
real-world problems. While promising, we here suggest that the notion of
predictions as knowledge in reinforcement learning is as yet underdeveloped:
although some work explicitly refers to predictions as knowledge, the
requirements for considering a prediction to be knowledge have yet to be well
explored.
This specification of the necessary and sufficient conditions of knowledge is
important; even if claims about the nature of knowledge are left implicit in
technical proposals, the underlying assumptions of such claims have
consequences for the systems we design. These consequences manifest in both the
way we choose to structure predictive knowledge architectures, and how we
evaluate them. In this paper, we take a first step to formalizing predictive
knowledge by discussing the relationship of predictive knowledge learning
methods to existing theories of knowledge in epistemology. Specifically, we
explore the relationships between Generalized Value Functions and epistemic
notions of Justification and Truth.
Comment: Accepted to RLDM 201
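As background for the Generalized Value Functions discussed in this abstract, the standard GVF formulation from the predictive-knowledge literature can be sketched as follows; the symbols here follow common usage rather than any one paper's notation, and the cumulant is written as a generic signal for simplicity:

```latex
% A general value function (GVF) is a prediction specified by a "question":
% a policy \pi, a state-dependent continuation function \gamma, and a
% cumulant signal C. Its value is the expected discounted sum of the
% cumulant when behaviour follows \pi.
\[
  v(s;\, \pi, \gamma, C) \;=\;
  \mathbb{E}\!\left[\,\sum_{k=0}^{\infty}
    \left(\prod_{j=1}^{k} \gamma(S_{t+j})\right) C_{t+k+1}
    \;\middle|\; S_t = s,\; A_{t:\infty} \sim \pi \right]
\]
```

Setting the cumulant to the reward and $\gamma$ to a constant recovers the ordinary value function, which is why GVFs are a natural vehicle for expressing predictive knowledge.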
What's a Good Prediction? Issues in Evaluating General Value Functions Through Error
Constructing and maintaining knowledge of the world is a central problem for
artificial intelligence research. Approaches to constructing an agent's
knowledge using predictions have received increasing interest in recent
years. A particularly promising collection of research centres itself
around architectures that formulate predictions as General Value Functions
(GVFs), an approach commonly referred to as \textit{predictive knowledge}. A
pernicious challenge for predictive knowledge architectures is determining what
to predict. In this paper, we argue that evaluation methods---i.e., return
error and RUPEE---are not well suited for the challenges of determining what to
predict. As a primary contribution, we provide extended examples that evaluate
predictions in terms of how they are used in further prediction tasks: a key
motivation of predictive knowledge systems. We demonstrate that simply because
a GVF's error is low, it does not necessarily follow that the prediction is
useful as a cumulant. We suggest evaluating GVFs by 1) the relevance of their
features to the prediction task at hand, and 2) \textit{how} they are used. To
determine feature relevance, we generalize AutoStep to GTD, producing
a step-size learning method suited to the life-long continual learning settings
that predictive knowledge architectures are commonly deployed in. This paper
contributes a first look into evaluation of predictions through their use, an
integral component of predictive knowledge which is as yet unexplored.
Comment: Submitted to AAMA
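The abstract above argues that a GVF's low return error need not make its prediction useful as a cumulant for further predictions. As a rough, hypothetical sketch of the layering involved (not the paper's experiments), the following composes two linear TD(0) GVFs, with the first GVF's prediction supplying the second's cumulant; the features, signals, fixed discount, and step-size are all illustrative assumptions:

```python
import numpy as np

# Minimal sketch: two linear GVFs learned with TD(0). GVF 1 predicts a raw
# sensor signal; GVF 2 takes GVF 1's prediction as its cumulant, forming a
# layered (compositional) prediction of the kind predictive knowledge
# architectures rely on.
rng = np.random.default_rng(0)
n_features = 8
gamma = 0.9   # fixed continuation (illustrative)
alpha = 0.1   # fixed step-size (illustrative)

w1 = np.zeros(n_features)  # GVF 1 weights: predicts the sensor signal
w2 = np.zeros(n_features)  # GVF 2 weights: predicts GVF 1's prediction

x = rng.random(n_features)  # current feature vector
for t in range(1000):
    x_next = rng.random(n_features)  # next observation's features
    sensor = x_next[0]               # cumulant for GVF 1

    # TD(0) update for GVF 1
    delta1 = sensor + gamma * (w1 @ x_next) - (w1 @ x)
    w1 += alpha * delta1 * x

    # GVF 2's cumulant is GVF 1's current prediction at the next state
    cumulant2 = w1 @ x_next
    delta2 = cumulant2 + gamma * (w2 @ x_next) - (w2 @ x)
    w2 += alpha * delta2 * x

    x = x_next

print(w1 @ x, w2 @ x)  # the two layered predictions at the final state
```

In this layered setting, monitoring how well GVF 2 learns probes how GVF 1's prediction is actually *used*, which is distinct from GVF 1's own return error.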
Communicative Capital for Prosthetic Agents
This work presents an overarching perspective on the role that machine
intelligence can play in enhancing human abilities, especially those that have
been diminished due to injury or illness. As a primary contribution, we develop
the hypothesis that assistive devices, and specifically artificial arms and
hands, can and should be viewed as agents in order for us to most effectively
improve their collaboration with their human users. We believe that increased
agency will enable more powerful interactions between human users and next
generation prosthetic devices, especially when the sensorimotor space of the
prosthetic technology greatly exceeds the conventional control and
communication channels available to a prosthetic user. To more concretely
examine an agency-based view on prosthetic devices, we propose a new schema for
interpreting the capacity of a human-machine collaboration as a function of
both the human's and machine's degrees of agency. We then introduce the idea of
communicative capital as a way of thinking about the communication resources
developed by a human and a machine during their ongoing interaction. Using this
schema of agency and capacity, we examine the benefits and disadvantages of
increasing the agency of a prosthetic limb. To do so, we present an analysis of
examples from the literature where building communicative capital has enabled a
progression of fruitful, task-directed interactions between prostheses and
their human users. We then describe further work that is needed to concretely
evaluate the hypothesis that prostheses are best thought of as agents. The
agent-based viewpoint developed in this article significantly extends current
thinking on how best to support the natural, functional use of increasingly
complex prosthetic enhancements, and opens the door for more powerful
interactions between humans and their assistive technologies.
Comment: 33 pages, 10 figures; unpublished technical report undergoing peer
review