63,071 research outputs found
Enabling Robots to Communicate their Objectives
The overarching goal of this work is to efficiently enable end-users to
correctly anticipate a robot's behavior in novel situations. Since a robot's
behavior is often a direct result of its underlying objective function, our
insight is that end-users need to have an accurate mental model of this
objective function in order to understand and predict what the robot will do.
While people naturally develop such a mental model over time through observing
the robot act, this familiarization process may be lengthy. Our approach
reduces this time by having the robot model how people infer objectives from
observed behavior, and then it selects those behaviors that are maximally
informative. The problem of computing a posterior over objectives from observed
behavior is known as Inverse Reinforcement Learning (IRL), and has been applied
to robots learning human objectives. We consider the problem where the roles of
human and robot are swapped. Our main contribution is to recognize that unlike
robots, humans will not be exact in their IRL inference. We thus introduce two
factors to define candidate approximate-inference models for human learning in
this setting, and analyze them in a user study in the autonomous driving
domain. We show that certain approximate-inference models lead to the robot
generating example behaviors that better enable users to anticipate what it
will do in novel situations. Our results also suggest, however, that additional
research is needed in modeling how humans extrapolate from examples of robot
behavior.

Comment: RSS 2017
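The abstract does not spell out the candidate approximate-inference models, but the core loop it describes can be sketched: model the human as an approximate IRL observer, then pick the demonstration that puts the most posterior mass on the robot's true objective. The following Python sketch assumes a Boltzmann-rational observer model (a standard choice, not necessarily the paper's); all objectives, feature vectors, and numbers are illustrative.

```python
import numpy as np

# Hypothetical setup: candidate objectives are reward weight vectors,
# trajectories are summarized by feature vectors, and the reward of a
# trajectory under an objective is their dot product.
objectives = np.array([[1.0, 0.0],    # e.g. "prefer speed"
                       [0.0, 1.0],    # e.g. "prefer caution"
                       [0.5, 0.5]])   # e.g. "balance both"
trajectories = np.array([[0.9, 0.1],
                         [0.2, 0.8],
                         [0.5, 0.5]])
true_obj = 1  # index of the robot's actual objective

def posterior_over_objectives(shown, beta=5.0):
    """Approximate-IRL observer: assumes the robot picks a trajectory with
    probability proportional to exp(beta * reward) under each candidate
    objective (Boltzmann rationality), then applies Bayes' rule with a
    uniform prior over objectives."""
    rewards = objectives @ trajectories.T              # (n_obj, n_traj)
    lik = np.exp(beta * rewards)
    lik /= lik.sum(axis=1, keepdims=True)              # P(traj | objective)
    post = lik[:, shown]                               # likelihood of shown traj
    return post / post.sum()                           # P(objective | traj)

# The robot selects the maximally informative demonstration: the trajectory
# that maximizes the modeled human's posterior belief in the true objective.
best = max(range(len(trajectories)),
           key=lambda t: posterior_over_objectives(t)[true_obj])
print(f"most informative trajectory: {best}")
print(f"resulting posterior: {posterior_over_objectives(best)}")
```

Under this model, showing the caution-heavy trajectory concentrates the observer's posterior on the caution objective; the paper's question is which such observer models actually predict how real users generalize.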
How could a rational analysis model explain?
Rational analysis is an influential but contested account of how probabilistic modeling can be used to construct non-mechanistic but self-standing explanatory models of the mind. In this paper, I disentangle and assess several possible explanatory contributions that could be attributed to rational analysis. Although existing models suffer from evidential problems that question their explanatory power, I argue that rational analysis modeling can complement mechanistic theorizing by providing models of environmental affordances.
SkILL - a Stochastic Inductive Logic Learner
Probabilistic Inductive Logic Programming (PILP) is a relatively unexplored
area of Statistical Relational Learning which extends classic Inductive Logic
Programming (ILP). This work introduces SkILL, a Stochastic Inductive Logic
Learner, which takes probabilistically annotated data and produces First Order
Logic theories. Data in several domains, such as medicine and bioinformatics,
have an inherent degree of uncertainty that can be used to produce models
closer to reality. SkILL not only uses this type of probabilistic data to
extract non-trivial knowledge from databases, but also addresses efficiency
issues by introducing a novel, efficient and effective strategy to guide the
search in PILP environments. The capabilities of SkILL are demonstrated on
three different datasets: (i) a synthetic toy example used to validate the
system, (ii) a probabilistic adaptation of a well-known biological metabolism
application, and (iii) a real-world medical dataset in the breast cancer
domain. Results show that SkILL can perform as well as a deterministic ILP
learner, while also being able to incorporate probabilistic knowledge that
would otherwise not be considered.
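SkILL's internals are not given in the abstract, but the PILP setting it describes can be illustrated: training examples carry probabilistic annotations rather than hard labels, and candidate First Order Logic theories are scored against those annotations. The Python sketch below is a minimal illustration of that setting, not SkILL's actual algorithm or API; every predicate, rule, and probability here is hypothetical, and in a real PILP system the predicted probabilities would come from probabilistic logic inference (e.g. a ProbLog-style engine) rather than being listed by hand.

```python
# Each training example: (ground fact, annotated probability that it holds).
examples = [
    ("metabolises(e_coli, glucose)",   0.95),
    ("metabolises(e_coli, lactose)",   0.60),
    ("metabolises(yeast, cellulose)",  0.05),
]

# Hypothetical candidate theories, each paired with the probability it
# assigns to each example (hand-filled here for illustration only).
candidates = {
    "metabolises(X,Y) :- sugar(Y).":            [0.90, 0.90, 0.10],
    "metabolises(X,Y) :- sugar(Y), common(Y).": [0.95, 0.55, 0.05],
}

def score(predicted, annotated):
    """Mean squared error between a theory's predicted probabilities and
    the probabilistic annotations; lower means a closer fit."""
    return sum((p - q) ** 2
               for p, (_, q) in zip(predicted, annotated)) / len(annotated)

# Select the theory whose predictions best match the soft labels; a
# deterministic ILP learner would instead see only hard 0/1 labels.
best = min(candidates, key=lambda rule: score(candidates[rule], examples))
print("selected theory:", best)
```

The point of the sketch is the data regime: because annotations are probabilities, the more specific rule wins by matching the 0.60 annotation, a distinction that a deterministic ILP learner, forced to threshold the labels, could not exploit.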