A dynamic neural field approach to natural and efficient human-robot collaboration
A major challenge in modern robotics is the design of autonomous robots
that are able to cooperate with people in their daily tasks in a human-like way. We
address the challenge of natural human-robot interactions by using the theoretical
framework of dynamic neural fields (DNFs) to develop processing architectures that
are based on neuro-cognitive mechanisms supporting human joint action. By explaining
the emergence of self-stabilized activity in neuronal populations, dynamic
field theory provides a systematic way to endow a robot with crucial cognitive functions
such as working memory, prediction, and decision making. The DNF architecture
for joint action is organized as a large-scale network of reciprocally connected
neuronal populations that encode in their firing patterns specific motor behaviors,
action goals, contextual cues and shared task knowledge. Ultimately, it implements
a context-dependent mapping from observed actions of the human onto adequate
complementary behaviors that takes into account the inferred goal of the co-actor.
We present results of flexible and fluent human-robot cooperation in a task in which
the team has to assemble a toy object from its components.
The present research was conducted in the context of the fp6-IST2 EU-IP Project JAST (proj. nr. 003747) and partly financed by the FCT grants POCI/V.5/A0119/2005 and CONC-REEQ/17/2001. We would like to thank Luis Louro, Emanuel Sousa, Flora Ferreira, Eliana Costa e Silva, Rui Silva and Toni Machado for their assistance during the robotic experiment.
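To make the dynamic field theory ingredient of the architecture above a bit more tangible, the following is a minimal numerical sketch of a one-dimensional Amari-type neural field in Python. The grid size, the Gaussian interaction kernel, and all parameter values are illustrative assumptions chosen for this example, not the ones used in the JAST architecture; the sketch only shows how a transient localized input can leave behind a self-stabilized activity bump, the mechanism the abstract refers to as working memory.

```python
# Minimal sketch of a one-dimensional Amari-type dynamic neural field.
# All parameter values and the Gaussian kernel are illustrative assumptions,
# not the ones used in the JAST joint-action architecture.
import numpy as np

n, dx, dt, tau, h = 181, 1.0, 1.0, 10.0, -2.0   # grid size, resolution, time step, time scale, resting level
x = np.arange(n) * dx

def gauss(center, amp, sigma):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Lateral interaction: local excitation minus broader inhibition ("Mexican hat").
kernel = gauss(90, 4.0, 5.0) - gauss(90, 1.7, 12.5)
f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * u))     # sigmoidal firing-rate function

u = np.full(n, h)                                # field activation, starts at resting level
for t in range(600):
    stim = gauss(60, 6.0, 5.0) if t < 200 else 0.0          # transient localized input
    interaction = dx * np.convolve(f(u), kernel, mode="same")
    u += (dt / tau) * (-u + h + stim + interaction)

# After the input is removed, a self-stabilized bump of activation remains
# around x = 60, i.e. the field "remembers" the cue (working memory).
print("peak activation after stimulus removal:", u.max(), "at x =", x[u.argmax()])
```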
What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations
Goal-directed action selection is the problem of what to do next in order to progress towards
goal achievement. This problem is computationally more complex in case of joint action settings
where two or more agents coordinate their actions in space and time to bring about a common goal:
actions performed by one agent influence the action possibilities of the other agents, and ultimately the
goal achievement. While humans apparently effortlessly engage in complex joint actions, a number of
questions remain to be solved to achieve similar performance in artificial agents: How do agents represent and understand actions performed by others? How does this understanding influence the choice of an agent’s own future actions? How is the interaction process biased by prior information about the task?
What is the role of more abstract cues such as others’ beliefs or intentions?
In the last few years, researchers in computational neuroscience have begun investigating how control-theoretic
models of individual motor control can be extended to explain various complex social phenomena,
including action and intention understanding, imitation and joint action. The two cornerstones of
control-theoretic models of motor control are the goal-directed nature of action and a widespread use of
internal modeling. Indeed, when the control-theoretic view is applied to the realm of social interactions,
it is assumed that inverse and forward internal models used in individual action planning and control
are re-enacted in simulation in order to understand others’ actions and to infer their intentions. This
motor simulation view of social cognition has been adopted to explain a number of advanced mindreading
abilities such as action, intention, and belief recognition, often in contrast with more classical
cognitive theories - derived from rationality principles and conceptual theories of others’ minds - that
emphasize the dichotomy between action and perception.
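As a concrete illustration of the motor simulation idea, here is a small Python sketch in which an agent re-uses a forward model of its own reaching movements to infer which goal a partially observed reach of the partner is directed at. The straight-line forward model, the goal positions, and the noise scale are assumptions invented for the example; they stand in for whatever learned internal models an actual agent would use.

```python
# Toy sketch of the motor-simulation idea: re-use one's own forward model to
# infer which goal a partner's observed reach is directed at.
# Goals, the straight-line "forward model" and the noise scale are illustrative.
import numpy as np

goals = {"red_block": np.array([0.6, 0.2]), "blue_block": np.array([0.1, 0.5])}
start = np.array([0.0, 0.0])

def forward_model(goal, steps=10):
    """Predict a reach trajectory toward `goal` (here: a simple straight line)."""
    return np.linspace(start, goal, steps)

def infer_goal(observed, sigma=0.05):
    """Score each candidate goal by how well its simulated reach predicts the
    partially observed trajectory, and normalize the scores into a posterior."""
    log_scores = {}
    for name, g in goals.items():
        predicted = forward_model(g)[: len(observed)]
        err = np.sum((predicted - observed) ** 2)
        log_scores[name] = -err / (2 * sigma ** 2)
    m = max(log_scores.values())
    z = sum(np.exp(s - m) for s in log_scores.values())
    return {name: float(np.exp(s - m) / z) for name, s in log_scores.items()}

# A few noisy samples from the start of a reach toward the red block:
observed = forward_model(goals["red_block"])[:4] + 0.01 * np.random.randn(4, 2)
print(infer_goal(observed))   # posterior mass should concentrate on "red_block"
```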
Here we embrace the idea that implementing mindreading abilities is a necessary step towards a more
natural collaboration between humans and robots in joint tasks. To efficiently collaborate, agents need to
continuously estimate their teammates’ proximal goals and distal intentions in order to choose what to
do next. We present a probabilistic hierarchical architecture for joint action which takes inspiration from the idea of motor simulation above. The architecture models the causal relations between observables
(e.g., observed movements) and their hidden causes (e.g., action goals, intentions and beliefs) at two
deeply intertwined levels: at the lowest level the same circuitry used to execute my own actions is
re-enacted in simulation to infer and predict (proximal) actions performed by my interaction partner,
while the highest level encodes more abstract task representations which govern each agent’s observable
behavior. Here we assume that the decision of what to do next can be made by knowing 1) what the current task is and 2) what my teammate is currently doing. While these could be inferred via a costly (and inaccurate) process of inverting the generative model above, given the observed data, we will show how our organization facilitates such an inferential process by allowing agents to share a subset of hidden variables, alleviating the need for complex mechanisms such as explicit task allocation or sophisticated communication strategies.
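The following sketch, again with invented tasks, actions, and probability tables, illustrates the point made above: when the high-level task variable is shared between the agents, inferring what the teammate is currently doing, and hence what to do next, reduces to a single local Bayes update at the lower level rather than a full inversion of the generative model.

```python
# Minimal sketch of the two-level idea: a shared, discrete "task" variable at
# the top level and the partner's current action at the lower level.
# Tasks, actions and probability tables below are invented for illustration.
import numpy as np

tasks = ["build_tower", "build_bridge"]
actions = ["grasp_base", "grasp_column", "hand_over"]

# P(partner action | task): abstract task knowledge shared by both agents.
p_action_given_task = np.array([[0.6, 0.3, 0.1],    # build_tower
                                [0.2, 0.2, 0.6]])   # build_bridge

# P(observed movement feature | partner action): the "motor simulation" level.
p_obs_given_action = np.array([[0.7, 0.2, 0.1],     # grasp_base
                               [0.2, 0.7, 0.1],     # grasp_column
                               [0.1, 0.1, 0.8]])    # hand_over

def infer_partner_action(obs_idx, task_idx):
    """Because the task variable is shared, the partner's action is inferred
    with one local Bayes update instead of inverting the whole model."""
    prior = p_action_given_task[task_idx]
    likelihood = p_obs_given_action[:, obs_idx]
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Complementary behavior chosen from a task-conditioned lookup, given the
# most likely partner action.
complementary = {("build_tower", "grasp_base"): "grasp_column",
                 ("build_tower", "grasp_column"): "hand_over",
                 ("build_bridge", "hand_over"): "grasp_base"}

task = 0                                   # both agents share this variable
post = infer_partner_action(obs_idx=0, task_idx=task)
partner = actions[int(post.argmax())]
print(post, "->", complementary.get((tasks[task], partner), "wait"))
```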