Coordination with Humans via Strategy Matching
Human and robot partners increasingly need to work together to perform tasks
as a team. Robots designed for such collaboration must reason about how their
task-completion strategies interplay with the behavior and skills of their
human team members as they coordinate on achieving joint goals. Our goal in
this work is to develop a computational framework for robot adaptation to human
partners in human-robot team collaborations. We first present an algorithm for
autonomously recognizing available task-completion strategies by observing
human-human teams performing a collaborative task. By transforming team actions
into low-dimensional representations using hidden Markov models, we can
identify strategies without prior knowledge. Robot policies are learned on each
of the identified strategies to construct a Mixture-of-Experts model that
adapts to the task strategies of unseen human partners. We evaluate our model
on a collaborative cooking task using an Overcooked simulator. Results of an
online user study with 125 participants demonstrate that our framework improves
the task performance and collaborative fluency of human-agent teams, as
compared to state-of-the-art reinforcement learning methods.
Comment: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022).
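The adaptation step described above can be illustrated with a minimal sketch: score an observed partner's action sequence under a small discrete HMM per identified strategy, then softmax the log-likelihoods into Mixture-of-Experts weights. All names, the toy HMM parameters, and the use of discrete observations are assumptions for illustration; the paper's actual models and policies are not specified here.

```python
import math

def hmm_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence under an HMM with initial distribution pi,
    transition matrix A, and emission matrix B."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    s = sum(alpha)
    ll = math.log(s)
    alpha = [a / s for a in alpha]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
            for j in range(n)
        ]
        s = sum(alpha)
        ll += math.log(s)
        alpha = [a / s for a in alpha]
    return ll

def expert_weights(obs, strategy_hmms):
    """Softmax over per-strategy log-likelihoods gives the
    Mixture-of-Experts weighting for the observed partner."""
    lls = [hmm_loglik(obs, *h) for h in strategy_hmms]
    m = max(lls)
    exps = [math.exp(ll - m) for ll in lls]
    z = sum(exps)
    return [e / z for e in exps]

# Two toy strategy HMMs over 3 observation symbols (hypothetical).
A = [[0.9, 0.1], [0.1, 0.9]]
pi = [0.5, 0.5]
strat_a = (pi, A, [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])
strat_b = (pi, A, [[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]])

obs = [0, 0, 1, 0, 0]  # mostly symbol 0 -> better fit to strategy A
w = expert_weights(obs, [strat_a, strat_b])
```

Under this setup the weight on the first expert dominates, so the robot would lean on the policy trained for that strategy while retaining some probability mass for the alternative.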
Effects of form and motion on judgments of social robots' animacy, likability, trustworthiness and unpleasantness
One of robot designers' main goals is to make robots as sociable as possible. Aside from improving robots' actual social functions, a great deal of effort is devoted to making them appear lifelike. This is often achieved by endowing the robot with an anthropomorphic body. However, psychological research on the perception of animacy suggests another crucial factor that might also contribute to attributions of animacy: movement characteristics. In the current study, we investigated how the combination of bodily appearance and movement characteristics of a robot can alter people's attributions of animacy, likability, trustworthiness, and unpleasantness. Participants played games of Tic-Tac-Toe against a robot which (1) either possessed a human form or did not, and (2) either exhibited smooth, lifelike movement or did not. Naturalistic motion was judged to be more animate than mechanical motion, but only when the robot resembled a human form. Naturalistic motion improved likability regardless of the robot's appearance. Finally, a robot with a human form was rated as more disturbing when it moved naturalistically. Robot designers should be aware that movement characteristics play an important role in promoting robots' apparent animacy.
This work was partially supported by the Spanish Government through the project call "Aplicaciones de los robots sociales", DPI2011-26980, from the Spanish Ministry of Economy and Competitiveness. Álvaro Castro-González was partially supported by a grant from Universidad Carlos III de Madrid.
Multi-Agent Strategy Explanations for Human-Robot Collaboration
As robots are deployed in human spaces, it is important that they are able to
coordinate their actions with the people around them. Part of such coordination
involves ensuring that people have a good understanding of how a robot will act
in the environment. This can be achieved through explanations of the robot's
policy. Much prior work in explainable AI and RL focuses on generating
explanations for single-agent policies, but little has been explored in
generating explanations for collaborative policies. In this work, we
investigate how to generate multi-agent strategy explanations for human-robot
collaboration. We formulate the problem using a generic multi-agent planner,
show how to generate visual explanations through strategy-conditioned landmark
states and generate textual explanations by giving the landmarks to an LLM.
Through a user study, we find that when presented with explanations from our
proposed framework, users are able to better explore the full space of
strategies and collaborate more efficiently with new robot partners.
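The textual-explanation step above can be sketched as simple prompt assembly: the strategy-conditioned landmark states are serialized into an ordered list and handed to an LLM with a short instruction. The function name, strategy label, and landmark strings below are hypothetical; the paper's actual planner output and prompt format are not specified here.

```python
def landmark_explanation_prompt(strategy_name, landmarks):
    """Build an LLM prompt that turns an ordered list of landmark
    states for one strategy into a short natural-language explanation."""
    lines = [
        f"The team is following the '{strategy_name}' strategy.",
        "It passes through these landmark states, in order:",
    ]
    lines += [f"{i}. {lm}" for i, lm in enumerate(landmarks, 1)]
    lines.append(
        "In one or two sentences, explain this joint strategy "
        "to a new human partner."
    )
    return "\n".join(lines)

# Hypothetical landmarks for an Overcooked-style cooking task.
prompt = landmark_explanation_prompt(
    "divide-and-conquer",
    ["robot picks up onion", "human starts chopping", "robot plates the dish"],
)
```

The resulting string would be sent to an LLM of choice; pairing it with the corresponding visual landmark rendering gives the multimodal explanation the abstract describes.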
Characterizing Drivers' Peripheral Vision via the Functional Field of View for Intelligent Driving Assistance
Previous work has modeled the combination of foveal and peripheral gaze as the Functional Field of View (FFoV), showing a relationship between FFoV degradation and poor driving outcomes; this makes the FFoV an object of interest for intelligent driving assistance algorithms.
We study the shape and dynamics of the FFoV using a peripheral detection task (PDT) in a virtual reality (VR) driving simulator with licensed drivers in urban driving environments. We find that missed targets occurred vertically higher in the driver's field of view than hits, supporting a vertically asymmetric (upward-inhibited) shape of the FFoV. Additionally, we show that this asymmetry disappears when the same PDT is conducted in a non-driving setting.
Finally, we examine the dynamics of the FFoV, finding that drivers' peripheral target detection ability is inhibited immediately after saccades (general interference rather than tunnel vision) but recovers once drivers fixate for some time.