Heterogeneous Learning from Demonstration
The development of human-robot systems able to leverage the strengths of both
humans and their robotic counterparts has been greatly sought after because of
the foreseen, broad-ranging impact across industry and research. We believe the
true potential of these systems cannot be reached unless the robot is able to
act with a high level of autonomy, reducing the burden of manual tasking or
teleoperation. To achieve this level of autonomy, robots must be able to work
fluidly with their human partners, inferring their needs without explicit
commands. This inference requires the robot to be able to detect and classify
the heterogeneity of its partners. We propose a framework for learning from
heterogeneous demonstration based upon Bayesian inference and evaluate a suite
of approaches on a real-world dataset of gameplay from StarCraft II. This
evaluation provides evidence that our Bayesian approach can outperform
conventional methods by up to 12.8%.
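The abstract above frames demonstrator classification as Bayesian inference. As an illustrative sketch only (the demonstrator types, action space, and function names below are assumptions, not the paper's model), one can infer a demonstrator's type by accumulating log-likelihoods of observed actions under each candidate policy:

```python
import numpy as np

# Assumed toy setup: 3 demonstrator "types", each a policy over 4 discrete
# actions. We infer the type behind a demonstration via Bayes' rule.
policies = np.array([
    [0.70, 0.10, 0.10, 0.10],   # type 0 favours action 0
    [0.10, 0.70, 0.10, 0.10],   # type 1 favours action 1
    [0.25, 0.25, 0.25, 0.25],   # type 2 acts uniformly
])

def posterior_over_types(actions, policies, prior=None):
    """Bayesian update: P(type | actions) proportional to P(actions | type) P(type)."""
    n_types = policies.shape[0]
    if prior is None:
        prior = np.full(n_types, 1.0 / n_types)   # uniform prior over types
    log_post = np.log(prior)
    for a in actions:
        log_post += np.log(policies[:, a])        # accumulate per-action likelihood
    log_post -= log_post.max()                    # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# A demonstration dominated by action 1 should point to type 1.
demo = [1, 1, 3, 1, 1]
post = posterior_over_types(demo, policies)
print(post.argmax())  # → 1
```

The same posterior could then weight type-specific predictions of the partner's needs; the real framework is evaluated on StarCraft II gameplay, which this toy example does not attempt to model.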
Q-CP: Learning Action Values for Cooperative Planning
Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g. hyper-redundant robots and groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse of dimensionality in the iterations of a Monte-Carlo Tree Search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among 3 robots, (2) a cooperation scenario between a pair of KUKA YouBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
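The core idea sketched in this abstract is using learned action values both to order exploration and to evaluate states during planning, cutting the cost of rollouts. The following is a minimal single-agent sketch of that combination on an assumed toy chain task; it is not the paper's Q-CP implementation, and all names and the depth-limited search (in place of full Monte-Carlo Tree Search) are simplifications:

```python
import numpy as np

# Toy deterministic chain MDP: states 0..4, actions {0: left, 1: right},
# reward 1 on reaching state 4. Purely illustrative.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

# Stage 1 (stand-in for the learning phase): tabular Q-learning.
def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
            s2, r = step(s, a)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
            if s == GOAL:
                break
    return Q

# Stage 2: depth-limited search that uses Q both to order actions
# (exploration guidance) and to evaluate leaves (instead of random rollouts).
def search(Q, s, depth, gamma=0.9):
    if s == GOAL:
        return 0.0                       # terminal; reward was paid on entry
    if depth == 0:
        return Q[s].max()                # Q-value as leaf evaluation
    best = -np.inf
    for a in np.argsort(-Q[s]):          # try Q-promising actions first
        s2, r = step(s, int(a))
        best = max(best, r + gamma * search(Q, s2, depth - 1, gamma))
    return best

Q = q_learn()
root = 0
vals = [step(root, a)[1] + 0.9 * search(Q, step(root, a)[0], depth=2)
        for a in range(N_ACTIONS)]
best_action = int(np.argmax(vals))
print(best_action)  # → 1 (move right, toward the goal)
```

In the paper's multi-robot setting the search is over joint actions in general-sum games, which is where the Q-guided pruning pays off; this sketch only shows the single-agent mechanics.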
Natural language generation for social robotics: Opportunities and challenges
In the increasingly popular and diverse research area of social robotics, the primary goal is to develop robot agents that exhibit
socially intelligent behaviour while interacting in a face-to-face context with human partners. An important aspect of face-to-face
social conversation is fluent, flexible linguistic interaction: as Bavelas et al. [1] point out, face-to-face dialogue is both the basic
form of human communication and the richest and most flexible, combining unrestricted verbal expression with meaningful
non-verbal acts such as gestures and facial displays, along with instantaneous, continuous collaboration between the speaker
and the listener. In practice, however, most developers of social robots tend not to use the full possibilities of the unrestricted
verbal expression afforded by face-to-face conversation; instead, they generally tend to employ relatively simplistic processes
for choosing the words for their robots to say. This contrasts with the work carried out Natural Language Generation (NLG), the
field of computational linguistics devoted to the automated production of high-quality linguistic content: while this research area
is also an active one, in general most effort in NLG is focussed on producing high-quality written text. This article summarises
the state-of-the-art in the two individual research areas of social robotics and natural language generation. It then discusses
the reasons why so few current social robots make use of more sophisticated generation techniques. Finally, an approach is
proposed for bringing some aspects of NLG into social robotics, concentrating on techniques and tools that are most appropriate
to the needs of socially interactive robots.