Multitask variational autoencoding of human-to-human object handover
Assistive robots that operate alongside humans require the ability to understand and replicate human behaviours during a handover. A handover is defined as a joint action between two participants in which a giver hands an object over to a receiver. In this paper, we present a method for learning human-to-human handovers observed from motion capture data. Given the giver and receiver poses from a single timestep, and the object label in the form of a word embedding, our Multitask Variational Autoencoder jointly forecasts their poses at handover as well as the orientation of the object held by the giver. Our method stands in stark contrast to existing work on human pose forecasting, which employs deep autoregressive models that require a sequence of inputs. Furthermore, our method is novel in that it learns the human pose and the object orientation in a joint manner. Experimental results on the publicly available Handover Orientation and Motion Capture Dataset show that our proposed method outperforms autoregressive baselines for handover pose forecasting by approximately 20%, while being on par for object orientation prediction with a runtime that is 5x faster.
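The architecture described above — one encoder over both poses plus a word embedding, and two decoder heads for pose and object orientation — can be sketched as a single forward pass. This is a minimal, untrained NumPy illustration; all dimensions, weight initialisations, and the quaternion orientation head are assumptions for the sketch, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: one pose vector per participant, a word
# embedding for the object label, and a small latent space.
POSE_DIM, EMB_DIM, LATENT_DIM = 32, 16, 8

def init(shape):
    # Randomly initialised weights stand in for learned parameters.
    return rng.standard_normal(shape) * 0.1

W_enc, b_enc = init((2 * POSE_DIM + EMB_DIM, 2 * LATENT_DIM)), init(2 * LATENT_DIM)
W_pose, b_pose = init((LATENT_DIM, 2 * POSE_DIM)), init(2 * POSE_DIM)
W_orient, b_orient = init((LATENT_DIM, 4)), init(4)  # quaternion head

def forward(giver_pose, receiver_pose, object_embedding):
    """Single-timestep forward pass: encode, sample, decode two task heads."""
    x = np.concatenate([giver_pose, receiver_pose, object_embedding])
    h = x @ W_enc + b_enc
    mu, log_var = h[:LATENT_DIM], h[LATENT_DIM:]
    # Reparameterisation trick: z = mu + sigma * eps
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(LATENT_DIM)
    poses = z @ W_pose + b_pose            # head 1: forecast both poses
    quat = z @ W_orient + b_orient
    quat = quat / np.linalg.norm(quat)     # head 2: unit-quaternion orientation
    return poses[:POSE_DIM], poses[POSE_DIM:], quat

giver_hat, receiver_hat, orientation = forward(
    rng.standard_normal(POSE_DIM), rng.standard_normal(POSE_DIM),
    rng.standard_normal(EMB_DIM))
```

Because both heads decode from the same latent code, the multitask objective couples pose forecasting and orientation prediction, which is the joint learning the abstract highlights.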
Affordance-Aware Handovers With Human Arm Mobility Constraints
Reasoning about object handover configurations allows an assistive agent to
estimate the appropriateness of handover for a receiver with different arm
mobility capacities. While there are existing approaches for estimating the
effectiveness of handovers, their findings are limited to users without arm
mobility impairments and to specific objects. Therefore, current
state-of-the-art approaches are unable to hand over novel objects to receivers
with different arm mobility capacities. We propose a method that generalises
handover behaviours to previously unseen objects, subject to the constraint of
a user's arm mobility levels and the task context. We propose a
heuristic-guided hierarchically optimised cost whose optimisation adapts object
configurations for receivers with low arm mobility. This also ensures that the
robot grasps consider the context of the user's upcoming task, i.e., the usage
of the object. To understand preferences over handover configurations, we
report on the findings of an online study, wherein we presented different
handover methods, including ours, to users with different levels of arm
mobility. We find that people's preferences over handover methods are
correlated with their arm mobility capacities. We encapsulate these preferences
in a statistical relational learning (SRL) model that is able to reason about the most
suitable handover configuration given a receiver's arm mobility and upcoming
task. Using our SRL model, we obtained an average handover accuracy of
when generalising handovers to novel objects.
Comment: Accepted for RA-L 202
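The hierarchically optimised cost described above can be illustrated as a two-level selection: a heuristic first level prunes object configurations the receiver's arm mobility cannot accommodate, and a second level minimises a task-context cost over the remaining candidates. All configuration fields, thresholds, and cost terms below are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical candidate handover configurations: object tilt (degrees)
# and handover height (metres).
candidates = [
    {"tilt": 0,  "height": 1.0},
    {"tilt": 45, "height": 0.8},
    {"tilt": 90, "height": 1.3},
    {"tilt": 0,  "height": 0.7},
]

def mobility_feasible(cfg, max_reach_height):
    """Heuristic first level: prune configurations the receiver cannot reach."""
    return cfg["height"] <= max_reach_height

def task_cost(cfg, task_tilt):
    """Second level: prefer orientations close to the upcoming task's usage pose."""
    return abs(cfg["tilt"] - task_tilt)

def choose_handover(candidates, max_reach_height, task_tilt):
    feasible = [c for c in candidates if mobility_feasible(c, max_reach_height)]
    return min(feasible, key=lambda c: task_cost(c, task_tilt))

# A receiver with low arm mobility (cannot reach above 0.9 m) and an
# upcoming pouring task that favours roughly 45 degrees of tilt:
best = choose_handover(candidates, max_reach_height=0.9, task_tilt=45)
```

The hierarchy matters: feasibility for the receiver is enforced as a hard constraint before task preference is ever scored, so a low-mobility user is never handed an unreachable configuration, however task-optimal it might be.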
Object Handovers: a Review for Robotics
This article surveys the literature on human-robot object handovers. A
handover is a collaborative joint action where an agent, the giver, gives an
object to another agent, the receiver. The physical exchange starts when the
receiver first contacts the object held by the giver and ends when the giver
fully releases the object to the receiver. However, important cognitive and
physical processes begin before the physical exchange, including initiating
implicit agreement with respect to the location and timing of the exchange.
From this perspective, we structure our review into the two main phases
delimited by the aforementioned events: 1) a pre-handover phase, and 2) the
physical exchange. We focus our analysis on the two actors (giver and receiver)
and report the state of the art of robotic givers (robot-to-human handovers)
and the robotic receivers (human-to-robot handovers). We report a comprehensive
list of qualitative and quantitative metrics commonly used to assess the
interaction. While focusing our review on the cognitive level (e.g.,
prediction, perception, motion planning, learning) and the physical level
(e.g., motion, grasping, grip release) of the handover, we also briefly discuss
the concepts of safety, social context, and ergonomics. We compare the
behaviours displayed during human-to-human handovers to the state of the art of
robotic assistants, and identify the major areas of improvement for robotic
assistants to reach performance comparable to human interactions. Finally, we
propose a minimal set of metrics that should be used to enable a fair
comparison among the approaches.
Comment: Review paper, 19 pages
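The review's phase structure is delimited by two events: the receiver first contacting the object, and the giver fully releasing it. A minimal state machine makes these delimiters explicit; the phase and event names are this sketch's own labels, not terminology mandated by the article.

```python
from enum import Enum, auto

class Phase(Enum):
    PRE_HANDOVER = auto()       # communication, approach, reach
    PHYSICAL_EXCHANGE = auto()  # both agents in contact with the object
    DONE = auto()               # giver has fully released the object

def advance(phase, event):
    """Transition only on the two delimiting events described in the review."""
    if phase is Phase.PRE_HANDOVER and event == "receiver_contacts_object":
        return Phase.PHYSICAL_EXCHANGE
    if phase is Phase.PHYSICAL_EXCHANGE and event == "giver_releases_object":
        return Phase.DONE
    return phase  # any other event leaves the phase unchanged

p = Phase.PRE_HANDOVER
p = advance(p, "receiver_contacts_object")
p = advance(p, "giver_releases_object")
```

Note that a release event before contact has no effect, mirroring the review's point that the physical exchange cannot end before it has begun.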
Reasoning and understanding grasp affordances for robot manipulation
This doctoral research focuses on developing new methods that enable an artificial agent
to grasp and manipulate objects autonomously. More specifically, we are using the concept
of affordances to learn and generalise robot grasping and manipulation techniques. [75] defined affordances as the ability of an agent to perform a certain action with an object in a
given environment. In robotics, affordances define the possibility for an agent to perform
actions with an object. Therefore, by understanding the relation between actions, objects
and the effect of these actions, the agent understands the task at hand, providing the robot
with the potential to bridge perception to action. The significance of affordances in robotics
has been studied from varied perspectives, such as psychology and cognitive sciences.
Many efforts have been made to pragmatically employ the concept of affordances as it
provides the potential for an artificial agent to perform tasks autonomously. We start by reviewing and finding common ground amongst the different strategies that use affordances for
robotic tasks. We build on this common ground to provide guidance on including the concept of affordances as a medium to boost the autonomy of an artificial agent. To this end, we
outline common design choices for building an affordance relation, and their implications on
the generalisation capabilities of the agent when facing previously unseen scenarios. Based
on our exhaustive review, we conclude that prior research on object affordance detection
is effective; however, it has, among others, the following technical gaps: (i) the methods are
limited to a single object ↔ affordance hypothesis; (ii) they cannot guarantee task completion or any level of performance when the robot performs the manipulation task alone; and (iii) neither can they do so in collaboration
with other agents. In this research thesis, we propose solutions to these technical challenges.
In an incremental fashion, we start by addressing the limited generalisation capabilities
of the then state-of-the-art methods by strengthening the perception-to-action connection through the construction of a Knowledge Base (KB). We then leverage the information
encapsulated in the KB to design and implement a reasoning and understanding method
based on a statistical relational learner (SRL) that allows us to cope with uncertainty in testing
environments, and thus improve generalisation capabilities in affordance-aware manipulation tasks. The KB in conjunction with our SRL is the basis for our designed solutions
that guarantee task completion when the robot is performing a task alone as well as when in
collaboration with other agents. We finally expose and discuss a range of interesting avenues
that have the potential to advance the capabilities of a robotic agent through the use of the
concept of affordances for manipulation tasks. A summary of the contributions of this thesis
can be found at: https://bit.ly/grasp_affordance_reasonin
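The action-object-effect view of affordances underlying the Knowledge Base can be illustrated with a tiny relational store and two queries. Everything here (the entries, the triple encoding, the helper names) is an invented toy example showing the idea of bridging perception to action, not the thesis's actual KB or SRL.

```python
# A minimal, illustrative knowledge base of affordance relations: each
# (object, action) pair maps to the effect the action is expected to produce.
KB = {
    ("mug", "grasp-handle"): "held-upright",
    ("mug", "grasp-rim"): "held-tilted",
    ("knife", "grasp-handle"): "held-safely",
}

def afforded_actions(kb, obj):
    """Query: which actions does this object afford, and with what effect?"""
    return {action: effect for (o, action), effect in kb.items() if o == obj}

def supports_goal(kb, obj, goal_effect):
    """Does any afforded action achieve the effect the upcoming task needs?"""
    return any(effect == goal_effect
               for effect in afforded_actions(kb, obj).values())

mug_affordances = afforded_actions(KB, "mug")
```

A mug here supports more than one affordance hypothesis (handle grasp and rim grasp), which is exactly the multi-hypothesis capability the thesis identifies as missing from single object ↔ affordance approaches.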