Object Transfer Point Estimation for Prompt Human to Robot Handovers
Handing over objects is the foundation of many human-robot interaction and collaboration tasks. In the scenario where a human hands an object to a robot, the human chooses where the object is transferred. The robot needs to accurately predict this point of transfer so it can reach out proactively, instead of waiting for the final position to be presented. We first conduct a human-to-robot handover motion study to analyze the effect of user height, arm length, position, orientation and robot gaze on the object transfer point. Our study presents new observations on the effect of the robot's gaze on the point of object transfer. Next, we present an efficient method for predicting the Object Transfer Point (OTP), which synthesizes (1) an offline OTP calculated from human preferences observed in the human-robot motion study with (2) a dynamic OTP predicted from the observed human motion. Our proposed OTP predictor is implemented on a humanoid nursing robot and experimentally validated in human-robot handover tasks. Compared to using only static or dynamic OTP estimators, it has better accuracy in the earlier phase of the handover (up to 45% of the handover motion) and can render fluent handovers with a reach-to-grasp response time (about 3.1 s) close to that of a natural human receiver. In addition, the OTP prediction accuracy is maintained across the robot's visible workspace by utilizing a user-adaptive reference frame.
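The abstract does not state how the offline and dynamic estimates are combined; the sketch below is a minimal, illustrative blend, assuming a simple phase-dependent convex combination of the two estimates (the function names and the linear weighting are assumptions, not the authors' method):

```python
import numpy as np

def blend_otp(static_otp, dynamic_otp, phase):
    """Illustrative blend of an offline (static) OTP with an online (dynamic) OTP.

    static_otp  : (3,) array, OTP predicted offline from user preferences (height, arm length)
    dynamic_otp : (3,) array, OTP extrapolated from the currently observed hand motion
    phase       : float in [0, 1], fraction of the handover motion completed

    Early in the motion the offline estimate dominates; as more of the
    reaching motion is observed, the dynamic estimate takes over.
    """
    w = np.clip(phase, 0.0, 1.0)  # weight on the dynamic estimate
    return (1.0 - w) * np.asarray(static_otp) + w * np.asarray(dynamic_otp)

# Example: halfway through the handover, the two estimates are averaged.
print(blend_otp([0.6, 0.1, 1.2], [0.55, 0.15, 1.25], phase=0.5))
```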
Spatial representation for planning and executing robot behaviors in complex environments
Robots are already improving our well-being and productivity in different applications such as industry, health care and indoor service applications. However, we are still far from developing (and releasing) a fully functional robotic agent that can autonomously survive in tasks that require human-level cognitive capabilities. Robotic systems on the market, in fact, are designed to address specific applications, and can only run pre-defined behaviors to robustly repeat a few tasks (e.g., assembling object parts, vacuum cleaning). Their internal representation of the world is usually constrained to the task they are performing, and does not allow for generalization to other scenarios. Unfortunately, such a paradigm only applies to a very limited set of domains, where the environment can be assumed to be static, and its dynamics can be handled before deployment. Additionally, robots configured in this way will eventually fail if their "handcrafted" representation of the environment does not match the external world.
Hence, to enable more sophisticated cognitive skills, we investigate how to design robots to properly represent the environment and behave accordingly. To this end, we formalize a representation of the environment that enhances the robot's spatial knowledge to explicitly include a representation of its own actions. Spatial knowledge constitutes the core of the robot's understanding of the environment; however, it is not sufficient to represent what the robot is capable of doing in it. To overcome this limitation, we formalize SK4R, a spatial knowledge representation for robots which enhances spatial knowledge with a novel, "functional" point of view that explicitly models robot actions. To this end, we exploit the concept of affordances, introduced to express the opportunities (actions) that objects offer to an agent. To encode affordances within SK4R, we define the "affordance semantics" of actions, which is used to annotate an environment and to represent to which extent robot actions support goal-oriented behaviors.
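To make the idea of affordance semantics concrete, here is a minimal, hypothetical sketch of spatial elements annotated with the actions they support; all class and field names are assumptions for illustration, not the SK4R formalization itself:

```python
from dataclasses import dataclass, field

@dataclass
class Affordance:
    """An action opportunity an element offers, with a support score in [0, 1]."""
    action: str
    support: float

@dataclass
class SpatialElement:
    """A node of the spatial knowledge base annotated with affordance semantics."""
    name: str
    pose: tuple                          # (x, y, theta) in the map frame
    affordances: list = field(default_factory=list)

    def supports(self, action, threshold=0.5):
        """True if this element affords the given action above a threshold."""
        return any(a.action == action and a.support >= threshold
                   for a in self.affordances)

# Hypothetical annotation: a table affords placing objects, a door affords traversal.
table = SpatialElement("table_1", (2.0, 1.5, 0.0),
                       [Affordance("place_object", 0.9), Affordance("traverse", 0.0)])
door = SpatialElement("door_3", (5.0, 0.0, 1.57), [Affordance("traverse", 0.8)])

candidates = [e for e in (table, door) if e.supports("place_object")]
print([e.name for e in candidates])     # -> ['table_1']
```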
We demonstrate the benefits of a functional representation of the environment in multiple robotic scenarios that traverse and contribute to different research topics: robot knowledge representations, social robotics, multi-robot systems, and robot learning and planning. We show how a domain-specific representation that explicitly encodes affordance semantics provides the robot with a more concrete understanding of the environment and of the effects that its actions have on it. The goal of our work is to design an agent that no longer executes an action out of mere pre-defined routine, but rather executes an action because it "knows" that the resulting state leads one step closer to success in its task.
An improvement of robot stiffness-adaptive skill primitive generalization using the surface electromyography in human–robot collaboration
Learning from Demonstration has proved its efficiency in robot skill learning. The generalization targets of most skill-expression models in real scenarios are specified by humans or associated with other perceptual data. Our proposed framework uses Probabilistic Movement Primitives (ProMPs) to resolve the shortcomings of previous work: the coupling between stiffness and motion is inherently established in a single model. Such a framework requires only a small amount of incomplete observation data to infer the entire skill primitive, and it can be used as an intuitive tool for sending generalization commands, enabling collaboration between humans and robots with human-like stiffness modulation strategies on either side. Experiments (human-robot hand-over, object matching, pick-and-place) were conducted to prove the effectiveness of the work. A Myo armband and a Leap Motion camera are used as the surface electromyography (sEMG) and motion capture sensors, respectively. The experiments also show that introducing the sEMG signal into the ProMP model strengthens the ability to distinguish actions with similar movements under observation noise. The use of the mixture model opens possibilities for automating multiple collaborative tasks.
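As an illustration of the underlying mechanism, the sketch below shows standard ProMP conditioning on a single partial observation via Gaussian conditioning over basis-function weights; it omits the stiffness/sEMG coupling and the mixture model, and its function names and parameters are assumptions, not the authors' implementation:

```python
import numpy as np

def gaussian_basis(t, n_basis=10, width=0.02):
    """Normalized Gaussian basis functions evaluated at phase t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-0.5 * (t - centers) ** 2 / width)
    return phi / phi.sum()

def condition_promp(mu_w, Sigma_w, t_obs, y_obs, sigma_y=1e-4):
    """Condition a ProMP weight distribution on a partial observation.

    mu_w, Sigma_w : prior mean/covariance over basis-function weights
    t_obs, y_obs  : observed phase and (noisy) trajectory value at that phase
    sigma_y       : observation noise variance

    Returns the posterior (mean, covariance) over weights, from which the
    remainder of the movement can be generated.
    """
    phi = gaussian_basis(t_obs)[None, :]            # 1 x n_basis
    S = phi @ Sigma_w @ phi.T + sigma_y             # innovation variance
    K = Sigma_w @ phi.T / S                         # Kalman-style gain
    mu_post = mu_w + (K * (y_obs - phi @ mu_w)).ravel()
    Sigma_post = Sigma_w - K @ phi @ Sigma_w
    return mu_post, Sigma_post

# Example: a broad prior conditioned on one early observation y(0.2) = 0.3.
n = 10
mu_post, Sigma_post = condition_promp(np.zeros(n), np.eye(n), t_obs=0.2, y_obs=0.3)
print(np.round(mu_post, 3))
```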