4 research outputs found

    Adaptive timing in a dynamic field architecture for natural human–robot interactions

    Close temporal coordination of actions and goals is crucial for natural and fluent human–robot interaction in collaborative tasks. How to endow an autonomous robot with a basic temporal cognition capacity is an open question. In this paper, we present a neurodynamics approach based on the theoretical framework of dynamic neural fields (DNF), which assumes that timing processes are closely integrated with other cognitive computations. The continuous evolution of neural population activity towards an attractor state provides an implicit sensation of the passage of time. Highly flexible sensorimotor timing can be achieved through manipulations of inputs or initial conditions that affect the speed with which the neural trajectory evolves. We test a DNF-based control architecture in an assembly paradigm in which an assistant hands over a series of pieces that the operator uses, among other parts, in the assembly process. By watching two experts, the robot first learns the serial order and relative timing of the object transfers and then substitutes for the assistant in the collaborative task. A dynamic adaptation rule exploiting the perceived temporal mismatch between the expected and the realized transfer timing allows the robot to quickly adapt its proactive motor timing to the pace of the operator, even when an additional assembly step delays a handover. Moreover, the self-stabilizing properties of the population dynamics support fast internal simulation of the acquired task knowledge, allowing the robot to anticipate serial-order errors.

    This work is financed by national funds through FCT – Fundação para a Ciência e a Tecnologia, I.P., within the scope of the projects "NEUROFIELD" (Ref. PTDC/MAT-APL/31393/2017) and "I-CATER – Intelligent Robotic Coworker Assistant for Industrial Tasks with an Ergonomics Rationale" (Ref. PTDC/EEI-ROB/3488/2021), and the R&D Units Project Scope UIDB/00319/2020 – ALGORITMI Research Centre.
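    To make the timing mechanism concrete, the following is a minimal, illustrative sketch of a one-dimensional Amari-style dynamic neural field in which stronger input speeds up the evolution of the population activity toward an action threshold, so the elapsed time to threshold encodes an interval. All parameter values, the function name `simulate_field`, and the ramp-to-threshold readout are assumptions for illustration only, not the architecture used in the paper.

```python
import numpy as np

# Illustrative 1-D Amari dynamic neural field (not the paper's implementation):
#   tau * du/dt = -u + h + S(x) + integral w(x - x') f(u(x')) dx'

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def simulate_field(input_strength, T=3.0, dt=0.001):
    x = np.linspace(-10, 10, 201)              # field dimension (e.g., transfer location)
    dx = x[1] - x[0]
    u = np.full_like(x, -1.0)                  # start at resting level h = -1
    tau = 0.1
    theta = 1.0                                # decision/action threshold
    # Lateral interaction kernel: local excitation, broader inhibition.
    w = 2.0 * gaussian(x, 0, 1.0) - 0.5 * gaussian(x, 0, 4.0)
    S = input_strength * gaussian(x, 0, 1.0)   # localized external input
    for step in range(int(T / dt)):
        f = 1.0 / (1.0 + np.exp(-5.0 * u))     # sigmoidal firing rate
        conv = np.convolve(f, w, mode="same") * dx
        u += dt / tau * (-u - 1.0 + S + conv)
        if u.max() >= theta:                   # activity reaches threshold -> "act now"
            return step * dt                   # elapsed time implicitly encodes the interval
    return None

# Stronger input drives the neural trajectory toward the attractor faster,
# so the threshold is crossed earlier (i.e., faster motor timing).
print(simulate_field(input_strength=2.0))
print(simulate_field(input_strength=3.0))
```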

    Object Handovers: a Review for Robotics

    This article surveys the literature on human-robot object handovers. A handover is a collaborative joint action in which an agent, the giver, gives an object to another agent, the receiver. The physical exchange starts when the receiver first contacts the object held by the giver and ends when the giver fully releases the object to the receiver. However, important cognitive and physical processes begin before the physical exchange, including establishing an implicit agreement on the location and timing of the exchange. From this perspective, we structure our review into the two main phases delimited by these events: 1) a pre-handover phase, and 2) the physical exchange. We focus our analysis on the two actors (giver and receiver) and report the state of the art for robotic givers (robot-to-human handovers) and robotic receivers (human-to-robot handovers). We report a comprehensive list of qualitative and quantitative metrics commonly used to assess the interaction. While focusing our review on the cognitive level (e.g., prediction, perception, motion planning, learning) and the physical level (e.g., motion, grasping, grip release) of the handover, we also briefly discuss the concepts of safety, social context, and ergonomics. We compare the behaviours displayed during human-to-human handovers to the state of the art of robotic assistants, and identify the major areas in which robotic assistants must improve to reach performance comparable to human interactions. Finally, we propose a minimal set of metrics that should be used to enable a fair comparison among the approaches.

    Comment: Review paper, 19 pages
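    As an illustration of the two-phase structure described above, the following minimal Python sketch encodes a handover as a small state machine whose transitions are the two delimiting events. The class and method names (`Handover`, `receiver_contacts_object`, `giver_releases_object`) are hypothetical and not drawn from the surveyed literature.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Illustrative encoding of the handover phases described in the review;
# names are hypothetical, not an API from any surveyed work.
class Phase(Enum):
    PRE_HANDOVER = auto()       # implicit agreement on location/timing, approach, reaching
    PHYSICAL_EXCHANGE = auto()  # from first receiver contact to full release by the giver
    COMPLETED = auto()

@dataclass
class Handover:
    giver: str
    receiver: str
    phase: Phase = Phase.PRE_HANDOVER

    def receiver_contacts_object(self):
        # The physical exchange starts when the receiver first contacts the object.
        if self.phase is Phase.PRE_HANDOVER:
            self.phase = Phase.PHYSICAL_EXCHANGE

    def giver_releases_object(self):
        # The exchange ends when the giver fully releases the object.
        if self.phase is Phase.PHYSICAL_EXCHANGE:
            self.phase = Phase.COMPLETED

h = Handover(giver="robot", receiver="human")   # robot-to-human handover
h.receiver_contacts_object()
h.giver_releases_object()
print(h.phase)  # Phase.COMPLETED
```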

    Experimental testing of the CogLaboration prototype system for fluent Human-Robot object handover interactions
