5 research outputs found

    A Model for Human-Human Collaborative Object Manipulation and Its Application to Human-Robot Interaction

    No full text
    During collaborative object manipulation, the interaction forces provide a communication channel through which humans coordinate their actions. For robots to engage in physical collaboration with humans, this coordination process must be understood. Unfortunately, there is no intrinsic way to define the interaction forces. In this study, we propose a model for computing the interaction force during a dyadic cooperative object manipulation task. The model is derived directly from existing theories of human arm movements. The results of a user study with 22 human subjects support the validity of the proposed model. The model is then embedded in a control strategy that enables a robot to engage in a cooperative task with a human. Performance evaluation of the controller in simulation shows that the control strategy is a promising candidate for cooperative human-robot interaction.

    Failure Recovery in Robot-Human Object Handover

    No full text
    Object handover is a common physical interaction between humans and is thus also of significant interest for human-robot interaction. In this paper, we focus on robot-to-human object handover. The main challenge is to reduce the failure rate, i.e., to ensure that the object does not fall (object safety) while allowing the human to easily acquire the object (smoothness). To endow the robot with a failure recovery mechanism, we investigated how humans detect failure during the transfer phase of a handover. A human study showed that a human giver relies primarily on vision rather than haptic sensing to detect the fall of the object. Motivated by this finding, we propose a robotic handover system consisting of a motion sensor attached to the robot's gripper, a force sensor at the base of the gripper, and a controller capable of regrasping the object if it starts falling. The proposed system is implemented on a Baxter robot and is shown to achieve a smooth and safe handover.
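    The regrasp mechanism described in the abstract can be illustrated with a minimal sketch. This is not the paper's controller; the sensor signals, thresholds, and `SimpleGripper` class are illustrative assumptions: the controller regrasps when the gripper-mounted sensor reports downward motion of the object while the force sensor shows the human has not yet taken hold, and releases once the human's pulling force is large enough.

    ```python
    # Hypothetical sketch of a regrasp-on-fall handover controller.
    # Thresholds and sensor interfaces are assumptions, not the paper's values.

    FALL_VELOCITY_THRESHOLD = 0.05  # m/s of downward object motion treated as a slip
    GRIP_FORCE_THRESHOLD = 0.5      # N of human pull treated as "human is holding"

    class SimpleGripper:
        """Stand-in for a robot gripper with open/close commands."""
        def __init__(self):
            self.closed = True  # transfer phase starts with the robot holding

        def close(self):
            self.closed = True

        def open(self):
            self.closed = False

    def should_regrasp(object_velocity_down: float, human_pull_force: float) -> bool:
        """Regrasp if the object is moving downward while the human
        has not applied enough pulling force to be considered holding it."""
        falling = object_velocity_down > FALL_VELOCITY_THRESHOLD
        human_holding = human_pull_force > GRIP_FORCE_THRESHOLD
        return falling and not human_holding

    def handover_step(gripper: SimpleGripper,
                      object_velocity_down: float,
                      human_pull_force: float) -> str:
        """One control cycle during the transfer phase of a handover."""
        if should_regrasp(object_velocity_down, human_pull_force):
            gripper.close()  # recover: regrasp the slipping object
            return "regrasp"
        if human_pull_force > GRIP_FORCE_THRESHOLD:
            gripper.open()   # human has the object: release smoothly
            return "release"
        return "hold"
    ```

    In this sketch the fall cue is a motion signal rather than a haptic one, mirroring the study's finding that vision/motion dominates fall detection during transfer.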

    Find Task Corpus

    No full text
    The dataset contains annotated interactions between nursing students (labeled HEL) and elderly individuals (labeled ELD) receiving assistance with activities of daily living (ADLs).

    Multimodal Reinforcement Learning Human Study

    No full text
    We performed a human user study in which 9 healthy adults were recruited to interact with our HEL agent. Each subject performed 4 to 5 trials (entire interactions) with the HEL agent, for a total of 42 trials. The hypothetical experiment environment is a room with a drawer, a shelf, and a cabinet. The user can choose between red, green, and yellow cups and red, green, yellow, and white balls. At the beginning of each trial, the objects are randomly scattered across the different locations. The user knows only that the aforementioned locations and objects are in the room, but not which item is located where. Subjects were instructed to choose the object of interest at the beginning of the trial and to guide the agent through the different locations to find it.
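    The trial setup described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code; the function and variable names are assumptions: at the start of each trial, every cup and ball is assigned a random location, and the subject's object of interest is drawn from the same pool.

    ```python
    import random

    # Hypothetical sketch of the study's trial initialization.
    LOCATIONS = ["drawer", "shelf", "cabinet"]
    CUPS = [f"{c} cup" for c in ("red", "green", "yellow")]
    BALLS = [f"{c} ball" for c in ("red", "green", "yellow", "white")]

    def start_trial(seed=None):
        """Scatter all objects at random among the locations and pick a target.

        The subject knows which locations and objects exist, but the
        placement mapping is hidden from them at trial start.
        """
        rng = random.Random(seed)
        placement = {obj: rng.choice(LOCATIONS) for obj in CUPS + BALLS}
        target = rng.choice(CUPS + BALLS)  # subject's chosen object of interest
        return placement, target
    ```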