12 research outputs found

    Differences of Human Perceptions of a Robot Moving using Linear or Slow in, Slow out Velocity Profiles When Performing a Cleaning Task

    We investigated how a robot moving with different velocity profiles affects a person's perception of it when working together on a task. The two profiles are the standard linear profile and a profile based on the animation principle of slow in, slow out. The investigation was accomplished by running an experiment in a home context where people and the robot cooperated on a clean-up task. We used the Godspeed series of questionnaires to gather people's perceptions of the robot. Average scores for each series appear not to be different enough to reject the null hypotheses, but examining the component items suggests paths for future research. We also discuss the experimental scenario and how it may be used in future research on applying animation techniques to robot motion and improving the legibility of a robot's locomotion.
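    The abstract does not specify the exact easing curve behind the slow in, slow out profile; a minimal Python sketch, assuming a sinusoidal ease (one common choice) and hypothetical distance and duration values, contrasts it with the linear profile:

```python
import numpy as np

def linear_profile(t, T, d):
    """Constant velocity: cover distance d in time T."""
    return np.full_like(t, d / T)

def slow_in_slow_out_profile(t, T, d):
    """Sinusoidal ease-in/ease-out: velocity ramps up from rest, peaks at
    mid-motion, and ramps back down, covering the same distance d in time T
    (the integral of (pi*d)/(2T) * sin(pi*t/T) over [0, T] is exactly d)."""
    return (np.pi * d) / (2 * T) * np.sin(np.pi * t / T)

T, d = 4.0, 2.0                        # hypothetical: 2 m traversed in 4 s
t = np.linspace(0.0, T, 200)
v_lin = linear_profile(t, T, d)
v_sso = slow_in_slow_out_profile(t, T, d)

dt = t[1] - t[0]
print(v_lin.sum() * dt, v_sso.sum() * dt)   # both ≈ 2.0 m: same distance,
                                            # different velocity shaping
```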

    Enabling Robots to Communicate their Objectives

    The overarching goal of this work is to efficiently enable end-users to correctly anticipate a robot's behavior in novel situations. Since a robot's behavior is often a direct result of its underlying objective function, our insight is that end-users need an accurate mental model of this objective function in order to understand and predict what the robot will do. While people naturally develop such a mental model over time through observing the robot act, this familiarization process may be lengthy. Our approach reduces this time by having the robot model how people infer objectives from observed behavior, and then selecting those behaviors that are maximally informative. The problem of computing a posterior over objectives from observed behavior is known as Inverse Reinforcement Learning (IRL), and has been applied to robots learning human objectives. We consider the problem where the roles of human and robot are swapped. Our main contribution is to recognize that unlike robots, humans will not be exact in their IRL inference. We thus introduce two factors to define candidate approximate-inference models for human learning in this setting, and analyze them in a user study in the autonomous driving domain. We show that certain approximate-inference models lead to the robot generating example behaviors that better enable users to anticipate what it will do in novel situations. Our results also suggest, however, that additional research is needed in modeling how humans extrapolate from examples of robot behavior.
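    As a rough sketch of the underlying machinery (the paper's contribution is modeling approximate human IRL inference; the sketch below shows only the exact Boltzmann-rational observer, with all objectives, features, and numbers hypothetical), selecting a maximally informative behavior could look like:

```python
import numpy as np

# Hypothetical setup: candidate objective weights the observer entertains,
# and feature counts for a handful of candidate robot behaviors.
thetas = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.3]])    # candidate objectives
features = np.array([[2.0, 0.5], [0.4, 1.8], [1.2, 1.1]])  # per-behavior features
beta = 2.0            # rationality: higher -> behavior assumed near-optimal

def observer_posterior(behavior_idx, prior):
    """Boltzmann-rational IRL update: P(theta | b) ∝ P(b | theta) P(theta),
    with P(b | theta) ∝ exp(beta * theta · features(b)), normalized over
    the candidate behaviors."""
    rewards = features @ thetas.T                  # (behaviors, thetas)
    lik = np.exp(beta * rewards)
    lik /= lik.sum(axis=0, keepdims=True)          # normalize over behaviors
    post = lik[behavior_idx] * prior
    return post / post.sum()

true_theta = 0
prior = np.ones(len(thetas)) / len(thetas)
# Pick the behavior that places the most posterior mass on the true objective.
best = max(range(len(features)),
           key=lambda b: observer_posterior(b, prior)[true_theta])
print(best, observer_posterior(best, prior))
```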

    Deep neural network approach in human-like redundancy optimization for anthropomorphic manipulators

    Human-like behavior has emerged in robotics as a way to improve the quality of Human-Robot Interaction (HRI). For imitating human-like behavior, kinematic mapping between a human arm and a robot manipulator is one of the popular solutions. To fulfill this requirement, a reconstruction method called swivel motion was adopted to achieve human-like imitation. This approach models the regression relationship between the robot's pose and the swivel motion angle, then reproduces human-like swivel motion using the manipulator's redundant degrees of freedom. This characteristic holds for most redundant anthropomorphic robots. Although artificial neural network (ANN) based approaches show moderate robustness, their predictive performance is limited. In this paper, we propose a novel deep convolutional neural network (DCNN) structure to enhance the reconstruction and reduce online prediction time. Finally, we used the trained DCNN model for redundancy control of a 7-DoF anthropomorphic robot arm (LWR4+, KUKA, Germany) for validation. A demonstration is presented to show the human-like behavior of the anthropomorphic manipulator. The proposed approach can also be applied to control other anthropomorphic robot manipulators in industrial or biomedical engineering applications.
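    The abstract does not give the DCNN architecture; as a minimal PyTorch sketch of the general idea (a convolutional regressor from a 7-value end-effector pose, e.g. position plus quaternion, to a scalar swivel angle; all layer sizes are assumptions), one could write:

```python
import torch
import torch.nn as nn

class SwivelAngleCNN(nn.Module):
    """Illustrative 1-D convolutional regressor: end-effector pose (7 values,
    e.g. position + quaternion) -> predicted swivel angle in radians."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, pose):            # pose: (batch, 7)
        return self.net(pose.unsqueeze(1))

model = SwivelAngleCNN()
pose = torch.randn(8, 7)                # batch of 8 dummy poses
print(model(pose).shape)                # torch.Size([8, 1])
```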

    Biological Plausibility of Arm Postures Influences the Controllability of Robotic Arm Teleoperation

    Objective: We investigated how participants controlling a humanoid robotic arm's 3D endpoint position by moving their own hand are influenced by the robot's postures. We hypothesized that control would be facilitated (impeded) by biologically plausible (implausible) postures of the robot. Background: Kinematic redundancy, whereby different arm postures achieve the same goal, means that a robotic arm or prosthesis could theoretically be controlled with fewer signals than it has joints. However, incongruency between a robot's motion and our own is known to interfere with movement production. Hence, we expect the human-likeness of a robotic arm's postures during endpoint teleoperation to influence controllability. Method: Twenty-two able-bodied participants performed a target-reaching task with a robotic arm whose endpoint's 3D position was controlled by moving their own hand. They completed a two-condition experiment in which the robot displayed either biologically plausible or implausible postures. Results: Upon initial practice in the experiment's first part, endpoint trajectories were faster and shorter when the robot displayed human-like postures. However, these effects did not persist in the second part, where performance with implausible postures appeared to have benefited from initial practice with plausible ones. Conclusion: Humanoid robotic arm endpoint control is impaired by biologically implausible joint coordinations during initial familiarization but not afterwards, suggesting that the human-likeness of a robot's postures is more critical for control in this initial period. Application: These findings provide insight for the design of robotic arm teleoperation and prosthesis control schemes, in order to favor better familiarization and control by their users.
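    The abstract does not describe the controller internals, but the standard mechanism by which endpoint teleoperation can bias a redundant arm toward a preferred (e.g., human-like) posture is null-space projection of a posture term; a toy numpy sketch, with every value hypothetical:

```python
import numpy as np

def redundancy_resolved_step(J, x_err, q, q_preferred, k_null=1.0):
    """One differential-IK step: track the endpoint error through the Jacobian
    pseudoinverse, and spend the leftover null-space freedom pulling the
    joints toward a preferred posture:
        dq = J+ @ dx + (I - J+ @ J) @ z,   z = k * (q_preferred - q)
    The second term moves the joints without moving the endpoint."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    z = k_null * (q_preferred - q)
    return J_pinv @ x_err + null_proj @ z

# Toy case: a 3-DoF planar arm is redundant for a 2-D endpoint task.
J = np.array([[-1.2, -0.7, -0.2],
              [ 0.9,  0.4,  0.1]])
dq = redundancy_resolved_step(J,
                              x_err=np.array([0.01, -0.02]),
                              q=np.zeros(3),
                              q_preferred=np.array([0.3, -0.5, 0.8]))
print(dq)   # joint step that serves both the target and the posture
```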

    Human-Robot Collaboration in Automotive Assembly

    In the past decades, automation in the automobile production line has significantly increased the efficiency and quality of automotive manufacturing. However, in the automotive assembly stage, most tasks are still accomplished manually by human workers because of the complexity and flexibility of the tasks and the highly dynamic, unstructured workspace. This dissertation aims to improve the level of automation in automotive assembly through human-robot collaboration (HRC). The challenges that have eluded automation in automotive assembly include: the lack of collaborative robotic systems suitable for HRC, especially compact, high-payload mobile manipulators; teaching and learning frameworks that enable robots to learn assembly tasks, and to assist humans in accomplishing them, from human demonstration; and task-driven high-level robot motion planning frameworks that allow the trained robot to assist humans intelligently and adaptively in automotive assembly tasks. The technical research toward this goal has resulted in several peer-reviewed publications. Achievements include: 1) a novel collaborative lift-assist robot for automotive assembly; 2) approaches for vision-based robot learning of placing tasks from human demonstrations in assembly; 3) robot learning of assembly tasks and assistance from human demonstrations using Convolutional Neural Networks (CNN); 4) robot learning of assembly tasks and assistance from human demonstrations using Task Constraint-Guided Inverse Reinforcement Learning (TC-IRL); 5) robot learning of assembly tasks from non-expert demonstrations via a Functional Object-Oriented Network (FOON); 6) multi-model sampling-based motion planning for trajectory optimization with execution consistency in manufacturing contexts. The research demonstrates the feasibility of a parallel mobile manipulator, which introduces novel concepts to industrial mobile manipulators for smart manufacturing. By exploring Robot Learning from Demonstration (RLfD) with both AI-based and model-based approaches, the research also improves robots' learning capabilities on collaborative assembly tasks for both expert and non-expert users. The research on robot motion planning and control in the dissertation promotes safety and human trust in industrial robots in HRC.