
    Human-Robot Collaboration in Automotive Assembly

    In the past decades, automation of the automobile production line has significantly increased the efficiency and quality of automotive manufacturing. In the automotive assembly stage, however, most tasks are still performed manually by human workers because of the complexity and flexibility of the tasks and the highly dynamic, unstructured workspace. This dissertation aims to improve the level of automation in automotive assembly through human-robot collaboration (HRC). The challenges that have so far eluded automation in automotive assembly include the lack of suitable collaborative robotic systems for HRC, especially compact, high-payload mobile manipulators; teaching and learning frameworks that enable robots to learn assembly tasks, and how to assist humans in accomplishing them, from human demonstration; and task-driven high-level robot motion planning frameworks that let the trained robot assist humans intelligently and adaptively in automotive assembly tasks. The technical research toward this goal has resulted in several peer-reviewed publications. Achievements include: 1) a novel collaborative lift-assist robot for automotive assembly; 2) approaches for vision-based robot learning of placing tasks from human demonstrations in assembly; 3) robot learning of assembly tasks and assistance from human demonstrations using a Convolutional Neural Network (CNN); 4) robot learning of assembly tasks and assistance from human demonstrations using Task Constraint-Guided Inverse Reinforcement Learning (TC-IRL); 5) robot learning of assembly tasks from non-expert demonstrations via a Functional Object-Oriented Network (FOON); 6) multi-model sampling-based motion planning for trajectory optimization with execution consistency in manufacturing contexts. The research demonstrates the feasibility of a parallel mobile manipulator, which introduces novel concepts for industrial mobile manipulators in smart manufacturing. By exploring Robot Learning from Demonstration (RLfD) with both AI-based and model-based approaches, the research also improves robots' ability to learn collaborative assembly tasks from both expert and non-expert users. The research on robot motion planning and control in this dissertation promotes safety and human trust in industrial robots in HRC.
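
    As a rough illustration of one of the listed directions, the sketch below shows how a vision-based placing skill might be learned from human demonstrations with a small CNN regressor. The architecture, input/output shapes, and names (PlacementCNN, train) are assumptions chosen for illustration, not the dissertation's actual models.

```python
# Minimal illustrative sketch: a small CNN that regresses a planar placement
# pose (x, y, yaw) from an overhead RGB image of the assembly area, trained on
# human-demonstration pairs. All names and shapes are assumptions.
import torch
import torch.nn as nn

class PlacementCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)  # predicted (x, y, yaw) of the placement

    def forward(self, img):
        feat = self.backbone(img).flatten(1)
        return self.head(feat)

def train(model, loader, epochs=10, lr=1e-3):
    """Fit the regressor on demonstration pairs (scene image, placement pose)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for img, pose in loader:      # img: (B, 3, H, W), pose: (B, 3)
            opt.zero_grad()
            loss = loss_fn(model(img), pose)
            loss.backward()
            opt.step()
    return model
```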

    I Can See Your Aim: Estimating User Attention From Gaze For Handheld Robot Collaboration

    This paper explores the estimation of user attention in the setting of a cooperative handheld robot: a robot designed to behave as a handheld tool but with some level of task knowledge. We use a tool-mounted gaze-tracking system which, after modelling via a pilot study, we use as a proxy for estimating the attention of the user. This information is then used to cooperate with users in a task of selecting and engaging with objects on a dynamic screen. Via a video-game setup, we test various degrees of robot autonomy, from fully autonomous, where the robot knows what it has to do and acts, to no autonomy, where the user is in full control of the task. Our results cover performance and subjective metrics and show how the attention model benefits both the interaction and users' preference.
    Comment: this is a corrected version of the one that was published at IROS 201
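
    To make the attention-estimation idea concrete, here is a minimal sketch, under assumed details, of scoring on-screen objects by their proximity to the tracked gaze point and smoothing the scores over frames. The Gaussian model, parameters (sigma_px, decay), and object names are hypothetical and are not the paper's exact method.

```python
# Minimal illustrative sketch of gaze-based attention scoring (assumptions
# throughout): treat the tracked gaze point as the mean of an isotropic 2D
# Gaussian, score each on-screen object by its likelihood of being the attended
# target, and smooth scores over frames to reduce jitter from saccades.
import math

def attention_scores(gaze_xy, objects, sigma_px=60.0):
    """objects: dict name -> (x, y) screen position; returns name -> score in (0, 1]."""
    gx, gy = gaze_xy
    scores = {}
    for name, (ox, oy) in objects.items():
        d2 = (gx - ox) ** 2 + (gy - oy) ** 2
        scores[name] = math.exp(-d2 / (2.0 * sigma_px ** 2))
    return scores

def attended_target(score_history, decay=0.8):
    """Exponentially smooth per-object scores over frames and pick the best."""
    smoothed = {}
    for frame in score_history:
        for name, s in frame.items():
            smoothed[name] = decay * smoothed.get(name, 0.0) + (1 - decay) * s
    return max(smoothed, key=smoothed.get) if smoothed else None

# Example: the handheld robot could engage the returned target when acting
# autonomously, or merely highlight it when the user retains full control.
frames = [attention_scores((310, 205), {"target_A": (300, 200), "target_B": (600, 420)})]
print(attended_target(frames))
```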

    Adapting to Human Preferences to Lead or Follow in Human-Robot Collaboration: A System Evaluation

    With the introduction of collaborative robots, humans and robots can now work together in close proximity and share the same workspace. However, this collaboration presents various challenges that need to be addressed to ensure seamless cooperation between the agents. This paper focuses on task planning for human-robot collaboration, taking into account the human's performance and their preference for following or leading. Unlike conventional task-allocation methods, the proposed system allows both the robot and the human to select and assign tasks to each other. Our previous studies evaluated the proposed framework in a computer simulation environment. This paper extends that research by implementing the algorithm in a real scenario where a human collaborates with a Fetch mobile manipulator robot. We briefly describe the experimental setup, procedure, and implementation of the planned user study. As a first step, we report on a system evaluation study in which the experimenter enacted the different leader/follower behaviours that can occur in a user study. Results show that the robot can adapt and respond appropriately to the different human behaviours enacted by the experimenter. A future user study will evaluate the system with human participants.
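
    A minimal sketch of the adaptive lead/follow idea, assuming a simple running preference estimate rather than the authors' actual planner: the robot updates a belief about whether the human prefers to lead, and either proposes the next task itself or defers to the human's selection. All class and task names (AdaptiveAllocator, fasten_bracket, etc.) are hypothetical.

```python
# Illustrative sketch only: adaptive task allocation that respects the human's
# observed preference for leading or following.
class AdaptiveAllocator:
    def __init__(self, tasks, lead_threshold=0.5, alpha=0.3):
        self.remaining = list(tasks)
        self.p_human_leads = 0.5       # belief that the human prefers to lead
        self.lead_threshold = lead_threshold
        self.alpha = alpha             # learning rate for the belief update

    def observe(self, human_initiated):
        """Update the preference estimate after each allocation round."""
        target = 1.0 if human_initiated else 0.0
        self.p_human_leads += self.alpha * (target - self.p_human_leads)

    def next_action(self, human_choice=None):
        """Follow if the human picked a task; otherwise lead if the belief allows."""
        if human_choice is not None and human_choice in self.remaining:
            self.remaining.remove(human_choice)
            self.observe(human_initiated=True)
            return ("follow", human_choice)
        if self.p_human_leads < self.lead_threshold and self.remaining:
            task = self.remaining.pop(0)
            self.observe(human_initiated=False)
            return ("lead", task)
        return ("wait", None)

# Example round: the human grabs "fasten_bracket", so the robot follows and
# takes a supporting role; later rounds may see the robot propose tasks itself.
alloc = AdaptiveAllocator(["fasten_bracket", "insert_clip", "align_panel"])
print(alloc.next_action(human_choice="fasten_bracket"))
```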