
    Learning Human-Robot Collaboration Insights through the Integration of Muscle Activity in Interaction Motion Models

    Recent progress in human-robot collaboration makes fast and fluid interactions possible, even when human observations are partial and occluded. Methods like Interaction Probabilistic Movement Primitives (ProMPs) model human trajectories through motion capture systems. However, such representations do not properly model tasks in which similar motions handle different objects: under current approaches, the robot would not adapt its pose and dynamics for proper handling. We integrate Electromyography (EMG) into the Interaction ProMP framework and use muscular signals to augment the human observation representation. The contribution of this paper is increased task discernment when trajectories are similar but the tools differ and require the robot to adjust its pose for proper handling. Interaction ProMPs are used with an observation vector augmented with muscle activity. Augmented, time-normalized trajectories are used in training to learn correlation parameters, and robot motions are predicted by finding the best weight combination and temporal scaling for a task. We compared collaborative single-task scenarios with similar motions but different objects: in one experiment only joint angles were recorded, while in the other EMG signals were additionally integrated, and task recognition was computed for both. Observation state vectors augmented with EMG signals were able to completely identify the differences across tasks, while the baseline method failed every time. Integrating EMG signals into collaborative tasks significantly increases the system's ability to recognize nuances in the tasks that are otherwise imperceptible, by up to 74.6% in our studies. Furthermore, the integration of EMG signals for collaboration also opens the door to a wide class of human-robot physical interactions based on haptic communication that has been largely unexploited in the field.
    Comment: 7 pages, 2 figures, 2 tables. As submitted to Humanoids 201
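    The conditioning step this abstract describes can be summarized in a few lines: human joints, EMG channels, and robot joints share one weight distribution over basis functions, and observing only the human+EMG rows yields a posterior that predicts the robot's motion. Below is a minimal sketch of that idea; the basis functions, dimensions, noise level, and example values are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Minimal Interaction ProMP sketch with an EMG-augmented observation vector.
# All dimensions and noise levels here are illustrative assumptions.

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized RBF basis evaluated at phase t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t - centers) ** 2 / (2 * width))
    return phi / phi.sum()

def block_phi(t, n_dims, n_basis=10):
    """Block-diagonal feature matrix: one RBF row per DoF/EMG channel."""
    phi = rbf_features(t, n_basis)
    return np.kron(np.eye(n_dims), phi[None, :])  # (n_dims, n_dims * n_basis)

def condition(mu_w, Sigma_w, t_obs, y_obs, obs_dims, n_dims, sigma_y=1e-3):
    """Condition the weight distribution on a partial observation y_obs
    covering obs_dims (human joints + EMG channels) at phase t_obs."""
    H = block_phi(t_obs, n_dims)[obs_dims, :]          # keep observed rows only
    S = H @ Sigma_w @ H.T + sigma_y * np.eye(len(obs_dims))
    K = Sigma_w @ H.T @ np.linalg.solve(S, np.eye(len(obs_dims)))
    mu_new = mu_w + K @ (y_obs - H @ mu_w)
    Sigma_new = Sigma_w - K @ H @ Sigma_w
    return mu_new, Sigma_new

# Usage: stack [human joints | EMG channels | robot joints] as n_dims;
# observing only the human+EMG rows lets the posterior predict robot motion.
n_dims, n_basis = 9, 10                    # 3 joints + 2 EMG + 4 robot DoFs (assumed)
mu_w = np.zeros(n_dims * n_basis)
Sigma_w = np.eye(n_dims * n_basis)         # would be learned from demonstrations
y = np.array([0.1, 0.2, 0.0, 0.5, 0.4])    # human joints + EMG at phase 0.3
mu_post, _ = condition(mu_w, Sigma_w, 0.3, y, obs_dims=list(range(5)), n_dims=n_dims)
robot_pose = block_phi(0.6, n_dims) @ mu_post  # predicted full state at phase 0.6
```

    The EMG channels enter only as extra rows of the observation vector, which is why the same conditioning machinery can separate two tasks whose joint trajectories look alike but whose muscle activity differs.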

    A Human-in-the-Loop Cyber-Physical System for Collaborative Assembly in Smart Manufacturing

    Industry 4.0 arose with the introduction of cyber-physical systems (CPS) and the Internet of Things (IoT) into manufacturing systems. CPS represent self-controlled physical processes with tight networking capabilities and efficient interfaces for human interaction. The interactive dimension of CPS reaches its maximum when defined in terms of natural human-machine interfaces (NHMI), i.e., those reducing the technological barriers required for the interaction. This paper presents an NHMI that brings human decision-making capabilities inside the cybernetic control loop of a smart manufacturing assembly system. The interface allows the user to control, coordinate, and cooperate with an industrial cobot during task execution.
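    As a rough illustration of what "human decision-making inside the control loop" can mean operationally, the sketch below shows a loop that drains human commands from the interface before each action, so the operator can pause, resume, or reassign tasks at any cycle. The command names and the AssemblyLoop structure are invented for illustration and are not the paper's NHMI.

```python
import queue
from dataclasses import dataclass

# Minimal human-in-the-loop control loop sketch; command vocabulary assumed.

@dataclass
class HumanCommand:
    kind: str      # e.g. "pause", "resume", "reassign" (illustrative)
    payload: dict

class AssemblyLoop:
    def __init__(self, tasks, command_queue):
        self.tasks = list(tasks)
        self.commands = command_queue  # fed by the natural interface (gesture/voice)
        self.paused = False

    def step(self):
        # Drain pending human decisions before acting: the human stays
        # inside the control loop rather than outside it.
        while not self.commands.empty():
            cmd = self.commands.get_nowait()
            if cmd.kind == "pause":
                self.paused = True
            elif cmd.kind == "resume":
                self.paused = False
            elif cmd.kind == "reassign":
                # Human overrides allocation: put a task at the front of the queue.
                self.tasks.insert(0, cmd.payload["task"])
        if not self.paused and self.tasks:
            task = self.tasks.pop(0)
            print(f"cobot executing: {task}")

# Usage: an NHMI thread would push HumanCommand objects into the queue.
q = queue.Queue()
loop = AssemblyLoop(["pick_bracket", "hold_panel"], q)
q.put(HumanCommand("pause", {}))
loop.step()                        # paused: no task executed
q.put(HumanCommand("resume", {}))
loop.step()                        # executes "pick_bracket"
```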

    Opportunities for using eye tracking technology in manufacturing and logistics: Systematic literature review and research agenda

    Workers play essential roles in manufacturing and logistics. Releasing workers from routine tasks and enabling them to focus on creative, value-adding activities can enhance their performance and wellbeing, and it is also key to the successful implementation of Industry 4.0. One technology that can help identify patterns of worker-system interaction is eye tracking (ET), a non-intrusive technology for measuring human eye movements. ET can provide moment-by-moment insights into the cognitive state of the subject during task execution, which can improve our understanding of how humans behave and make decisions within complex systems; it also enables exploration of the subject's mode of interaction with the working environment. Earlier research has investigated the use of ET in manufacturing and logistics, but the literature is fragmented and has not yet been synthesized in a literature review. This article therefore conducts a systematic literature review to explore the applications of ET, summarise its benefits, and outline future research opportunities for ET in manufacturing and logistics. We first propose a conceptual framework to guide our study and then conduct a systematic literature search in scholarly databases, obtaining 71 relevant papers. Building on the proposed framework, we systematically review the use of ET and categorize the identified papers according to their application in manufacturing (product development, production, quality inspection) and logistics. Our results reveal that ET has several use cases in the manufacturing sector, but that its application in logistics has not yet been studied extensively. We summarize the benefits of using ET in terms of process performance, human performance, and work environment and safety, and also discuss the methodological characteristics of the ET literature as well as typical ET measures used. We conclude by illustrating future avenues for ET research in manufacturing and logistics.
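    The review's mention of typical ET measures can be made concrete with a small example. The sketch below computes dwell time per area of interest (AOI) from raw gaze samples, one of the most common ET measures; the sample rate, AOI boxes, and synthetic gaze data are assumptions for illustration.

```python
import numpy as np

# Illustrative computation of a common eye-tracking measure: dwell time per
# area of interest (AOI). Sample rate, AOI boxes, and gaze data are assumed.

def dwell_time_per_aoi(gaze_xy, aois, sample_rate_hz=60.0):
    """gaze_xy: (N, 2) gaze points in screen coordinates.
    aois: dict mapping AOI name -> (x_min, y_min, x_max, y_max).
    Returns seconds of gaze spent inside each AOI."""
    dt = 1.0 / sample_rate_hz
    dwell = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
                  (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
        dwell[name] = inside.sum() * dt
    return dwell

# Example: two AOIs on a 1920x1080 inspection screen, 10 s of 60 Hz samples.
rng = np.random.default_rng(0)
gaze = np.column_stack([rng.uniform(0, 1920, 600), rng.uniform(0, 1080, 600)])
aois = {"part": (200, 200, 800, 800), "instructions": (1200, 100, 1800, 500)}
print(dwell_time_per_aoi(gaze, aois))
```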

    Multimodal sensor-based human-robot collaboration in assembly tasks

    Get PDF
    This work presents a framework for Human-Robot Collaboration (HRC) in assembly tasks that uses multimodal sensing, perception, and control methods. First, vision sensing is employed for user identification to determine the collaborative task to be performed. Second, assembly actions and hand gestures are recognised using wearable inertial measurement units (IMUs) and convolutional neural networks (CNNs) to identify when robot collaboration is needed and to bring the next object to the user for assembly; if collaboration is not required, the robot performs a solo task. Third, the robot arm uses time-domain features from tactile sensors to detect when an object has been touched and grasped for handover actions in the assembly process. These multimodal sensors and computational modules are integrated into a layered control architecture for HRC assembly tasks. The proposed framework is validated in real time using a Universal Robots arm (UR3) that collaborates with humans to assemble two types of objects, 1) a box and 2) a small chair, and works on a solo task of moving a stack of Lego blocks when collaboration with the user is not needed. The experiments show that the robot is capable of sensing and perceiving the state of the surrounding environment using multimodal sensors and computational methods, and of acting and collaborating with humans to complete assembly tasks successfully.
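    As an illustration of the tactile stage described above, the sketch below computes simple time-domain features over a window of tactile readings and applies a threshold rule to decide when the object has been grasped during handover. The feature set is in the spirit of the description, but the thresholds and the read_tactile_window/open_gripper callbacks are illustrative assumptions, not the paper's trained detector.

```python
import numpy as np

# Sketch of tactile grasp detection from time-domain features; the threshold
# rule below stands in for whatever classifier the framework actually trains.

def time_domain_features(window):
    """window: (T, n_taxels) array of tactile readings over a short interval."""
    return np.array([
        window.mean(),                           # mean pressure
        window.std(),                            # variability
        np.abs(np.diff(window, axis=0)).mean(),  # mean absolute first difference
    ])

def object_grasped(window, mean_thresh=0.3, diff_thresh=0.05):
    """Heuristic stand-in: sustained pressure with settled dynamics
    suggests the user has taken hold of the object."""
    mean_p, _, mean_diff = time_domain_features(window)
    return mean_p > mean_thresh and mean_diff < diff_thresh

def handover(read_tactile_window, open_gripper):
    """Release the object only once the grasp is detected.
    Both arguments are hypothetical callbacks into the robot stack."""
    while True:
        if object_grasped(read_tactile_window()):
            open_gripper()
            break
```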