3,422 research outputs found
Learning Human-Robot Collaboration Insights through the Integration of Muscle Activity in Interaction Motion Models
Recent progress in human-robot collaboration makes fast and fluid
interactions possible, even when human observations are partial and occluded.
Methods like Interaction Probabilistic Movement Primitives (ProMPs) model human
trajectories captured through motion capture systems. However, such a
representation does not properly model tasks in which similar motions handle
different objects: under current approaches, a robot would not adapt its pose
and dynamics for proper
handling. We integrate the use of Electromyography (EMG) into the Interaction
ProMP framework and utilize muscular signals to augment the human observation
representation. The contribution of our paper is increased task discernment
when trajectories are similar but tools are different and require the robot to
adjust its pose for proper handling. Interaction ProMPs are used with an
augmented vector that integrates muscle activity. Augmented time-normalized
trajectories are used in training to learn correlation parameters and robot
motions are predicted by finding the best weight combination and temporal
scaling for a task. Collaborative single-task scenarios with similar motions
but different objects were used and compared. In one experiment only joint
angles were recorded; in the other, EMG signals were additionally integrated.
Task recognition was computed for both tasks. Observation state vectors with
augmented EMG signals were able to completely identify differences across
tasks, while the baseline method failed every time. Integrating EMG signals
into collaborative tasks significantly increases the ability of the system to
recognize nuances in the tasks that are otherwise imperceptible, by up to 74.6% in
our studies. Furthermore, the integration of EMG signals for collaboration also
opens the door to a wide class of human-robot physical interactions based on
haptic communication that has been largely unexploited in the field.

Comment: 7 pages, 2 figures, 2 tables. As submitted to Humanoids 201
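The mechanism the abstract describes can be sketched as follows: each demonstration (human joint angles plus EMG channels, concatenated with robot joints) is compressed into a weight vector over time-normalized basis functions, a joint Gaussian is fit over those weights, and conditioning on a partial, EMG-augmented human observation yields a prediction of the robot's motion. This is a minimal NumPy sketch under simplifying assumptions (fixed RBF bandwidth, no temporal-scaling estimation); the function names are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_basis(T, n_basis, width=0.1):
    # Normalized radial basis functions over the phase z in [0, 1].
    z = np.linspace(0, 1, T)[:, None]
    c = np.linspace(0, 1, n_basis)[None, :]
    phi = np.exp(-0.5 * ((z - c) / width) ** 2)
    return phi / phi.sum(axis=1, keepdims=True)        # (T, n_basis)

def fit_weights(demos, phi):
    # Ridge-regress per-dimension basis weights for each demo.
    # demos: (n_demos, T, D), D = human joints + EMG + robot joints.
    A = phi.T @ phi + 1e-6 * np.eye(phi.shape[1])
    return np.stack([np.linalg.solve(A, phi.T @ y).T.ravel()
                     for y in demos])                  # (n_demos, D*n_basis)

def condition_on_observation(mu, Sigma, H, y_obs, noise=1e-4):
    # Gaussian conditioning of the joint weight distribution on the
    # augmented human observation y_obs (joint angles + EMG); the updated
    # mean also predicts the correlated robot-motion weights.
    S = H @ Sigma @ H.T + noise * np.eye(H.shape[0])
    K = Sigma @ H.T @ np.linalg.inv(S)
    return mu + K @ (y_obs - H @ mu), Sigma - K @ H @ Sigma
```

Because the EMG channels enter the same joint Gaussian as the kinematic dimensions, two tasks with near-identical trajectories but different muscle activity produce distinguishable conditioned distributions.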
Human-Robot Collaboration in Automotive Assembly
In the past decades, automation in the automobile production line has significantly increased the efficiency and quality of automotive manufacturing. However, in the automotive assembly stage, most tasks are still accomplished manually by human workers because of the complexity and flexibility of the tasks and the highly dynamic, unstructured workspace. This dissertation aims to improve the level of automation in automotive assembly through human-robot collaboration (HRC). The challenges that have hindered automation in automotive assembly include: the lack of suitable collaborative robotic systems for HRC, especially compact-size, high-payload mobile manipulators; teaching and learning frameworks that enable robots to learn assembly tasks, and to assist humans in accomplishing assembly tasks, from human demonstration; and task-driven high-level robot motion planning frameworks that let the trained robot intelligently and adaptively assist humans in automotive assembly tasks. The technical research toward this goal has resulted in several peer-reviewed publications. Achievements include: 1) a novel collaborative lift-assist robot for automotive assembly; 2) approaches for vision-based robot learning of placing tasks from human demonstrations in assembly; 3) robot learning of assembly tasks and assistance from human demonstrations using a Convolutional Neural Network (CNN); 4) robot learning of assembly tasks and assistance from human demonstrations using Task Constraint-Guided Inverse Reinforcement Learning (TC-IRL); 5) robot learning of assembly tasks from non-expert demonstrations via a Functional Object-Oriented Network (FOON); 6) multi-model sampling-based motion planning for trajectory optimization with execution consistency in manufacturing contexts. The research demonstrates the feasibility of a parallel mobile manipulator, which introduces novel concepts to industrial mobile manipulators for smart manufacturing.
By exploring Robot Learning from Demonstration (RLfD) with both AI-based and model-based approaches, the research also improves robots' learning capabilities in collaborative assembly tasks for both expert and non-expert users. The research on robot motion planning and control in the dissertation promotes safety and human trust in industrial robots in HRC.
Iterative learning of human partner's desired trajectory for proactive human-robot collaboration
A period-varying iterative learning control scheme is proposed for a robotic manipulator to learn a target trajectory that is planned by a human partner but unknown to the robot, a typical scenario in many applications. The proposed method updates the robot's reference trajectory iteratively to minimize the interaction force applied by the human. Although a repetitive human-robot collaboration task is considered, the task period is subject to uncertainty introduced by the human. To address this issue, a novel learning mechanism is proposed to achieve the control objective. Theoretical analysis is performed to prove the performance of the learning algorithm and robot controller. Simulations and experiments on a robotic arm are carried out to show the effectiveness of the proposed method in human-robot collaboration.
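The core update can be illustrated with a toy sketch: the robot tracks its current reference, the human applies a spring-like corrective force toward the trajectory they actually want, and the reference is shifted by that force each iteration until the interaction force vanishes. This hypothetical minimal illustration assumes perfect tracking and a fixed task period (which the paper explicitly does not assume); the learning gain and human stiffness values are made up.

```python
import numpy as np

def ilc_reference_update(x_des, n_iters=30, gain=0.5, k_h=1.0):
    # Iteratively update the robot's reference so the human's
    # corrective interaction force shrinks toward zero.
    r = np.zeros_like(x_des)           # initial reference guess
    errors = []
    for _ in range(n_iters):
        x = r                           # assume perfect tracking of r
        f_h = k_h * (x_des - x)         # human pushes toward desired path
        r = r + gain * f_h              # learning update on the reference
        errors.append(np.abs(x_des - r).max())
    return r, errors
```

With `0 < gain * k_h < 2` the tracking error contracts geometrically, so the reference converges to the human's (never directly communicated) desired trajectory.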
Human-robot co-carrying using visual and force sensing
In this paper, we propose a hybrid framework using visual and force sensing for human-robot co-carrying tasks. Visual sensing is used to obtain human motion, and an observer is designed to estimate the human's control input, which generates the robot's desired motion toward the human's intended motion. An adaptive impedance-based control strategy is proposed for trajectory tracking, with neural networks (NNs) used to compensate for uncertainties in the robot's dynamics. Motion synchronization is achieved, and this approach yields stable and efficient interaction behavior between human and robot, decreases the human's control effort, and avoids interfering with the human during the interaction. The proposed framework is validated on a co-carrying task in simulations and experiments.
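The impedance-based tracking idea can be sketched in one dimension: the robot is rendered as a mass-spring-damper around the desired position, so a human force deflects it compliantly and it settles back when released. The scalar model and the gains are illustrative only; the paper's controller additionally adapts the impedance and uses NNs for dynamic uncertainty, which this sketch omits.

```python
import numpy as np

def simulate_impedance(x_d, f_ext, m=1.0, d=8.0, k=16.0, dt=0.01):
    # 1-D impedance model: m*xdd + d*xd + k*(x - x_d) = f_ext.
    # The robot behaves like a damped spring around the reference x_d,
    # yielding compliantly to the external (human) force f_ext.
    x, v = 0.0, 0.0
    xs = []
    for f in np.atleast_1d(f_ext):
        a = (f - d * v - k * (x - x_d)) / m
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)
```

With no human force the robot converges to the reference; a sustained push of magnitude `f` produces a steady deflection of `f / k`, which is the compliance the co-carrying interaction exploits.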
Trust in Robots
Robots are becoming increasingly prevalent in our daily lives, within our living and working spaces. We hope that robots will take up tedious, mundane or dirty chores and make our lives more comfortable, easy and enjoyable by providing companionship and care. However, robots may pose a threat to human privacy, safety and autonomy; therefore, it is necessary to maintain constant control over the developing technology to ensure the benevolent intentions and safety of autonomous systems. Building trust in (autonomous) robotic systems is thus necessary. The title of this book highlights this challenge: "Trust in robots—Trusting robots". Herein, various notions and research areas associated with robots are unified. The theme "Trust in robots" addresses the development of technology that is trustworthy for users; "Trusting robots" focuses on building a trusting relationship with robots, furthering previous research. These themes and topics are at the core of the PhD program "Trust Robots" at TU Wien, Austria.
Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications
The abstract is provided in the attachment.
Human-Robot Collaboration for Kinesthetic Teaching
Recent industrial interest in producing smaller volumes of products in shorter time frames, in contrast to the mass production of previous decades, motivated the introduction of human-robot collaboration (HRC) in industrial settings, as an attempt to increase flexibility in manufacturing applications by incorporating human intelligence and dexterity into these processes. This thesis presents methods for improving the involvement of human operators in industrial settings where robots are present, with a particular focus on kinesthetic teaching, i.e., manually guiding the robot to define or correct its motion, since it can facilitate non-expert robot programming.

Increasing flexibility in the manufacturing industry implies the loss of a fixed structure in the industrial environment, which increases the uncertainties in the shared workspace between humans and robots. Two methods are proposed in this thesis to mitigate such uncertainty. First, null-space motion was used to increase the accuracy of kinesthetic teaching by reducing joint static friction, or stiction, without altering the execution of the robotic task. This is possible because robots used in HRC, i.e., collaborative robots, are often designed with additional degrees of freedom (DOFs) for greater dexterity. Second, to perform effective corrections of the robot's motion through kinesthetic teaching in partially unknown industrial environments, fast identification of the source of robot-environment contact is necessary. Fast contact detection and classification methods from the literature were evaluated, extended, and modified for use in kinesthetic teaching applications for an assembly task.
For this, collaborative robots that are made compliant with respect to external forces/torques (as an active safety mechanism) were used, and only the robot's embedded sensors were considered.

Moreover, safety is a major concern when robotic motion occurs in an inherently uncertain scenario, especially if humans are present. Therefore, an online variation of the robot's compliant behavior during manual guidance by a human operator was proposed to keep the robot away from undesired parts of its workspace. The proposed method used safety control barrier functions (SCBFs) that account for the rigid-body dynamics of the robot, and its stability was guaranteed using a passivity-based energy-storage formulation that includes a strict Lyapunov function. All presented methods were tested experimentally on a real collaborative robot.
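The null-space idea behind the first method rests on the standard redundancy-resolution formula qdot = J+ * xdot + (I - J+ J) * qdot0: any joint velocity projected into the Jacobian's null space (e.g. a stiction-breaking secondary motion) leaves the end-effector task unchanged. This is a generic sketch of that projection, not the thesis' stiction-reduction controller itself.

```python
import numpy as np

def redundancy_resolution(J, xdot, qdot0):
    # Joint velocities that realize the task-space velocity xdot while
    # projecting the secondary motion qdot0 into the Jacobian's null
    # space, so it does not disturb the end-effector task.
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ xdot + N @ qdot0
```

For a 7-DOF collaborative robot and a 6-D task, the one-dimensional null space is exactly the room the thesis uses to keep joints moving and thus below the stiction threshold during manual guidance.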
Learning Controllers for Reactive and Proactive Behaviors in Human–Robot Collaboration
Designed to safely share the same workspace as humans and assist them in a variety of tasks, the new collaborative robots are targeting manufacturing and service applications that were once considered unattainable. The large diversity of tasks to carry out, the unstructured environments and the close interaction with humans call for collaborative robots to seamlessly adapt their behaviors so as to cooperate with users successfully under different and possibly new situations (characterized, for example, by positions of objects/landmarks in the environment, or by the user's pose). This paper investigates how controllers capable of reactive and proactive behaviors in collaborative tasks can be learned from demonstrations. The proposed approach exploits the temporal coherence and dynamic characteristics of the task observed during the training phase to build a probabilistic model that enables the robot both to react to the user's actions and to lead the task when needed. The method is an extension of the Hidden Semi-Markov Model in which the duration probability distribution is adapted according to the interaction with the user. This Adaptive Duration Hidden Semi-Markov Model (ADHSMM) is used to retrieve a sequence of states governing a trajectory optimization that provides the reference and gain matrices to the robot controller. A proof-of-concept evaluation is first carried out in a pouring task. The proposed framework is then tested in a collaborative task using a 7-DOF backdrivable manipulator.
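The role of explicit duration modeling can be illustrated by sampling state sequences from a semi-Markov chain: each visited state persists for an explicitly sampled duration before transitioning, unlike a plain HMM whose state durations are implicitly geometric. This minimal sketch uses Poisson durations purely for illustration; the paper's actual contribution, adapting the duration distribution online from the interaction with the user, is not shown here.

```python
import numpy as np

def sample_hsmm_states(pi, A, dur_mean, T, rng):
    # Sample a length-T state sequence from a hidden semi-Markov chain:
    # each visited state emits an explicit (here Poisson-distributed)
    # duration before transitioning according to the matrix A.
    s = rng.choice(len(pi), p=pi)
    seq = []
    while len(seq) < T:
        d = 1 + rng.poisson(dur_mean[s])  # dwell time in state s
        seq.extend([s] * d)
        s = rng.choice(len(A), p=A[s])    # next state
    return np.array(seq[:T])
```

In the ADHSMM setting, the retrieved state sequence would then drive a trajectory optimization; here the sketch stops at the state-duration mechanics.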