
    Learning Algorithm Design for Human-Robot Skill Transfer

    In this research, we develop an intelligent learning scheme for human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot and a KUKA iiwa robot to verify the effectiveness of the proposed design.

    During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) for each arm. The joint motions of the operator's head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via the forward and inverse kinematics. In addition, to improve the tracking performance, a Kalman filter is further employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using the vector approach to accomplish a specific motion capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and programmed through the Processing software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator's arm movements. For this purpose, the armbands help to recognize and calculate the precise velocity of the operator's arm motion. Additionally, a neural network based adaptive controller is designed and implemented on the Baxter robot to validate its teleoperation.

    Subsequently, an enhanced teaching interface has been developed for the robot using DMP and GMR. Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data is sent to a remote PC for teleoperating the Baxter robot. At this stage, the DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used for the preprocessing of the data recorded on the robot platform, and GMM is employed for the evaluation of DMP to generate multiple patterns after the completion of the teaching process. Next, we apply the GMR algorithm to generate a synthesized trajectory that minimizes position errors in three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively.

    Finally, an optimized DMP is added to the teaching interface. A character recombination technology based on DMP segmentation driven by verbal commands has also been developed and incorporated in a Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it to complete the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data. For modelling and overall movement control, DMP is chosen. The GMM is used to generate multiple patterns after implementing the teaching process. Next, we employ the GMR algorithm to reduce position errors in the 3D space after a synthesized trajectory has been generated. The Baxter robot, remotely controlled via the User Datagram Protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, the Dragon NaturallySpeaking software is adopted to transcribe the voice data. The proposed approach has been verified by enabling the Baxter robot to perform a writing task, drawing new characters after the robot has been taught to write only one character.
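
    The abstract above centres on the discrete DMP formulation for modelling and generalizing demonstrated movements. Below is a minimal one-DOF Python sketch of that standard formulation; the gain values and basis-function count are illustrative assumptions, not parameters reported in the work.

    import numpy as np

    class DMP1D:
        """Minimal discrete Dynamic Movement Primitive for one DOF (a sketch
        of the standard formulation; alpha, beta, n_basis are assumptions)."""

        def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
            self.alpha, self.beta, self.alpha_x = alpha, beta, alpha_x
            self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centres
            self.h = 1.0 / np.gradient(self.c) ** 2                 # basis widths
            self.w = np.zeros(n_basis)

        def _forcing(self, x, y0, g):
            psi = np.exp(-self.h * (x - self.c) ** 2)
            return x * (g - y0) * (psi @ self.w) / (psi.sum() + 1e-10)

        def fit(self, y, dt):
            # Learn basis weights from one demonstration y(t) by locally
            # weighted regression on the desired forcing term.
            y0, g, tau = y[0], y[-1], len(y) * dt
            yd = np.gradient(y, dt)
            ydd = np.gradient(yd, dt)
            x = np.exp(-self.alpha_x * np.arange(len(y)) * dt / tau)  # phase
            f_t = tau**2 * ydd - self.alpha * (self.beta * (g - y) - tau * yd)
            psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)
            xi = x * (g - y0)
            num = (psi * (xi * f_t)[:, None]).sum(axis=0)
            den = (psi * (xi ** 2)[:, None]).sum(axis=0) + 1e-10
            self.w = num / den

        def rollout(self, y0, g, tau, dt, n_steps):
            # Integrate the transformation system to reproduce or
            # generalise the motion towards a (possibly new) goal g.
            y, z, x, traj = y0, 0.0, 1.0, []
            for _ in range(n_steps):
                zd = self.alpha * (self.beta * (g - y) - z) + self._forcing(x, y0, g)
                z += zd / tau * dt
                y += z / tau * dt
                x += -self.alpha_x * x / tau * dt
                traj.append(y)
            return np.array(traj)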

    Teleoperation control of Baxter robot using Kalman filter-based sensor fusion

    A Kalman filter has been successfully applied to fuse the motion capture data collected from a Kinect sensor and a pair of MYO armbands to teleoperate a robot. A new strategy utilizing the vector approach has been developed to accomplish a specific motion capture task. The arm motion of the operator is captured by a Kinect sensor and programmed with the Processing software. Two MYO armbands with embedded inertial measurement units are worn on the operator's arm to detect the upper arm motion of the human operator; this is used to recognize and to calculate the precise speed of the physical motion of the operator's arm. The User Datagram Protocol (UDP) is employed to send the human movement to a simulated Baxter robot arm for teleoperation. To obtain the joint angles of the human limb using the vector approach, RosPy and Python scripting have been utilized. A series of experiments has been conducted to test the performance of the proposed technique, which provides the basis for the teleoperation of the simulated Baxter robot.
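
    A minimal sketch of the fusion idea, assuming a per-axis constant-velocity model in which the Kinect supplies a noisy position measurement and the MYO armband IMU supplies a velocity estimate; the noise covariances below are illustrative, not values from the paper.

    import numpy as np

    def make_filter(dt, q=1e-3, r_pos=5e-3, r_vel=1e-2):
        F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity dynamics
        H = np.eye(2)                              # observe position and velocity
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],  # white-acceleration process noise
                          [dt**2 / 2, dt]])
        R = np.diag([r_pos, r_vel])                # Kinect / MYO measurement noise
        return F, H, Q, R

    def kf_step(x, P, z, F, H, Q, R):
        # One predict/update cycle; z = [kinect_position, myo_velocity].
        x = F @ x                                  # predict state
        P = F @ P @ F.T + Q                        # predict covariance
        y = z - H @ x                              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # Usage: one filter per Cartesian axis, stepped at the Kinect frame rate.
    x, P = np.zeros(2), np.eye(2)
    F, H, Q, R = make_filter(dt=1 / 30)
    for z in (np.array([0.10, 0.05]), np.array([0.11, 0.06])):  # stand-in samples
        x, P = kf_step(x, P, z, F, H, Q, R)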

    Motion and emotion estimation for robotic autism intervention.

    Robots have recently emerged as a novel approach to treating autism spectrum disorder (ASD). A robot can be programmed to interact with children with ASD in order to reinforce positive social skills in a non-threatening environment. In prior work, robots were employed in interaction sessions with ASD children, but their sensory and learning abilities were limited, while a human therapist was heavily involved in “puppeteering” the robot. The objective of this work is to create the next-generation autism robot that includes several new interactive and decision-making capabilities that are not found in prior technology. Two of the main features that this robot would need to have are the ability to quantitatively estimate the patient’s motion performance and to correctly classify their emotions. This would allow for the potential diagnosis of autism and the ability to help autistic patients practice their skills. Therefore, in this thesis, we engineered components for a human-robot interaction system and confirmed them in experiments with the robots Baxter and Zeno, the sensors Empatica E4 and Kinect, and, finally, the open-source pose estimation software OpenPose. The Empatica E4 wristband is a wearable device that collects physiological measurements in real time from a test subject. Measurements were collected from ASD patients during human-robot interaction activities. Using this data and labels of attentiveness from a trained coder, a classifier was developed that provides a prediction of the patient’s level of engagement. The classifier outputs this prediction to a robot or supervising adult, allowing for decisions during intervention activities to keep the attention of the patient with autism. The CMU Perceptual Computing Lab’s OpenPose software package enables body, face, and hand tracking using an RGB camera (e.g., web camera) or an RGB-D camera (e.g., Microsoft Kinect). Integrating OpenPose with a robot allows the robot to collect information on user motion intent and perform motion imitation. In this work, we developed such a teleoperation interface with the Baxter robot. Finally, a novel algorithm, called Segment-based Online Dynamic Time Warping (SoDTW), and a metric are proposed to help in the diagnosis of ASD. Social Robot Zeno, a childlike robot developed by Hanson Robotics, was used to test this algorithm and metric. Using the proposed algorithm, it is possible to classify a subject’s motion into different speeds or to use the resulting SoDTW score to evaluate the subject’s abilities.
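
    SoDTW itself is specified in the thesis; as a point of reference, the sketch below implements the classical dynamic time warping distance that it extends to segment-based online use, applied to one-dimensional motion traces.

    import numpy as np

    def dtw_distance(a, b):
        # Classical O(len(a) * len(b)) dynamic time warping distance.
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # Best alignment so far: match, insertion, or deletion.
                D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        return D[n, m]

    # Usage: compare a subject's joint-angle trace against a template;
    # a lower score indicates a closer match despite differing speeds.
    template = np.sin(np.linspace(0, np.pi, 50))
    subject = np.sin(np.linspace(0, np.pi, 65)) * 0.9   # slower, smaller motion
    score = dtw_distance(template, subject)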

    TELESIM: A Modular and Plug-and-Play Framework for Robotic Arm Teleoperation using a Digital Twin

    We present TELESIM, a modular and plug-and-play framework for direct teleoperation of a robotic arm using a digital twin as the interface between the user and the robotic system. We tested TELESIM by performing a user survey with 37 participants on two different robots using two different control modalities: a virtual reality controller and a finger mapping hardware controller using different grasping systems. Users were asked to teleoperate the robot to pick and place 3 cubes in a tower and to repeat this task as many times as possible in 10 minutes, with only 5 minutes of training beforehand. Our experimental results show that most users succeeded in building at least one tower of 3 cubes regardless of the control modality or robot used, demonstrating the user-friendliness of TELESIM.

    Formulation of a new gradient descent MARG orientation algorithm: case study on robot teleoperation

    We introduce a novel magnetic angular rate gravity (MARG) sensor fusion algorithm for inertial measurement. The new algorithm improves the popular gradient descent ('Madgwick') algorithm, increasing accuracy and robustness while preserving computational efficiency. Analytic and experimental results demonstrate faster convergence for multiple variations of the algorithm under changing magnetic inclination. Furthermore, decoupling of magnetic field variance from roll and pitch estimation is proven for enhanced robustness. The algorithm is validated in a human-machine interface (HMI) case study. The case study involves hardware implementation for wearable robot teleoperation both in Virtual Reality (VR) and in real time on a 14 degree-of-freedom (DoF) humanoid robot. The experiment fuses inertial (movement) and mechanomyography (MMG) muscle sensing to control robot arm movement and grasp simultaneously, demonstrating algorithm efficacy and the capacity to interface with other physiological sensors. To our knowledge, this is the first such formulation and the first fusion of inertial measurement and MMG in HMI. We believe the new algorithm holds the potential to impact a very wide range of inertial measurement applications where full orientation estimation is necessary. Physiological sensor synthesis and the hardware interface further provide a foundation for robotic teleoperation systems with the robustness necessary for use in the field.
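
    For context, the baseline being improved is the gradient descent ('Madgwick') filter. The sketch below shows one IMU-only (gyroscope plus accelerometer) step of that baseline for a quaternion q = [w, x, y, z]; the magnetometer term and the paper's improvements are omitted, and beta is an illustrative gain rather than a reported value.

    import numpy as np

    def quat_mult(p, q):
        # Hamilton product of two quaternions [w, x, y, z].
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def madgwick_imu_step(q, gyro, accel, dt, beta=0.1):
        # One baseline update: gyroscope integration corrected by a
        # gradient descent step that pulls the predicted gravity
        # direction onto the normalised accelerometer reading.
        a = accel / np.linalg.norm(accel)
        w, x, y, z = q
        f = np.array([2*(x*z - w*y) - a[0],          # objective function
                      2*(w*x + y*z) - a[1],
                      2*(0.5 - x*x - y*y) - a[2]])
        J = np.array([[-2*y,  2*z, -2*w, 2*x],       # its Jacobian w.r.t. q
                      [ 2*x,  2*w,  2*z, 2*y],
                      [ 0.0, -4*x, -4*y, 0.0]])
        grad = J.T @ f
        grad = grad / (np.linalg.norm(grad) + 1e-12)
        q_dot = 0.5 * quat_mult(q, np.array([0.0, *gyro])) - beta * grad
        q = q + q_dot * dt
        return q / np.linalg.norm(q)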

    Development of a user experience enhanced teleoperation approach

    In this paper, we have investigated various techniques that can be used to enhance the user experience in robot teleoperation. In our teleoperation system design, the human operator is provided with both immersive visual feedback and an intuitive skill transfer interface, such that when controlling a telerobot arm, the user perceives the task from a first-person perspective in terms of both visual and haptic sense. A number of high-tech devices, including the Omni haptic joystick, MYO armband, Oculus Rift DK2 headset, and Kinect v2 camera, are integrated. The surface electromyography (sEMG) signal allows the operator to naturally and efficiently transfer his/her motion skill to the robot, based on a properly designed elastic force feedback. For visual feedback, the operator can control the pose of a camera on the head of the robot via the wearable visual headset, such that the operator is able to perceive from the robot's perspective. Extensive tests have been performed with human subjects to evaluate the design, and the experimental results have shown that superior performance and a better user experience are achieved by the proposed method in comparison with traditional methods.
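
    One plausible reading of the sEMG-driven elastic force feedback is sketched below, purely as an assumption about the mapping (the paper's actual design may differ): the rectified, averaged sEMG envelope scales a virtual spring rendered between the haptic stylus and the robot end-effector.

    import numpy as np

    def emg_envelope(emg_window):
        # Mean absolute value of a raw sEMG window: a simple envelope.
        return np.mean(np.abs(emg_window))

    def elastic_feedback_force(stylus_pos, robot_pos, emg_window,
                               k_min=10.0, k_max=200.0, env_max=1.0):
        # Spring force on the haptic device, stiffer when the muscles
        # are more tense (k_min, k_max, env_max are assumed constants).
        activation = np.clip(emg_envelope(emg_window) / env_max, 0.0, 1.0)
        k = k_min + (k_max - k_min) * activation      # variable stiffness [N/m]
        return -k * (np.asarray(stylus_pos) - np.asarray(robot_pos))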

    Disturbance observer enhanced variable gain controller for robot teleoperation with motion capture using wearable armbands

    A disturbance observer (DOB) based controller performs well in estimating and compensating for perturbation when the external or internal unknown disturbance is slowly time-varying. However, robot manipulators usually work in complex environments with high-frequency disturbance, so the traditional DOB technique alone is insufficient to ensure tracking performance in a teleoperation system. In this paper, for the purpose of constructing a feasible teleoperation scheme, we develop a novel controller that contains a variable gain scheme to deal with fast time-varying perturbation; the gain is adjusted linearly according to human surface electromyographic (sEMG) signals collected from Myo wearable armbands. In addition, to track the motion of the operator's arm, we derive five joint-angle trajectories of the moving human arm from two groups of quaternions generated by the armbands. Furthermore, the radial basis function neural network and disturbance observer-based control (DOBC) approaches are fused together in the proposed controller to compensate for the unknown dynamics uncertainties of the slave robot as well as environmental perturbation. Experiments and simulations are conducted to demonstrate the effectiveness of the proposed strategy.
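
    A hedged single-joint sketch of the two ingredients named above: a control gain varied linearly with the normalised sEMG level, and a momentum-style disturbance observer whose estimate is fed back to cancel the perturbation. The simplified dynamics (constant inertia, Coriolis and gravity ignored) and all constants are assumptions for illustration only, not the paper's design.

    import numpy as np

    class MomentumDOB:
        """First-order residual observer: r converges to the disturbance d
        in m*qdd = tau + d, without needing measured acceleration."""
        def __init__(self, m, K=50.0):
            self.m, self.K = m, K
            self.integral = 0.0
            self.r = 0.0

        def update(self, qd, tau, dt):
            # dr/dt = K * (d - r), so r tracks d with time constant 1/K.
            self.integral += (tau + self.r) * dt
            self.r = self.K * (self.m * qd - self.integral)
            return self.r                              # disturbance estimate

    def variable_gain(emg_level, kp_min=5.0, kp_max=40.0):
        # Stiffness rises linearly with normalised sEMG activation in [0, 1].
        return kp_min + (kp_max - kp_min) * np.clip(emg_level, 0.0, 1.0)

    def control(q, qd, q_ref, emg_level, dob_estimate, kd=2.0):
        # PD tracking with sEMG-scheduled stiffness plus DOB compensation.
        kp = variable_gain(emg_level)
        return kp * (q_ref - q) - kd * qd - dob_estimate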