
    Teleoperation control of Baxter robot using Kalman filter-based sensor fusion

    A Kalman filter has been successfully applied to fuse motion capture data collected from a Kinect sensor and a pair of MYO armbands to teleoperate a robot. A new strategy utilizing a vector approach has been developed to accomplish a specific motion capture task. The arm motion of the operator is captured by a Kinect sensor and processed with the Processing software. Two MYO armbands with embedded inertial measurement units are worn on the operator's arm to detect the upper-arm motion of the human operator; these are used to recognize and calculate the precise speed of the operator's physical arm motion. The User Datagram Protocol is employed to send the human movement to a simulated Baxter robot arm for teleoperation. To obtain joint angles for the human limb using the vector approach, RosPy and Python scripting have been utilized. A series of experiments has been conducted to test the performance of the proposed technique, which provides the basis for the teleoperation of the simulated Baxter robot.
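    The fusion step described above can be illustrated with a simple constant-velocity Kalman filter that combines a noisy joint-angle stream (as from a Kinect) with a noisy angular-velocity stream (as from an IMU). This is a minimal sketch, not the paper's implementation; the state model, noise parameters and function name are assumptions:

```python
import numpy as np

def kalman_fuse(kinect_pos, imu_vel, dt=1.0 / 30, q=1e-3, r_pos=1e-2, r_vel=1e-2):
    """Fuse a noisy joint-angle stream and a noisy angular-velocity stream
    with a constant-velocity Kalman filter; returns the filtered angles."""
    x = np.array([kinect_pos[0], imu_vel[0]])      # state: [angle, angular velocity]
    P = np.eye(2)                                  # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity motion model
    Q = q * np.eye(2)                              # process noise
    H = np.eye(2)                                  # both states are measured directly
    R = np.diag([r_pos, r_vel])                    # measurement noise
    fused = []
    for z in zip(kinect_pos, imu_vel):
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                        # update with stacked measurement
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array(z) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        fused.append(x[0])
    return np.array(fused)
```

    Tuning `q` against `r_pos`/`r_vel` trades responsiveness for smoothness; the paper's actual filter may use a different state or measurement model.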

    Learning Algorithm Design for Human-Robot Skill Transfer

    In this research, we develop an intelligent learning scheme for performing human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot and a KUKA iiwa robot to verify the effectiveness of the proposed design. During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) for each arm. The joint motions of the operator's head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via forward and inverse kinematics. In addition, to improve the tracking performance, a Kalman filter is further employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using the vector approach to accomplish a specific motion capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and programmed through the Processing software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator's arm movements. For this purpose, the armbands help to recognize and calculate the precise velocity of motion of the operator's arm. Additionally, a neural network based adaptive controller is designed and implemented on the Baxter robot to validate the teleoperation of the Baxter robot. Subsequently, an enhanced teaching interface has been developed for the robot using DMP and GMR.
Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data is sent to a remote PC for teleoperating the Baxter robot. At this stage, the DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used for preprocessing of the data recorded on the robot platform, and GMM is employed for the evaluation of DMP to generate multiple patterns after the completion of the teaching process. Next, we apply the GMR algorithm to generate a synthesized trajectory to minimize position errors in three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively. Finally, an optimized DMP is added to the teaching interface. A character recombination technology based on DMP segmentation that uses verbal commands has also been developed and incorporated in a Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it to complete the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data. For modelling and overall movement control, DMP is chosen. The GMM is used to generate multiple patterns after implementing the teaching process. Next, we employ the GMR algorithm to reduce position errors in 3D space after a synthesized trajectory has been generated. The Baxter robot, remotely controlled via the User Datagram Protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice data. The proposed approach has been verified by enabling the Baxter robot to perform a writing task in which the robot is taught to write a single character.
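    To make the DMP modelling step concrete, the following is a minimal one-dimensional sketch of learning a discrete DMP from a single demonstration and reproducing it. The gains, basis count and least-squares fit of the forcing term are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def dmp_imitate(y_demo, dt, n_basis=30, alpha=25.0, alpha_x=3.0):
    """Learn a 1-D discrete DMP from one demonstration and roll it out again."""
    beta = alpha / 4.0                              # critically damped spring-damper
    T = len(y_demo)
    tau = (T - 1) * dt                              # movement duration
    y0, g = y_demo[0], y_demo[-1]
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    t = np.arange(T) * dt
    x = np.exp(-alpha_x * t / tau)                  # canonical phase, decays 1 -> 0
    # forcing term that would make the spring-damper reproduce the demo exactly
    f_target = tau ** 2 * ydd - alpha * (beta * (g - y_demo) - tau * yd)
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centres in phase space
    h = n_basis ** 1.5 / c                             # basis widths

    def features(xv):
        psi = np.exp(-h * (xv - c) ** 2)
        return psi / psi.sum() * xv                 # normalised, phase-gated RBFs

    Phi = np.array([features(xv) for xv in x])
    w = np.linalg.lstsq(Phi, f_target, rcond=None)[0]  # fit forcing-term weights
    # reproduce the movement by Euler-integrating the learned system
    y, yv, out = float(y0), 0.0, []
    for xv in x:
        f = features(xv) @ w
        ya = (alpha * (beta * (g - y) - tau * yv) + f) / tau ** 2
        yv += ya * dt
        y += yv * dt
        out.append(y)
    return np.array(out)
```

    Because the forcing term is gated by the decaying phase `x`, the system converges to the goal even under generalisation; GMM/GMR would be layered on top to blend several such demonstrations.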

    User Experience Enhanced Interface and Controller Design for Human-Robot Interaction

    Robotic technologies have developed rapidly in recent years across various fields, such as medical services, industrial manufacturing and aerospace. Despite this rapid development, how to deal effectively with uncertain environments during human-robot interaction still remains unresolved. Current artificial intelligence (AI) technology does not enable robots to fulfil complex tasks without human guidance. Thus, teleoperation, which means remotely controlling a robot by a human operator, is indispensable in many scenarios and is an important and useful tool in research fields. This thesis focuses on the design of a user experience (UX) enhanced robot controller, and of human-robot interaction interfaces that aim to provide human operators with an immersive perception of teleoperation. Several works have been done to achieve this goal. First, to control a telerobot smoothly, a customised variable-gain control method is proposed where the stiffness of the telerobot varies with the muscle activation level extracted from signals collected by surface electromyography (sEMG) devices. Second, two main works are conducted to improve the user-friendliness of the interaction interfaces. One is that force feedback is incorporated into the framework, providing operators with haptic feedback to remotely manipulate target objects. Given the high cost of force sensors, in this part of the work a haptic force estimation algorithm is proposed so that a force sensor is no longer needed. The other main work is developing a visual servo control system, where a stereo camera is mounted on the head of a dual-arm robot, offering operators real-time views of the working situation.
In order to compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables reusing the learnt knowledge before the current dynamics change and thus increases learning efficiency. Third, instead of sending commands to the telerobots via joysticks, keyboards or demonstrations, the telerobots in this thesis are controlled directly by the upper limb motion of the human operator. An algorithm that utilises motion signals from inertial measurement unit (IMU) sensors to capture the human's upper limb motion is designed. The skeleton of the operator is detected by a Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm. In this way, the upper limb motion signals from the operator are able to act as reference trajectories for the telerobots. A superior neural network (NN) based trajectory controller is also designed to track the generated reference trajectory. Fourth, to further enhance the human immersion perception of teleoperation, the virtual reality (VR) technique is incorporated so that the operator can interact with and adjust the robots more easily and accurately from the robot's perspective. Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme. Tests with human subjects were also carried out to evaluate the interface design.
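    The variable-gain idea in the abstract above, stiffness scaling with sEMG muscle activation, can be sketched as a simple impedance law. The activation mapping, gain limits and function names are assumed for illustration only:

```python
import numpy as np

def muscle_activation(emg_window):
    """Normalised activation in [0, 1] from a window of raw sEMG samples:
    full-wave rectification, mean, then a soft squash."""
    return float(np.tanh(np.mean(np.abs(emg_window))))

def variable_gain_torque(q, q_dot, q_ref, activation, k_min=5.0, k_max=50.0, zeta=1.0):
    """Impedance-style joint torque whose stiffness scales with muscle activation."""
    k = k_min + (k_max - k_min) * activation        # stiffness rises with effort
    d = 2.0 * zeta * np.sqrt(k)                     # keep the damping ratio constant
    return k * (q_ref - q) - d * q_dot
```

    A relaxed operator thus commands a compliant telerobot, while a tensed operator commands a stiff, precise one; the thesis's actual controller may use a different activation estimate and gain schedule.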

    An improvement of robot stiffness-adaptive skill primitive generalization using the surface electromyography in human–robot collaboration

    Learning from Demonstration in robotics has proved its efficiency in robot skill learning. The generalization goals of most skill expression models in real scenarios are specified by humans or associated with other perceptual data. Our proposed framework uses Probabilistic Movement Primitives (ProMPs) modeling to resolve the shortcomings of previous research works; the coupling between stiffness and motion is inherently established in a single model. Such a framework can use a small amount of incomplete observation data to infer the entire skill primitive. It can be used as an intuitive tool for sending generalization commands to achieve collaboration between humans and robots, with human-like stiffness modulation strategies on either side. Experiments (human-robot hand-over, object matching, pick-and-place) were conducted to prove the effectiveness of the work. A Myo armband and a Leap Motion camera are used as the surface electromyography (sEMG) and motion capture sensors, respectively, in the experiments. The experiments also show that the proposed framework strengthens the ability to distinguish actions with similar movements under observation noise by introducing the sEMG signal into the ProMP model. The use of a mixture model opens possibilities for automating multiple collaborative tasks.
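    The core ProMP mechanism this abstract relies on, inferring a whole primitive from a few observed points, reduces to Gaussian conditioning on trajectory weights. Below is a minimal single-DoF sketch; the basis functions, regularisation and noise levels are assumptions, and the paper additionally couples stiffness into the same model:

```python
import numpy as np

def rbf_features(T, n_basis=15, width=0.02):
    """Normalised Gaussian basis functions over a unit time axis."""
    t = np.linspace(0, 1, T)
    c = np.linspace(0, 1, n_basis)
    psi = np.exp(-((t[:, None] - c[None, :]) ** 2) / (2 * width))
    return psi / psi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_basis=15, reg=1e-6):
    """Fit per-demo weights, then a Gaussian over weights: w ~ N(mu, Sigma)."""
    Phi = rbf_features(demos.shape[1], n_basis)
    W = np.array([np.linalg.lstsq(Phi, d, rcond=None)[0] for d in demos])
    return W.mean(axis=0), np.cov(W.T) + reg * np.eye(n_basis), Phi

def condition(mu, Sigma, Phi, idx, y_obs, noise=1e-4):
    """Condition the weight distribution on observations y_obs at time indices idx."""
    H = Phi[idx]                                   # observation matrix
    S = H @ Sigma @ H.T + noise * np.eye(len(idx))
    K = Sigma @ H.T @ np.linalg.inv(S)             # Kalman-style gain
    return mu + K @ (y_obs - H @ mu), Sigma - K @ H @ Sigma
```

    Observing even one early point pins down the whole trajectory distribution, which is what lets a partial human motion act as a generalization command.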

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for the capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, accuracy of 95.6% was achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis also established that arm pose changes the measured signal, and it introduces a new method of fusing IMU and MMG to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; however, mechanomyography sensors are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to significantly impact the quality of life of prosthetic users and others.
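    As a rough illustration of the discrete-gesture pipeline described above, here is a sketch of time-domain feature extraction from a six-channel MMG window followed by a minimal pooled-covariance LDA classifier. The feature choices, class name `PooledLDA` and the tiny ridge regulariser are assumptions, not the thesis's exact configuration:

```python
import numpy as np

def mmg_features(window):
    """Per-channel time-domain features for a (samples x channels) MMG window:
    mean absolute value, RMS and waveform length."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, rms, wl])

class PooledLDA:
    """Minimal Gaussian LDA with a shared (pooled) covariance and equal priors."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        Xc = np.concatenate([X[y == c] - m for c, m in zip(self.classes, self.means)])
        cov = Xc.T @ Xc / len(X) + 1e-6 * np.eye(X.shape[1])  # ridge for stability
        self.prec = np.linalg.inv(cov)
        return self

    def predict(self, X):
        # linear discriminant score per class: x' P m - 0.5 m' P m
        scores = X @ self.prec @ self.means.T \
            - 0.5 * np.sum((self.means @ self.prec) * self.means, axis=1)
        return self.classes[np.argmax(scores, axis=1)]
```

    An SVM would slot into the same pipeline in place of the LDA stage; the fusion with IMU data described in the thesis adds arm-pose context to these windows.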

    Review of three-dimensional human-computer interaction with focus on the leap motion controller

    Modern hardware and software development has led to an evolution of user interfaces from the command line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their application areas and underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.

    Robot manipulator skill learning and generalising through teleoperation

    Robot manipulators have been widely used for simple, repetitive and accurate tasks in industrial plants, such as pick-and-place, assembly and welding, but they are still hard to deploy in human-centred environments for dexterous manipulation tasks, such as medical examination and robot-assisted healthcare. These tasks are related not only to motion planning and control but also to the compliant interaction behaviour of robots, e.g. motion control, force regulation and impedance adaptation simultaneously under dynamic and unknown environments. Recently, with the development of collaborative robots (cobots) and machine learning, robot skill learning and generalisation have attracted increasing attention from the robotics, machine learning and neuroscience communities. Nevertheless, learning complex and compliant manipulation skills, such as manipulating deformable objects, scanning the human body and folding clothes, is still challenging for robots. On the other hand, teleoperation, also known as remote operation or telerobotics, has been a research area since the 1950s, with numerous applications such as space exploration, telemedicine, marine vehicles and emergency response. One of its advantages is combining the precise control of robots with human intelligence to perform dexterous and safety-critical tasks from a distance. In addition, telepresence allows remote operators to feel the actual interaction between the robot and the environment, including vision, sound and haptic feedback. Especially with the development of various augmented reality (AR), virtual reality (VR) and wearable devices, intuitive and immersive teleoperation has received increasing attention from the robotics and computer science communities. Thus, various human-robot collaboration (HRC) interfaces based on the above technologies were developed to integrate robot control and telemanipulation by human operators so that robots can learn skills from human beings.
In this context, robot skill learning could benefit teleoperation by automating repetitive and tedious tasks, while teleoperation demonstration and interaction by human teachers allow the robot to learn progressively and interactively. Therefore, in this dissertation, we study human-robot skill transfer and generalisation through intuitive teleoperation interfaces for contact-rich manipulation tasks, including medical examination, manipulating deformable objects, grasping soft objects and composite layup in manufacturing. The introduction, motivation and objectives of this thesis are presented in Chapter 1. In Chapter 2, a literature review on manipulation skill acquisition through teleoperation is carried out, and the motivation and objectives of this thesis are discussed subsequently. Overall, the main content of this thesis has three parts. Part 1 (Chapter 3) introduces the development and controller design of teleoperation systems with multimodal feedback, which is the foundation of this project for robot learning from human demonstration and interaction. In Part 2 (Chapters 4, 5, 6 and 7), we studied primitive skill library theory, a behaviour tree-based modular method, and a perception-enhanced method to improve the generalisation capability of learning from human demonstrations, and several applications were employed to evaluate the effectiveness of these methods. In Part 3 (Chapter 8), we studied deep multimodal neural networks to encode manipulation skills, especially multimodal perception information; this part conducted physical experiments on robot-assisted ultrasound scanning applications. Chapter 9 summarises the contributions and potential directions of this thesis. Keywords: Learning from demonstration; Teleoperation; Multimodal interface; Human-in-the-loop; Compliant control; Human-robot interaction; Robot-assisted sonography.

    Machine Learning for Hand Gesture Classification from Surface Electromyography Signals

    Classifying hand gestures from Surface Electromyography (sEMG) is a process which has applications in human-machine interaction, rehabilitation and prosthetic control. Reductions in the cost and increases in the availability of the necessary hardware over recent years have made sEMG a more viable solution for hand gesture classification. The research challenge is the development of processes to robustly and accurately predict the current gesture based on incoming sEMG data. This thesis presents a set of methods, techniques and designs that improve upon the evaluation of, and performance on, the classification problem as a whole. These are brought together to set a new baseline for classification performance. Evaluation is improved by careful choice of metrics and by the design of cross-validation techniques that account for data bias caused by common experimental techniques. A landmark study is re-evaluated with these improved techniques, and it is shown that data augmentation can be used to significantly improve performance with conventional classification methods. A novel neural network architecture and supporting improvements are presented that further improve performance; the network is refined such that it can achieve similar performance with many fewer parameters than competing designs. Supporting techniques such as subject adaptation and smoothing algorithms are then explored to improve overall performance and to provide more nuanced trade-offs between various aspects of performance, such as incurred latency and prediction smoothness. A new study is presented which compares the performance potential of medical-grade electrodes and a low-cost commercial alternative, showing that for a modest-sized gesture set they can compete. The data is also used to explore data labelling in experimental design and to evaluate the numerous aspects of performance that must be traded off.
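    The data augmentation mentioned above can be illustrated with common signal-level transforms for sEMG windows; the specific transforms and parameter ranges here are assumptions rather than the thesis's actual recipe:

```python
import numpy as np

def augment_semg(window, rng, noise_std=0.05, scale_range=(0.9, 1.1), max_shift=10):
    """Return one augmented copy of an sEMG window (samples x channels):
    additive Gaussian noise, per-channel amplitude scaling, circular time shift."""
    out = window + rng.normal(0.0, noise_std, size=window.shape)   # sensor noise
    out = out * rng.uniform(*scale_range, size=(1, window.shape[1]))  # electrode gain drift
    shift = int(rng.integers(-max_shift, max_shift + 1))           # onset jitter
    return np.roll(out, shift, axis=0)
```

    Each transform mimics a nuisance factor (noise, gain drift, onset timing) the classifier should be invariant to, which is why such augmentation can lift the performance of conventional classifiers.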

    Securing teleoperated robot: Classifying human operator identity and emotion through motion-controlled robotic behaviors

    Teleoperated robotic systems allow human operators to control robots from a distance, which mitigates the constraints of physical distance and offers invaluable applications in the real world. However, the security of these systems is a critical concern. System attacks and the potential impact of operators' inappropriate emotions can result in misbehavior of the remote robots, which poses risks to the remote environment. These concerns become particularly serious when performing mission-critical tasks, such as nuclear cleaning. This thesis explored innovative security methods for teleoperated robotic systems. Common security methods applicable to teleoperated robots include encryption, robot misbehavior detection and user authentication. However, they have limitations for teleoperated robot systems: encryption adds communication overheads, robot misbehavior detection can only detect unusual signals on robot devices, and user authentication secures the system primarily at the access point. To address this, we built motion-controlled robot platforms that allow for robot teleoperation and proposed methods of performing user classification directly on remote-controlled robotic behavioral data to enhance security integrity throughout the operation. These methods are discussed in Chapter 3, where four experiments were conducted. Experiments 1 and 2 demonstrated the effectiveness of our approach, achieving user classification accuracies of 95% and 93% on two platforms respectively, using motion-controlled robotic end-effector trajectories. The results of Experiment 3 further indicated that control system performance directly impacts user classification efficacy. Additionally, in Experiment 4 we deployed an AI agent to protect user biometric identities, ensuring the robot's actions do not compromise user privacy in the remote environment. This chapter provides a methodological and experimental foundation for the subsequent work.
Additionally, operators' emotions could pose a security threat to the robot system. A remote robot operator's emotions can significantly impact the resulting robot's motions, leading to unexpected consequences, even when the user follows protocol and performs permitted tasks. The recognition of an operator's emotions in remote robot control scenarios is, however, under-explored. Emotion signals mainly comprise physiological signals, semantic information, facial expressions and bodily movements. However, most physiological signals are electrical and vulnerable to motion artifacts, which prevents accurate signal acquisition and makes them unsuitable for teleoperated robot systems. Semantic information and facial expressions are sometimes inaccessible, raise significant privacy issues and add additional sensors to the teleoperated systems. We propose methods for emotion recognition through motion-controlled robotic behaviors in Chapter 4. This work demonstrated for the first time that a motion-controlled robotic arm can inherit a human operator's emotions and that emotions can be classified from robotic end-effector trajectories, achieving 83.3% accuracy. We developed two emotion recognition algorithms using Dynamic Time Warping (DTW) and a Convolutional Neural Network (CNN), deriving unique emotional features from the avatar's end-effector motions and joint spatial-temporal characteristics. Additionally, we demonstrated through direct comparison that our approach is more appropriate for motion-based telerobotic applications than traditional ECG-based methods. Furthermore, we discussed the implications of this system for prominent current and future remote robot operations and emotional robotic contexts. By integrating user classification and emotion recognition into teleoperated robotic systems, this thesis lays the groundwork for a new security paradigm that enhances the safety of remote operations.
Recognizing users and their emotions allows for more contextually appropriate robot responses, potentially preventing harm and improving the overall quality of teleoperated interactions. These advancements contribute significantly to the development of more adaptive, intuitive and human-centered HRI applications, setting a precedent for future research in the field.
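    The DTW-based recognition described in this abstract can be sketched as a nearest-template classifier over end-effector trajectories. The labels, distance setup and function names are illustrative assumptions; the thesis's classifiers operate on richer spatial-temporal features:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two sequences (1-D or row-wise)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.atleast_1d(a[i - 1] - b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_1nn(query, templates):
    """Label of the nearest template trajectory under DTW distance;
    templates maps label -> list of example trajectories."""
    best, label = np.inf, None
    for lab, trajs in templates.items():
        for ref in trajs:
            d = dtw_distance(query, ref)
            if d < best:
                best, label = d, lab
    return label
```

    DTW's elastic alignment makes the comparison insensitive to execution speed, which matters when the same operator identity or emotional state produces trajectories of varying tempo.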