55 research outputs found
A Discrete-Time Algorithm for Stiffness Extraction from sEMG and Its Application in Antidisturbance Teleoperation
© 2016 Peidong Liang et al. We have developed a new discrete-time algorithm for stiffness extraction from muscle surface electromyography (sEMG) collected from a human operator's arms and have applied it to antidisturbance control in robot teleoperation. The variation of arm stiffness is estimated from sEMG signals and transferred to a telerobot under variable impedance control to imitate human motor control behaviours, particularly for disturbance attenuation. In comparison to conventional stiffness estimation from sEMG, the proposed algorithm reduces the nonlinear residual error effect, enhances robustness, and simplifies stiffness calibration. In order to extract a smooth stiffness envelope from the sEMG signals, two enveloping methods are employed in this paper: fast linear enveloping based on low-pass filtering and moving averaging, and the monocomponent amplitude and frequency modulation (AM-FM) method. Both methods have been incorporated into the proposed stiffness variation estimation algorithm and extensively tested. The test results show that stiffness variation extraction based on the two methods is sensitive and robust for disturbance attenuation. It could potentially be applied to teleoperation in hazardous surroundings or in human-robot physical cooperation scenarios.
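The fast linear enveloping step described above can be sketched as full-wave rectification followed by a moving-average low-pass filter. The window length, stiffness range, and the normalised activation-to-stiffness mapping below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def linear_envelope(semg, window=50):
    """Fast linear envelope: full-wave rectification followed by a
    moving-average low-pass filter (window length is an assumed value)."""
    rectified = np.abs(semg)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def stiffness_from_envelope(env, k_min=100.0, k_max=800.0):
    """Hypothetical mapping from the normalised envelope to a stiffness
    command in [k_min, k_max] for a variable impedance controller."""
    activation = env / (np.max(env) + 1e-9)   # normalised to [0, 1)
    return k_min + (k_max - k_min) * activation
```

The envelope then feeds the variable impedance controller as a smoothly varying stiffness reference rather than the raw, noisy sEMG amplitude.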
User Experience Enhanced Interface and Controller Design for Human-Robot Interaction
Robotic technologies have developed rapidly in recent years in various fields, such as medical services, industrial manufacturing and aerospace. Despite this rapid development, how to deal effectively with the uncertain environment during human-robot interaction still remains unresolved. Current artificial intelligence (AI) technology does not enable robots to fulfil complex tasks without human guidance. Thus, teleoperation, which means remotely controlling a robot by a human operator, is indispensable in many scenarios and is an important and useful tool in research fields. This thesis focuses on the design of a user experience (UX) enhanced robot controller and of human-robot interaction interfaces that aim to provide human operators with an immersive perception of teleoperation. Several works have been carried out to achieve this goal. First, to control a telerobot smoothly, a customised variable gain control method is proposed, in which the stiffness of the telerobot varies with the muscle activation level extracted from signals collected by surface electromyography (sEMG) devices. Second, two main works are conducted to improve the user-friendliness of the interaction interfaces. One incorporates force feedback into the framework, providing operators with haptic feedback for remotely manipulating target objects. Given the high cost of force sensors, a haptic force estimation algorithm is proposed in this part of the work so that a force sensor is no longer needed. The other main work develops a visual servo control system in which a stereo camera mounted on the head of a dual-arm robot offers operators a real-time view of the working situation. In order to compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables reusing the knowledge learnt before the current dynamics change and thus increases the learning efficiency.
Third, instead of sending commands to the telerobots by joysticks, keyboards or demonstrations, the telerobots are controlled directly by the upper limb motion of the human operator in this thesis. An algorithm is designed that utilises the motion signals from an inertial measurement unit (IMU) sensor to capture the operator's upper limb motion. The skeleton of the operator is detected by a Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm. In this way, the upper limb motion signals from the operator act as reference trajectories for the telerobots. A superior neural network (NN) based trajectory controller is also designed to track the generated reference trajectory. Fourth, to further enhance the human immersion perception of teleoperation, the virtual reality (VR) technique is incorporated such that the operator can interact with and adjust the robots more easily and more accurately from a robot's perspective. Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme. Tests with human subjects were also carried out to evaluate the interface design.
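The customised variable-gain idea from the thesis can be sketched as a joint stiffness gain that rises with the sEMG muscle-activation level. The gain values and the linear activation-to-stiffness map below are illustrative assumptions, not the thesis's tuned controller.

```python
import numpy as np

def variable_gain_torque(q, dq, q_des, activation,
                         k_base=50.0, k_span=150.0, damping=5.0):
    """Joint torque command whose stiffness rises with muscle activation.
    All gains and the linear activation map are illustrative."""
    k = k_base + k_span * np.clip(activation, 0.0, 1.0)
    return k * (q_des - q) - damping * dq
```

With zero activation the telerobot tracks compliantly; as the operator stiffens their arm, the tracking gain rises accordingly.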
Learning Algorithm Design for Human-Robot Skill Transfer
In this research, we develop an intelligent learning scheme for performing human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot and a KUKA iiwa robot to verify the effectiveness of the proposed design. During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) for each arm. The joint motions of the operator's head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via the forward and inverse kinematics. In addition, to improve the tracking performance, a Kalman filter is further employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using the vector approach to accomplish a specific motion capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and programmed through processing software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator's arm movements. For this purpose, the armbands help to recognize and calculate the precise velocity of motion of the operator's arm. Additionally, a neural network based adaptive controller is designed and implemented on the Baxter robot to validate the teleoperation of the Baxter robot. Subsequently, an enhanced teaching interface has been developed for the robot using DMP and GMR.
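The Kinect/MYO fusion step described above can be illustrated with a minimal one-dimensional Kalman filter: an IMU-derived angular rate drives the prediction and a Kinect joint angle provides the correction. The scalar state and the noise parameters are simplifying assumptions for illustration.

```python
import numpy as np

def kalman_fuse(z_kinect, omega_imu, dt=0.01,
                q_proc=1e-4, r_meas=0.05, x0=0.0, p0=1.0):
    """Scalar Kalman filter: predict a joint angle with an IMU angular
    rate, correct with a (noisier) Kinect angle measurement."""
    x, p = x0, p0
    for z, w in zip(z_kinect, omega_imu):
        x, p = x + w * dt, p + q_proc            # predict with IMU rate
        k = p / (p + r_meas)                     # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p      # correct with Kinect angle
    return x
```

The fused estimate smooths the Kinect jitter while following fast motion sensed by the armband IMUs.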
Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data are sent to a remote PC for teleoperating the Baxter robot. At this stage, the DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used for the preprocessing of the data recorded on the robot platform, and GMM is employed for the evaluation of DMP to generate multiple patterns after the completion of the teaching process. Next, we apply the GMR algorithm to generate a synthesized trajectory to minimize position errors in three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively. Finally, an optimized DMP is added to the teaching interface. A character recombination technology based on DMP segmentation that uses verbal commands has also been developed and incorporated into a Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it to complete the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data. For modelling and overall movement control, DMP is chosen. The GMM is used to generate multiple patterns after implementing the teaching process. Next, we employ the GMR algorithm to reduce position errors in 3D space after a synthesized trajectory has been generated. The Baxter robot, remotely controlled via the user datagram protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice data. The proposed approach has been verified by enabling the Baxter robot to perform a writing task even though the robot has been taught to write only one character.
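The DMP modelling step can be sketched with the standard transformation system: a critically damped spring-damper pulled toward the goal. The demonstration-learned forcing term is omitted here, and the gains and integration settings are illustrative.

```python
import numpy as np

def dmp_rollout(y0, goal, T=2.0, dt=0.001, alpha=25.0):
    """Minimal discrete DMP transformation system (no learned forcing
    term), Euler-integrated with the common critically damped choice
    beta = alpha / 4.  Returns the generated trajectory."""
    beta = alpha / 4.0
    y, dy = y0, 0.0
    traj = [y]
    for _ in range(int(T / dt)):
        ddy = alpha * (beta * (goal - y) - dy)   # spring-damper toward goal
        dy += ddy * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)
```

In the full scheme, a forcing term fitted (e.g. via GMM/GMR) to the DTW-aligned demonstrations is added to `ddy`, shaping the path while preserving convergence to the goal.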
RoboHive: A Unified Framework for Robot Learning
We present RoboHive, a comprehensive software platform and ecosystem for
research in the field of Robot Learning and Embodied Artificial Intelligence.
Our platform encompasses a diverse range of pre-existing and novel
environments, including dexterous manipulation with the Shadow Hand, whole-arm
manipulation tasks with Franka and Fetch robots, quadruped locomotion, among
others. Included environments are organized within and cover multiple domains
such as hand manipulation, locomotion, multi-task, multi-agent, muscles, etc.
In comparison to prior works, RoboHive offers a streamlined and unified task
interface taking dependency on only a minimal set of well-maintained packages,
features tasks with high physics fidelity and rich visual diversity, and
supports common hardware drivers for real-world deployment. The unified
interface of RoboHive offers a convenient and accessible abstraction for
algorithmic research in imitation, reinforcement, multi-task, and hierarchical
learning. Furthermore, RoboHive includes expert demonstrations and baseline
results for most environments, providing a standard for benchmarking and
comparisons. Details: https://sites.google.com/view/robohive
Comment: Accepted at 37th Conference on Neural Information Processing Systems (NeurIPS 2023), Track on Datasets and Benchmarks
Fused mechanomyography and inertial measurement for human-robot interface
Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion.
Wearable physiological sensors hold particular promise for the capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time.
This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMU to derive body kinematics in real-time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated using specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled.
Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved for 5-gesture classification.
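The Linear Discriminant Analysis step can be illustrated with a minimal two-class Fisher discriminant on toy feature vectors, as a numpy-only stand-in for the classifier mentioned above (the real system uses richer MMG features and 12 gesture classes).

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher LDA: discriminant direction w and midpoint
    threshold b from within-class scatter and class means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T)           # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)         # discriminant direction
    b = w @ (mu0 + mu1) / 2.0                  # midpoint threshold
    return w, b

def predict_lda(w, b, X):
    """Label 1 if the projection exceeds the threshold, else 0."""
    return (X @ w > b).astype(int)
```

Multi-class gesture decoding follows the same idea with one discriminant per class pair or a shared projection.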
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis also established that arm pose changes the measured signal, and it introduces a new method of fusing IMU and MMG to provide a classification that is robust to both of these sources of interference.
Additionally, an improvement in orientation estimation, and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment.
Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues.
There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
Formulation of a new gradient descent MARG orientation algorithm: case study on robot teleoperation
We introduce a novel magnetic angular rate gravity (MARG) sensor fusion algorithm for inertial measurement. The new algorithm improves on the popular gradient descent ('Madgwick') algorithm, increasing accuracy and robustness while preserving computational efficiency. Analytic and experimental results demonstrate faster convergence for multiple variations of the algorithm through changing magnetic inclination. Furthermore, decoupling of magnetic field variance from roll and pitch estimation is proven for enhanced robustness. The algorithm is validated in a human-machine interface (HMI) case study. The case study involves hardware implementation for wearable robot teleoperation both in Virtual Reality (VR) and in real-time on a 14 degree-of-freedom (DoF) humanoid robot. The experiment fuses inertial (movement) and mechanomyography (MMG) muscle sensing to control robot arm movement and grasp simultaneously, demonstrating algorithm efficacy and the capacity to interface with other physiological sensors. To our knowledge, this is the first such formulation and the first fusion of inertial measurement and MMG in an HMI. We believe the new algorithm holds the potential to impact a very wide range of inertial measurement applications where full orientation estimation is necessary. Physiological sensor synthesis and the hardware interface further provide a foundation for robotic teleoperation systems with the robustness necessary for use in the field.
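The gradient descent family this paper builds on can be sketched as the IMU-only (accelerometer plus gyroscope) Madgwick update: one normalised gradient step on the gravity-alignment error, blended with the gyroscope quaternion rate. The gain beta, the step size, and the quaternion convention are illustrative, and the paper's MARG improvements (magnetic decoupling from roll/pitch) are not reproduced here.

```python
import numpy as np

def madgwick_imu_step(q, gyr, acc, dt=0.01, beta=0.1):
    """One gradient-descent fusion step, IMU-only Madgwick variant.
    q = [w, x, y, z]; gyr in rad/s; acc normalised internally."""
    qw, qx, qy, qz = q
    acc = np.asarray(acc, float)
    acc = acc / np.linalg.norm(acc)
    # Objective: gravity reference [0,0,1] rotated into the sensor frame,
    # compared with the measured accelerometer direction.
    f = np.array([2 * (qx * qz - qw * qy) - acc[0],
                  2 * (qw * qx + qy * qz) - acc[1],
                  2 * (0.5 - qx * qx - qy * qy) - acc[2]])
    J = np.array([[-2 * qy,  2 * qz, -2 * qw, 2 * qx],
                  [ 2 * qx,  2 * qw,  2 * qz, 2 * qy],
                  [ 0.0,    -4 * qx, -4 * qy, 0.0]])
    step = J.T @ f
    n = np.linalg.norm(step)
    if n > 0:
        step = step / n                          # normalised gradient step
    gx, gy, gz = gyr
    q_dot = 0.5 * np.array([-qx * gx - qy * gy - qz * gz,
                             qw * gx + qy * gz - qz * gy,
                             qw * gy - qx * gz + qz * gx,
                             qw * gz + qx * gy - qy * gx]) - beta * step
    q = q + q_dot * dt
    return q / np.linalg.norm(q)
```

When the estimate already agrees with gravity the gradient vanishes and the update reduces to pure gyroscope integration; otherwise the correction pulls roll and pitch back toward the accelerometer reading.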
Control System for 3D Printable Robotic Hand
Humanoid robotics is a growing area of research due to its potential applications in orthoses and prostheses for human beings. With the currently available technologies, the most advanced robotic hands used in prosthetics or robotics can cost from 90,000 upwards, making them inaccessible to the general population of amputees and robotics hobbyists. Most of the features provided by these expensive technologies are superfluous to many users, creating a great gap in cost and services between users and technology. Using the emerging 3D printing technology, my project is to construct a 3D printed robotic hand that reproduces as many basic functionalities of the advanced expensive hands as possible, while minimizing the cost. The project involves choosing a feasible 3D printed design plan, assembling the mechanical and electrical components of the robotic hand, and designing and implementing the software interface for intuitive user control of the hand and ease of integration with existing robotic systems. This new hand will allow mimicking, versatile gripping, human-recognizable gestures, feedback-controlled force exertion, and a ROS-integrated software interface. This project will further allow students at Union to extend their research in social robotics and human-computer interfaces by incorporating the inexpensive robotic hand.
Intuitive Human-Machine Interfaces for Non-Anthropomorphic Robotic Hands
As robots become more prevalent in our everyday lives, both in our workplaces and in our homes, it becomes increasingly likely that people who are not experts in robotics will be asked to interface with robotic devices. It is therefore important to develop robotic controls that are intuitive and easy for novices to use. Robotic hands, in particular, are very useful, but their high dimensionality makes creating intuitive human-machine interfaces for them complex. In this dissertation, we study the control of non-anthropomorphic robotic hands by non-roboticists in two contexts: collaborative manipulation and assistive robotics.
In the field of collaborative manipulation, the human and the robot work side by side as independent agents. Teleoperation allows the human to assist the robot when autonomous grasping is not able to deal sufficiently well with corner cases or cannot operate fast enough. Using the teleoperator's hand as an input device can provide an intuitive control method, but finding a mapping between a human hand and a non-anthropomorphic robot hand can be difficult, due to the hands' dissimilar kinematics. In this dissertation, we seek to create a mapping between the human hand and a fully actuated, non-anthropomorphic robot hand that is intuitive enough to enable effective real-time teleoperation, even for novice users.
We propose a low-dimensional and continuous teleoperation subspace which can be used as an intermediary for mapping between different hand pose spaces. We first propose the general concept of the subspace, its properties and the variables needed to map from the human hand to a robot hand. We then propose three ways to populate the teleoperation subspace mapping. Two of our mappings use a dataglove to harvest information about the user's hand. We define the mapping between joint space and teleoperation subspace with an empirical definition, which requires a person to define hand motions in an intuitive, hand-specific way, and with an algorithmic definition, which is kinematically independent, and uses objects to define the subspace. Our third mapping for the teleoperation subspace uses forearm electromyography (EMG) as a control input.
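The subspace idea above can be sketched as two linear maps through a shared low-dimensional point: human joint angles project into the teleoperation subspace, and the subspace point maps into the robot hand's joint space. The projection matrices below are random placeholders standing in for the dissertation's empirically or algorithmically defined mappings.

```python
import numpy as np

def to_subspace(human_joints, P_human):
    """Project a human hand pose into the low-dimensional subspace."""
    return P_human @ human_joints

def to_robot(z, P_robot, robot_offset):
    """Map a subspace point into the robot hand's joint space."""
    return P_robot @ z + robot_offset

# Illustrative dimensions: a 20-DOF human hand, a 3-D teleoperation
# subspace, and a 4-DOF non-anthropomorphic robot hand.
rng = np.random.default_rng(1)
P_human = rng.standard_normal((3, 20)) * 0.1   # placeholder projection
P_robot = rng.standard_normal((4, 3)) * 0.5    # placeholder mapping
robot_offset = np.zeros(4)
q_robot = to_robot(to_subspace(rng.standard_normal(20), P_human),
                   P_robot, robot_offset)
```

Because both hands map through the same small subspace, kinematically dissimilar hands can be paired without a joint-to-joint correspondence.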
Assistive orthotics is another area of robotics where human-machine interfaces are critical, since, in this field, the robot is attached to the hand of the human user. In this case, the goal is for the robot to assist the human with movements they would not otherwise be able to achieve. Orthotics can improve the quality of life of people who do not have full use of their hands. Human-machine interfaces for assistive hand orthotics that use EMG signals from the affected forearm as input are intuitive, and repeated use can strengthen the muscles of the user's affected arm. In this dissertation, we seek to create an EMG-based control for an orthotic device used by people who have had a stroke. We would like our control to enable functional motions when used in conjunction with an orthosis, and to be robust to changes in the input signal.
We propose a control for a wearable hand orthosis which uses an easy-to-don, commodity forearm EMG band. We develop a supervised algorithm to detect a user's intent to open and close their hand, and pair this algorithm with a training protocol which makes our intent detection robust to changes in the input signal. We show that this algorithm, when used in conjunction with an orthosis over several weeks, can improve distal function in users. Additionally, we propose two semi-supervised intent detection algorithms designed to keep our control robust to changes in the input data while reducing the length and frequency of our training protocol.
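The supervised intent-detection idea can be illustrated with a toy mean-absolute-value (MAV) feature and a per-user threshold learned from labelled windows. The single-channel signal and the midpoint threshold are simplifying assumptions; the actual control uses a multichannel commodity EMG band and a more robust training protocol.

```python
import numpy as np

def mav(window):
    """Mean-absolute-value feature of one EMG window."""
    return np.mean(np.abs(window))

def fit_threshold(relax_windows, close_windows):
    """Learn a per-user threshold halfway between the mean MAV of
    labelled 'relax' and 'close' training windows."""
    relax = np.mean([mav(w) for w in relax_windows])
    close = np.mean([mav(w) for w in close_windows])
    return (relax + close) / 2.0

def detect_intent(window, threshold):
    """Classify a new window as intent to close or to stay open."""
    return "close" if mav(window) > threshold else "open"
```

Re-running `fit_threshold` at the start of a session is a simple analogue of the retraining protocol that keeps detection robust as the input signal drifts.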
Collaborative human-machine interfaces for mobile manipulators.
The use of mobile manipulators in service industries, both as agents in physical Human-Robot Interaction (pHRI) and for social interactions, has been on the increase in recent times due to necessities like compensating for workforce shortages and enabling safer and more efficient operations, amongst other reasons. Collaborative robots, or co-bots, are robots that are developed for use with human interaction through direct contact or close proximity in a shared space with the human users. The work presented in this dissertation focuses on the design, implementation and analysis of components for the next-generation collaborative human-machine interfaces (CHMIs) needed for mobile manipulator co-bots that can be used in various service industries. The particular components of these CHMIs that are considered in this dissertation include: Robot control: a Neuroadaptive Controller (NAC)-based admittance control strategy for pHRI applications with a co-bot. Robot state estimation: a novel methodology and placement strategy for using arrays of IMUs that can be embedded in robot skin for pose estimation in complex robot mechanisms. User perception of co-bot CHMIs: evaluation of human perceptions of the usefulness and ease of use of a mobile manipulator co-bot in a nursing-assistant application scenario. To facilitate advanced control for the Adaptive Robotic Nursing Assistant (ARNA) mobile manipulator co-bot that was designed and developed in our lab, we describe and evaluate an admittance control strategy that features a Neuroadaptive Controller (NAC). The NAC has been specifically formulated for pHRI applications such as patient walking. The controller continuously tunes the weights of a neural network to cancel robot non-linearities, including drive-train backlash, kinematic or dynamic coupling, variable patient pushing effort, and sloped surfaces with unknown inclines.
The advantages of our control strategy are Lyapunov stability guarantees during interaction, less need for parameter tuning, and better performance across a variety of users and operating conditions. We conduct simulations and experiments with 10 users to confirm that the NAC outperforms a classic Proportional-Derivative (PD) joint controller in terms of resulting interaction jerk, user effort, and trajectory tracking error during patient walking. To tackle the complex mechanisms of these next-generation robots, in which the use of encoders or other classic pose-measuring devices is not feasible, we present a study of the effects of design parameters on methods that use data from Inertial Measurement Units (IMUs) in robot skins to provide robot state estimates. These parameters include the number of sensors, their placement on the robot, and noise properties, and we examine their impact on the quality of robot pose estimation and its signal-to-noise ratio (SNR). The results from that study facilitate the creation of robot skin, and in order to enable its use in complex robots, we propose a novel pose estimation method, the Generalized Common Mode Rejection (GCMR) algorithm, for the estimation of joint angles in robot chains containing composite joints. The placement study and GCMR are demonstrated using both Gazebo simulation and experiments with a 3-DoF robotic arm containing 2 non-zero link lengths, 1 revolute joint and a 2-DoF composite joint. In addition to yielding insights on the predicted usage of co-bots, the design of control and sensing mechanisms in their CHMIs benefits from evaluating the perceptions of the eventual users of these robots. With co-bots increasingly being developed and used, there is a need for studies of these user perceptions using existing models that have been used in predicting the usage of comparable technology.
To this end, we use the Technology Acceptance Model (TAM) to evaluate the CHMI of the ARNA robot in a scenario via analysis of quantitative and questionnaire data collected during experiments with eventual users. The results from the work conducted in this dissertation demonstrate insightful contributions to the realization of control and sensing systems that are part of CHMIs for next-generation co-bots.
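The admittance strategy used for tasks like patient walking can be sketched in one dimension: the measured human force drives a virtual mass-damper, and the resulting velocity becomes the robot's motion command. The parameters are illustrative, and the NAC's neural cancellation of robot non-linearities is not modelled here.

```python
def admittance_step(v, f_human, dt=0.01, m_virt=10.0, d_virt=20.0):
    """One Euler step of a 1-D virtual mass-damper admittance model:
    m_virt * dv/dt + d_virt * v = f_human."""
    dv = (f_human - d_virt * v) / m_virt
    return v + dv * dt

# A constant 10 N push settles at v = f / d_virt = 0.5 m/s.
v = 0.0
for _ in range(2000):          # 20 s of simulated pushing
    v = admittance_step(v, 10.0)
```

Choosing `m_virt` and `d_virt` trades off how heavy and how damped the co-bot feels to the person pushing it.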
Neural learning enhanced variable admittance control for human-robot collaboration
© 2013 IEEE. In this paper, we propose a novel strategy for human-robot impedance mapping to realize effective execution of human-robot collaboration. The endpoint stiffness of the human arm impedance is estimated according to the configuration of the human arm and the muscle activation levels of the upper arm. Inspired by human adaptability in collaboration, a smooth stiffness mapping between the human arm endpoint and the robot arm joints is developed to inherit the human arm characteristics. The estimation of the stiffness term is generalized to full impedance by additionally considering the damping and mass terms. Once the human arm impedance estimation is completed, a Linear Quadratic Regulator is employed to calculate the corresponding robot arm admittance model so as to match the estimated impedance parameters of the human arm. Under the variable admittance control, the robot arm is governed to be compliant with the human arm impedance and the interaction force exerted by the human arm endpoint; thus, relatively optimal collaboration can be achieved. A radial basis function neural network is employed to compensate for the unknown dynamics and guarantee the performance of the controller. Comparative experiments have been conducted to verify the validity of the proposed technique.
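The LQR step can be sketched for a 1-DOF model: given an estimated human-arm stiffness and damping, iterate the discrete-time Riccati equation to obtain a state-feedback gain for the robot's admittance model. The weights, the impedance numbers, and the Euler discretisation are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati
    equation P = Q + A'P(A - BK)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# 1-DOF mass-spring-damper with assumed human-arm impedance estimates.
dt = 0.01
k_h, d_h, m = 200.0, 15.0, 2.0       # estimated stiffness/damping, robot mass
A = np.array([[1.0, dt],
              [-k_h * dt / m, 1.0 - d_h * dt / m]])
B = np.array([[0.0], [dt / m]])
Q = np.diag([100.0, 1.0])            # illustrative state weights
R = np.array([[0.01]])               # illustrative effort weight
K = dlqr(A, B, Q, R)
```

Re-solving for `K` whenever the human impedance estimate changes yields the variable admittance behaviour described above.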
- ā¦