    Robot Programming by Demonstration: Trajectory Learning Enhanced by sEMG-Based User Hand Stiffness Estimation

    Trajectory learning is one of the key components of robot Programming by Demonstration approaches, which in many cases, especially in industrial practice, aim at defining complex manipulation patterns. To enhance these methods, which are generally based on a physical interaction in which the user guides the robot along the desired path, an additional input channel is considered in this article. The hand stiffness, which the operator continuously modulates during the demonstration, is estimated from forearm surface electromyography and translated into a request for a higher or lower accuracy level. A constrained optimization problem is then built (and solved) in the framework of smoothing B-splines to obtain a minimum-curvature trajectory that approximates the taught path within the precision imposed by the user. Experimental tests in different application scenarios, involving both position and orientation, demonstrate the benefits of the proposed approach in terms of the intuitiveness of the programming procedure for the human operator and the characteristics of the final motion.
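    As a rough illustration of the idea above (not the authors' formulation, which solves a constrained minimum-curvature problem), the sketch below uses SciPy's weighted smoothing spline: a hypothetical sEMG-derived stiffness signal sets per-sample weights, so segments demonstrated with high stiffness are reproduced more precisely while relaxed segments are smoothed more aggressively.

```python
# Minimal sketch (not the authors' implementation): a smoothing spline whose
# per-sample weights come from an sEMG-based stiffness estimate, so "stiff"
# (high-precision) segments of the demonstration are tracked closely while
# "relaxed" segments are smoothed more aggressively.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                                       # demonstration timestamps
path = np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(t.size)    # noisy taught path (1-D for brevity)

# Hypothetical normalized stiffness in [0, 1] estimated from forearm sEMG:
# high in the middle of the motion, low elsewhere.
stiffness = np.exp(-((t - 0.5) ** 2) / 0.02)

# Higher stiffness -> larger weight -> smaller allowed deviation from the
# demonstrated sample at that instant.
weights = 1.0 + 20.0 * stiffness

# The smoothing factor s bounds the weighted residual sum of squares; the
# spline then trades curvature against fidelity under that budget.
spline = UnivariateSpline(t, path, w=weights, k=3, s=len(t) * 0.01)
smoothed = spline(t)

print("max deviation where stiff:  ",
      np.max(np.abs(smoothed - path)[stiffness > 0.5]))
print("max deviation where relaxed:",
      np.max(np.abs(smoothed - path)[stiffness < 0.1]))
```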

    iMOVE: Development of a hybrid control interface based on sEMG and movement signals for an assistive robotic manipulator

    For many people with upper limb disabilities, simple activities of daily living such as drinking, opening a door, or pushing an elevator button require the assistance of a caregiver, which reduces the independence of the individual. Assistive robotic systems controlled via a human-robot interface could enable these people to perform such tasks autonomously again and thereby increase their independence and quality of life. Moreover, this interface could encourage rehabilitation of motor functions, because the individual would need to use their remaining body movements and muscle activity to provide control signals. This project aims at developing a novel hybrid control interface that combines remaining movements and muscle activity of the upper body to control the position and impedance of a robotic manipulator. This thesis presents a Cartesian position control system for the KINOVA Gen3 robotic arm, which implements a proportional-derivative control law based on the Jacobian transpose method and therefore does not require inverse kinematics. A second controller is proposed to change the robot's rigidity in real time based on measurements of muscle activity (sEMG). This controller allows the user to modulate the robot's impedance while performing a task. Moreover, the thesis presents a body-machine interface that maps the motions of the upper body (head and shoulders) to the space of robot control signals, using the principal component analysis algorithm for dimensionality reduction. The results demonstrate that, by combining the three methods presented above, the user can control the robot's position with head and shoulder movements while also adapting the robot's impedance through their muscle activation. In future work, the performance of this system will be evaluated with patients with severe movement impairments.
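    The sketch below illustrates the kind of Jacobian-transpose proportional-derivative control law described above on a hypothetical planar two-link arm (link lengths, gains, and poses are placeholder values of mine, not the thesis setup); it maps a Cartesian position error to joint torques without computing inverse kinematics.

```python
# Minimal sketch (assumptions mine, not the thesis code): Cartesian PD control
# based on the Jacobian transpose, which avoids inverse kinematics, illustrated
# on a planar 2-link arm.
import numpy as np

L1, L2 = 0.4, 0.3   # hypothetical link lengths [m]

def forward_kinematics(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def jacobian_transpose_pd(q, dq, x_des, Kp, Kd):
    """tau = J(q)^T (Kp (x_des - x) - Kd xdot); no inverse kinematics needed."""
    J = jacobian(q)
    xdot = J @ dq
    f = Kp @ (x_des - forward_kinematics(q)) - Kd @ xdot   # virtual Cartesian force
    return J.T @ f                                          # joint torques

# Toy usage: Kp could later be scaled by an sEMG activation level to change rigidity.
q, dq = np.array([0.5, 0.8]), np.zeros(2)
tau = jacobian_transpose_pd(q, dq, x_des=np.array([0.5, 0.2]),
                            Kp=np.diag([200.0, 200.0]), Kd=np.diag([20.0, 20.0]))
print("joint torques:", tau)
```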

    Neural learning enhanced variable admittance control for human-robot collaboration

    In this paper, we propose a novel strategy for human-robot impedance mapping to realize an effective execution of human-robot collaboration. The endpoint stiffness of the human arm is estimated from the configuration of the human arm and the muscle activation levels of the upper arm. Inspired by human adaptability in collaboration, a smooth stiffness mapping between the human arm endpoint and the robot arm joints is developed to inherit the human arm characteristics. The stiffness estimate is generalized to a full impedance by additionally considering the damping and mass terms. Once the human arm impedance estimation is completed, a Linear Quadratic Regulator is employed to calculate the corresponding robot arm admittance model so that it matches the estimated impedance parameters of the human arm. Under the variable admittance control, the robot arm is governed to be compliant with the human arm impedance and the interaction force exerted by the human arm endpoint, so that a near-optimal collaboration can be achieved. A radial basis function neural network is employed to compensate for the unknown dynamics and guarantee the performance of the controller. Comparative experiments have been conducted to verify the validity of the proposed technique.
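    A heavily simplified sketch of the pipeline described above, with all numbers and mappings assumed rather than taken from the paper: a normalized muscle-activation level sets an estimated human endpoint stiffness, and a Linear Quadratic Regulator then chooses feedback gains for a one-degree-of-freedom admittance model weighted by that estimate.

```python
# Minimal sketch (my simplification, not the paper's derivation): scale a nominal
# human endpoint stiffness by a normalized muscle activation level, then use an
# LQR to pick feedback gains for a 1-DoF robot admittance model whose weights
# reflect the estimated human impedance.
import numpy as np
from scipy.linalg import solve_continuous_are

def estimate_human_impedance(activation, k_min=100.0, k_max=800.0):
    """Hypothetical mapping: stiffness grows linearly with sEMG activation in [0, 1];
    damping and mass follow simple heuristic ratios."""
    k = k_min + activation * (k_max - k_min)
    m = 2.0                                 # assumed endpoint mass [kg]
    d = 2.0 * 0.7 * np.sqrt(k * m)          # ~70% of critical damping
    return m, d, k

def lqr_admittance_gains(m, d, k):
    """LQR for a 1-DoF mass-damper, with Q weighted by the estimated human
    stiffness/damping so the closed-loop admittance mimics that impedance."""
    A = np.array([[0.0, 1.0], [0.0, -d / m]])
    B = np.array([[0.0], [1.0 / m]])
    Q = np.diag([k, d])                     # penalize error ~ stiffness, velocity ~ damping
    R = np.array([[1e-3]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)      # u = -K @ [x_err, x_dot]

activation = 0.6                            # normalized upper-arm sEMG level
m, d, k = estimate_human_impedance(activation)
print("estimated impedance (m, d, k):", m, d, k)
print("admittance feedback gains:", lqr_admittance_gains(m, d, k))
```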

    User Experience Enhanced Interface and Controller Design for Human-Robot Interaction

    Robotic technologies have developed rapidly in recent years in various fields, such as medical services, industrial manufacturing, and aerospace. Despite this rapid development, how to deal effectively with uncertain environments during human-robot interaction remains unresolved. Current artificial intelligence (AI) technology does not enable robots to fulfil complex tasks without human guidance. Thus, teleoperation, in which a human operator remotely controls a robot, is indispensable in many scenarios and is an important and useful tool in research. This thesis focuses on the design of a user experience (UX) enhanced robot controller and of human-robot interaction interfaces that aim to provide human operators with an immersive perception of teleoperation. Several pieces of work have been carried out to achieve this goal. First, to control a telerobot smoothly, a customised variable gain control method is proposed in which the stiffness of the telerobot varies with the muscle activation level extracted from signals collected by surface electromyography (sEMG) devices. Second, two main contributions improve the user-friendliness of the interaction interfaces. One is that force feedback is incorporated into the framework, providing operators with haptic feedback while remotely manipulating target objects; given the high cost of force sensors, a haptic force estimation algorithm is proposed so that a force sensor is no longer needed. The other is a visual servo control system in which a stereo camera mounted on the head of a dual-arm robot offers operators a real-time view of the working situation. To compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables the learnt knowledge to be reused before the current dynamics change and thus increases the learning efficiency. Third, instead of sending commands to the telerobots by joysticks, keyboards, or demonstrations, the telerobots are controlled directly by the upper limb motion of the human operator. An algorithm is designed that uses motion signals from an inertial measurement unit (IMU) sensor to capture the operator's upper limb motion. The skeleton of the operator is detected by a Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm. In this way, the upper limb motion signals from the operator act as reference trajectories for the telerobots. A neural network (NN) based trajectory controller is also designed to track the generated reference trajectory. Fourth, to further enhance the operator's sense of immersion in teleoperation, virtual reality (VR) is incorporated so that the operator can interact with and adjust the robots more easily and accurately from the robot's perspective. Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme. Tests with human subjects were also carried out to evaluate the interface design.
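    As a toy analogue of the variable gain control mentioned above (my own assumptions throughout, not the thesis code), the sketch below extracts a muscle-activation envelope from raw sEMG by rectification and low-pass filtering and maps it to a stiffness gain for the telerobot.

```python
# Minimal sketch (illustrative only, not the thesis implementation): extract a
# muscle-activation level from raw sEMG by rectification and low-pass filtering,
# then map it to a variable stiffness gain for the telerobot controller.
import numpy as np
from scipy.signal import butter, filtfilt

def activation_from_semg(semg, fs=1000.0, cutoff_hz=4.0):
    """Rectify and low-pass filter the sEMG to obtain a smooth activation
    envelope normalized to [0, 1]."""
    rectified = np.abs(semg - np.mean(semg))
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    envelope = filtfilt(b, a, rectified)
    return np.clip(envelope / (np.max(envelope) + 1e-9), 0.0, 1.0)

def variable_stiffness(activation, k_min=50.0, k_max=600.0):
    """Higher muscle activation -> stiffer telerobot behaviour."""
    return k_min + activation * (k_max - k_min)

# Toy usage with synthetic sEMG: a burst of activity in the middle of the window.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
semg = 0.05 * np.random.default_rng(1).standard_normal(t.size)
semg[800:1200] += 0.5 * np.sin(2 * np.pi * 80 * t[800:1200])   # hypothetical contraction
act = activation_from_semg(semg, fs)
print("peak activation:", act.max(), "-> stiffness:", variable_stiffness(act.max()))
```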

    sEMG-based natural control interface for a variable stiffness transradial hand prosthesis

    We propose, implement, and evaluate a natural human-machine control interface for a variable stiffness transradial hand prosthesis that achieves tele-impedance control through surface electromyography (sEMG) signals. This interface, together with variable stiffness actuation (VSA), enables an amputee to modulate the impedance of the prosthetic limb to properly match the requirements of a task while performing activities of daily living (ADL). Both the desired position and stiffness references are estimated from sEMG signals and used to control the VSA hand prosthesis. In particular, regulation of hand impedance is managed through impedance measurements of the intact upper arm; this control takes place naturally and automatically as the amputee interacts with the environment, while the position of the hand prosthesis is regulated intentionally by the amputee through the estimated position of the shoulder. The proposed approach is advantageous in that the impedance regulation takes place naturally, without requiring the amputee's attention or diminishing their functional capability. Consequently, the proposed interface is easy to use and does not require long training periods or interfere with the control of intact body segments. This control approach is evaluated through human subject experiments conducted with able-bodied volunteers, in which adequate estimation of the references and independent control of position and stiffness are demonstrated. Funding: Turkiye Bilimsel ve Teknolojik Arastirma Kurumu (TUBITAK); 219M58.
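    The following sketch is a toy analogue of the tele-impedance interface described above, with all mappings and ranges assumed by me: shoulder elevation drives the hand-closure reference, while co-contraction of an antagonistic muscle pair on the intact upper arm raises the commanded stiffness of the variable stiffness actuator.

```python
# Minimal sketch under my own assumptions (not the authors' interface): derive a
# hand-prosthesis position reference from estimated shoulder elevation and a
# stiffness reference from the intact upper arm's co-contraction level.
import numpy as np

def position_reference(shoulder_elevation_deg, open_deg=0.0, close_deg=60.0):
    """Map shoulder elevation (hypothetically 0-60 degrees) to hand closure in [0, 1]."""
    span = close_deg - open_deg
    return float(np.clip((shoulder_elevation_deg - open_deg) / span, 0.0, 1.0))

def stiffness_reference(biceps_act, triceps_act, k_min=0.2, k_max=1.0):
    """Co-contraction of an antagonistic pair raises the commanded VSA stiffness."""
    co_contraction = min(biceps_act, triceps_act)   # both muscles active together
    return k_min + co_contraction * (k_max - k_min)

print("closure:", position_reference(35.0))
print("stiffness:", stiffness_reference(biceps_act=0.7, triceps_act=0.5))
```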

    Strategies for control of neuroprostheses through Brain-Machine Interfaces

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005. Includes bibliographical references (p. 145-153). The concept of brain-controlled machines sparks our imagination with many exciting possibilities. One potential application is in neuroprostheses for paralyzed patients or amputees. The quality of life of those who have extremely limited motor abilities can potentially be improved if we have a means of inferring their motor intent from neural signals and commanding a robotic device that can be controlled to perform as a smart prosthesis. In our recent demonstrations of such Brain Machine Interfaces (BMIs), monkeys were able to control a robot arm in 3-D motion directly, thanks to advances in accessing, recording, and decoding the electrical activity of populations of single neurons in the brain, together with algorithms for driving robotic devices with the decoded neural signals in real time. However, such demonstrations of BMIs have so far been limited to simple position control of graphical cursors or robots in free space with non-human primates. Many challenges remain in reducing this technology to practice in a neuroprosthesis for humans. The research in this thesis introduces strategies for optimizing the information extracted from the recorded neural signals, so that a practically viable and ultimately useful neuroprosthesis can be achieved. A framework for incorporating robot sensors and reflex-like behavior has been introduced in the form of Continuous Shared Control. This strategy provides a means for more steady and natural movement by compensating for the natural reflexes that are absent in direct brain control. The Muscle Activation Method, an alternative decoding algorithm for extracting motor parameters from the neural activity, has also been presented. The method allows the prosthesis to be operated under impedance control, which is similar to how our natural limbs are controlled. Using this method, the prosthesis can perform a much wider range of tasks in partially known and unknown environments. Finally, preparations have been made for clinical trials with humans, which would signify a major step toward the ultimate goal of human brain operated machines. By Hyun K. Kim. Ph.D.
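    The sketch below gives a generic reading of the Continuous Shared Control idea mentioned above (functions and gains are my own placeholders, not the thesis implementation): the brain-decoded velocity command is blended with a sensor-driven reflex term, here a simple obstacle-avoidance repulsion.

```python
# Minimal sketch (my reading of the idea, not the thesis code): continuous shared
# control blends the brain-decoded velocity command with a sensor-driven "reflex"
# command, weighted by a sharing factor alpha.
import numpy as np

def shared_control(v_brain, robot_pos, obstacle_pos, alpha=0.7, repulsion_gain=0.5):
    """Return a blended velocity: alpha weights the decoded intent, (1 - alpha)
    weights an artificial reflex that pushes away from nearby obstacles."""
    away = robot_pos - obstacle_pos
    dist = np.linalg.norm(away) + 1e-9
    v_reflex = repulsion_gain * away / dist**2       # stronger when closer
    return alpha * v_brain + (1.0 - alpha) * v_reflex

v = shared_control(v_brain=np.array([0.1, 0.0, 0.0]),
                   robot_pos=np.array([0.3, 0.0, 0.2]),
                   obstacle_pos=np.array([0.35, 0.0, 0.2]))
print("blended velocity command:", v)
```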

    An improvement of robot stiffness-adaptive skill primitive generalization using the surface electromyography in human–robot collaboration

    Learning from Demonstration in robotics has proved effective for robot skill learning. The generalization goals of most skill expression models in real scenarios are specified by humans or associated with other perceptual data. Our proposed framework uses Probabilistic Movement Primitives (ProMPs) modeling to resolve the shortcomings of previous research: the coupling between stiffness and motion is inherently established in a single model. Such a framework requires only a small amount of incomplete observation data to infer the entire skill primitive. It can be used as an intuitive tool for sending generalization commands, achieving collaboration between humans and robots with human-like stiffness modulation strategies on either side. Experiments (human-robot hand-over, object matching, pick-and-place) were conducted to prove the effectiveness of the work. A Myo armband and a Leap Motion camera are used as the surface electromyography (sEMG) and motion capture sensors, respectively, in the experiments. The experiments also show that introducing the sEMG signal into the ProMP model strengthened the ability to distinguish actions with similar movements under observation noise. The use of a mixture model brings the possibility of automating multiple collaborative tasks.
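    The sketch below shows a generic ProMP conditioning step, not the paper's coupled motion-and-stiffness model: a Gaussian prior over basis-function weights is learned from synthetic demonstrations, and conditioning on a few early observations of a new execution infers the remainder of the primitive.

```python
# Minimal sketch (a generic ProMP conditioning step, assumptions mine): learn a
# Gaussian prior over basis-function weights from demonstrations, then condition
# on partial observations of a new execution to infer the rest of the primitive.
import numpy as np

def rbf_basis(t, n_basis=10, width=0.02):
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)      # normalized RBF features

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
Phi = rbf_basis(t)

# "Demonstrations": noisy sine trajectories -> weight prior via ridge regression.
demos = [np.sin(np.pi * t) + 0.05 * rng.standard_normal(t.size) for _ in range(20)]
W = np.array([np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ d)
              for d in demos])
mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(Phi.shape[1])

# Condition on the first 20 samples of a new (partial) observation.
obs_idx = np.arange(20)
y_obs = np.sin(np.pi * t[obs_idx]) + 0.05 * rng.standard_normal(obs_idx.size)
Phi_o, sigma_y = Phi[obs_idx], 0.05 ** 2
K = Sigma_w @ Phi_o.T @ np.linalg.inv(Phi_o @ Sigma_w @ Phi_o.T
                                      + sigma_y * np.eye(obs_idx.size))
mu_post = mu_w + K @ (y_obs - Phi_o @ mu_w)

print("inferred value at t = 0.5:", (Phi @ mu_post)[50])
```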