80 research outputs found

    Learning Task Constraints from Demonstration for Hybrid Force/Position Control

    We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned to the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks whose constraints change rapidly over time. We activate only one degree of freedom for force control at any given time, ensuring motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from the demonstrated kinematic motion, such as frictional forces between the end-effector and the contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive (DMP) framework that encourage robust transition from free-space motion to in-contact motion despite environment uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce forces applied to the environment and retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error. Comment: Under review.
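
    As a rough illustration of the mechanism described above, the sketch below shows a one-dimensional discrete DMP transformation system whose goal is shifted in proportion to a force-tracking error. The gains, the Gaussian basis-function forcing term, and the goal-shift rule k_shift * (f_des - f_sens) are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Minimal 1-D discrete DMP with a force-feedback goal shift.
# alpha_z, beta_z, alpha_x are standard DMP gains; k_shift is an
# illustrative gain that nudges the goal toward lower contact force
# (not the paper's exact formulation).

def dmp_step(y, z, x, g, f_des, f_sens, w, centers, widths,
             tau=1.0, dt=0.002, alpha_z=25.0, beta_z=6.25,
             alpha_x=1.0, k_shift=0.001):
    # Canonical system: phase variable decays from 1 toward 0.
    x += (-alpha_x * x / tau) * dt

    # Forcing term: weighted sum of Gaussian basis functions, gated by x.
    psi = np.exp(-widths * (x - centers) ** 2)
    forcing = x * (psi @ w) / (psi.sum() + 1e-10)

    # Shift the goal in proportion to the force tracking error,
    # pulling the attractor toward the surface only as hard as needed.
    g += k_shift * (f_des - f_sens) * dt

    # Transformation system: critically damped spring-damper plus forcing.
    dz = (alpha_z * (beta_z * (g - y) - z) + forcing) / tau
    dy = z / tau
    return y + dy * dt, z + dz * dt, x, g
```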

    Segmentation and generalisation for writing skills transfer from humans to robots

    In this study, the authors present an enhanced, generalised teaching-by-demonstration technique for a KUKA iiwa robot. Movements are recorded from a human operator, and the recorded data are segmented in MATLAB using the difference method (DV). The resulting trajectory data are used to model a non-linear system known as a dynamic movement primitive (DMP). To learn correctly and accurately from multiple demonstrations, a Gaussian mixture model is employed in the evaluation of the DMP so that the multiple demonstrated trajectories can be modelled. Furthermore, a synthesised trajectory with smaller position errors in 3D space is generated using the Gaussian mixture regression algorithm. The proposed approach is tested and demonstrated by performing a Chinese character writing task with a KUKA iiwa robot.
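
    The sketch below illustrates Gaussian mixture regression (GMR) over several time-indexed demonstrations, the kind of step used here to blend multiple demonstrations into a single reference trajectory for DMP modelling. The data layout (demos as a list of equal-length (T, 3) position arrays), the number of mixture components, and the use of scikit-learn and SciPy are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Sketch of Gaussian mixture regression over several demonstrations,
# each resampled to the same length; `demos` is a list of (T, 3) arrays
# of Cartesian positions (illustrative shapes, not the paper's format).

def gmr_reference(demos, n_components=8):
    T = demos[0].shape[0]
    t = np.linspace(0.0, 1.0, T)
    # Stack (time, x, y, z) samples from all demonstrations.
    data = np.vstack([np.column_stack([t, d]) for d in demos])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full").fit(data)

    ref = np.zeros((T, 3))
    for i, ti in enumerate(t):
        # Responsibility of each component for this time step.
        h = np.array([w * norm.pdf(ti, m[0], np.sqrt(c[0, 0]))
                      for w, m, c in zip(gmm.weights_, gmm.means_,
                                         gmm.covariances_)])
        h /= h.sum()
        # Conditional mean of position given time, per component, then blend.
        for k, (m, c) in enumerate(zip(gmm.means_, gmm.covariances_)):
            ref[i] += h[k] * (m[1:] + c[1:, 0] / c[0, 0] * (ti - m[0]))
    return ref  # smoothed trajectory to feed into DMP fitting
```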

    Development of writing task recombination technology based on DMP segmentation via verbal command for Baxter robot

    This paper develops a character recombination technology based on dynamic movement primitive (DMP) segmentation driven by verbal commands on a Baxter robot platform. Movements are recorded from a human demonstrator, who physically guides the Baxter robot through each movement five times; this training data set is also used for the playback process. Dynamic time warping is then employed to pre-process the data. A DMP is used to model and generalize every single movement, and a Gaussian mixture model is used to generate multiple patterns after the teaching process. The Gaussian mixture regression algorithm is then applied to reduce the 3D position errors of the synthesized trajectory. A remote PC commands Baxter to record or play back trajectories over the user datagram protocol (UDP), with commands entered in a text file, and Dragon NaturallySpeaking software converts voice input to text. The proposed approach is tested by performing a Chinese character writing task with the Baxter robot, in which different Chinese characters are written after teaching only one character.
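
    A minimal dynamic time warping (DTW) sketch of the pre-processing step is shown below. The Euclidean point distance and the plain O(nm) dynamic-programming implementation are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np

# Minimal dynamic time warping for aligning two demonstrated trajectories
# before DMP modelling; inputs are (T, D) arrays, and the distance metric
# and implementation details here are illustrative only.

def dtw_path(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])

    # Backtrack from the corner to recover the optimal alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]
```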

    A robotic learning and generalization framework for curved surface based on modified DMP

    Learning from demonstration (LfD) enables robots to quickly obtain reference trajectory information, and how to reproduce and generalize the demonstrated skills is an active research topic. First, because many industrial robots are difficult to drag continuously and smoothly during demonstration, a compliant continuous drag-demonstration system based on a discrete admittance model is designed. Then, to address the poor generalization of the classical dynamic movement primitive (DMP) on curved surfaces, a modified DMP incorporating a scaling factor and a force coupling term is proposed. Finally, curve-drawing experiments are carried out on a 6-DoF robot, and the results show the effectiveness of the proposed learning and generalization framework.
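
    The sketch below shows a discrete admittance model of the kind used for compliant drag demonstration: the measured interaction force drives a virtual mass-damper whose output velocity is commanded to the robot at each control cycle. The mass, damping, and sample-time values, and the explicit Euler discretization, are illustrative assumptions.

```python
import numpy as np

# Discrete admittance model sketch: M * dv/dt + D * v = f_ext, integrated
# with explicit Euler. Parameter values are illustrative, not the paper's.

class DiscreteAdmittance:
    def __init__(self, mass=2.0, damping=20.0, dt=0.002):
        self.M, self.D, self.dt = mass, damping, dt
        self.v = np.zeros(3)

    def step(self, f_ext):
        # Measured hand-guiding force produces a compliant velocity response.
        a = (np.asarray(f_ext) - self.D * self.v) / self.M
        self.v = self.v + a * self.dt
        return self.v  # commanded Cartesian velocity for the next cycle
```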

    Stitching Dynamic Movement Primitives and Image-based Visual Servo Control

    Combining perception-based feedback control with Dynamic Movement Primitive (DMP)-based motion generation for a robot's end-effector is a useful solution for many robotic manufacturing tasks. For instance, during an insertion task in which the hole or recipient part is not yet visible to the eye-in-hand camera, a learning-based movement primitive method can be used to generate the end-effector path. Once the recipient part is in the field of view (FOV), Image-based Visual Servo (IBVS) control can take over the motion of the robot. Inspired by such applications, this paper presents a generalized control scheme that switches between motion generation using DMPs and IBVS control. To facilitate the design, a common state-space representation for the DMP and IBVS systems is first established. Stability analysis of the switched system using multiple Lyapunov functions shows that the state trajectories converge asymptotically to a bound. The developed method is validated in two real-world experiments using the eye-in-hand configuration on a Baxter research robot. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
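
    A compact sketch of the switching idea is given below: the controller tracks the DMP-generated velocity until the target features enter the FOV, then applies a standard IBVS law v = -lambda * pinv(L) @ (s - s_star). The feature-visibility flag, interaction matrix L, and gain are placeholders rather than the paper's design, and the multiple-Lyapunov-function stability analysis is not reproduced here.

```python
import numpy as np

# Switching between DMP motion generation and image-based visual servoing.
# `dmp_velocity` is the velocity from the learned primitive, `s`/`s_star`
# are current and desired image features, and `L` is the interaction
# matrix; all are placeholders for illustration.

def control_step(dmp_velocity, features_visible, s, s_star, L, lam=0.5):
    if not features_visible:
        # Free-space phase: track the learned movement primitive.
        return dmp_velocity
    # Visual-servo phase: drive the image-feature error to zero.
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```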

    An Affordable Upper-Limb Exoskeleton Concept for Rehabilitation Applications

    In recent decades, many researchers have focused on the design and development of exoskeletons, and several strategies have been proposed to develop increasingly efficient and biomimetic mechanisms. However, existing exoskeletons tend to be expensive and available only to a few people. This paper introduces a new gravity-balanced upper-limb exoskeleton suited for rehabilitation applications and designed with the main objective of reducing the cost of components and materials. Regarding the mechanics, the proposed design significantly reduces the motor torque requirements, because high cost is usually associated with high-torque actuation. Regarding the electronics, we exploit the microprocessor peripherals to obtain parallel, real-time execution of communication and control tasks without relying on expensive RTOSs. Regarding sensing, we avoid the use of expensive force sensors. Advanced control and rehabilitation features are implemented, and an intuitive user interface is developed. To experimentally validate the functionality of the proposed exoskeleton, a rehabilitation exercise in the form of a pick-and-place task is considered. Experimentally, peak torques are reduced by 89% for the shoulder and by 84% for the elbow.
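
    For context on the motor-torque savings, the sketch below computes the textbook gravity torque of a planar two-link arm (shoulder and elbow); a gravity-balanced mechanism removes most of this static load from the motors. The link masses and lengths are illustrative values, not taken from the paper.

```python
import numpy as np

# Textbook gravity torque for a planar two-link arm (shoulder, elbow),
# included only to illustrate the static load a gravity-balanced design
# removes; masses and lengths below are illustrative, not the paper's.

def gravity_torques(q1, q2, m1=2.0, m2=1.5, l1=0.3, lc1=0.15, lc2=0.15,
                    g=9.81):
    tau2 = m2 * lc2 * g * np.cos(q1 + q2)                      # elbow
    tau1 = (m1 * lc1 + m2 * l1) * g * np.cos(q1) + tau2        # shoulder
    return tau1, tau2

# Example: arm held horizontally (q1 = q2 = 0) requires the full static torque.
print(gravity_torques(0.0, 0.0))
```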

    Learning modular policies for robotics
