
    Learning Algorithm Design for Human-Robot Skill Transfer

    In this research, we develop an intelligent learning scheme for human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot and a KUKA iiwa robot to verify the effectiveness of the proposed design.

    During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) in each arm. The joint motions of the operator's head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via forward and inverse kinematics. In addition, to improve the tracking performance, a Kalman filter is further employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy based on a vector approach is developed to accomplish a specific motion capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and programmed through processing software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator's arm movements; the armbands help to recognize and calculate the precise velocity of the operator's arm motion. Additionally, a neural network based adaptive controller is designed and implemented on the Baxter robot to validate the teleoperation of the Baxter robot.

    Subsequently, an enhanced teaching interface is developed for the robot using DMP and GMR. Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data are sent to a remote PC for teleoperating the Baxter robot. At this stage, the DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used to preprocess the data recorded on the robot platform, and GMM is employed for the evaluation of the DMP to generate multiple patterns after the teaching process is complete. Next, the GMR algorithm is applied to generate a synthesized trajectory that minimizes position errors in three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively.

    Finally, an optimized DMP is added to the teaching interface. A character recombination technique based on DMP segmentation, driven by verbal commands, has also been developed and incorporated into the Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it to complete the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data. DMP is chosen for modelling and overall movement control. The GMM is used to generate multiple patterns after the teaching process, and the GMR algorithm is then employed to reduce position errors in 3D space once a synthesized trajectory has been generated.
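    The movement modelling step above relies on the discrete DMP formulation. As a point of reference, the sketch below implements a minimal single-DOF discrete DMP in the standard Ijspeert form: fit the forcing term from one demonstration, then integrate the system to reproduce or generalize the motion. The gains, basis-function settings and the synthetic demonstration are illustrative and not taken from the thesis.

    import numpy as np

    def fit_dmp(y_demo, dt, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
        """Fit the forcing term of a single-DOF discrete DMP to one demonstration."""
        T = len(y_demo)
        tau = (T - 1) * dt
        y0, g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)                       # demonstrated velocity
        ydd = np.gradient(yd, dt)                          # demonstrated acceleration
        x = np.exp(-alpha_x * np.arange(T) * dt / tau)     # canonical phase x(t)
        # Forcing term required so the transformation system reproduces the demo:
        # tau^2*ydd = alpha_z*(beta_z*(g - y) - tau*yd) + f(x)
        f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)
        c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))   # basis centres in phase
        h = n_basis / c**2                                       # basis widths
        psi = np.exp(-h * (x[:, None] - c)**2)                   # (T, n_basis)
        s = x * (g - y0)                                         # forcing-term scaling
        w = np.array([(s * psi[:, i]) @ f_target / ((s**2) @ psi[:, i] + 1e-10)
                      for i in range(n_basis)])                  # locally weighted regression
        return dict(w=w, c=c, h=h, y0=y0, g=g, tau=tau,
                    alpha_z=alpha_z, beta_z=beta_z, alpha_x=alpha_x)

    def rollout_dmp(p, dt, n_steps, goal=None):
        """Integrate the DMP; an optional new goal generalizes the learned motion."""
        g = p["g"] if goal is None else goal
        y, z, x = p["y0"], 0.0, 1.0
        traj = []
        for _ in range(n_steps):
            psi = np.exp(-p["h"] * (x - p["c"])**2)
            f = (psi @ p["w"]) / (psi.sum() + 1e-10) * x * (g - p["y0"])
            zd = (p["alpha_z"] * (p["beta_z"] * (g - y) - z) + f) / p["tau"]
            z += zd * dt
            y += z / p["tau"] * dt
            x += -p["alpha_x"] * x / p["tau"] * dt
            traj.append(y)
        return np.array(traj)

    # Learn from a synthetic demonstration and reproduce it (or reach a new goal).
    dt = 0.01
    demo = np.sin(np.linspace(0.0, np.pi / 2, 200))   # stand-in for a recorded joint angle
    params = fit_dmp(demo, dt)
    reproduction = rollout_dmp(params, dt, 200)
    generalized = rollout_dmp(params, dt, 200, goal=1.2)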
    The Baxter robot, remotely controlled via the User Datagram Protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice commands. The proposed approach has been verified by having the Baxter robot perform a character-writing task in which the robot is taught to write only one character.
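    To make the GMM/GMR synthesis step concrete, the following sketch fits a joint Gaussian mixture over (time, position) samples from several aligned demonstrations and then regresses the conditional mean at each time step to obtain a single synthesized reference trajectory. It is a generic single-output GMR example, not the implementation used in the thesis; the demonstrations, component count and noise level are synthetic.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    # Five noisy, already-aligned demonstrations of one Cartesian coordinate.
    demos = [np.sin(np.pi * t) + 0.02 * rng.standard_normal(t.size) for _ in range(5)]
    data = np.column_stack([np.tile(t, len(demos)), np.concatenate(demos)])   # (N, 2)

    gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0).fit(data)

    def gmr(gmm, t_query):
        """Conditional mean E[y | t] of a two-dimensional (t, y) Gaussian mixture."""
        K = gmm.n_components
        log_h = np.empty((K, t_query.size))
        cond = np.empty((K, t_query.size))
        for k in range(K):
            mt, my = gmm.means_[k]
            stt, sty = gmm.covariances_[k][0, 0], gmm.covariances_[k][0, 1]
            log_h[k] = (np.log(gmm.weights_[k]) - 0.5 * np.log(2.0 * np.pi * stt)
                        - 0.5 * (t_query - mt)**2 / stt)
            cond[k] = my + (sty / stt) * (t_query - mt)   # conditional mean of component k
        h = np.exp(log_h - log_h.max(axis=0))
        h /= h.sum(axis=0)                                # responsibilities given t only
        return (h * cond).sum(axis=0)

    synthesized = gmr(gmm, t)   # smooth trajectory the robot controller can track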

    Kinematics and Robot Design II (KaRD2019) and III (KaRD2020)

    This volume collects papers published in two Special Issues, “Kinematics and Robot Design II, KaRD2019” (https://www.mdpi.com/journal/robotics/special_issues/KRD2019) and “Kinematics and Robot Design III, KaRD2020” (https://www.mdpi.com/journal/robotics/special_issues/KaRD2020), which are the second and third issues of the KaRD Special Issue series hosted by the open access journal Robotics.

    The KaRD series is an open environment where researchers present their work and discuss all topics concerning the many aspects in which kinematics is involved in the design of robotic/automatic systems. It aims to become an established reference for researchers in the field, much like other serial international conferences and publications. Even though the KaRD series publishes one Special Issue per year, all received papers are peer-reviewed as soon as they are submitted and, if accepted, they are immediately published in MDPI Robotics. Kinematics is so intimately related to the design of robotic/automatic systems that the admitted topics of the KaRD series practically cover all the subjects normally present in well-established international conferences on “mechanisms and robotics”.

    KaRD2019 and KaRD2020 together received 22 papers and, after the peer-review process, accepted 17 of them. The accepted papers cover problems related to theoretical/computational kinematics, biomedical engineering, and other design and application aspects.

    Bringing a Humanoid Robot Closer to Human Versatility: Hard Realtime Software Architecture and Deep Learning Based Tactile Sensing

    For centuries, it has been a vision of man to create humanoid robots, i.e., machines that not only resemble the shape of the human body, but have similar capabilities, especially in dextrously manipulating their environment. But only in recent years has it become possible to build actual humanoid robots with many degrees of freedom (DOF) and equipped with torque controlled joints, which are a prerequisite for acting sensitively in the world. In this thesis, we extend DLR's advanced mobile torque controlled humanoid robot Agile Justin in two important directions to get closer to human versatility. First, we enable Agile Justin, which was originally built as a research platform for dextrous mobile manipulation, to also execute complex dynamic manipulation tasks. We demonstrate this with the challenging task of catching up to two simultaneously thrown balls with its hands. Second, we equip Agile Justin with highly developed, deep learning based tactile sensing capabilities that are critical for dextrous fine manipulation. We demonstrate its tactile capabilities with the delicate task of identifying an object's material simply by gently sweeping a fingertip over its surface.

    Key to the realization of complex dynamic manipulation tasks is a software framework that allows for a component based system architecture to cope with the complexity and the parallel and distributed computational demands of deep sensor-perception-planning-action loops, all under tight timing constraints. This thesis presents the communication layer of our aRDx (agile robot development -- next generation) software framework, which provides hard realtime determinism and optimal transport of data packets, with zero-copy for intra- and inter-process communication and copy-once for distributed communication. In the implementation of the challenging ball catching application on Agile Justin, we take full advantage of aRDx's performance and advanced features such as channel synchronization. Besides the challenging visual ball tracking, which uses only onboard sensing while everything is moving, and the automatic, self-contained calibration procedure needed to provide the necessary precision, the major contribution is the unified generation of the reaching motion for the arms. The catch point selection, motion planning and joint interpolation steps are subsumed in one nonlinear constrained optimization problem which is solved in realtime and allows for the realization of different catch behaviors.

    For the highly sensitive task of tactile material classification with a flexible pressure-sensitive skin on Agile Justin's fingertip, we present our deep convolutional network architecture TactNet-II. The input is the raw 16,000-dimensional, complex and noisy spatio-temporal tactile signal generated when sweeping over an object's surface. For comparison, we perform a thorough human performance experiment with 15 subjects, which shows that Agile Justin reaches superhuman performance in the high-level material classification task (What material is it?) as well as in the low-level material differentiation task (Are two materials the same?). To increase the sample efficiency of TactNet-II, we adapt state-of-the-art deep end-to-end transfer learning to tactile material classification, leading to an up to 15-fold reduction in the number of training samples needed. The presented methods led to six publication awards or award-finalist nominations and international media coverage, and also worked robustly at many trade fairs and lab demos.
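    As an illustration of the kind of model involved, the sketch below is a small 1-D convolutional classifier over a spatio-temporal tactile sweep, assuming the raw 16,000 values are arranged as taxel channels over time. It is an assumed stand-in written in PyTorch; the actual TactNet-II architecture, the taxel layout and the number of material classes are not reproduced here.

    import torch
    import torch.nn as nn

    class TactileSweepClassifier(nn.Module):
        """Illustrative spatio-temporal tactile classifier (not the real TactNet-II)."""
        def __init__(self, n_taxels=16, n_materials=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_taxels, 32, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),              # collapse the temporal axis
            )
            self.classifier = nn.Linear(128, n_materials)

        def forward(self, x):                         # x: (batch, n_taxels, time)
            return self.classifier(self.features(x).squeeze(-1))

    # One sweep sample: 16 assumed taxel channels x 1000 time steps = 16,000 raw values.
    model = TactileSweepClassifier()
    logits = model(torch.randn(1, 16, 1000))
    predicted_material = logits.argmax(dim=1)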

    Dynamic Grasp Adaptation: From Humans to Robots

    The human hand is an amazing tool, as demonstrated by its incredible motor capability and remarkable sense of touch. To enable robots to work in a human-centric environment, it is desirable to endow robotic hands with human-like capabilities for grasping and object manipulation. However, due to its inherent complexity and inevitable model uncertainty, robotic grasping and manipulation remains a challenge. This thesis focuses on grasp adaptation in the face of model and sensing uncertainties: given an object whose properties are not known with certainty (e.g., shape, weight and external perturbation), and a multifingered robotic hand, we aim at determining where to put the fingers and how the fingers should adaptively interact with the object using tactile sensing, in order to achieve either a stable grasp or a desired dynamic behaviour.

    A central idea in this thesis is object-centric dynamics: namely, all control constraints are expressed in an object-centric representation. This simplifies computation and makes the control versatile with respect to the type of hand. This is an essential feature that distinguishes our work from other robust grasping work in the literature, where generating a static stable grasp for a given hand is usually the primary goal. In this thesis, grasp adaptation is a dynamic process that flexibly adapts the grasp to fit some purpose from the object's perspective, in the presence of a variety of uncertainties and/or perturbations. When building a grasp adaptation for a given situation, two key problems must be addressed: 1) choosing an initial grasp that is suitable for future adaptation, and, more importantly, 2) designing an adaptation strategy that can react adequately to achieve the desired behaviour of the grasped object.

    To address the first challenge (planning a grasp under shape uncertainty), we propose an approach that parameterizes the uncertainty in object shape using Gaussian Processes (GPs) and incorporates it as a constraint into contact-level grasp planning. To realize the planned contacts using different hands interchangeably, we further develop a probabilistic model to predict feasible hand configurations, including hand pose and finger joints, given only the desired contact points. The model is built using the concept of the Virtual Frame (VF), and it is independent of the choice of hand frame and object frame. The performance of the proposed approach is validated on two different robotic hands, an industrial gripper (the 4-DOF Barrett hand) and a humanoid hand (the 16-DOF Allegro hand), manipulating objects of daily use with complex geometry and various textures (a spray bottle, a tea caddy, a jug and a bunny toy).

    In the second part of this thesis, we propose an approach to designing an adaptation strategy that ensures grasp stability in the presence of physical uncertainties of the object (object weight, friction at the contacts and external perturbations). Based on an object-level impedance controller, we first design a grasp stability estimator in the object frame using grasp experience and tactile sensing. Once a grasp is predicted to be unstable during online execution, the grasp adaptation strategy is triggered to improve grasp stability, by either changing the stiffness at the finger level or relocating one fingertip to a better area.
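    A common way to parameterize shape uncertainty with Gaussian Processes, in the spirit of the first contribution, is a GP implicit surface: observed surface points are labelled 0, points offset outward along the normals +1 and an interior point -1, and the posterior standard deviation at a query point serves as a local uncertainty measure that a grasp planner can constrain. The sketch below is a minimal, assumed illustration of that idea using scikit-learn, not the parameterization developed in the thesis; the object (a small sphere), kernel and offsets are synthetic.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    center, radius = np.array([0.5, 0.0, 0.1]), 0.05
    normals = rng.standard_normal((80, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    surface_pts = center + radius * normals              # sensed surface samples

    X = np.vstack([surface_pts,
                   surface_pts + 0.02 * normals,         # points just outside the surface
                   center[None, :]])                     # one point inside the object
    y = np.concatenate([np.zeros(80), np.ones(80), [-1.0]])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05) + WhiteKernel(1e-4)).fit(X, y)

    # Query a candidate contact location: a value near 0 means "on the surface",
    # and the posterior std is a proxy for how uncertain the shape is there.
    query = np.array([[0.55, 0.0, 0.1]])
    value, std = gp.predict(query, return_std=True)
    print(f"implicit value {value[0]:+.3f}, shape uncertainty {std[0]:.3f}")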

    Trust in Robots

    Robots are becoming increasingly prevalent in our daily lives, within our living and working spaces. We hope that robots will take up tedious, mundane or dirty chores and make our lives more comfortable, easy and enjoyable by providing companionship and care. However, robots may pose a threat to human privacy, safety and autonomy; therefore, it is necessary to have constant control over the developing technology to ensure the benevolent intentions and safety of autonomous systems. Building trust in (autonomous) robotic systems is thus necessary. The title of this book highlights this challenge: “Trust in robots—Trusting robots”. Herein, various notions and research areas associated with robots are unified. The theme “Trust in robots” addresses the development of technology that is trustworthy for users; “Trusting robots” focuses on building a trusting relationship with robots, furthering previous research. These themes and topics are at the core of the PhD program “Trust Robots” at TU Wien, Austria.

    Haptic Coupling with Augmented Feedback between Two KUKA Light-Weight Robots and the PR2 Robot Arms

    This paper discusses the theoretical background and practical implementation of a large-scale, low-performance haptic remote control setup. The experimental system consists of a pair of KUKA Light Weight Robots (LWR) coupled to a Willow Garage Personal Robot (PR2) via two different robotic frameworks. The haptic “performance” is, of course, not comparable to dedicated haptic applications, but the system has its use as a test-bed for interaction between “legacy” service robot systems that have not been especially designed for mutual haptic interaction. We discuss some major application problems and the future work needed for non-uniform robot coupling. Besides haptic coupling, we provide the human operator with visual feedback. To this end, the head movements of the human operator are coupled to the head movement of the PR2, and the images of its eye cameras are displayed to the human operator on a wearable display. The presented teleoperation application is furthermore an example of the integration of two component-based robotic frameworks, namely OROCOS (Open Robot Control Software) and ROS (Robot Operating System). Experimental results regarding the haptic coupling are presented using an “artistic” painting task for qualitative results and a hard contact at the slave side for quantitative results.
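    The haptic coupling between the two arms can be pictured as a Cartesian spring-damper acting between the master and slave end effectors, with equal and opposite forces fed back to each side. The sketch below shows that idea only; the gains are illustrative and the paper's actual controller, which runs across OROCOS and ROS, is not reproduced.

    import numpy as np

    K = np.diag([300.0, 300.0, 300.0])    # coupling stiffness [N/m], illustrative
    D = np.diag([20.0, 20.0, 20.0])       # coupling damping [N*s/m], illustrative

    def coupling_forces(x_master, v_master, x_slave, v_slave):
        """Equal and opposite Cartesian forces pulling both end effectors together."""
        f_on_slave = K @ (x_master - x_slave) + D @ (v_master - v_slave)
        return -f_on_slave, f_on_slave    # (feedback to master, command at slave)

    # One control cycle with the slave lagging 2 cm behind the master along x.
    f_master, f_slave = coupling_forces(np.array([0.50, 0.0, 0.30]), np.zeros(3),
                                        np.array([0.48, 0.0, 0.30]), np.zeros(3))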

    Sliding Mode Control of Wheeled Mobile Robots (Controle por modo deslizante de robôs móveis sobre rodas)

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2013.

    The control of nonholonomic mobile robots is still an open problem. The main control techniques have limited performance with respect to robustness and practical implementation, as well as difficulties in handling nonholonomic constraints. Sliding mode control is a technique that proves to be well suited to this problem, due to its characteristic of offering robustness by constraining the system. However, the practical implementation of its classic form, first order sliding mode control, suffers from chattering effects, due to the excitation of neglected fast dynamics and the limited switching frequency of the control signal. Some known solutions to overcome chattering have the disadvantage of reducing the ideal robustness of the technique. A second order sliding mode control technique is considered as a solution, since it minimizes this problem while maintaining the robustness properties. This is the super-twisting algorithm which, in addition to the features listed, is simple to implement and has good numerical performance.
    This work addresses the trajectory tracking control problem for a mobile robot subject to nonholonomic constraints, represented in state space by a kinematic model in cascade with a dynamic model. The solution proposed in this thesis is the synthesis of a control structure comprising a kinematic controller and a dynamic one. The kinematic controller is designed with the super-twisting control technique, and its main product is a set of constraints that, when imposed on the system, ensure robust trajectory tracking. To this end, it generates a velocity control signal to be tracked by the dynamic controller, which consists of an inverse dynamics control law with an outer proportional plus derivative (PD) controller. The PD control law plays an important role in reducing chattering, as its action decreases the influence of the neglected dynamics. To illustrate the characteristics of the proposed controllers, simulation and experimental results are presented from trials with a medium-sized differential-drive wheeled mobile robot.
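    To make the super-twisting idea concrete, the sketch below applies the standard super-twisting law u = -k1*sqrt(|s|)*sign(s) + v, with dv/dt = -k2*sign(s), to a scalar sliding variable with dynamics ds/dt = u + d(t) and a smooth matched disturbance. The gains, disturbance and time step are illustrative and unrelated to the controller tuned in the thesis; note that the control signal itself stays continuous, which is what mitigates chattering.

    import numpy as np

    k1, k2, dt = 3.0, 2.0, 1e-3        # illustrative gains and integration step
    s, v = 1.0, 0.0                    # sliding variable and the controller's integral state
    history = []
    for step in range(20000):          # 20 s of simulated time
        t = step * dt
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v   # continuous super-twisting control
        v += -k2 * np.sign(s) * dt
        d = 0.4 * np.sin(2.0 * t)                    # smooth matched disturbance
        s += (u + d) * dt                            # ds/dt = u + d
        history.append(s)
    print(f"|s| after 20 s: {abs(s):.2e}")           # driven to (approximately) zero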