
    A Bio-Inspired Tensegrity Manipulator with Multi-DOF, Structurally Compliant Joints

    Most traditional robotic mechanisms feature inelastic joints that cannot robustly handle large deformations and off-axis moments. As a result, applied loads are transferred rigidly throughout the entire structure, and the exerted leverage is magnified at each subsequent joint, possibly damaging the mechanism. In this paper, we present two lightweight, elastic, bio-inspired tensegrity robotic arms which mitigate this danger while improving the mechanism's functionality. Our solutions feature modular tensegrity structures that, when connected, function similarly to the human elbow and the human shoulder. Like their biological counterparts, the proposed robotic joints are flexible and comply with unanticipated forces. Both proposed structures have multiple passive degrees of freedom and four active degrees of freedom (two from the shoulder and two from the elbow). The structural advantages demonstrated by the joints in these manipulators illustrate a solution to the fundamental issue of elegantly handling off-axis compliance.
    Comment: IROS 201

    Haptic-Based Shared-Control Methods for a Dual-Arm System

    We propose novel haptic guidance methods for a dual-arm telerobotic manipulation system that can handle several different constraints, such as collisions, joint limits, and singularities. We combine the haptic guidance with shared-control algorithms for autonomous orientation control and collision avoidance, meant to further simplify the execution of grasping tasks. The stability of the overall system in its various control modalities is analyzed via passivity arguments. In addition, a human-subject study is carried out to assess the effectiveness and applicability of the proposed control approaches in both simulated and real scenarios. Results show that the proposed haptic-enabled shared-control methods significantly improve grasping-task performance with respect to classic teleoperation with neither haptic guidance nor shared control
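The joint-limit portion of such haptic guidance can be illustrated as a simple repulsive force cue rendered on the master device. This is a minimal sketch under stated assumptions, not the authors' controller; the margin and gain values are invented:

```python
import numpy as np

def joint_limit_force_cue(q, q_min, q_max, margin=0.2, k=5.0):
    """Repulsive force cue that grows as a joint nears its limit.

    Returns zero inside the safe region and a spring-like push back
    toward the interior once a joint enters the margin zone.
    Margin (rad) and stiffness k are illustrative values.
    """
    q = np.asarray(q, dtype=float)
    lower_pen = np.maximum(0.0, (q_min + margin) - q)  # depth into lower margin
    upper_pen = np.maximum(0.0, q - (q_max - margin))  # depth into upper margin
    return k * (lower_pen - upper_pen)  # push up at lower limit, down at upper

# A mid-range joint feels no cue; a joint near its upper limit is pushed back.
cue = joint_limit_force_cue(np.array([0.0, 1.95]),
                            q_min=np.array([-2.0, -2.0]),
                            q_max=np.array([2.0, 2.0]))
```

In a bilateral setup, this cue would be summed with cues for the other constraints (collision, singularity) before being rendered on the haptic device.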

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that must perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. The richness of visual data makes it possible to build a complete description of the environment, capturing both geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). This wealth of data supports methods that exploit the totality of the data (dense approaches) as well as methods that work on a reduced set obtained through feature extraction (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. Sparse visual data, in turn, are extracted as geometric primitives in order to implement a visual servoing control scheme that realizes the desired navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are exploited to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, reliable data about unmeasurable quantities are of great importance and critical at the same time. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing.
The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and to provide ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose inputs are theoretically arbitrary: there is no mechanism to actively adapt the input trajectories in order to optimize specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera
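The passive estimators mentioned above share the standard linear Kalman predict/update cycle. The sketch below is the textbook formulation, not the thesis' needle-pose filter; all matrices (F, H, Q, R) are placeholders to be filled in from the actual system model:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P: prior state estimate and covariance
    z:    new measurement; F, H: dynamics and measurement matrices
    Q, R: process and measurement noise covariances
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# One step of a toy 1D constant-state filter: the estimate moves toward the
# measurement and the covariance shrinks.
x1, P1 = kf_step(np.zeros(1), np.eye(1), np.array([1.0]),
                 np.eye(1), np.eye(1), 0.01 * np.eye(1), np.eye(1))
```

An active estimation scheme would additionally choose the camera trajectory so that the resulting measurement sequence keeps the worst-case covariance small, rather than accepting whatever inputs arrive.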

    GELLO: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators

    Imitation learning from human demonstrations is a powerful framework for teaching robots new skills. However, the performance of the learned policies is bottlenecked by the quality, scale, and variety of the demonstration data. In this paper, we aim to lower the barrier to collecting large and high-quality human demonstration data by proposing GELLO, a general framework for building low-cost and intuitive teleoperation systems for robotic manipulation. Given a target robot arm, we build a GELLO controller that has the same kinematic structure as the target arm, leveraging 3D-printed parts and off-the-shelf motors. GELLO is easy to build and intuitive to use. Through an extensive user study, we show that GELLO enables more reliable and efficient demonstration collection than teleoperation devices commonly used in the imitation learning literature, such as VR controllers and 3D spacemouses. We further demonstrate the capabilities of GELLO for performing complex bimanual and contact-rich manipulation tasks. To make GELLO accessible to everyone, we have designed and built GELLO systems for 3 commonly used robotic arms: Franka, UR5, and xArm. All software and hardware are open-sourced and can be found on our website: https://wuphilipp.github.io/gello/
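Because a GELLO controller shares the target arm's kinematic structure, the teleoperation mapping reduces to a per-joint transform from leader readings to follower commands. This is an illustrative sketch of that idea, not GELLO's actual driver code; the signs, offsets, and limits are invented:

```python
import numpy as np

class JointSpaceTeleop:
    """Maps leader (controller) joint readings to follower commands.

    With matching kinematic structures, each follower joint tracks one
    leader joint through a sign flip, an offset, and a clip to the
    follower's joint limits (all values here are hypothetical).
    """
    def __init__(self, signs, offsets, q_min, q_max):
        self.signs = np.asarray(signs, float)
        self.offsets = np.asarray(offsets, float)
        self.q_min = np.asarray(q_min, float)
        self.q_max = np.asarray(q_max, float)

    def map(self, q_leader):
        q_cmd = self.signs * np.asarray(q_leader, float) + self.offsets
        return np.clip(q_cmd, self.q_min, self.q_max)

teleop = JointSpaceTeleop(signs=[1.0, -1.0], offsets=[0.0, 0.1],
                          q_min=[-1.0, -1.0], q_max=[1.0, 1.0])
q_cmd = teleop.map([0.5, -0.3])  # one follower command per control tick
```

In use, `map` would be called at the control rate on the leader's encoder readings and the result streamed to the follower arm's joint-position interface.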

    Cognitive Reasoning for Compliant Robot Manipulation

    Physically compliant contact is a major element for many tasks in everyday environments. A universal service robot that is utilized to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcome, dexterous motor skills to perform the actions accordingly, as well as a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers those four steps of reasoning in the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods and control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and predict the quality according to the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework. The controller is automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and interpret the performance semantically. 
Finally, the robot is able to compensate for possible disturbances as it plans additional recovery motions while effectively closing the cognitive control loop. Among others, the developed concept is applied in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the associated opportunities that emerge with the availability of cognition-enabled service robots
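The particle-distribution idea for planning and rating wiping actions can be illustrated with a toy version: dirt particles scattered on a surface, a sampled tool path, and predicted quality as the fraction of particles swept. This is a hypothetical simplification of the dissertation's model, not its implementation:

```python
import numpy as np

def wipe_quality(particles, path, tool_radius):
    """Predicted wiping quality: fraction of dirt particles swept.

    particles: (N, 2) dirt positions on the surface
    path:      (M, 2) sampled tool positions along the wiping motion
    A particle counts as removed if the tool passes within tool_radius.
    """
    d = np.linalg.norm(particles[:, None, :] - path[None, :, :], axis=2)
    removed = d.min(axis=1) <= tool_radius
    return removed.mean()

rng = np.random.default_rng(0)
dirt = rng.uniform(0.0, 1.0, size=(500, 2))      # dirt on a unit surface
sweep = np.stack([np.linspace(0.0, 1.0, 50),     # straight wipe across middle
                  np.full(50, 0.5)], axis=1)
q = wipe_quality(dirt, sweep, tool_radius=0.1)   # predicted cleaned fraction
```

A planner in this spirit would score candidate tool paths by their predicted quality and pick the one that maximizes it before converting the motion into joint space.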

    Bounded haptic teleoperation of a quadruped robot’s foot posture for sensing and manipulation

    This paper presents a control framework to teleoperate a quadruped robot's foot for operator-guided haptic exploration of the environment. Since one leg of a quadruped robot typically has only 3 actuated degrees of freedom (DoFs), the torso is employed to assist foot posture control via a hierarchical whole-body controller. The foot and torso postures are controlled by two analytical Cartesian impedance controllers cascaded through a null-space projector. The contact forces acting on the supporting feet are optimized by quadratic programming (QP). The foot's Cartesian impedance controller can also estimate contact forces from trajectory tracking errors and relay the force feedback to the operator. A 7-DoF haptic joystick, Sigma.7, transmits motion commands to the quadruped robot ANYmal and renders the force feedback. Furthermore, the joystick's motion is bounded by mapping the foot's feasible force polytope, constrained by the friction cones and torque limits, in order to prevent the operator from driving the robot to slip or fall over. Experimental results demonstrate the efficiency of the proposed framework.
    Comment: Under review. Video available at https://www.youtube.com/watch?v=htI8202vfe
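Estimating contact force from impedance tracking error follows directly from the impedance law: a steady deviation of the foot from its commanded pose implies an external force proportional to the stiffness and damping terms. A minimal sketch (the gains are illustrative, not the paper's values):

```python
import numpy as np

def estimated_contact_force(x_d, x, v_d, v, K, D):
    """Contact-force estimate from Cartesian impedance tracking error.

    Under an impedance law F = K (x_d - x) + D (v_d - v), the force the
    controller exerts to hold the pose equals the external force pushing
    the foot off its reference, so the tracking error reveals contact.
    """
    K = np.asarray(K, float)
    D = np.asarray(D, float)
    return K @ (np.asarray(x_d, float) - np.asarray(x, float)) \
         + D @ (np.asarray(v_d, float) - np.asarray(v, float))

# A 1 cm static deflection against a 100 N/m stiffness reads as 1 N.
F = estimated_contact_force([0.01, 0.0, 0.0], [0.0, 0.0, 0.0],
                            [0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
                            100.0 * np.eye(3), 10.0 * np.eye(3))
```

The resulting force vector would then be scaled into the joystick's force range and rendered to the operator as haptic feedback.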

    Towards a self-collision aware teleoperation framework for compound robots

    This work lays the foundations of a self-collision-aware teleoperation framework for compound robots. The need for a haptic-enabled system that guarantees self-collision and joint-limit avoidance for complex robots is the main motivation behind this paper. The objective of the proposed system is to constrain the user to teleoperate a slave robot inside its safe workspace region through force cues applied on the master side of the bilateral teleoperation system. A series of simulated experiments have been performed on the KUKA KMR iiwa mobile robot; due to its generality, however, the framework can easily be extended to other robots. The experiments have shown that the proposed approach applies to ordinary teleoperation systems without altering their stability properties. The benefits introduced by this framework enable the user to safely teleoperate any complex robotic system without worrying about self-collisions and joint limits
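The master-side force cue for self-collision avoidance can be sketched as a repulsion that activates when the minimum link-to-link distance drops below a safety threshold. This is a hypothetical simplification; the threshold, gain, and direction handling are invented, and a real implementation would compute the distance and direction from the robot's collision geometry:

```python
import numpy as np

def self_collision_cue(d_min, direction=None, d_safe=0.10, k=40.0):
    """Master-side force cue from the closest self-collision distance.

    d_min:     current minimum distance between any two robot links (m)
    direction: unit vector (master frame) pointing away from the collision
    d_safe, k: hypothetical activation threshold and gain
    Returns a force that grows linearly as d_min shrinks below d_safe.
    """
    if direction is None:
        direction = np.array([1.0, 0.0, 0.0])  # placeholder escape direction
    magnitude = k * max(0.0, d_safe - d_min)
    return magnitude * np.asarray(direction, float)

near = self_collision_cue(0.05)   # inside the margin: repulsive cue
far = self_collision_cue(0.50)    # well clear: no cue
```

Joint-limit cues would be generated analogously and summed with this one before rendering on the master device.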

    Data acquisition system via a haptic interface

    Master's dissertation in Mechanical Engineering. In this work a haptic interface with force feedback for the teleoperation of a humanoid robot is presented, which approaches a new concept for robot learning by demonstration known as tele-kinesthetic teaching. The interface aims to promote kinesthetic teaching in telerobotic environments enriched by the haptic virtualization of the robot's environment and restrictions. The data collected through this interface can later be used in robot learning by demonstration, a powerful approach for learning motion patterns without complex dynamical models, but one that is usually presented with demonstrations that are not obtained by teleoperating the robots. Several experiments are cited in which kinesthetic teaching for robot learning was used with considerable success, as well as other new methodologies and applications involving haptic devices. This work was conducted on the proprietary 27-DOF University of Aveiro Humanoid Project (PHUA) robot, defining new wiring and software solutions as well as a new teleoperation command methodology. A MATLAB SimMechanics full-body robot simulator is presented that is able to determine the dynamic joint torque requirements for a given robot movement or posture, exemplified with a step-climbing application. It shows some of the potentialities, but also some restricting limitations, of the software.
To test this new tele-kinesthetic approach, examples are shown where the user provides demonstrations by physically interacting with the humanoid robot through a PHANToM haptic joystick. This methodology enables a natural interface for telerobotic teaching and sensing, in which the user provides functional guidance and corrections while remaining aware of the dynamics of the system and its physical capabilities and constraints. It is also shown that the approach achieves good performance even with inexperienced or unfamiliarized operators. During haptic interaction, the sensory information and the commands guiding the execution of a specific task can be recorded, and this log of the human-robot interaction can later be used for learning purposes
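The kind of static joint-torque computation the SimMechanics model performs for a held posture can be illustrated with a toy planar-chain gravity-load calculation. This is a hypothetical stand-in, not the thesis code, and it ignores the dynamic terms a full simulation would include:

```python
import numpy as np

def static_joint_torques(thetas, masses, lengths, g=9.81):
    """Gravity-load torque at each joint of a planar serial chain.

    Toy stand-in for a full dynamic model: point masses at the link
    ends, joint angles measured from the vertical, chain held still.
    thetas, masses, lengths: one entry per link (proximal to distal).
    """
    thetas = np.asarray(thetas, float)
    lengths = np.asarray(lengths, float)
    abs_ang = np.cumsum(thetas)                  # link angles from vertical
    x = np.cumsum(lengths * np.sin(abs_ang))     # horizontal mass positions
    joint_x = np.concatenate(([0.0], x[:-1]))    # horizontal joint positions
    n = len(thetas)
    torques = np.zeros(n)
    for j in range(n):
        # Every mass at or beyond joint j loads it through its lever arm
        torques[j] = sum(masses[i] * g * (x[i] - joint_x[j])
                         for i in range(j, n))
    return torques

# A 1 kg, 1 m link held horizontal needs m*g*l of torque; vertical needs none.
tau_flat = static_joint_torques([np.pi / 2], [1.0], [1.0])
tau_up = static_joint_torques([0.0], [1.0], [1.0])
```

A posture like the step-climbing example would be evaluated by feeding the corresponding joint angles through such a model and checking the resulting torques against the actuator ratings.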

    Learning to Navigate Cloth using Haptics

    We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres placed all along the manipulator for guidance. Based on the haptic forces, each individual sphere updates its target location, and the conflicts that arise between this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics and another that was trained based on large forces between the sphere and cloth, but without early termination.
    Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A. Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
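The early-termination scheme for training the single-sphere policy can be sketched as a per-step reward function: progress is rewarded, and a large contact force ends the episode with a penalty. The numbers below are invented; the abstract does not specify the actual reward shaping:

```python
def haptic_step_reward(contact_force, progress, force_limit=30.0):
    """Per-step reward for a haptic-sphere policy (hypothetical values).

    contact_force: magnitude of the force between sphere and cloth (N)
    progress:      reward signal for motion toward the garment opening
    If the force exceeds the limit, the episode terminates with a
    penalty, so the policy learns to avoid tearing or punching through.
    """
    if contact_force > force_limit:
        return -10.0, True    # large penalty, terminate the episode
    return progress, False    # otherwise reward progress, continue

r_bad, done_bad = haptic_step_reward(50.0, 0.1)   # excessive force: terminate
r_ok, done_ok = haptic_step_reward(5.0, 0.2)      # gentle contact: continue
```

The second baseline in the paper corresponds to keeping the force penalty but dropping the `True` termination flag, which the results show performs worse.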