
    Experiences with the JPL telerobot testbed: Issues and insights

    The Jet Propulsion Laboratory's (JPL) Telerobot Testbed is an integrated robotic testbed used to develop, implement, and evaluate the performance of advanced concepts in autonomous, tele-autonomous, and tele-operated control of robotic manipulators. Using the Telerobot Testbed, researchers demonstrated several of the capabilities and technological advances in the control and integration of robotic systems that have been under development at JPL for several years. In particular, the Telerobot Testbed was recently employed to perform a nearly completely automated, end-to-end satellite grapple and repair sequence. Integrating existing as well as new concepts in robot control into the Telerobot Testbed has been a difficult and time-consuming task. Now that the first major milestone (the end-to-end demonstration) is complete, it is important to reflect on these experiences and to collect the knowledge that has been gained so that improvements can be made to the existing system. These experiences should also be of value to others in the robotics community. The primary objective here is therefore to use the Telerobot Testbed as a case study to identify real problems and technological gaps that exist in robotics and, in particular, in systems integration; such problems have surely hindered the development of what could reasonably be called an intelligent robot. In addition to identifying these problems, the researchers briefly discuss the approaches taken to resolve them or, in several cases, to circumvent them until better approaches can be developed.

    A simple 5-DOF walking robot for space station application

    Robots on the NASA space station have a potential range of applications, from assisting astronauts during EVA (extravehicular activity), to replacing astronauts in the performance of simple, dangerous, and tedious tasks, to performing routine tasks such as inspection of structures and utilities. To provide a vehicle for demonstrating the pertinent technologies, a simple robot is being developed for locomotion and basic manipulation on the proposed space station. In addition to the robot, an experimental testbed was developed, including a 1/3-scale truss (1.67-meter modules) and a gravity-compensation system to simulate a zero-gravity environment. The robot comprises two flexible links connected by a rotary joint, with a 2-degree-of-freedom wrist joint and a gripper at each end. The grippers screw into threaded holes in the nodes of the space station truss and enable the robot to walk by alternately shifting the base of support from one foot (gripper) to the other. Present efforts are focused on mechanical design, application of sensors, and development of control algorithms for lightweight, flexible structures. Long-range research will emphasize the development of human interfaces to permit a range of control modes from teleoperated to semiautonomous, and the coordination of robot/astronaut and multiple-robot teams.
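
    As a rough illustration of the walking scheme described above, the sketch below models the alternating-gripper cycle as a simple hand-off of the base of support; the class and function names are hypothetical and the joint-level motion planning of the flexible links is elided.

```python
# Rough sketch (hypothetical names) of the alternating-gripper walking cycle:
# one gripper stays screwed into a truss node as the base of support while the
# other is unscrewed, swung to the next node, and screwed back in.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Gripper:
    name: str
    attached_node: Optional[int]  # truss node the gripper is screwed into, or None

def take_step(base: Gripper, swing: Gripper, next_node: int) -> Tuple[Gripper, Gripper]:
    """Unscrew the swing gripper, move it to next_node, and screw it in.

    The roles then swap: the newly attached gripper becomes the base of
    support for the following step.
    """
    assert base.attached_node is not None, "base gripper must hold a truss node"
    swing.attached_node = None          # unscrew (no-op if already free)
    # ...joint-level motion of the two flexible links would be planned here...
    swing.attached_node = next_node     # screw into the threaded hole at next_node
    return swing, base                  # swap roles for the next step

# Example: walk along nodes 0 -> 1 -> 2 of the truss
foot_a = Gripper("A", attached_node=0)
foot_b = Gripper("B", attached_node=None)
base, swing = take_step(foot_a, foot_b, next_node=1)
base, swing = take_step(base, swing, next_node=2)
print(base.attached_node, swing.attached_node)  # 2 1
```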

    Robust visual servoing in 3D reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3-D applications.
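
    The sketch below conveys the flavour of the closed-loop correction described above with a plain proportional law in stereo image coordinates; the function name, the gain, and the (x, y, disparity) parameterization are assumptions made for illustration, not the controller synthesized in the paper.

```python
import numpy as np

# Proportional image-based correction in stereo coordinates (illustrative only).

def servo_step(ee_pos, target_pos, gain=0.5):
    """One control update: velocity command proportional to the image-space error.

    ee_pos: end-effector position estimated from the binocular optical flow.
    target_pos: target position measured in the same stereo coordinates.
    """
    error = np.asarray(target_pos, float) - np.asarray(ee_pos, float)
    return gain * error   # proportional law; no metric calibration is needed

# Toy run: the end-effector estimate moves toward the target over 20 frames
ee = np.array([10.0, -4.0, 2.0])
target = np.array([0.0, 0.0, 5.0])
for _ in range(20):
    ee = ee + 0.1 * servo_step(ee, target)   # integrate with a 0.1 s frame period
print(np.round(ee, 2))
```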

    A tracker alignment framework for augmented reality

    To achieve accurate registration, the transformations that locate the tracking system components with respect to the environment must be known. These transformations relate the base of the tracking system to the virtual world and the tracking system's sensor to the graphics display. In this paper we present a unified, general calibration method for calculating these transformations. A user is asked to align the display with objects in the real world. Using this method, the sensor-to-display and tracker-base-to-world transformations can be determined with as few as three measurements.
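
    The alignment measurements ultimately amount to estimating rigid transformations from a small number of correspondences. The sketch below shows a generic least-squares rigid-transform fit (Kabsch/Horn style) as an illustration of that kind of computation; it is not the paper's specific formulation for the sensor-to-display and tracker-base-to-world unknowns.

```python
import numpy as np

# Generic least-squares rigid-transform fit from point correspondences.

def fit_rigid_transform(src, dst):
    """Find R, t minimizing ||R @ src_i + t - dst_i||^2 over >= 3 correspondences."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)      # SVD of the cross-covariance
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Three non-collinear alignment measurements are the minimum for a unique fit
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = (R_true @ src.T).T + np.array([0.5, -0.2, 1.0])
R, t = fit_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 2))
```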

    The 3D model control of image processing

    Telerobotics studies the remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood, instantaneous, hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many of the conflicting image-processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.
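
    The following toy sketch illustrates why an internal 3-D model helps under communication delay: operator commands update a local model immediately, providing predicted feedback while the real telemetry is still in transit. The class name and the one-dimensional state are illustrative assumptions, not the system described above.

```python
from collections import deque

# One-dimensional stand-in for a predictive display driven by an internal model:
# the operator's command updates the local model immediately, while the command
# actually executed by the remote robot is the one issued delay_steps earlier.

class PredictiveDisplay:
    def __init__(self, delay_steps: int):
        self.model_pose = 0.0                       # local model state (1-DOF here)
        self.pending = deque([0.0] * delay_steps)   # commands still in transit

    def command(self, velocity: float, dt: float = 0.1) -> float:
        """Apply the command to the local model now; return what the robot runs now."""
        self.model_pose += velocity * dt            # instant predicted feedback
        self.pending.append(velocity)
        return self.pending.popleft()               # command reaching the robot this step

display = PredictiveDisplay(delay_steps=30)         # e.g. a 3 s round trip at 10 Hz
for _ in range(5):
    executed_now = display.command(0.2)
print(display.model_pose, executed_now)             # local model is ahead of the robot
```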

    Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters

    Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach to DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required in order to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to a standard static multi-camera configuration.
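
    A toy version of the encoderless idea is sketched below: the unknown per-frame joint angles are stacked into the optimization state together with a constant kinematic parameter and recovered from camera observations alone by nonlinear least squares. The planar model, single landmark, and single lever-arm parameter are simplifying assumptions for illustration, not the paper's calibration formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Planar toy problem: a camera on a 1-DOF gimbal observes one known landmark.
# The constant lever arm r (kinematic parameter) and the per-frame joint angles
# are stacked into a single state vector and recovered jointly, with no encoder.

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

landmark = np.array([4.0, 1.0])                      # known point in the static frame
true_r = 0.25                                        # unknown lever arm
true_joints = np.array([0.0, 0.5, 1.0, 1.6, 2.2])    # unknown joint angles

def observe(r, joints):
    # landmark expressed in the dynamic-camera frame for each joint angle
    return np.array([rot(th).T @ landmark - np.array([r, 0.0]) for th in joints])

meas = observe(true_r, true_joints)                  # noiseless synthetic measurements

def residuals(x):
    r, joints = x[0], x[1:]
    return (observe(r, joints) - meas).ravel()

x0 = np.concatenate([[0.1], np.zeros(len(true_joints))])
sol = least_squares(residuals, x0)
print(np.round(sol.x[0], 3), np.round(sol.x[1:], 3))  # should match true_r, true_joints
```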

    Automatic State Estimation of an Over-Sensored Robotic Manipulator

    There is an increasing demand for robotic manipulators capable of performing more complex and versatile tasks. In order to fulfill this need, expeditious calibration and estimation techniques are required as a first step toward the correct use of the manipulator. Only after these problems are solved can the manipulator be used for higher-level tasks such as generic tool placement and object manipulation. There are currently several techniques for the automatic calibration of a wide variety of sensors, as well as several filters to fuse their measurements into useful information. This dissertation aims to find a subset of these algorithms that can be used on a generic manipulator and allow for its prompt use. The techniques were chosen to be modular and therefore usable on a wide range of manipulator configurations, and they impose a minimal set of requirements, making them suitable for users with limited equipment. A previously developed manipulator, equipped with incremental encoders, inertial measurement units (IMUs), and load cells, is used to realistically test the performance of the implemented methods. A calibration methodology for the inertial sensors is described, and the calibrated measurements are used together with the encoder measurements to determine the pose of the manipulator. Two models for representing the pose of the manipulator are described and used in the state-estimation problem: one defines the state vector as the dynamics of the angles of each joint, while the other uses the orientation of each link in an inertial frame independently. The state of the first model is estimated with the Unscented Kalman Filter and that of the second with the Multiplicative Extended Kalman Filter. The implementation is tested and performance metrics are obtained using both the algorithms' output and an external measurement system.
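
    As a minimal stand-in for the sensor fusion performed by the UKF and MEKF, the sketch below fuses a gyro rate and an incremental-encoder angle for a single joint with a plain linear Kalman filter; the 1-DOF model and the noise values are assumptions for illustration only, not the dissertation's estimators.

```python
import numpy as np

# 1-DOF linear Kalman filter: the gyro rate drives the prediction of a single
# joint angle and the incremental encoder provides the correction.

def kf_step(x, P, gyro_rate, enc_angle, dt, q=1e-4, r_enc=1e-3):
    """One predict/update cycle for the scalar state x = joint angle (variance P)."""
    x_pred = x + gyro_rate * dt                  # predict by integrating the gyro rate
    P_pred = P + q                               # inflate uncertainty with process noise
    K = P_pred / (P_pred + r_enc)                # Kalman gain
    x_new = x_pred + K * (enc_angle - x_pred)    # correct with the encoder angle
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Toy run: constant 0.5 rad/s joint motion sampled at 100 Hz with noisy sensors
rng = np.random.default_rng(0)
x, P, dt = 0.0, 1.0, 0.01
for k in range(1, 201):
    true_angle = 0.5 * k * dt
    gyro = 0.5 + rng.normal(0.0, 0.02)
    enc = true_angle + rng.normal(0.0, 0.03)
    x, P = kf_step(x, P, gyro, enc, dt)
print(round(x, 3), round(0.5 * 200 * dt, 3))     # estimate vs. true final angle
```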