1,324 research outputs found

    A Factor Graph Approach to Multi-Camera Extrinsic Calibration on Legged Robots

    Legged robots are becoming popular not only in research but also in industry, where they can demonstrate their superiority over wheeled machines in a variety of applications. Whether acting as mobile manipulators or as all-terrain ground vehicles, these machines need to precisely track desired base and end-effector trajectories, perform Simultaneous Localization and Mapping (SLAM), and move in challenging environments, all while keeping balance. A crucial requirement for these tasks is that all onboard sensors be properly calibrated and synchronized to provide consistent signals for all the software modules they feed. In this paper, we focus on the problem of calibrating the relative pose between a set of cameras and the base link of a quadruped robot. This pose is fundamental to successfully perform sensor fusion, state estimation, mapping, and any other task requiring visual feedback. To solve this problem, we propose an approach based on factor graphs that jointly optimizes the mutual position of the cameras and the robot base using kinematics and fiducial markers. We also quantitatively compare its performance with other state-of-the-art methods on the hydraulic quadruped robot HyQ. The proposed approach is simple, modular, and independent of external devices other than the fiducial marker. Comment: To appear in the Third IEEE International Conference on Robotic Computing (IEEE IRC 2019).
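The abstract does not spell out the factor-graph formulation, but the underlying geometry is closely related to classical hand-eye calibration: paired base motions A_i (from kinematics) and camera motions B_i (from the fiducial marker) constrain the fixed base-to-camera transform X through A_i X = X B_i. The sketch below is a simplified NumPy stand-in using the Park-Martin closed form, not the paper's factor-graph optimizer; all poses and dimensions are made up for illustration.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix from an axis-angle pair (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def log_rot(R):
    """Axis-angle vector of a rotation matrix (angle assumed in (0, pi))."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle / (2.0 * np.sin(angle)) * w

def solve_hand_eye(As, Bs):
    """Least-squares solution of A_i X = X B_i (Park & Martin closed form)."""
    # Rotation: minimize sum ||R_X beta_i - alpha_i||^2 via R_X = (M^T M)^(-1/2) M^T
    M = sum(np.outer(log_rot(B[:3, :3]), log_rot(A[:3, :3]))
            for A, B in zip(As, Bs))
    w, V = np.linalg.eigh(M.T @ M)
    Rx = V @ np.diag(w ** -0.5) @ V.T @ M.T
    # Translation: stack (R_Ai - I) t_X = R_X t_Bi - t_Ai and solve by lstsq
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, t
    return X

# Synthetic check: recover a known base-to-camera transform from exact pairs.
X_true = np.eye(4)
X_true[:3, :3], X_true[:3, 3] = rot([0, 0, 1], 0.3), [0.10, -0.05, 0.20]
As = []
for axis, ang, t in [([1, 0, 0], 0.4, [0.2, 0.0, 0.1]),
                     ([0, 1, 0], 0.7, [0.0, 0.3, -0.1]),
                     ([0, 0, 1], 0.5, [0.1, 0.1, 0.2]),
                     ([1, 1, 0], 0.6, [-0.1, 0.2, 0.0])]:
    A = np.eye(4)
    A[:3, :3], A[:3, 3] = rot(axis, ang), t
    As.append(A)
Bs = [np.linalg.inv(X_true) @ A @ X_true for A in As]
X_est = solve_hand_eye(As, Bs)
```

A factor-graph solver would additionally fuse noise models and multiple cameras in one joint optimization; the closed form above only recovers a single rigid extrinsic from noise-free motion pairs.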

    Hybrid Controller for Robot Manipulators in Task-Space with Visual-Inertial Feedback

    This paper presents a visual-inertial-based control strategy to address the task-space control problem of robot manipulators. To this end, an observer-based hybrid controller is employed to control end-effector motion. In addition, a hybrid observer is introduced for a visual-inertial navigation system to close the control loop directly in Cartesian space by estimating the end-effector pose. Accordingly, the robot tip is equipped with an inertial measurement unit (IMU) and a stereo camera to provide task-space feedback information for the proposed observer. It is demonstrated through the Lyapunov stability theorem that the resulting closed-loop system under the proposed observer-based controller is globally asymptotically stable. Besides this notable merit (global asymptotic stability), the proposed control method eliminates the need to compute inverse kinematics and increases trajectory tracking accuracy in task space. The effectiveness and accuracy of the proposed control scheme are evaluated through computer simulations, where the proposed control structure is applied to a 6 degrees-of-freedom long-reach hydraulic robot manipulator.
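The observer and hybrid controller themselves are beyond a short sketch, but the abstract's key point (closing the loop directly in Cartesian space, with no inverse kinematics) can be illustrated with a plain Jacobian-transpose servo on a hypothetical planar 2-link arm. The link lengths, gain, and the idealized pose feedback are assumptions for illustration, not the paper's design.

```python
import numpy as np

L1, L2 = 1.0, 0.8   # link lengths (illustrative, not from the paper)

def fk(q):
    """Forward kinematics: joint angles -> planar tip position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac(q):
    """Geometric Jacobian of the tip position."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

# Task-space servo: the Cartesian error (which the paper's observer would
# estimate from the IMU + stereo camera) drives joint rates directly, so no
# inverse kinematics is ever computed.
q = np.array([0.3, 0.5])
target = np.array([1.2, 0.6])
K, dt = 2.0, 0.01
for _ in range(4000):
    e = target - fk(q)                    # task-space error
    q = q + dt * (jac(q).T @ (K * e))     # Jacobian-transpose law
```

The update is gradient descent on the squared Cartesian error, so the tip converges to any reachable target away from singular configurations without ever inverting the kinematics.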

    Calibration and Control of a Redundant Robotic Workcell for Milling Tasks

    This article deals with the tuning of a complex robotic workcell of eight joints devoted to milling tasks. It consists of a KUKA™ manipulator mounted on a linear track and synchronised with a rotary table. Prior to any machining, the additional joints require an in situ calibration in an industrial environment. For this purpose, a novel planar calibration method is developed to estimate the external joint configuration parameters by means of a laser displacement sensor, avoiding direct contact with the pattern. Moreover, a redundancy resolution scheme at the joint rate level is integrated within a computer-aided manufacturing system for complete control of the workcell during the path tracking of a milling task. Finally, the whole system is tested in the prototyping of an orographic model. Citation: Andres De La Esperanza, F.J.; Gracia Calandin, L.I.; Tornero Montserrat, J. (2011). Calibration and Control of a Redundant Robotic Workcell for Milling Tasks. International Journal of Computer Integrated Manufacturing, 24(6):561-573. doi:10.1080/0951192X.2011.566284
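Redundancy resolution at the joint rate level is conventionally written as qdot = J⁺ xdot + (I - J⁺J) z, where the pseudoinverse term tracks the tool path and the null-space projector lets the extra joints serve a secondary objective without disturbing that path. A small NumPy illustration on a hypothetical planar 3-link arm (the geometry and secondary objective z are assumptions, not the workcell's):

```python
import numpy as np

L = [0.8, 0.7, 0.5]   # three links: redundant for a 2-D tool-position task

def jac(q):
    """2x3 tip-position Jacobian of a planar 3-link arm."""
    a = np.cumsum(q)   # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -sum(L[k] * np.sin(a[k]) for k in range(i, 3))
        J[1, i] =  sum(L[k] * np.cos(a[k]) for k in range(i, 3))
    return J

q = np.array([0.2, 0.4, -0.3])
xdot = np.array([0.10, -0.05])    # commanded tool velocity (the milling path)
z = np.array([0.0, 0.0, 1.0])     # secondary joint-space objective
J = jac(q)
Jp = np.linalg.pinv(J)
qdot = Jp @ xdot + (np.eye(3) - Jp @ J) @ z   # task + null-space motion
```

Because J(I - J⁺J) = 0, the null-space term reshapes the arm's posture while the tool velocity J qdot still equals the commanded xdot exactly.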

    RRR-robot : design of an industrial-like test facility for nonlinear robot control


    Toward Vision-based Control of Heavy-Duty and Long-Reach Robotic Manipulators

    Heavy-duty mobile machines are an important part of the industry, and they are used for various work tasks in mining, construction, forestry, and agriculture. Many of these machines have heavy-duty, long-reach (HDLR) manipulators attached to them, which are used for work tasks such as drilling, lifting, and grabbing. A robotic manipulator, by definition, is a device used for manipulating materials without direct physical contact by a human operator. HDLR manipulators differ from manipulators of conventional industrial robots in the sense that they are subject to much larger kinematic and non-kinematic errors, which hinder the overall accuracy and repeatability of the robot’s tool center point (TCP). Kinematic errors result from modeling inaccuracies, while non-kinematic errors include structural flexibility and bending, thermal effects, backlash, and sensor resolution. Furthermore, conventional six degrees of freedom (DOF) industrial robots are more general-purpose systems, whereas HDLR manipulators are mostly designed for special (or single) purposes. HDLR manipulators are typically built as lightweight as possible while being able to handle significant load masses. Consequently, they have long reaches and high payload-to-own-weight ratios, which contribute to the increased errors compared to conventional industrial robots. For example, a joint angle measurement error of 0.5° associated with a 5-m-long rigid link results in an error of approximately 4.4 cm at the end of the link, with further errors resulting from flexibility and other non-kinematic aspects. The target TCP positioning accuracy for HDLR manipulators is in the sub-centimeter range, which is very difficult to achieve in practical systems. These challenges have somewhat delayed the automation of HDLR manipulators, while conventional industrial robots have long been commercially available.
This is also attributed to the fact that machines with HDLR manipulators have much lower production volumes, and the work tasks are more non-repetitive in nature compared to conventional industrial robots in factories. Sensors are a key requirement in order to achieve automated operations and eventually full autonomy. For example, humans mostly rely on their visual perception in work tasks, while the collected information is processed in the brain. Much like humans, autonomous machines also require both sensing and intelligent processing of the collected sensor data. This dissertation investigates new visual sensing solutions for HDLR manipulators, which are striving toward increased automation levels in various work tasks. The focus is on visual perception and generic 6 DOF TCP pose estimation of HDLR manipulators in unknown (or unstructured) environments. Methods for increasing the robustness and reliability of visual perception systems are examined by exploiting sensor redundancy and data fusion. Vision-aided control using targetless, motion-based local calibration between an HDLR manipulator and a visual sensor is also proposed to improve the absolute positioning accuracy of the TCP despite the kinematic and non-kinematic errors present in the system. It is experimentally shown that a sub-centimeter TCP positioning accuracy was reliably achieved in the tested cases using a developed trajectory-matching-based method. Overall, this compendium thesis includes four publications and one unpublished manuscript related to these topics. Two main research problems, inspired by the industry, are considered and investigated in the presented publications. The outcome of this thesis provides insight into possible applications and benefits of advanced visual perception systems for HDLR manipulators in dynamic, unstructured environments. The main contribution is related to achieving sub-centimeter TCP positioning accuracy for an HDLR manipulator using a low-cost camera. 
The numerous challenges and complexities related to HDLR manipulators and visual sensing are also highlighted and discussed.
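The back-of-the-envelope figure quoted in the abstract checks out: a 0.5° joint measurement error at the end of a 5 m rigid link displaces the tip by roughly L·sin(θ).

```python
import math

link_len = 5.0                       # m, as in the example above
angle_err = math.radians(0.5)        # 0.5 degree joint measurement error
tip_err = link_len * math.sin(angle_err)
print(f"{tip_err * 100:.1f} cm")     # prints "4.4 cm"
```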

    Telerobotic Sensor-based Tool Control Derived From Behavior-based Robotics Concepts

    Teleoperated task execution for hazardous environments is slow and requires highly skilled operators. Attempts to implement telerobotic assists to improve efficiency have been demonstrated in constrained laboratory environments, but they are not being used in the field because they are not appropriate for actual remote systems operating in complex unstructured environments with typical operators. This work describes a methodology for combining select concepts from behavior-based systems with telerobotic tool control in a way that is compatible with the existing manipulator architectures used by remote systems typical of operations in hazardous environments. The purpose of the approach is to minimize task instance modeling in favor of a priori task type models, while using sensor information to register the task type model to the task instance. The concept was demonstrated for two tools useful in decontamination and dismantlement operations: a reciprocating saw and a powered socket tool. The experimental results demonstrated that the approach facilitates traded-control telerobotic tooling execution by enabling difficult tasks and by limiting tool damage. The role of the tools and tasks as drivers of the telerobotic implementation was better understood through the need for thorough task decomposition and the discovery and examination of the tool process signature.
The contributions of this work include: (1) the exploration and evaluation of select features of behavior-based robotics to create a new methodology for integrating telerobotic tool control with positional teleoperation in the execution of complex tool-centric remote tasks; (2) the simplification of task decomposition and the implementation of sensor-based tool control in a way that eliminates the need to create a task instance model for telerobotic task execution; and (3) the discovery, demonstrated use, and documentation of characteristic tool process signatures that have general value in the investigation of other tool control, tool maintenance, and tool development strategies beyond the benefit to the methodology described in this work.

    Improved Kinematics Calibration of Industrial Robots by Neural Networks

    The paper presents a preliminary study on the feasibility of a Neural Network-based methodology for calibrating industrial manipulators to improve their accuracy. A Neural Network is used to predict the pose inaccuracy due to general sources of error in the robot (e.g., geometrical inaccuracy, load deflection, stiffness and backlash of the mechanical members, etc.). The network is trained by comparing the ideal model of the robot with measurements of the actual poses reached by the robot. A back-propagation learning algorithm is applied. The Neural Network output can be used by the robot controller to compensate for the pose errors. The proposed calibration technique appears extremely simple: it does not need any information on the nature of the pose errors, only the ideal robot kinematics and a set of experimental pose measurements. Different calibration procedures are applied to a simulated SCARA robot and to a Stewart Platform and compared, in order to select the most suitable one. Results of the simulations are presented and discussed.
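As a rough sketch of the idea (not the paper's network, data, or robot model), a single-hidden-layer network trained by plain back-propagation can learn the pose error of a simulated arm as a function of its joint angles; the controller would then subtract the prediction from the commanded pose. The synthetic error function, network size, and hyperparameters below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_error(q):
    """Hypothetical pose error (cm) of a 2-joint arm vs. its joint angles."""
    return np.sin(q[:, :1] + 2.0 * q[:, 1:]) + 0.5 * q[:, :1] ** 2

Q = rng.uniform(-1.0, 1.0, (200, 2))   # measured joint configurations
E = true_error(Q)                      # measured-minus-ideal pose error

# One hidden layer, tanh activation, trained by back-propagation on MSE loss.
W1 = rng.normal(0.0, 1.0, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    H = np.tanh(Q @ W1 + b1)           # forward pass
    pred = H @ W2 + b2
    g = 2.0 * (pred - E) / len(Q)      # gradient of mean squared error
    gW2, gb2 = H.T @ g, g.sum(0)
    gH = (g @ W2.T) * (1.0 - H ** 2)   # back-propagate through tanh
    gW1, gb1 = Q.T @ gH, gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Residual error after learned compensation.
mse = float(np.mean((np.tanh(Q @ W1 + b1) @ W2 + b2 - E) ** 2))
```

As the abstract notes, nothing about the error's physical origin enters the procedure: only nominal kinematics (here, the joint inputs) and measured pose discrepancies are needed.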

    An effective strategy of real-time vision-based control for a Stewart platform

    © 2018 IEEE. A Stewart platform is a kind of parallel robot that can be used for a wide variety of technological and industrial applications. In this paper, a Stewart platform designed and assembled at the Universitat Politècnica de Catalunya (UPC) by our research group is presented. The main objective is to overcome the considerable difficulties that arise when real-time vision-based control of a fast-moving object placed on such a mechanism is required. In addition, a description of its geometric characteristics and its calibration process is given, together with an illustrative experiment demonstrating the good behavior of the platform.
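One reason Stewart platforms suit real-time control experiments is that their inverse kinematics is closed-form: each actuator length is simply the distance between its base anchor a_i and the platform anchor b_i mapped through the platform pose (p, R). A sketch with made-up hexapod geometry (not the UPC platform's actual dimensions):

```python
import numpy as np

# Made-up geometry: base and platform anchors on circles of radius 0.5 m and
# 0.3 m, with the platform anchor ring rotated by 30 degrees.
n = 6
ang_a = np.deg2rad([0, 60, 120, 180, 240, 300])
ang_b = ang_a + np.deg2rad(30)
A = np.c_[0.5 * np.cos(ang_a), 0.5 * np.sin(ang_a), np.zeros(n)]  # base
B = np.c_[0.3 * np.cos(ang_b), 0.3 * np.sin(ang_b), np.zeros(n)]  # platform

def leg_lengths(p, R):
    """Inverse kinematics: actuator length of each leg for pose (p, R)."""
    return np.linalg.norm(p + B @ R.T - A, axis=1)

# Level platform hovering 0.4 m above the base: all six legs equal by symmetry.
lengths = leg_lengths(np.array([0.0, 0.0, 0.4]), np.eye(3))
```

A vision-based controller can evaluate this map at every frame to convert the camera-estimated platform pose error directly into actuator commands.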