
    Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters

    Dynamic Camera Clusters (DCCs) are multi-camera systems in which one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach to DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art visual-inertial odometry (VIO) algorithm, and show the extensions required to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to that of a standard static multi-camera configuration.
    Comment: ICRA 2018
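
    As a rough illustration of the joint estimation idea, the sketch below (not the authors' implementation) stacks the fixed kinematic parameters of an assumed 2-DOF chain together with one unknown (pan, tilt) pair per frame into a single non-linear least-squares problem; the chain parameterization, the matrix-difference residual, and all names are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as R

    def chain_transform(params, theta):
        """Static-to-dynamic camera transform: a fixed offset (3 translations,
        3 Euler angles), then a pan joint about z and a tilt joint about y."""
        T_off = np.eye(4)
        T_off[:3, :3] = R.from_euler("xyz", params[3:6]).as_matrix()
        T_off[:3, 3] = params[:3]
        T_pan, T_tilt = np.eye(4), np.eye(4)
        T_pan[:3, :3] = R.from_euler("z", theta[0]).as_matrix()
        T_tilt[:3, :3] = R.from_euler("y", theta[1]).as_matrix()
        return T_off @ T_pan @ T_tilt

    def residuals(x, measured_T):
        """x stacks 6 kinematic parameters and 2 joint angles per frame."""
        params, thetas = x[:6], x[6:].reshape(-1, 2)
        return np.concatenate([
            (chain_transform(params, th) - T)[:3, :].ravel()
            for T, th in zip(measured_T, thetas)
        ])

    def calibrate(measured_T):
        """measured_T: per-frame static-to-dynamic 4x4 transforms, e.g. from
        observing a fiducial target with both cameras."""
        x0 = np.zeros(6 + 2 * len(measured_T))
        sol = least_squares(residuals, x0, args=(measured_T,))
        return sol.x[:6], sol.x[6:].reshape(-1, 2)
    ```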

    Two-Stage Transfer Learning for Heterogeneous Robot Detection and 3D Joint Position Estimation in a 2D Camera Image using CNN

    Collaborative robots are becoming more common on factory floors as well as in everyday environments; however, their safety is still not a fully solved issue. Collision detection does not always perform as expected, and collision avoidance is still an active research area. Collision avoidance works well for fixed robot-camera setups; however, if these are shifted around, the Eye-to-Hand calibration becomes invalid, making it difficult to accurately run many of the existing collision avoidance algorithms. We approach the problem by presenting a stand-alone system capable of detecting the robot and estimating its position, including individual joints, using a simple 2D colour image as input, with no Eye-to-Hand calibration needed. As an extension of previous work, a two-stage transfer learning approach is used to re-train a multi-objective convolutional neural network (CNN) so that it can be used with heterogeneous robot arms. Our method is capable of detecting the robot in real time, and new robot types can be added with significantly smaller training datasets than a fully trained network requires. We present the data collection approach, the structure of the multi-objective CNN, the two-stage transfer learning training, and test results using real robots from Universal Robots, Kuka, and Franka Emika. Finally, we analyse possible application areas of our method together with possible improvements.
    Comment: 6+n pages, ICRA 2019 submission
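
    The two-stage scheme can be sketched as follows, using an assumed stand-in network (a ResNet-18 backbone with a detection head and a 3D-joint head) rather than the paper's architecture: stage one trains everything on the base robot; stage two freezes the shared backbone and fine-tunes only the heads on the much smaller dataset for a new robot type.

    ```python
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class RobotPoseNet(nn.Module):
        def __init__(self, n_joints=6):
            super().__init__()
            base = resnet18(weights=None)
            self.backbone = nn.Sequential(*list(base.children())[:-1])  # drop FC
            self.detect_head = nn.Linear(512, 4)             # robot bounding box
            self.joints_head = nn.Linear(512, n_joints * 3)  # 3D joint positions

        def forward(self, img):
            feat = self.backbone(img).flatten(1)
            return self.detect_head(feat), self.joints_head(feat)

    def stage2_optimizer(model, lr=1e-4):
        """Stage two: freeze the backbone, fine-tune only the task heads."""
        for p in model.backbone.parameters():
            p.requires_grad = False
        heads = list(model.detect_head.parameters()) + \
                list(model.joints_head.parameters())
        return torch.optim.Adam(heads, lr=lr)

    model = RobotPoseNet()
    opt = stage2_optimizer(model)
    box, joints = model(torch.randn(1, 3, 224, 224))  # dummy forward pass
    ```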

    Printing-while-moving: a new paradigm for large-scale robotic 3D Printing

    Building and construction have recently become an exciting application ground for robotics. In particular, rapid progress in materials formulation and in robotics technology has made robotic 3D printing of concrete a promising technique for in-situ construction. Yet scalability remains an important hurdle to widespread adoption: the printing systems (gantry-based or arm-based) are often much larger than the structure to be printed, and hence cumbersome. Recently, a mobile printing system (a manipulator mounted on a mobile base) was proposed to alleviate this issue: such a system, by moving its base, can potentially print a structure larger than itself. However, the proposed system could only print while stationary, thereby imposing a limit on the size of structures that can be printed in a single take. Here, we develop a system that implements the printing-while-moving paradigm, which enables printing single-piece structures of arbitrary size with a single robot. This development requires solving motion planning, localization, and motion control problems that are specific to mobile 3D printing. We report our framework to address those problems and demonstrate, for the first time, a printing-while-moving experiment, wherein a 210 cm x 45 cm x 10 cm concrete structure is printed by a robot arm that has a reach of 87 cm.
    Comment: 6 pages, 7 figures
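
    The coupling between base motion and arm motion can be illustrated with a short sketch (illustrative only, not the authors' framework): each control step re-expresses the world-frame print-path target in the current base frame using the localization estimate, and base motion is required whenever the target exceeds the arm's reach. The planar pose interface is an assumption; the 87 cm reach comes from the abstract.

    ```python
    import numpy as np

    def world_to_base(p_world_xy, base_pose):
        """base_pose = (x, y, yaw): planar base pose from the localizer
        (an assumed interface)."""
        x, y, yaw = base_pose
        Rz = np.array([[np.cos(yaw), -np.sin(yaw)],
                       [np.sin(yaw),  np.cos(yaw)]])
        return Rz.T @ (np.asarray(p_world_xy) - np.array([x, y]))

    def arm_target(path_point_xy, base_pose, arm_reach=0.87):
        """Arm target in the base frame, or None when the point lies beyond
        the arm's 87 cm reach and the base must advance first."""
        target = world_to_base(path_point_xy, base_pose)
        return None if np.linalg.norm(target) > arm_reach else target

    # Example: a path point 1.2 m ahead is unreachable until the base moves.
    print(arm_target((1.2, 0.0), (0.0, 0.0, 0.0)))  # -> None
    ```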

    Automated pick-up of suturing needles for robotic surgical assistance

    Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate cancer that involves complete or nerve-sparing removal of the prostate tissue that contains cancer. After removal, the bladder neck is sutured directly to the urethra. This procedure, called urethrovesical anastomosis, is one of the most dexterity-demanding tasks during RALP. Two suturing instruments and a pair of needles are used in combination to perform a running stitch during urethrovesical anastomosis. While robotic instruments provide enhanced dexterity to perform the anastomosis, it is still highly challenging and difficult to learn. In this paper, we present a vision-guided needle grasping method for automatically grasping a needle that has been inserted into the patient prior to anastomosis. We aim to automatically grasp the suturing needle in a position that avoids hand-offs and immediately enables the start of suturing. The full grasping process can be broken down into: a needle detection algorithm; an approach phase, where the surgical tool moves closer to the needle based on visual feedback; and a grasping phase, through path planning based on observed surgical practice. Our experimental results show examples of successful autonomous grasping with the potential to simplify and decrease the operating time in RALP by assisting with a small component of urethrovesical anastomosis.
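
    The detection and approach phases might be prototyped roughly as below; the detection step (intensity thresholding plus ellipse fitting on the needle's arc) and the image-space servoing gain are illustrative assumptions, not the paper's method.

    ```python
    import cv2
    import numpy as np

    def detect_needle(img_bgr):
        """Segment the bright metallic needle and fit an ellipse to its arc."""
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
        cnts = [c for c in cnts if len(c) >= 5]  # fitEllipse needs >= 5 points
        if not cnts:
            return None
        return cv2.fitEllipse(max(cnts, key=cv2.contourArea))  # (center, axes, angle)

    def approach_step(tool_px, needle_center_px, gain=0.3):
        """One visual-feedback step: command a fraction of the image-space
        error between the tool tip and the detected needle centre."""
        err = np.asarray(needle_center_px) - np.asarray(tool_px)
        return gain * err
    ```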

    Dynamic update of a virtual cell for programming and safe monitoring of an industrial robot

    A hardware/software architecture for robot motion planning and online safe monitoring has been developed with the objective of ensuring high flexibility in production control and safety for workers and machinery, with a user-friendly interface. The architecture, developed using Microsoft Robotics Developer Studio and implemented for a six-DOF COMAU NS 12 robot, establishes bidirectional communication between the robot controller and a virtual replica of the real robotic cell. The working space of the real robot can then be easily limited for safety reasons by inserting virtual objects (or sensors) into this virtual environment. This paper investigates the possibility of achieving an automatic, dynamic update of the virtual cell by using a low-cost depth sensor (i.e., a commercial Microsoft Kinect) to detect the presence of completely unknown objects moving inside the real cell. The experimental tests show that the developed architecture is able to recognize variously shaped mobile objects inside the monitored area and to stop the robot before it collides with them, provided the objects are not too small.
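
    One plausible way to realize this kind of depth-based monitoring is sketched below (illustrative assumptions throughout, not the paper's implementation): unknown objects are detected as large deviations from a reference depth map of the empty cell, blobs below a size threshold are ignored (matching the "not too small" caveat), and the robot is stopped when an object pixel falls inside a safety radius.

    ```python
    import numpy as np

    DEPTH_TOL_MM = 50        # deviation treated as a new object (assumed)
    MIN_BLOB_PIXELS = 400    # ignore objects that are too small (assumed)
    SAFETY_RADIUS_MM = 500   # stop distance around the robot (assumed)
    FX = 580.0               # assumed Kinect focal length in pixels

    def find_objects(depth_mm, reference_mm):
        """Mask of pixels significantly closer than the empty-cell reference."""
        diff = reference_mm.astype(np.int32) - depth_mm.astype(np.int32)
        mask = diff > DEPTH_TOL_MM
        return mask if mask.sum() >= MIN_BLOB_PIXELS else np.zeros_like(mask)

    def must_stop(depth_mm, reference_mm, robot_px):
        """Crude safety check: does any detected object pixel lie within the
        safety radius of the robot's image position, using the pixel's own
        depth to convert image offsets to millimetres?"""
        ys, xs = np.nonzero(find_objects(depth_mm, reference_mm))
        for u, v in zip(xs, ys):
            z = float(depth_mm[v, u])
            dx = (u - robot_px[0]) * z / FX
            dy = (v - robot_px[1]) * z / FX
            if np.hypot(dx, dy) < SAFETY_RADIUS_MM:
                return True
        return False
    ```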