    Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters

    Dynamic Camera Clusters (DCCs) are multi-camera systems in which one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach to DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to that of a standard static multi-camera configuration. (Comment: ICRA 201)
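
    The core idea, jointly estimating the fixed kinematic parameters and the per-frame joint angles from visual observations alone, can be illustrated with a small nonlinear least-squares problem. The sketch below is not the paper's implementation: the 1-DOF joint model, the synthetic landmarks, and the use of scipy.optimize.least_squares are assumptions made for the example.

```python
# A minimal sketch, not the paper's implementation: the gimbal is reduced to
# a single rotary joint about z, the landmarks are synthetic, and
# scipy.optimize.least_squares stands in for the actual estimator.
import numpy as np
from scipy.optimize import least_squares

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def dynamic_cam_point(p_static, offset, angle):
    # Express a static-frame point in the dynamic camera frame, through
    # one rotary joint with an unknown translational offset.
    return rot_z(angle).T @ (p_static - offset)

rng = np.random.default_rng(0)
true_offset = np.array([0.10, 0.02, 0.05])    # unknown kinematic parameter (m)
true_angles = rng.uniform(-0.5, 0.5, size=8)  # unknown joint angle per frame
pts = rng.uniform(-1.0, 1.0, size=(8, 20, 3)) # landmarks seen in each frame
obs = np.array([[dynamic_cam_point(p, true_offset, a) for p in frame]
                for frame, a in zip(pts, true_angles)])

def residuals(x):
    offset, angles = x[:3], x[3:]
    pred = np.array([[dynamic_cam_point(p, offset, a) for p in frame]
                     for frame, a in zip(pts, angles)])
    return (pred - obs).ravel()

# One joint estimate over the kinematics and all joint angles; no encoder
# measurements enter the problem at any point.
sol = least_squares(residuals, np.zeros(3 + len(true_angles)))
print("offset error:", np.abs(sol.x[:3] - true_offset).max())
print("angle error :", np.abs(sol.x[3:] - true_angles).max())
```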

    Printing-while-moving: a new paradigm for large-scale robotic 3D Printing

    Building and construction have recently become an exciting application ground for robotics. In particular, rapid progress in materials formulation and in robotics technology has made robotic 3D printing of concrete a promising technique for in-situ construction. Yet, scalability remains an important hurdle to widespread adoption: the printing systems (gantry-based or arm-based) are often much larger than the structure to be printed, and hence cumbersome. Recently, a mobile printing system, a manipulator mounted on a mobile base, was proposed to alleviate this issue: such a system, by moving its base, can potentially print a structure larger than itself. However, the proposed system could only print while stationary, thereby imposing a limit on the size of structures that can be printed in a single take. Here, we develop a system that implements the printing-while-moving paradigm, which enables printing single-piece structures of arbitrary size with a single robot. This development requires solving motion planning, localization, and motion control problems that are specific to mobile 3D printing. We report our framework for addressing those problems, and demonstrate, for the first time, a printing-while-moving experiment, wherein a 210 cm x 45 cm x 10 cm concrete structure is printed by a robot arm that has a reach of 87 cm. (Comment: 6 pages, 7 figures)
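
    A defining feature of printing-while-moving is that the print path is fixed in the world frame while the arm's targets live in the moving base frame, so every waypoint must be re-expressed on the fly. The 2D sketch below illustrates only that coordinate bookkeeping; the base trajectory and the 0.3 m trailing offset are assumptions, while the 0.87 m reach matches the arm reported in the abstract.

```python
# A minimal 2D sketch, not the paper's planner or controller: world-frame
# print waypoints are re-expressed in the moving base frame before being
# handed to the arm.
import numpy as np

ARM_REACH = 0.87  # metres, as reported for the arm in the abstract

def world_to_base(p_world, base_xy, base_yaw):
    """Express a world-frame point in the mobile base's frame."""
    c, s = np.cos(base_yaw), np.sin(base_yaw)
    R = np.array([[c, -s], [s, c]])
    return R.T @ (p_world - base_xy)

# Print path: a 2.1 m straight bead, deliberately longer than the arm's reach.
path = np.stack([np.linspace(0.0, 2.1, 50), np.full(50, 0.45)], axis=1)

for target_world in path:
    # Assumed base motion: the base creeps along x, trailing the nozzle.
    base_xy = np.array([target_world[0] - 0.3, 0.0])
    target_base = world_to_base(target_world, base_xy, base_yaw=0.0)
    assert np.linalg.norm(target_base) < ARM_REACH, "waypoint out of reach"
    # target_base would now be sent to the arm's IK / servo loop.
```

    Because the base absorbs the large translation, the arm only ever tracks nearby targets, which is what lets a 87 cm arm lay down a 210 cm structure in one take.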

    Approximation of the inverse kinematics of a robotic manipulator using a neural network

    A fundamental property of a robotic manipulator system is that it is capable of accurately following complex position trajectories in three-dimensional space. An essential component of the robotic control system is the solution of the inverse kinematics problem, which allows determination of the joint angle trajectories from the desired trajectory in Cartesian space. There are several traditional methods, based on the known geometry of robotic manipulators, for solving the inverse kinematics problem. These methods can become impractical in a robot-vision control system where the environmental parameters can alter. Artificial neural networks, with their inherent learning ability, can approximate the inverse kinematics function and do not require any knowledge of the manipulator geometry. This thesis concentrates on developing a practical solution using a radial basis function network to approximate the inverse kinematics of a robot manipulator. This approach is distinct from existing approaches in that the centres of the hidden-layer units are regularly distributed in the workspace, constrained training data is used, and the training phase is performed using either the strict interpolation or the least-mean-squares algorithm. An online retraining approach is also proposed to modify the network's function approximation to cope with situations where the initial training and application environments differ. Simulation results for two- and three-link manipulators verify the approach. A novel real-time visual measurement system, based on a video camera and image processing software, has been developed to measure the position of the robotic manipulator in the three-dimensional workspace. Practical experiments have been performed with a Mitsubishi PA10-6CE manipulator and this visual measurement system. The performance of the radial basis function network is analysed for the manipulator operating in two- and three-dimensional space, and the practical results are compared to the simulation results. Advantages and disadvantages of the proposed approach are discussed.
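
    A small worked example helps make the scheme concrete: an RBF network with centres on a regular grid over the workspace, trained on constrained joint-angle samples of a two-link arm. This is only a sketch under assumed link lengths, grid size, and basis width, and plain linear least squares stands in for the thesis's strict-interpolation and LMS training.

```python
# Minimal sketch, assuming a 2-link planar arm: an RBF network with a
# regular grid of centres over the workspace, fitted by linear least
# squares as a stand-in for strict interpolation / LMS training.
import numpy as np

L1, L2 = 0.4, 0.3  # assumed link lengths (m)

def fk(q):
    # Forward kinematics, used only to generate training data and to check.
    return np.stack([L1*np.cos(q[:, 0]) + L2*np.cos(q[:, 0] + q[:, 1]),
                     L1*np.sin(q[:, 0]) + L2*np.sin(q[:, 0] + q[:, 1])], axis=1)

rng = np.random.default_rng(1)
# "Constrained" training data: a joint range on which the IK is single-valued.
q_train = rng.uniform([0.0, 0.2], [np.pi/2, np.pi - 0.2], size=(500, 2))
x_train = fk(q_train)

# Regular grid of RBF centres over the reachable workspace.
gx, gy = np.meshgrid(np.linspace(-0.7, 0.7, 8), np.linspace(-0.7, 0.7, 8))
centres = np.stack([gx.ravel(), gy.ravel()], axis=1)
width = 0.2  # assumed basis width

def design(x):
    # Gaussian activations of every centre for each Cartesian input.
    d2 = ((x[:, None, :] - centres[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

W, *_ = np.linalg.lstsq(design(x_train), q_train, rcond=None)

# Query: predicted joint angles for a Cartesian target, verified through FK.
target = np.array([[0.35, 0.35]])
q_hat = design(target) @ W
print("FK of predicted angles:", fk(q_hat), "target:", target)
```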

    3D Perception-based Collision-Free Robotic Leaf Probing for Automated Indoor Plant Phenotyping

    Various instrumentation devices for plant physiology studies, such as spectrometers, chlorophyll fluorimeters, and Raman spectroscopy sensors, require accurate placement of their sensor probes toward the leaf surface to meet specific requirements of probe-to-target distance and orientation. In this work, a Kinect V2 sensor, a high-precision 2D laser profilometer, and a six-axis robotic manipulator were used to automate the leaf probing task. The relatively wide field of view and high resolution of the Kinect V2 allowed rapid capture of the full 3D environment in front of the robot. The location and size of each plant were estimated by k-means clustering, where “k” was the user-defined number of plants. A real-time collision-free motion planning framework based on Probabilistic Roadmaps was adapted to maneuver the robotic manipulator without colliding with the plants. Each plant was scanned from the top with the short-range profilometer to obtain high-precision 3D point cloud data. Potential leaf clusters were extracted by a 3D region growing segmentation scheme. Each leaf segment was further partitioned into small patches by a Voxel Cloud Connectivity Segmentation method. Only the patches with low root-mean-square errors of plane fitting were used to compute leaf probing poses for the robot. Experiments conducted inside a growth chamber mock-up showed that the developed robotic leaf probing system achieved an average motion planning time of 0.4 seconds with an average end-effector travel distance of 1.0 meter. To examine the probing accuracy, a square surface was scanned at different angles, and its centroid was probed perpendicularly. The average absolute probing errors of distance and angle were 1.5 mm and 0.84 degrees, respectively. These results demonstrate the utility of the proposed robotic leaf probing system for automated non-contact deployment of spectroscopic sensor probes for indoor plant phenotyping under controlled environmental conditions.
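
    The patch-selection step rests on a simple geometric test: fit a plane to each candidate patch and keep only patches whose fit residual is small enough to probe perpendicularly. The sketch below shows that test with an SVD plane fit; the synthetic patches and the 2 mm flatness threshold are assumptions, not values from the paper.

```python
# Illustrative sketch of the patch-selection idea: score candidate leaf
# patches by the RMS error of a least-squares (SVD) plane fit, and keep the
# fitted normal as the probing direction for patches that pass.
import numpy as np

def plane_fit_rmse(points):
    """Fit a plane by SVD; return RMS point-to-plane distance and the normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    d = (points - centroid) @ normal     # signed distances to the plane
    return np.sqrt(np.mean(d**2)), normal

rng = np.random.default_rng(2)
xy = rng.uniform(0, 0.02, (200, 2))                                  # 2 cm patch
flat   = np.column_stack([xy, 0.0005 * rng.standard_normal(200)])   # near-planar
curled = np.column_stack([xy, 0.01 * np.sin(200 * xy[:, 0])])       # curled leaf

for name, patch in [("flat", flat), ("curled", curled)]:
    rmse, normal = plane_fit_rmse(patch)
    ok = rmse < 0.002  # assumed 2 mm flatness threshold for probing
    print(f"{name}: rmse={rmse*1000:.2f} mm, probe along {normal.round(2)}, use={ok}")
```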

    Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction

    We present a novel approach for mobile manipulator self-calibration using contact information. Our method, based on point cloud registration, is applied to estimate the extrinsic transform between a fixed vision sensor mounted on a mobile base and an end effector. Beyond sensor calibration, we demonstrate that the method can be extended to include manipulator kinematic model parameters, which involves a non-rigid registration process. Our procedure uses on-board sensing exclusively and does not rely on any external measurement devices, fiducial markers, or calibration rigs. Further, it is fully automatic in the general case. We experimentally validate the proposed method on a custom mobile manipulator platform, and demonstrate centimetre-level post-calibration accuracy in the positioning of the end effector using visual guidance only. We also discuss the stability properties of the registration algorithm, in order to determine the conditions under which calibration is possible. (Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, May 21-25, 2018)
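
    The rigid core of a registration-based extrinsic calibration can be solved in closed form. Below is a minimal sketch, assuming known point correspondences and synthetic contact points, using the standard Kabsch (SVD) algorithm; the paper's non-rigid extension to kinematic parameters is not shown.

```python
# A minimal sketch, assuming known correspondences: recover a rigid
# camera-to-end-effector transform from matched contact points with the
# Kabsch (SVD) algorithm. Points and noise level are synthetic.
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid transform (R, t) minimising ||R @ src + t - dst||."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T   # reflection-safe rotation
    return R, cd - R @ cs

rng = np.random.default_rng(3)
pts_cam = rng.uniform(-0.5, 0.5, (30, 3))        # contact points, camera frame
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                           # ensure a proper rotation
t_true = np.array([0.1, -0.2, 0.3])
pts_ee = pts_cam @ R_true.T + t_true + 0.001 * rng.standard_normal((30, 3))

R_est, t_est = kabsch(pts_cam, pts_ee)
print("rotation error :", np.abs(R_est - R_true).max())
print("translation err:", np.abs(t_est - t_true).max())
```

    The closed-form solution also hints at the stability discussion: when the contact points are nearly collinear or coplanar, the singular values of H degenerate and the recovered rotation becomes ill-conditioned.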

    Automated pick-up of suturing needles for robotic surgical assistance

    Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate cancer that involves complete or nerve-sparing removal of the prostate tissue that contains cancer. After removal, the bladder neck is then sutured directly to the urethra. This procedure, called urethrovesical anastomosis, is one of the most dexterity-demanding tasks during RALP. Two suturing instruments and a pair of needles are used in combination to perform a running stitch during urethrovesical anastomosis. While robotic instruments provide enhanced dexterity to perform the anastomosis, it is still highly challenging and difficult to learn. In this paper, we present a vision-guided needle grasping method for automatically grasping a needle that has been inserted into the patient prior to anastomosis. We aim to automatically grasp the suturing needle in a position that avoids hand-offs and immediately enables the start of suturing. The full grasping process can be broken down into: a needle detection algorithm; an approach phase, where the surgical tool moves closer to the needle based on visual feedback; and a grasping phase, executed through path planning based on observed surgical practice. Our experimental results show examples of successful autonomous grasping that has the potential to simplify and decrease the operational time in RALP by assisting with a small component of urethrovesical anastomosis.
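
    The approach phase can be pictured as a feedback loop: repeatedly re-detect the needle, compute the tool-tip error, and command a small, clamped motion toward it until within tolerance. The sketch below is an assumed proportional control law, not the paper's system; detect_needle_grasp_point is a hypothetical stub for the needle detector, and the gain and tolerances are illustrative.

```python
# Assumed sketch of the "approach phase" as a proportional visual-servoing
# loop; detect_needle_grasp_point is a hypothetical stand-in for the
# vision-based needle detector described in the abstract.
import numpy as np

GAIN, STEP_LIMIT, TOL = 0.5, 0.005, 0.001   # assumed values, metres

def detect_needle_grasp_point():
    """Hypothetical stub: grasp point from the needle detector, robot frame."""
    return np.array([0.42, -0.03, 0.11])

tool = np.array([0.30, 0.05, 0.20])          # current tool-tip position
for step in range(200):
    error = detect_needle_grasp_point() - tool
    if np.linalg.norm(error) < TOL:          # close enough: hand off to grasp phase
        print(f"approach done in {step} steps")
        break
    delta = GAIN * error
    n = np.linalg.norm(delta)
    if n > STEP_LIMIT:                        # clamp per-cycle motion for safety
        delta *= STEP_LIMIT / n
    tool = tool + delta                       # motion command would be sent here
```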