    Pool testing of AUV visual servoing for autonomous inspection

    Accurate Single Image Multi-Modal Camera Pose Estimation

    A well-known problem in photogrammetry and computer vision is the precise and robust determination of camera poses with respect to a given 3D model. In this work we propose a novel multi-modal method for single-image camera pose estimation with respect to 3D models with intensity information (e.g., LiDAR data with reflectance information). We utilize a direct point-based rendering approach to generate synthetic 2D views from 3D datasets in order to bridge the dimensionality gap. The proposed method then establishes 2D/2D point and local region correspondences based on a novel self-similarity distance measure. Correct correspondences are robustly identified by searching for small regions with a similar geometric relationship of local self-similarities using a Generalized Hough Transform. After backprojection of the generated features into 3D, a standard Perspective-n-Point problem is solved to yield an initial camera pose. The pose is then accurately refined using an intensity-based 2D/3D registration approach. An evaluation on Vis/IR 2D and airborne and terrestrial 3D datasets shows that the proposed method is applicable to a wide range of sensor types. In addition, the approach outperforms standard global multi-modal 2D/3D registration approaches based on Mutual Information with respect to robustness and speed. Potential applications are widespread and include, for instance, multispectral texturing of 3D models, SLAM applications, sensor data fusion, and multi-spectral camera calibration and super-resolution applications.
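
    The closing geometric step described here (back-projecting the matched 2D features into 3D, then solving a standard Perspective-n-Point problem) maps directly onto common tooling. Below is a minimal sketch of that step using OpenCV's RANSAC PnP solver; the correspondences, intrinsics, and all names are synthetic stand-ins for what the self-similarity matching stage would actually produce.

        import numpy as np
        import cv2

        # Synthetic stand-ins for the 2D/3D correspondences recovered by the
        # matching stage: 3D model points plus their observed pixel locations.
        rng = np.random.default_rng(0)
        object_points = rng.uniform(-1.0, 1.0, (30, 3)).astype(np.float32)
        object_points[:, 2] += 5.0                # keep points in front of the camera
        K = np.array([[800.0,   0.0, 320.0],      # assumed pinhole intrinsics
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        true_rvec = np.array([0.1, -0.2, 0.05])   # ground-truth pose for the test
        true_tvec = np.array([0.2, 0.1, 0.3])
        image_points, _ = cv2.projectPoints(object_points, true_rvec, true_tvec, K, None)
        image_points = image_points.astype(np.float32)

        # Robustly estimate the initial camera pose from the correspondences.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            object_points, image_points, K, None, reprojectionError=3.0)
        if ok:
            R, _ = cv2.Rodrigues(rvec)            # 3x3 rotation of the estimated pose
            print("recovered translation:", tvec.ravel())

    In the paper this pose is only an initialization; the intensity-based 2D/3D registration then refines it.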

    Efficient 3D Tracking for Motion Compensation in Beating Heart Surgery

    The design of physiological motion compensation systems for robotic-assisted cardiac Minimally Invasive Surgery (MIS) is a challenging research topic. In this domain, vision-based techniques have proven to be a practical way to retrieve the motion of the beating heart. However, due to the complexity of the heart motion and its surface characteristics, efficient tracking is still a complicated task. In this paper, we propose an algorithm for tracking the 3D motion of the beating heart, based on a Thin-Plate Splines (TPS) parametric model. The novelty of our approach lies in the fact that no explicit matching between the stereo camera images is required, and consequently no intermediate steps such as rectification are needed. Experiments conducted on ex-vivo and in-vivo tissue show the effectiveness of the proposed algorithm for tracking surfaces undergoing complex deformations.
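
    As a pointer to the model family involved, the following self-contained NumPy sketch fits and evaluates a plain 2D thin-plate-spline warp from control-point correspondences. It is not the paper's 3D stereo formulation (which notably avoids explicit image matching); the function names and test data are illustrative only.

        import numpy as np

        def tps_kernel(r2):
            # TPS radial basis U(r) = r^2 log r^2, with U(0) = 0 by continuity.
            return np.where(r2 == 0.0, 0.0, r2 * np.log(np.maximum(r2, 1e-300)))

        def fit_tps(src, dst):
            """Solve the standard TPS linear system mapping src -> dst (both (n, 2))."""
            n = len(src)
            d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
            P = np.hstack([np.ones((n, 1)), src])        # affine part [1, x, y]
            A = np.zeros((n + 3, n + 3))
            A[:n, :n], A[:n, n:], A[n:, :n] = tps_kernel(d2), P, P.T
            b = np.zeros((n + 3, 2))
            b[:n] = dst
            return np.linalg.solve(A, b)                 # (n + 3, 2) coefficients

        def warp_tps(coeffs, src, pts):
            """Apply the fitted warp to arbitrary query points."""
            d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
            P = np.hstack([np.ones((len(pts), 1)), pts])
            return tps_kernel(d2) @ coeffs[:len(src)] + P @ coeffs[len(src):]

        src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
        coeffs = fit_tps(src, dst)
        print(warp_tps(coeffs, src, src))                # reproduces dst at the control points

    Fitting needs at least three non-collinear control points; in a tracking loop the target positions are re-estimated each frame, and the spline coefficients become the tracked state.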

    Data-Driven Visual Tracking in Retinal Microsurgery

    In the context of retinal microsurgery, visual tracking of instruments is a key component of robotic assistance. The difficulty of the task, and the major reason why most existing strategies fail on in-vivo image sequences, lies in the fact that complex and severe changes in instrument appearance are challenging to model. This paper introduces a novel approach that is both data-driven and complementary to existing tracking techniques. In particular, we show how to learn and integrate an accurate detector with a simple gradient-based tracker within a robust pipeline which runs at frame rate. In addition, we present a fully annotated dataset of retinal instruments in in-vivo surgeries, which we use to quantitatively validate our approach. We also demonstrate an application of our method in a laparoscopy image sequence.
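
    The integration pattern described (a learned detector backing up a cheap frame-to-frame tracker) is easy to sketch in outline. The skeleton below is generic and hypothetical: the detector/tracker interfaces and the confidence threshold are assumptions, not the paper's actual components.

        # Generic detect-or-track loop: run the cheap gradient-based tracker on
        # every frame and fall back to the learned detector when confidence drops.
        # `detector.detect(frame)` and `tracker.update(frame, state)` are assumed
        # interfaces, not the paper's API.

        CONF_THRESHOLD = 0.5   # illustrative value

        def track_sequence(frames, detector, tracker):
            state, conf = None, 0.0
            for frame in frames:
                if state is not None:
                    state, conf = tracker.update(frame, state)  # fast local step
                if state is None or conf < CONF_THRESHOLD:
                    state = detector.detect(frame)              # slower re-detection
                    conf = 1.0 if state is not None else 0.0
                yield state, conf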

    Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments

    RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm (Prentice and Roy, 2009), a belief-space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
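
    The frame-to-frame core of such an RGB-D visual odometry system reduces to rigidly aligning matched 3D points (image features back-projected through the depth channel). A generic closed-form sketch of that alignment (the standard Kabsch/Horn SVD solution, not necessarily the authors' exact pipeline):

        import numpy as np

        def rigid_align(P, Q):
            """Least-squares rigid transform (R, t) with R @ P[i] + t ~= Q[i].

            P, Q: (N, 3) matched 3D points from consecutive RGB-D frames.
            """
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det = +1
            t = cq - R @ cp
            return R, t

    In practice this estimate is wrapped in RANSAC over the feature matches, and the resulting transforms are chained into the odometry used for control and mapping.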

    Self-Calibration of the Distortion of a Zooming Camera by Matching Points at Different Resolutions

    This paper presents a new method for the self-calibration of the lens distortion of a zooming camera, which appears at short focal lengths. The proposed technique requires neither a special calibration pattern nor any prior knowledge about the environment. The key idea is to match points between a distorted image and an undistorted image taken at different resolutions. A new method for automatically matching points in the two images is proposed. The scale factor between the images is not needed for the matching algorithm. Matched points are used to compute invariants to the pinhole camera parameters. Then, lens distortion parameters are estimated so as to obtain the same invariants in both images. This approach is well suited to autonomous robotic vision applications, since the self-calibration of the camera is done before moving the robot. Experiments with ground truth and tests on real images provide good results.
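
    To make the distortion side concrete, here is a sketch in a similar spirit but with a deliberately simplified objective: a one-parameter division model for radial distortion whose coefficient is fitted, jointly with an unknown similarity transform (scale, rotation, translation), to matched point pairs by nonlinear least squares. This is not the paper's invariant-based formulation; models, names, and data are all illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        def undistort(pts, k, center):
            # One-parameter division model: x_u = c + (x_d - c) / (1 + k r^2).
            d = pts - center
            r2 = (d ** 2).sum(axis=1, keepdims=True)
            return center + d / (1.0 + k * r2)

        def residuals(params, pts_d, pts_u, center):
            # Unknowns: distortion k plus a similarity (scale s, angle th, shift).
            k, s, th, tx, ty = params
            R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
            mapped = s * undistort(pts_d, k, center) @ R.T + np.array([tx, ty])
            return (mapped - pts_u).ravel()

        # Synthetic matched pairs standing in for the automatic matches:
        rng = np.random.default_rng(1)
        center = np.array([320.0, 240.0])
        pts_d = rng.uniform(0.0, 480.0, (50, 2))      # distorted-image points
        k_t, s_t, th_t = 2e-6, 2.0, 0.05              # ground truth for the test
        R_t = np.array([[np.cos(th_t), -np.sin(th_t)], [np.sin(th_t), np.cos(th_t)]])
        pts_u = s_t * undistort(pts_d, k_t, center) @ R_t.T + np.array([30.0, -12.0])

        fit = least_squares(residuals, x0=[0.0, 1.0, 0.0, 0.0, 0.0],
                            args=(pts_d, pts_u, center), x_scale='jac')
        print("estimated distortion coefficient:", fit.x[0])   # approaches k_t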

    A unified approach to visual tracking and servoing

    Efficient visual hull computation for real-time 3D reconstruction using CUDA

    In this paper, we present two efficient GPU-based visual hull computation algorithms. We compare them in terms of performance using image sets of varying size and different voxel resolutions. In addition, we present a real-time 3D reconstruction system which uses the proposed GPU-based reconstruction method to achieve real-time performance (30 fps) using 16 cameras and 4 PCs.
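
    The occupancy test such methods parallelize on the GPU is compact enough to state on the CPU: a voxel belongs to the visual hull exactly when its projection falls inside every camera's silhouette. A minimal NumPy sketch (shapes and names are assumptions, not the paper's data structures):

        import numpy as np

        def visual_hull(silhouettes, projections, voxels):
            """Keep the voxels whose projection lies inside every silhouette.

            silhouettes: list of (H, W) boolean masks, one per camera
            projections: list of 3x4 projection matrices (world -> pixel)
            voxels:      (N, 3) voxel-center coordinates to test
            """
            occupied = np.ones(len(voxels), dtype=bool)
            hom = np.hstack([voxels, np.ones((len(voxels), 1))])   # homogeneous
            for mask, P in zip(silhouettes, projections):
                uvw = hom @ P.T                                    # project
                u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
                v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
                inside = ((u >= 0) & (u < mask.shape[1]) &
                          (v >= 0) & (v < mask.shape[0]))
                hit = np.zeros(len(voxels), dtype=bool)
                hit[inside] = mask[v[inside], u[inside]]
                occupied &= hit                                    # carve
            return occupied

    Because the test is independent per voxel, it parallelizes naturally, which is what GPU implementations like the ones above exploit.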