
    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' extrinsic parameters for optimal 3D reconstruction, the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves sub-millimetre accuracy, below 0.3 mm, while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
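    The hand-eye step described above is an instance of the classic AX = XB problem. As an illustration only (not the authors' implementation), here is a minimal sketch using OpenCV's standard solver, cv2.calibrateHandEye; all poses below are made-up placeholders.

```python
import cv2
import numpy as np

def make_pose(rvec, t):
    """Build a rotation matrix and translation vector from axis-angle input."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
    return R, np.asarray(t, dtype=np.float64).reshape(3, 1)

# Gripper-to-base poses (from the arm's forward kinematics) and
# target-to-camera poses (e.g. from a chessboard seen by the head).
# These numbers are hypothetical stand-ins, not measured data.
R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for rv_g, t_g, rv_t, t_t in [
    ((0.0, 0.0, 0.1), (0.50, 0.00, 0.30), (0.0, 0.1, 0.0), (0.05, 0.00, 0.60)),
    ((0.0, 0.2, 0.0), (0.45, 0.10, 0.35), (0.1, 0.0, 0.0), (0.00, 0.05, 0.55)),
    ((0.2, 0.0, 0.0), (0.55, -0.05, 0.25), (0.0, 0.0, 0.2), (0.02, 0.03, 0.58)),
]:
    R, t = make_pose(rv_g, t_g); R_g2b.append(R); t_g2b.append(t)
    R, t = make_pose(rv_t, t_t); R_t2c.append(R); t_t2c.append(t)

# Solve AX = XB for the camera-to-gripper transform.
R_c2g, t_c2g = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(R_c2g, t_c2g.ravel())
```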

    Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes

    In this paper we address the problem of calibrating multiple cameras in a homogeneous scene, where calibration-object-based methods cannot be employed. The proposed solution exploits salient features present in a larger field of view, but instead of employing active vision we replace the cameras with stereo rigs, each featuring a long-focal-length analysis camera and a short-focal-length registration camera. We are thus able to propose an accurate solution that does not require intrinsic variation models, as in the case of zooming cameras. Moreover, the availability of the two views simultaneously in each rig allows pose re-estimation between rigs as often as necessary. The algorithm has been successfully validated in an indoor setting, as well as on a difficult scene featuring a highly dense pilgrim crowd in Makkah.
    Comment: 13 pages, 6 figures, submitted to Machine Vision and Applications
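    A generic sketch of the pose re-estimation step between rigs (not the authors' code): given matched features in two registration views, the relative pose can be recovered from the essential matrix, up to scale. The intrinsics and scene below are synthetic.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])                 # assumed camera intrinsics

def project(X, R, t):
    """Project 3-D points through a pinhole camera [R|t] with intrinsics K."""
    Xc = (R @ X.T + t).T
    uv = Xc[:, :2] / Xc[:, 2:3]
    return uv * [K[0, 0], K[1, 1]] + [K[0, 2], K[1, 2]]

rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(60, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))   # small yaw
t_true = np.array([[0.2], [0.0], [0.02]])               # mostly sideways

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R_true, t_true)

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```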

    Optimal Radiometric Calibration for Camera-Display Communication

    We present a novel method for communicating between a camera and a display by embedding and recovering hidden, dynamic information within a displayed image. A handheld camera pointed at the display can receive not only the display image but also the underlying message. These active scenes are fundamentally different from traditional passive scenes like QR codes because image formation is based on display emittance, not surface reflectance. Detecting and decoding the message requires careful photometric modeling for computational message recovery. Unlike standard watermarking and steganography methods that lie outside the domain of computer vision, our message recovery algorithm uses illumination to optically communicate hidden messages in real-world scenes. The key innovation of our approach is an algorithm that performs simultaneous radiometric calibration and message recovery in one convex optimization problem. By modeling the photometry of the system with a camera-display transfer function (CDTF), we derive a physics-based kernel function for support vector machine classification. We demonstrate that our method of optimal online radiometric calibration (OORC) leads to an efficient and robust algorithm for computational messaging between nine commercial cameras and displays.
    Comment: 10 pages, Submitted to CVPR 201
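    A toy sketch of the idea behind OORC (a simplified two-step stand-in, not the paper's joint convex formulation): model the CDTF as a gamma curve fit by least squares, then read the embedded bits from the residual around it. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
displayed = rng.uniform(0.05, 1.0, size=500)        # display intensities
bits = rng.integers(0, 2, size=displayed.size)      # hidden message bits

def true_cdtf(d):
    """Unknown gamma-like camera-display transfer function (hypothetical)."""
    return d ** 2.2

captured = true_cdtf(displayed) * (1 + 0.03 * (2 * bits - 1))  # embed +/-3%

# Convex least-squares fit of log(captured) = g * log(displayed) + b.
A = np.column_stack([np.log(displayed), np.ones_like(displayed)])
(g, b), *_ = np.linalg.lstsq(A, np.log(captured), rcond=None)
predicted = np.exp(b) * displayed ** g

# Decode: bit = 1 if the pixel is brighter than the calibrated CDTF predicts.
decoded = (captured > predicted).astype(int)
print("gamma ~", round(g, 3), " bit accuracy:", (decoded == bits).mean())
```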

    Calibration with concurrent PT axes

    The introduction of active (pan-tilt-zoom, or PTZ) cameras in Smart Rooms, in addition to fixed static cameras, makes it possible to improve resolution in volumetric reconstruction, adding the capability to track smaller objects with higher precision in actual 3D world coordinates. To accomplish this goal, precise camera calibration data must be available for any pan, tilt, and zoom settings of each PTZ camera. The PTZ calibration method proposed in this paper introduces a novel solution to the problem of computing extrinsic and intrinsic parameters for active cameras. We first determine the rotation center of the camera, expressed with respect to an arbitrary world coordinate origin. We then obtain an equation relating any rotation of the camera to the movement of the principal point, defining the extrinsic parameters for any value of pan and tilt. Once this position is determined, we compute how the intrinsic parameters change as a function of zoom. We validate our method by evaluating the re-projection error and its stability for points inside and outside the calibration set.
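    A minimal sketch of the extrinsic-update idea (not the paper's code): once the camera's rotation centre and its pan/tilt axes are known, the extrinsics for any (pan, tilt) follow by composing rotations about that centre. The home pose, centre, and axes below are hypothetical.

```python
import numpy as np

def axis_rotation(axis, angle):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def extrinsics(pan, tilt, R0, t0, centre,
               pan_axis=(0.0, 1.0, 0.0), tilt_axis=(1.0, 0.0, 0.0)):
    """World-to-camera [R|t] for given pan/tilt angles (radians).

    Q rotates the camera body about `centre` in world coordinates, so the
    translation must be corrected for the off-origin rotation centre.
    """
    Q = axis_rotation(pan_axis, pan) @ axis_rotation(tilt_axis, tilt)
    R = R0 @ Q.T
    t = t0 + R0 @ centre - R @ centre
    return R, t

R0, t0 = np.eye(3), np.array([0.0, 0.0, 5.0])    # hypothetical home pose
centre = np.array([0.02, 0.0, -5.0])             # calibrated rotation centre
R, t = extrinsics(np.deg2rad(10.0), np.deg2rad(-5.0), R0, t0, centre)
print(R, t)
```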

    Intraoperative Endoscopic Augmented Reality in Third Ventriculostomy

    In neurosurgery, brain shift causes the preoperative patient models used as an intraoperative reference to change, so meaningful use of the preoperative virtual models during the operation requires a model update. The NEAR project (Neuroendoscopy towards Augmented Reality) describes a new camera calibration model for highly distorted lenses and introduces the concept of active endoscopes endowed with navigation, camera calibration, augmented reality, and triangulation modules.
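    The NEAR calibration model itself is not reproduced here; as a generic illustration of handling highly distorted endoscope lenses, this is how a frame could be undistorted with OpenCV's fisheye model once intrinsics K and distortion coefficients D have been estimated (the values below are made up).

```python
import cv2
import numpy as np

K = np.array([[350.0, 0.0, 320.0],
              [0.0, 350.0, 240.0],
              [0.0, 0.0, 1.0]])            # assumed endoscope intrinsics
D = np.array([0.08, -0.02, 0.01, -0.003]) # assumed fisheye coefficients

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in endoscope frame
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
print(undistorted.shape)
```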

    Efficient generic calibration method for general cameras with single centre of projection

    Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed with the goal of generality, and it is therefore sub-optimal for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes novel improvements to the standard generic calibration method for central cameras that reduce its complexity and improve its accuracy and robustness. The improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection. Input data for the algorithm are acquired using active grids, whose performance is characterised. A new linear estimation stage for the generic algorithm is proposed, incorporating classical pinhole calibration techniques, and it is shown to be significantly more accurate than the linear estimation stage of the standard method. A linear method for pose estimation is also proposed and evaluated against the existing polynomial method. Distortion correction and motion reconstruction experiments are conducted with real data for a hyperboloidal catadioptric sensor for both the standard and proposed methods. Results show the accuracy and robustness of the proposed method to be superior to those of the standard method.
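    A sketch of one geometric constraint such methods exploit (illustrative, not the paper's algorithm): for a central camera, the per-pixel rays sampled with active grids must meet in a single point, so the centre of projection is the least-squares point nearest all rays. The grid samples below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
centre_true = np.array([0.1, -0.05, 0.0])
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Each ray is observed as two points where it pierces the two grid planes.
p1 = centre_true + 1.0 * dirs + rng.normal(0, 1e-3, size=dirs.shape)
p2 = centre_true + 2.0 * dirs + rng.normal(0, 1e-3, size=dirs.shape)

d = p2 - p1
d /= np.linalg.norm(d, axis=1, keepdims=True)

# Solve sum_i (I - d_i d_i^T) c = sum_i (I - d_i d_i^T) a_i for the centre c.
P = np.eye(3)[None] - d[:, :, None] * d[:, None, :]   # projector per ray
A = P.sum(axis=0)
b = np.einsum('nij,nj->i', P, p1)
print("estimated centre of projection:", np.linalg.solve(A, b))
```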

    How to turn your camera into a perfect pinhole model

    Camera calibration is a first and fundamental step in various computer vision applications. Despite being an active field of research, Zhang's method remains widely used for camera calibration due to its implementation in popular toolboxes. However, this method initially assumes a pinhole model with oversimplified distortion models. In this work, we propose a novel approach that involves a pre-processing step to remove distortions from images by means of Gaussian processes. Our method does not need to assume any distortion model and can be applied to severely warped images, even in the case of multiple distortion sources, e.g., a fisheye image of a curved mirror reflection. The Gaussian processes capture all distortions and camera imperfections, resulting in virtual images as though taken by an ideal pinhole camera with square pixels. Furthermore, this ideal GP-camera only needs one image of a square grid calibration pattern. The model allows for a serious upgrade of many algorithms and applications that are designed in a pure projective geometry setting but whose performance is very sensitive to nonlinear lens distortions. We demonstrate the effectiveness of our method by simplifying Zhang's calibration method, reducing the number of parameters and getting rid of the distortion parameters and iterative optimization. We validate by means of synthetic data and real-world images. The contributions of this work include the construction of a virtual ideal pinhole camera using Gaussian processes, a simplified calibration method, and lens distortion removal.
    Comment: 15 pages, 3 figures, conference CIAR
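    A minimal sketch of the core idea (assumptions, not the authors' code): learn a smooth map from distorted to ideal image coordinates with Gaussian-process regression from a single view of a square grid. A synthetic radial distortion stands in for a real lens here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Ideal square-grid corners in normalized image coordinates.
u, v = np.meshgrid(np.linspace(-0.5, 0.5, 9), np.linspace(-0.5, 0.5, 9))
ideal = np.column_stack([u.ravel(), v.ravel()])

# Their observed, radially distorted positions (k1 = -0.3, hypothetical).
r2 = (ideal ** 2).sum(axis=1, keepdims=True)
observed = ideal * (1 - 0.3 * r2)

# Fit a GP mapping distorted positions back to ideal pinhole positions.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-8)
gp.fit(observed, ideal)

# Any new distorted point can now be mapped into the virtual pinhole image.
test = np.array([[0.30, -0.20]])
print("undistorted:", gp.predict(test))
```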

    Development of a calibration pipeline for a monocular-view structured illumination 3D sensor utilizing an array projector

    Commercial off-the-shelf digital projection systems are commonly used in active structured illumination photogrammetry of macro-scale surfaces due to their relatively low cost, accessibility, and ease of use. They can be modelled as inverse pinhole devices, and the calibration pipeline of a 3D sensor utilizing pinhole devices in a projector-camera configuration is already well established. Recently, there have been advances in projection systems offering projection speeds greater than those available from conventional off-the-shelf digital projectors. However, these cannot be calibrated using the well-established techniques based on the pinhole assumption, as they are chip-less and have no projection lens. This work is based on the utilization of such unconventional projection systems, known as array projectors, which contain not one but multiple projection channels that project a temporal sequence of illumination patterns; none of the channels implements a digital projection chip or a projection lens. To work around the calibration problem, previous realizations of a 3D sensor based on an array projector required a stereo-camera setup, with triangulation taking place between the two pinhole-modelled cameras instead. However, a monocular setup is desired, as a single-camera configuration results in decreased cost, weight, and form factor. This study presents a novel calibration pipeline that realizes a single-camera setup. A generalized intrinsic calibration process without model assumptions was developed that directly samples the illumination frustum of each array projection channel. An extrinsic calibration process was then created that determines the pose of the single camera through a downhill simplex optimization initialized by particle swarm. Lastly, a method to store the intrinsic calibration with the aid of an easily realizable calibration jig was developed for re-use in arbitrary measurement camera positions, so that intrinsic calibration does not have to be repeated.
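    A sketch of the extrinsic optimization pattern described above (not the study's pipeline): a downhill simplex (Nelder-Mead) search seeded by a crude particle-swarm-style random population. The six-parameter pose and the cost function are toy stand-ins for the real reprojection error against the sampled projector frustums.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
pose_true = np.array([0.05, -0.02, 0.03, 0.1, -0.2, 1.5])  # rx,ry,rz,tx,ty,tz

def cost(pose):
    """Toy stand-in for reprojection error (distance to the true pose)."""
    return np.sum((pose - pose_true) ** 2)

# Particle-swarm-style initialization: best of a random candidate population.
particles = rng.uniform(-2, 2, size=(500, 6))
seed = particles[np.argmin([cost(p) for p in particles])]

# Downhill simplex refinement from the swarm's best candidate.
result = minimize(cost, seed, method='Nelder-Mead',
                  options={'xatol': 1e-8, 'fatol': 1e-10})
print("recovered pose:", np.round(result.x, 4))
```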

    Calibration of an active vision system and feature tracking based on 8-point projective invariants

    by Chen Zhi-Yi. Thesis (M.Phil.), Chinese University of Hong Kong, 1997. Includes bibliographical references. Contents:
    - List of Symbols
    - Chapter 1: Introduction
      - 1.1 Active Vision Paradigm and Calibration of Active Vision Systems
        - 1.1.1 Active Vision Paradigm
        - 1.1.2 A Review of the Existing Active Vision Systems
        - 1.1.3 A Brief Introduction to Our Active Vision System
        - 1.1.4 The Stages of Calibrating an Active Vision System
      - 1.2 Projective Invariants and Their Applications to Feature Tracking
      - 1.3 Thesis Overview
      - References
    - Chapter 2: Calibration for an Active Vision System: Camera Calibration
      - 2.1 An Overview of Camera Calibration
      - 2.2 Tsai's RAC-Based Camera Calibration Method
        - 2.2.1 The Pinhole Camera Model with Radial Distortion
        - 2.2.2 Calibrating a Camera Using Monoview Noncoplanar Points
      - 2.3 Reg Willson's Implementation of R. Y. Tsai's RAC-Based Camera Calibration Algorithm
      - 2.4 Experimental Setup and Procedures
      - 2.5 Experimental Results
      - 2.6 Conclusion
      - References
    - Chapter 3: Calibration for an Active Vision System: Head-Eye Calibration
      - 3.1 Why Head-Eye Calibration
      - 3.2 Review of the Existing Head-Eye Calibration Algorithms
        - 3.2.1 Category I: Classic Approaches
        - 3.2.2 Category II: Self-Calibration Techniques
      - 3.3 R. Tsai's Approach for Hand-Eye (Head-Eye) Calibration
        - 3.3.1 Introduction
        - 3.3.2 Definitions of Coordinate Frames and Homogeneous Transformation Matrices
        - 3.3.3 Formulation of the Head-Eye Calibration Problem
        - 3.3.4 Using the Principal Vector to Represent the Rotation Transformation Matrix
        - 3.3.5 Calculating Rcg and Tcg
      - 3.4 Our Local Implementation of Tsai's Head-Eye Calibration Algorithm
        - 3.4.1 Using the Denavit-Hartenberg Approach to Establish a Body-Attached Coordinate Frame for Each Link of the Manipulator
      - 3.5 Function of Procedures and Formats of Data Files
      - 3.6 Experimental Results
      - 3.7 Discussion
      - 3.8 Conclusion
      - References
      - Appendix I: Procedures
    - Chapter 4: A New Tracking Method for Shape from Motion Using an Active Vision System
      - 4.1 Introduction
      - 4.2 A New Tracking Method
        - 4.2.1 Our Approach
        - 4.2.2 Using an Active Vision System to Track the Projective Basis Across an Image Sequence
        - 4.2.3 Using Projective Invariants to Track the Remaining Feature Points
      - 4.3 Using the Factorisation Method to Recover Shape from Motion
      - 4.4 Discussion and Future Research
      - References
    - Chapter 5: Experiments on Feature Tracking with 3D Projective Invariants
      - 5.1 The 8-Point Projective Invariant
      - 5.2 Projective-Invariant-Based Transfer between Distinct Views of a 3-D Scene
      - 5.3 Transfer Experiments on the Image Sequence of a Calibration Block
        - 5.3.1 Experiment 1: Real Image Sequence 1 of a Camera Calibration Block
        - 5.3.2 Experiment 2: Real Image Sequence 2 of a Camera Calibration Block
        - 5.3.3 Experiment 3: Real Image Sequence 3 of a Camera Calibration Block
        - 5.3.4 Experiment 4: Synthetic Image Sequence of a Camera Calibration Block
        - 5.3.5 Discussions on the Experimental Results
      - 5.4 Transfer Experiments on the Image Sequence of a Human Face Model
      - References
    - Chapter 6: Conclusions and Future Researches
      - 6.1 Contributions and Conclusions
      - 6.2 Future Researches
    - Bibliography
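    Chapter 4 of this thesis recovers shape from motion with the factorisation method; below is a minimal Tomasi-Kanade-style sketch under orthography on synthetic tracks (illustrative only; no projective invariants or tracking involved).

```python
import numpy as np

rng = np.random.default_rng(4)
S_true = rng.normal(size=(3, 20))              # 20 3-D points
W_rows = []
for f in range(6):                             # 6 frames
    angle = 0.1 * f
    R = np.array([[np.cos(angle), 0, np.sin(angle)],
                  [0, 1, 0]])                  # first two rows of a rotation
    W_rows.append(R @ S_true + rng.normal(0, 0.01, size=(2, 20)))
W = np.vstack(W_rows)                          # 2F x P measurement matrix

W -= W.mean(axis=1, keepdims=True)             # register to the centroid
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])                  # motion (up to affine ambiguity)
S = np.sqrt(s[:3])[:, None] * Vt[:3]           # shape (up to affine ambiguity)
print("rank-3 residual:", s[3:].sum())
```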