
    Panorama imaging for image-to-physical registration of narrow drill holes inside spongy bones

    Image-to-physical registration based on volumetric data such as computed tomography on the one hand and intraoperative endoscopic images on the other is an important method for various surgical applications. In this contribution, we present methods to generate panoramic views from endoscopic recordings for image-to-physical registration of narrow drill holes inside spongy bone. One core application is the registration of drill poses inside the mastoid during minimally invasive cochlear implantation. Besides the development of image processing software for registration, investigations are performed on a miniaturized optical system achieving 360° radial imaging in one shot by extending a conventional, small, rigid rod-lens endoscope. A reflective cone geometry is used to deflect radially incoming light rays into the endoscope optics; to this end, a cone mirror is mounted in front of a conventional 0° endoscope. Furthermore, panoramic images of inner drill-hole surfaces in artificial bone material are created. Prior to drilling, cone beam computed tomography data is acquired from this artificial bone, and simulated endoscopic views are generated from this data. A qualitative and quantitative comparison of the resulting views in terms of image-to-image registration is performed. First results show that downsizing the panoramic optics to a diameter of 3 mm is possible: conventional rigid rod-lens endoscopes can be extended to produce suitable panoramic one-shot image data. Using unrolling and stitching methods, images of the inner drill-hole surface similar to computed tomography image data of the same surface were created. Registration performed on ten perturbations of the search space yields target registration errors of (0.487 ± 0.438) mm at the entry point and (0.957 ± 0.948) mm at the exit, as well as an angular error of (1.763 ± 1.536)°. These results show the suitability of this image data for image-to-image registration. Analysis of the error components in different directions reveals a strong influence of the pattern structure: higher diversity results in smaller errors. © 2017 SPIE
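    The unrolling step mentioned above can be pictured as a polar-to-Cartesian remapping of the annular cone-mirror image. Below is a minimal sketch under assumed imaging geometry (a centred annulus with known inner and outer radii); the function name, radii, and output size are illustrative, not taken from the paper.

```python
# Minimal sketch: unroll a one-shot radial (cone-mirror) image into a
# rectangular panorama. Assumes the annulus is centred at `center` with
# radii r_inner..r_outer; all values here are illustrative.
import numpy as np
import cv2

def unroll_annulus(img, center, r_inner, r_outer, out_w=1024, out_h=256):
    """Map the annular region of `img` to an (out_h x out_w) panorama.

    Output rows correspond to radius (drill-hole depth), columns to the
    angle around the optical axis.
    """
    cx, cy = center
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_inner, r_outer, out_h)
    # Each output pixel samples the input at (cx + r cos t, cy + r sin t).
    map_x = (cx + radius[:, None] * np.cos(theta[None, :])).astype(np.float32)
    map_y = (cy + radius[:, None] * np.sin(theta[None, :])).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Usage: panorama = unroll_annulus(frame, center=(320, 240), r_inner=40, r_outer=220)
```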

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, they fail to fully satisfy all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the various characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future work should focus primarily on the capability of online targetless calibration and on systematic multi-modal sensor calibration.

    Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0
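    As context for the target-based family the review covers, the core computation is typically recovering the rigid transform between two sensors from corresponding target features. A minimal sketch of that alignment step (Kabsch/Umeyama), assuming paired 3D points have already been extracted from both sensors; the function and variable names are illustrative:

```python
# Minimal sketch of rigid alignment between two sensors' 3D target points.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that dst ~ R @ src + t (src, dst: Nx3 arrays)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```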

    Synthesis and Validation of Vision Based Spacecraft Navigation


    Optical measurement of shape and deformation fields on challenging surfaces

    A multiple-sensor optical shape measurement system (SMS) based on the principle of white-light fringe projection has been developed and commercialised by Loughborough University and Phase Vision Ltd over more than 10 years. The use of the temporal phase unwrapping technique allows precise and dense shape measurements of complex surfaces, and the photogrammetry-based calibration technique offers the ability to calibrate multiple sensors simultaneously in order to achieve 360° measurement coverage. Nevertheless, to enhance the applicability of the SMS in industrial environments, further developments are needed (i) to improve the calibration speed for quicker deployment, (ii) to broaden the application range from shape measurement to deformation field measurement, and (iii) to tackle practically challenging surfaces whose specular components may disrupt the acquired data and result in spurious measurements.

    The calibration process typically requires manual positioning of an artefact (i.e., a reference object) at many locations within the view of the sensors. This is not only time-consuming but also complicated for an operator with average knowledge of metrology. This thesis introduces an automated artefact positioning system which enables automatic and optimised distribution of the artefacts and automatic prediction of their whereabouts, increasing artefact detection speed and robustness and thereby overall calibration performance.

    This thesis also describes a novel technique that integrates the digital image correlation (DIC) technique into the present fringe projection SMS for the purpose of simultaneous shape and deformation field measurement. This combined technique offers three key advantages: (a) the ability to deal with geometrical discontinuities, which are commonly present on mechanical surfaces and currently challenging to most deformation measurement methods; (b) the ability to measure 3D displacement fields with a basic single-camera, single-projector SMS with no additional hardware components; and (c) simple implementation on a multiple-sensor hardware platform to achieve complete coverage of large-scale and complex samples, with the resulting displacement fields automatically lying in a single global coordinate system. A displacement measurement accuracy of approximately 1/12,000 of the measurement volume, comparable to that of an industry-standard DIC system, has been achieved. Applications of this novel technique to several structural tests of aircraft wing panels on-site at the research centre of Airbus UK in Filton are also presented.

    Mechanical components with a shiny surface finish and complex geometry may introduce another challenge to present fringe projection techniques. In certain circumstances, multiple reflections of the projected fringes on an object surface may cause ambiguity in the phase estimation process and result in incorrect coordinate measurements. This thesis presents a new technique which adopts a Fourier domain ranging (FDR) method to correctly identify multiple phase signals and enable unambiguous triangulation of a measured coordinate. Experiments with the new FDR technique on various types of surfaces have shown promising results compared to traditional phase unwrapping techniques.
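    To illustrate the temporal phase unwrapping at the heart of such fringe projection systems, here is a minimal sketch assuming an N-step phase-shifting sequence and a two-frequency (high plus unit-frequency) strategy; the array names and unwrapping schedule are assumptions, not the thesis's implementation:

```python
# Minimal sketch: N-step wrapped-phase extraction plus two-frequency
# temporal phase unwrapping for fringe projection.
import numpy as np

def wrapped_phase(frames):
    """frames: list of N >= 3 images I_k = A + B*cos(phi + 2*pi*k/N).
    Returns the wrapped phase phi in (-pi, pi]."""
    N = len(frames)
    k = np.arange(N).reshape(-1, 1, 1)
    I = np.stack(frames)
    num = -(I * np.sin(2 * np.pi * k / N)).sum(axis=0)
    den = (I * np.cos(2 * np.pi * k / N)).sum(axis=0)
    return np.arctan2(num, den)

def temporal_unwrap(phi_high, phi_unit, f_high):
    """Unwrap the high-frequency phase using the unambiguous unit-frequency
    phase: f_high * phi_unit predicts the absolute high-frequency phase up
    to noise, so we snap to the nearest 2*pi multiple."""
    k = np.round((f_high * phi_unit - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k
```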

    Calibration of scanning laser range cameras with applications for machine vision

    Range images differ from conventional reflectance images because they give direct 3-D information about a scene. The last five years have seen a substantial increase in the use of range imaging technology in the areas of robotics, hazardous materials handling, and manufacturing. This has been fostered by cost reductions in reliable range scanning products, resulting primarily from advances in computing resources. In addition, the improved performance of modern range cameras has spurred an interest in new calibrations which take account of their unconventional design. Calibration implies both modeling and a numerical technique for finding parameters within the model. Researchers often refer to spherical coordinates when modeling range cameras; spherical coordinates, however, only approximate the behavior of the cameras. We therefore seek a more analytical approach based on analysis of the internal scanning mechanisms of the cameras. This research demonstrates that the Householder matrix [14] is a better tool for modeling these devices. We develop a general calibration technique which is both accurate and simple to implement. The method proposed here compares target points taken from range images to the known geometry of the target. The calibration is considered complete if the two point sets can be made to match closely in a least-squares sense by iteratively modifying model parameters. The literature, fortunately, is replete with numerical algorithms suited to this task. We have selected the simplex algorithm because it is particularly well suited to solving systems with many unknown parameters. In the course of this research, we implement the proposed calibration and find that the error in the range image data can be reduced from more than 60 mm per point RMS to less than 10 mm per point. We consider this result a success because analysis shows that the residual 10 mm error is due solely to random noise in the range values, not to the calibration itself.
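    The fitting loop described above — adjust model parameters until target points from the range image match the known target geometry in a least-squares sense, driven by the simplex algorithm — can be sketched as follows. The toy scanner model (a range scale/offset plus angular offsets) is an illustrative stand-in for the thesis's Householder-based model:

```python
# Minimal sketch of simplex-based range camera calibration, assuming the
# measured points come as (range, azimuth, elevation) rows and the known
# target geometry supplies matching 3D coordinates.
import numpy as np
from scipy.optimize import minimize

def toy_model(params, raw):
    """Map raw (range, azimuth, elevation) rows to 3D points.
    params = [range scale, range offset, azimuth offset, elevation offset]."""
    s, r0, da, de = params
    r, az, el = s * raw[:, 0] + r0, raw[:, 1] + da, raw[:, 2] + de
    return np.column_stack([r * np.cos(el) * np.cos(az),
                            r * np.cos(el) * np.sin(az),
                            r * np.sin(el)])

def calibrate(raw, known_xyz, params0):
    """Nelder-Mead (simplex) fit of model parameters to the known geometry."""
    cost = lambda p: np.sum((toy_model(p, raw) - known_xyz) ** 2)
    return minimize(cost, params0, method="Nelder-Mead").x
```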

    Extrinsic calibration of a camera-robot system under non-holonomic constraints

    A novel approach for the extrinsic calibration of a camera-robot system, i.e. the estimation of the pose of the camera with respect to the robot coordinate system, is presented. The method is based on the relative pose of a planar pattern as seen by the camera, estimated over a predefined set of simple robot motions. This set has been generated so as to exploit the kinematic constraints imposed by the robot architecture and the relative pose between the pattern and the camera coordinate system. The resulting calibration procedure is very simple, making it suitable for a broad range of applications. Experimental evaluations on both synthetic and real data demonstrate the validity of the proposed method.

    Sociedad Argentina de Informática e Investigación Operativa
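    The building block such a method relies on is estimating the planar pattern's pose in the camera frame at each robot stop. A minimal sketch using a chessboard target; the board dimensions, square size, and intrinsics are illustrative assumptions:

```python
# Minimal sketch: pose of a planar chessboard pattern in the camera frame.
import numpy as np
import cv2

def pattern_pose(image, K, dist, board=(9, 6), square=0.025):
    """Return (R, t) of the pattern in the camera frame, or None if not found.

    K: 3x3 camera matrix, dist: distortion coefficients, board: inner-corner
    grid size, square: corner spacing in metres (all illustrative).
    """
    found, corners = cv2.findChessboardCorners(image, board)
    if not found:
        return None
    # 3D corner coordinates in the pattern's own frame (z = 0 plane).
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```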

    Vision based estimation, localization, and mapping for autonomous vehicles

    In this dissertation, we focus on developing simultaneous localization and mapping (SLAM) algorithms with a robot-centric estimation framework, primarily using monocular vision sensors. A primary contribution of this work is to use a robot-centric mapping framework concurrently with a world-centric localization method. We exploit the differential equation of motion of the normalized pixel coordinates of each point feature in the robot body frame. Another contribution is a multiple-view geometry formulation with initial- and current-view projections of point features. We extract features from objects surrounding the river and from their reflections, and use the correspondences along with the attitude and altitude information of the robot. We demonstrate that the observability of the estimation system is improved by applying our robot-centric mapping framework and multiple-view measurements.

    Using the robot-centric mapping framework and multiple-view measurements, including reflections of features, we present a vision-based localization and mapping algorithm developed for an unmanned aerial vehicle (UAV) flying in a riverine environment. Our algorithm estimates the 3D positions of point features along a river and the pose of the UAV. The UAV is equipped with a lightweight monocular camera, an inertial measurement unit (IMU), a magnetometer, an altimeter, and an onboard computer. To our knowledge, we report the first result that exploits the reflections of features in a riverine environment for localization and mapping.

    We also present an omnidirectional vision-based localization and mapping system for a lawn mowing robot. Our algorithm can detect whether the robotic mower is contained in a permitted area. The robotic mower is modified with an omnidirectional camera, an IMU, a magnetometer, and a vehicle speed sensor. Here, we again exploit the robot-centric mapping framework. The estimator in our system generates a 3D point-based map with landmarks and concurrently defines a boundary of the mowing area using the estimated trajectory of the mower. The estimated boundary and the landmark map are provided for estimating the mowing location and for containment detection.

    First, we derive a nonlinear observer with contraction analysis and pseudo-measurements of the depth of each landmark to prevent the map estimator from diverging. Of particular interest for this work is ensuring that the estimator for localization and mapping will not fail due to the nonlinearity of the system model. For batch estimation, we design a hybrid extended Kalman smoother for our localization and robot-centric mapping model. Finally, we present a single-camera SLAM algorithm using a convex-optimization-based nonlinear estimator. We validate the effectiveness of our algorithms through numerical simulations and outdoor experiments.
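    The robot-centric idea — expressing landmarks in the moving body frame so their dynamics follow the robot's own velocity — can be sketched as below, assuming a static world landmark, body-frame linear and angular velocities, and a pinhole camera; the frame conventions and names are illustrative, not the dissertation's exact formulation:

```python
# Minimal sketch of robot-centric landmark kinematics and the normalized
# pixel measurement used in such estimators.
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix so that skew(w) @ p == np.cross(w, p)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def landmark_dot(p_body, v_body, w_body):
    """Time derivative of a static world landmark expressed in the moving
    body frame: p_dot = -w x p - v."""
    return -skew(w_body) @ p_body - v_body

def normalized_pixel(p_cam):
    """Pinhole measurement: normalized coordinates (x/z, y/z) of a
    camera-frame point."""
    return p_cam[:2] / p_cam[2]
```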

    Visual Navigation for Robots in Urban and Indoor Environments

    As a fundamental capability for mobile robots, navigation involves multiple tasks, including localization, mapping, motion planning, and obstacle avoidance. In unknown environments, a robot has to construct a map of the environment while simultaneously keeping track of its own location within the map. This is known as simultaneous localization and mapping (SLAM). For urban and indoor environments, SLAM is especially important since GPS signals are often unavailable. Visual SLAM uses cameras as the primary sensor and is a highly attractive but challenging research topic. The major challenge lies in robustness to lighting variation and uneven feature distribution; another is building semantic maps composed of high-level landmarks.

    To meet these challenges, we investigate feature fusion approaches for visual SLAM. The basic rationale is that since urban and indoor environments contain various feature types such as points and lines, combining these features should improve robustness, and meanwhile high-level landmarks can be defined as, or derived from, these combinations. We design a novel data structure, the multilayer feature graph (MFG), to organize five types of features and their internal geometric relationships. Building upon a two-view-based MFG prototype, we extend the application of MFG to image sequence-based mapping using an extended Kalman filter (EKF). We model and analyze how errors are generated and propagated through the construction of a two-view-based MFG, which enables us to treat each MFG as an observation in the EKF update step. We apply the MFG-EKF method to a building exterior mapping task and demonstrate its efficacy.

    A two-view-based MFG requires a sufficient baseline to be successfully constructed, which is not always feasible. We therefore devise a multiple-view-based algorithm to construct MFG as a global map. The proposed algorithm takes a video stream as input, initializes and iteratively updates MFG based on extracted key frames, and refines robot localization and MFG landmarks using local bundle adjustment. We show the advantage of our method by comparing it with state-of-the-art methods on multiple indoor and outdoor datasets.

    To avoid the scale ambiguity in monocular vision, we investigate the application of RGB-D sensing to SLAM. We propose an algorithm that fuses point and line features: we extract 3D points and lines from RGB-D data, analyze their measurement uncertainties, and compute camera motion using maximum likelihood estimation. We validate our method using both uncertainty analysis and physical experiments, where it outperforms its counterparts under both constant and varying lighting conditions.

    Besides visual SLAM, we also study specular object avoidance, which is a great challenge for range sensors. We propose a vision-based algorithm to detect planar mirrors. We derive geometric constraints for corresponding real-virtual features across images and employ RANSAC to develop a robust detection algorithm, which achieves a detection accuracy of 91.0%.
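    The planar-mirror geometry behind such real-virtual correspondences admits a compact hypothesis step for RANSAC: the midpoint of a real point and its virtual (reflected) counterpart lies on the mirror plane, and their difference vector is parallel to the plane normal. A minimal sketch under that assumption (the names and residual measure are illustrative, not the dissertation's exact formulation):

```python
# Minimal sketch of the planar-mirror constraint for real/virtual 3D pairs.
import numpy as np

def mirror_plane_from_pair(x, xv):
    """Hypothesize a mirror plane (unit normal n, offset d with n.x = d)
    from one real/virtual correspondence: the normal is parallel to x - xv
    and the midpoint lies on the plane."""
    n = x - xv
    n = n / np.linalg.norm(n)
    d = n @ ((x + xv) / 2.0)
    return n, d

def pair_residual(n, d, x, xv):
    """Reflect x across the hypothesized plane and measure the disagreement
    with the observed virtual point xv (used to score RANSAC inliers)."""
    x_reflected = x - 2.0 * ((n @ x) - d) * n
    return np.linalg.norm(x_reflected - xv)
```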