
    Optical Tracking System

    The presented optical tracking system allows intuitive control and programming of industrial robots by demonstration. The system is engineered with low-cost components. Using an active marker (IR-LEDs) in combination with a stereo vision configuration of the camera system and the selection of suitable algorithms for the image-processing chain, a positioning accuracy in the range of millimeters has been achieved. The communication between the tracking system and the robot is realized using the TCP/IP protocol via an Ethernet connection.
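    The abstract does not detail the reconstruction step; a minimal sketch of linear (DLT) triangulation for a calibrated stereo pair, the standard way to turn a matched marker detection in two views into a 3D position, is given below. The projection matrices and coordinates are toy values, not the paper's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point from a stereo pair.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image coordinates.
    Returns the estimated 3D point in the common world frame."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Toy rig: identity intrinsics, second camera offset 0.2 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.1, 0.05, 1.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # ≈ [0.1, 0.05, 1.0]
```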

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Applying close range digital photogrammetry in soil erosion studies

    Soil erosion due to rainfall and overland flow is a significant environmental problem. Studying the phenomenon requires accurate high-resolution measurements of soil surface topography and morphology. Close range digital photogrammetry with an oblique convergent configuration is proposed in this paper as a useful technique for such measurements, in the context of a flume-scale experimental study. The precision of the technique is assessed by comparing triangulation solutions and the resulting DEMs with varying tie point distributions and control point measurements, as well as by comparing DEMs extracted from different images of the same surface. Independent measurements were acquired using a terrestrial laser scanner for comparison with a DEM derived from photogrammetry. The results point to the need for a stronger geometric configuration to improve precision. They also suggest that the camera lens models were not fully adequate for the large object depths in this study. Nevertheless, the photogrammetric output can provide useful topographical information for soil erosion studies, provided limitations of the technique are duly considered.
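    The DEM comparison described above can be sketched as a "DEM of difference" computation over two co-registered grids; the function below is a generic illustration, not the authors' processing chain.

```python
import numpy as np

def dem_of_difference(dem_a, dem_b):
    """Cell-by-cell elevation difference between two co-registered DEMs
    sampled on the same grid (NaN marks no-data cells). Summary statistics
    of this difference map are a standard way to compare a photogrammetric
    surface against a laser-scanner benchmark."""
    diff = dem_a - dem_b
    stats = {
        "mean": float(np.nanmean(diff)),
        "rmse": float(np.sqrt(np.nanmean(diff ** 2))),
    }
    return diff, stats

# Tiny example: two 2x2 grids, one no-data cell.
a = np.array([[1.0, 2.0], [3.0, np.nan]])
b = np.array([[1.0, 1.5], [2.0, 4.0]])
diff, stats = dem_of_difference(a, b)
print(stats["mean"])   # mean over the 3 valid cells
```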

    Photogrammetric 3D model via smartphone GNSS sensor. Workflow, error estimate, and best practices

    Geotagged smartphone photos can be employed to build digital terrain models using structure from motion-multiview stereo (SfM-MVS) photogrammetry. Accelerometer, magnetometer, and gyroscope sensors integrated within consumer-grade smartphones can be used to record the orientation of images, which can be combined with location information provided by inbuilt global navigation satellite system (GNSS) sensors to geo-register the SfM-MVS model. The accuracy of these sensors is, however, highly variable. In this work, we use a 200 m-wide natural rocky cliff as a test case to evaluate the impact of consumer-grade smartphone GNSS sensor accuracy on the registration of SfM-MVS models. We built a high-resolution 3D model of the cliff, using an unmanned aerial vehicle (UAV) for image acquisition and ground control points (GCPs) located using a differential GNSS survey for georeferencing. This 3D model provides the benchmark against which terrestrial SfM-MVS photogrammetry models, built using smartphone images and registered using built-in accelerometer/gyroscope and GNSS sensors, are compared. Results show that satisfactory post-processing registrations of the smartphone models can be attained, requiring: (1) wide acquisition areas (scaling with GNSS error) and (2) the progressive removal of misaligned images, via an iterative process of model building and error estimation.
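    Step (2), the progressive removal of misaligned images, can be illustrated with a generic rejection loop: align the model camera positions to the GNSS positions, drop cameras whose residual exceeds a threshold, and refit. This is only a hedged sketch (rigid Kabsch alignment without scale, hypothetical threshold), not the authors' pipeline.

```python
import numpy as np

def kabsch(src, dst):
    """Rigid (rotation + translation) least-squares alignment of src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def register_with_rejection(model_pts, gnss_pts, thresh=1.0, max_iter=10):
    """Fit, measure per-camera residuals, drop outliers, refit, repeat."""
    keep = np.arange(len(model_pts))
    for _ in range(max_iter):
        R, t = kabsch(model_pts[keep], gnss_pts[keep])
        res = np.linalg.norm(model_pts[keep] @ R.T + t - gnss_pts[keep], axis=1)
        good = res < thresh
        if good.all():
            break
        keep = keep[good]
    return R, t, keep

# Synthetic check: 20 cameras, one grossly misaligned model position.
rng = np.random.default_rng(1)
gnss = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 0.5])
model = (gnss - t_true) @ R_true      # model frame = rotated/translated world
model[0] += 5.0                        # one misaligned camera
R, t, keep = register_with_rejection(model, gnss)
```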

    Stable Camera Motion Estimation Using Convex Programming

    We study the inverse problem of estimating n locations t_1, ..., t_n (up to global scale, translation and negation) in R^d from noisy measurements of a subset of the (unsigned) pairwise lines that connect them, that is, from noisy measurements of ±(t_i - t_j)/||t_i - t_j|| for some pairs (i, j) (where the signs are unknown). This problem is at the core of the structure from motion (SfM) problem in computer vision, where the t_i's represent camera locations in R^3. The noiseless version of the problem, with exact line measurements, has been considered previously under the general title of parallel rigidity theory, mainly in order to characterize the conditions for unique realization of locations. For noisy pairwise line measurements, current methods tend to produce spurious solutions that are clustered around a few locations. This sensitivity of the location estimates is a well-known problem in SfM, especially for large, irregular collections of images. In this paper we introduce a semidefinite programming (SDP) formulation, specially tailored to overcome the clustering phenomenon. We further identify the implications of parallel rigidity theory for the location estimation problem to be well-posed, and prove exact (in the noiseless case) and stable location recovery results. We also formulate an alternating direction method to solve the resulting semidefinite program, and provide a distributed version of our formulation for large numbers of locations. Specifically for the camera location estimation problem, we formulate a pairwise line estimation method based on robust camera orientation and subspace estimation. Lastly, we demonstrate the utility of our algorithm through experiments on real images.
    Comment: 40 pages, 12 figures, 6 tables; notation and some unclear parts updated, some typos corrected.
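    The quadratic cost underlying this formulation can be minimized spectrally in the noiseless case; the sketch below recovers locations from unsigned line measurements and is only the simplest baseline for the problem the abstract describes, not the paper's robust SDP solver.

```python
import numpy as np

def recover_locations(edges, lines, n, d=3):
    """Least-squares location recovery from unsigned pairwise lines.
    edges: list of (i, j) pairs; lines: unit vectors g ~ +/-(t_i - t_j)/||t_i - t_j||.
    Minimizes sum_ij ||P_ij (t_i - t_j)||^2 with P_ij = I - g g^T over
    configurations orthogonal to the global translations."""
    L = np.zeros((n * d, n * d))
    for (i, j), g in zip(edges, lines):
        P = np.eye(d) - np.outer(g, g)          # sign of g cancels here
        L[i*d:(i+1)*d, i*d:(i+1)*d] += P
        L[j*d:(j+1)*d, j*d:(j+1)*d] += P
        L[i*d:(i+1)*d, j*d:(j+1)*d] -= P
        L[j*d:(j+1)*d, i*d:(i+1)*d] -= P
    w, V = np.linalg.eigh(L)
    # Noiseless null space = d translations + the (centered) true configuration.
    T, _ = np.linalg.qr(np.kron(np.ones((n, 1)), np.eye(d)))
    basis = V[:, :d + 1]
    basis = basis - T @ (T.T @ basis)           # remove translation components
    col = int(np.argmax(np.linalg.norm(basis, axis=0)))
    return basis[:, col].reshape(n, d)          # locations up to scale and sign

# Exact recovery on a random complete graph of 6 locations.
rng = np.random.default_rng(0)
n, d = 6, 3
t = rng.normal(size=(n, d))
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
lines = []
for i, j in edges:
    g = t[i] - t[j]
    lines.append(rng.choice([-1.0, 1.0]) * g / np.linalg.norm(g))
est = recover_locations(edges, lines, n, d)
```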

    Vision Guided Robot Gripping Systems


    PANORAMA IMAGE SETS FOR TERRESTRIAL PHOTOGRAMMETRIC SURVEYS

    High resolution 3D models produced from photographs acquired with consumer-grade cameras are becoming increasingly common in the fields of geosciences. However, the quality of an image-based 3D model depends on the planning of the photogrammetric surveys. This means that the geometric configuration of the multi-view camera network and the control data have to be designed in accordance with the required accuracy, resolution and completeness. In practice, proper planning (of both photos and control data) of the photogrammetric survey, especially for terrestrial acquisition, is not always possible due to limited accessibility of the target object and the presence of occlusions. To solve these problems, we propose a different image acquisition strategy and we test different geo-referencing scenarios to deal with the practical issues of a terrestrial photogrammetric survey. The proposed photogrammetric survey procedure is based on the acquisition of a sequence of images in panorama mode by rotating the camera on a standard tripod. The offset of the pivot point from the projection center prevents the stitching of these images into a panorama. We demonstrate how to still take advantage of this capturing mode. The geo-referencing investigation consists of testing the use of directly observed coordinates of the camera positions, different ground control point (GCP) configurations, and GCPs with different accuracies, i.e. artificial targets vs. natural features. Images of the test field on a low-slope hill were acquired from the ground using an SLR camera. To validate the photogrammetric results, a terrestrial laser scanner survey is used as a benchmark.

    Single and multiple stereo view navigation for planetary rovers

    Get PDF
    © Cranfield University. This thesis deals with the challenge of autonomous navigation of the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques alone - as done in the literature - an unviable solution and necessitates the use of other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot’s egomotion problem. The homogeneity of Mars’ terrain makes the robustness of the low-level image processing techniques a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detection of features that are robust against illumination changes, together with unique matching and association of features, is a sought-after capability. A solution for robustness of features against illumination variation is proposed, combining Harris corner detection with a moment image representation. Whereas the former provides a technique for efficient feature detection, the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. Then, the addition of local feature descriptors guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars rover is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored. Alternative photogrammetry reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability. Because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed solutions for motion estimation. Also, the developed egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results prove the innovative methods presented here to be accurate and reliable approaches capable of solving the egomotion problem in a Mars environment.
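    The bucketing strategy mentioned above (keeping a bounded number of strong features per image cell to enforce a homogeneous spatial distribution) can be sketched as follows; the grid size and per-cell quota are illustrative, not the thesis's parameters.

```python
from collections import defaultdict

def bucket_features(features, img_w, img_h, nx=8, ny=6, per_cell=5):
    """features: iterable of (x, y, score) detections. Keeps at most
    `per_cell` strongest features in each cell of an nx-by-ny grid,
    so corners are spread homogeneously across the image."""
    cells = defaultdict(list)
    for x, y, score in features:
        cx = min(int(x * nx / img_w), nx - 1)
        cy = min(int(y * ny / img_h), ny - 1)
        cells[(cx, cy)].append((score, x, y))
    kept = []
    for pts in cells.values():
        pts.sort(reverse=True)                      # strongest first
        kept.extend((x, y, s) for s, x, y in pts[:per_cell])
    return kept

# 10 detections crowded into one cell -> only the 5 strongest survive.
feats = [(10.0 + i, 10.0, float(i)) for i in range(10)]
kept = bucket_features(feats, img_w=640, img_h=480)
print(len(kept))   # 5
```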

    Mixed marker-based/marker-less visual odometry system for mobile robots

    When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often leads to severe drifts in the computed estimates, making autonomous operations very hard to accomplish. This paper proposes a solution to alleviate the impact of the above issues by combining two vision-based pose estimation techniques working on relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images that are captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which is capable of estimating the relative frame-to-frame movements. Then, errors accumulated in the above step are corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which allows the robot to keep the drift bounded while additionally providing it with the navigation commands needed for autonomous flight. Accuracy and robustness of the designed technique are demonstrated using an off-the-shelf quadrotor via extensive experimental tests.
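    The combination of relative odometry with absolute marker fixes can be illustrated with a minimal SE(2) sketch: compose frame-to-frame increments, then overwrite the pose whenever a marker with known world pose is observed. This is a simplified stand-in for the paper's scheme (which bounds drift via such corrections), with made-up numbers.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D rigid transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

class OdometryWithMarkers:
    """Dead-reckoning on frame-to-frame increments, reset to an absolute
    pose whenever a known marker is observed."""
    def __init__(self):
        self.pose = np.eye(3)                    # robot pose in the world frame

    def step(self, dx, dy, dtheta):
        # Compose a visual-odometry increment expressed in the robot frame.
        self.pose = self.pose @ se2(dx, dy, dtheta)

    def marker_fix(self, marker_world, marker_in_robot):
        # world_T_robot = world_T_marker @ inv(robot_T_marker): the absolute
        # marker pose cancels whatever drift the increments accumulated.
        self.pose = marker_world @ np.linalg.inv(marker_in_robot)

o = OdometryWithMarkers()
for _ in range(3):
    o.step(1.0, 0.0, 0.01)                       # slightly drifting odometry
o.marker_fix(se2(3.0, 0.0, 0.0), se2(0.5, 0.0, 0.0))
print(o.pose[:2, 2])                             # drift-free position
```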

    Neuronal Specialization for Fine-Grained Distance Estimation using a Real-Time Bio-Inspired Stereo Vision System

    The human binocular system performs very complex operations in real-time tasks thanks to neuronal specialization and several specialized processing layers. For a classic computer vision system, performing the same operations requires high computational costs that, in many cases, prevent it from working in real time: this is the case for distance estimation. This work details the functionality of the biological processing system, as well as the neuromorphic engineering research branch, the main purpose of which is to mimic neuronal processing. A distance estimation system based on the calculation of binocular disparities with specialized neuron populations is developed. This system is characterized by several tests and executed in a real-time environment. The response of the system demonstrates its similarity to human binocular processing. Further, the results show that the implemented system can work in a real-time environment, with a distance estimation error of 15% (8% for the characterization tests).
    Ministerio de Ciencia, Innovación y Universidades TEC2016-77785-
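    For a classic rectified stereo rig, the disparity-to-depth relation the abstract alludes to is the closed form Z = f·B/d; the bio-inspired system replaces this computation with neuron populations, but the formula is useful context. Focal length and baseline below are assumed values, not the paper's hardware.

```python
def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.065):
    """Depth of a point from its horizontal disparity in a rectified
    stereo pair: Z = f * B / d, with f in pixels, B in meters, d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(52.0))   # ≈ 1.0 m for this toy rig
```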