
    Methods for a fusion of Optical Coherence Tomography and stereo camera image data

    This work investigates the combination of Optical Coherence Tomography and two cameras observing a microscopic scene. Stereo vision provides realistic images but is limited in penetration depth. Optical Coherence Tomography (OCT) gives access to subcutaneous structures, but 3D-OCT volume data do not give the surgeon a familiar view. Extending the stereo camera setup with OCT imaging combines the benefits of both modalities. To provide the surgeon with a convenient integration of OCT into the vision interface, we present an automated image-processing analysis of OCT and stereo camera data as well as combined imaging as an augmented reality visualization. To this end, we address OCT image noise, perform segmentation, and develop suitable registration objects and methods. The registration between stereo camera and OCT yields a Root Mean Square error of 284 μm, averaged over five measurements. The presented methods are fundamental for the fusion of both imaging modalities. Augmented reality is shown as an application of the results. Further developments will lead to fused visualization of subcutaneous structures, as information from OCT images, in stereo vision. © 2015 SPIE
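The reported registration quality (an RMS error of 284 μm over corresponding points) can be reproduced in principle by rigidly aligning corresponding 3D points from the two modalities and measuring the residual. A minimal sketch using the standard Kabsch/SVD alignment; the point data here are synthetic placeholders, not from the paper:

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation R and translation t such that R @ src + t ~ dst (Kabsch)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)       # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def rms_error(src, dst, R, t):
    """Root-mean-square residual of the registration, in the units of the points."""
    resid = dst - (src @ R.T + t)
    return np.sqrt((resid ** 2).sum(axis=1).mean())
```

With noise-free correspondences the residual is numerically zero; with real segmented OCT and stereo features the residual plays the role of the 284 μm figure quoted above.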

    Mosaics from arbitrary stereo video sequences

    Although mosaics are well established as a compact and non-redundant representation of image sequences, their application still suffers from restrictions on camera motion or has to deal with parallax errors. We present an approach that allows the construction of mosaics from arbitrary motion of a head-mounted camera pair. As there are no parallax errors when creating mosaics from planar objects, our approach first decomposes the scene into planar sub-scenes using stereo vision and creates a mosaic for each plane individually. The power of the presented mosaicing technique is evaluated in an office scenario, including an analysis of the parallax error
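Because a plane induces an exact homography between views, each planar sub-scene can be stitched into its mosaic without parallax by warping frames through a frame-to-mosaic homography. A small sketch of the core mapping, with a made-up homography and frame size (not values from the paper): mapping the frame corners tells us where the frame lands on the mosaic canvas.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography H (exact for planar scenes)."""
    p = np.hstack([pts, np.ones((len(pts), 1))])    # to homogeneous coordinates
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]                     # perspective divide

# Hypothetical frame-to-mosaic homography for one planar sub-scene
H = np.array([[1.0, 0.05, 100.0],
              [0.02, 1.0,  40.0],
              [0.0,  0.0,   1.0]])
corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], float)
warped = apply_homography(H, corners)   # bounding box of `warped` sizes the canvas
```

In a full pipeline, one such homography per frame and per plane is estimated from the stereo-derived plane parameters, and the warped frames are blended into the per-plane mosaic.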

    Computer Vision and Image Understanding xxx

    This paper presents a panoramic virtual stereo vision approach to the problem of detecting and localizing multiple moving objects (e.g., humans) in an indoor scene. Two panoramic cameras, residing on different mobile platforms, compose a virtual stereo sensor with a flexible baseline. A novel "mutual calibration" algorithm is proposed, where panoramic cameras on two cooperative moving platforms are dynamically calibrated by looking at each other. A detailed numerical analysis of the error characteristics of the panoramic virtual stereo vision (mutual calibration error, stereo matching error, and triangulation error) is given to derive rules for optimal view planning. Experimental results are discussed for detecting and localizing multiple humans in motion using two cooperative robot platforms.
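Once the two panoramic cameras are mutually calibrated, a target seen by both defines two viewing rays, and its 3D position is the (approximate) intersection of those rays. A minimal midpoint-triangulation sketch; the camera centres, baseline, and target below are hypothetical illustrations, not the paper's data:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation: closest point between two 3D rays o + s*d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares for s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|
    A = np.stack([d1, -d2], axis=1)                 # 3x2 system
    s, t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    p1 = o1 + s * d1
    p2 = o2 + t * d2
    return 0.5 * (p1 + p2)                          # midpoint of closest approach

# Two mobile platforms forming a 1 m virtual baseline (hypothetical)
o1 = np.zeros(3)
o2 = np.array([1.0, 0.0, 0.0])
target = np.array([1.0, 2.0, 5.0])
est = triangulate_midpoint(o1, target - o1, o2, target - o2)
```

With noisy ray directions, the gap between the two closest points gives a per-target estimate of the triangulation error the paper analyzes; enlarging the (flexible) baseline shrinks it.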

    Analysis of Performance of Stereoscopic-Vision Software

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used to compute distances to terrain points for constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error, but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel
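The link between the 0.32-pixel disparity noise and down-range error follows from the standard pinhole-stereo depth relation z = fB/d and first-order error propagation. A small sketch; the focal length and baseline below are hypothetical, only the 0.32 px figure comes from the abstract:

```python
import numpy as np

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo: z = f * B / d."""
    return f_px * baseline_m / disparity_px

def downrange_sigma(f_px, baseline_m, z, sigma_d_px):
    """First-order propagation of disparity noise: |dz/dd| = z^2 / (f * B)."""
    return (z ** 2) * sigma_d_px / (f_px * baseline_m)

f, B = 800.0, 0.30                                   # hypothetical setup (px, m)
z = depth_from_disparity(f, B, disparity_px=24.0)    # 10.0 m range
sigma_z = downrange_sigma(f, B, z, sigma_d_px=0.32)  # grows quadratically with z
```

The quadratic growth of `sigma_z` with range is why a 0.32-pixel disparity error that is negligible nearby becomes the dominant down-range error term at distance.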

    Evaluation of CNN-Based Human Pose Estimation for Body Segment Lengths Assessment

    Human pose estimation (HPE) methods based on convolutional neural networks (CNN) have demonstrated significant progress and achieved state-of-the-art results on human pose datasets. In this study, we aimed to assess the performance of CNN-based HPE methods for measuring anthropometric data. A Vicon motion analysis system serving as the reference and a stereo vision system recorded ten asymptomatic subjects standing in a static posture in front of the stereo vision system. Eight HPE methods estimated the 2D poses, which were transformed to 3D poses using the stereo vision system. Percentage of correct keypoints, 3D error, and absolute error of the body segment lengths were the evaluation measures used to assess the results. Percentage of correct keypoints, the standard metric for 2D pose estimation, showed that the HPE methods could estimate the 2D body joints with a minimum accuracy of 99%. Meanwhile, the average 3D error and absolute error of the body segment lengths were 5 cm
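The segment-length evaluation above reduces to comparing Euclidean distances between triangulated 3D joints against the Vicon reference. A minimal sketch; the joint coordinates below are hypothetical placeholders, not study data:

```python
import numpy as np

def segment_length(p_a, p_b):
    """Euclidean length of a body segment from two 3D joint positions (metres)."""
    return float(np.linalg.norm(np.asarray(p_a, float) - np.asarray(p_b, float)))

# Hypothetical hip/knee positions: HPE-derived (via stereo) vs. Vicon reference
hip_hpe,  knee_hpe = [0.08, 0.45, 2.48], [0.10, 0.02, 2.51]
hip_ref,  knee_ref = [0.08, 0.44, 2.49], [0.11, 0.01, 2.50]

thigh_hpe = segment_length(hip_hpe, knee_hpe)
thigh_ref = segment_length(hip_ref, knee_ref)
err = abs(thigh_hpe - thigh_ref)        # absolute segment-length error (m)
```

Averaging `err` over subjects and segments gives the absolute body-segment-length error reported in the study (about 5 cm).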

    Robust visual servoing in 3d reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3-D applications
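The continuous error-correction loop can be caricatured as a proportional law: each cycle, the measured trajectory error drives a small corrective motion, so the end-effector converges to the target without an accurate calibration. A toy sketch only; the gain and positions are hypothetical, and the paper's controller acts on binocular optical-flow measurements rather than known 3D positions:

```python
import numpy as np

def servo_step(x_eff, x_target, gain=0.2):
    """One proportional correction: move a fraction of the measured error."""
    return x_eff + gain * (x_target - x_eff)

x = np.array([0.0, 0.0, 0.0])           # hypothetical end-effector position
target = np.array([0.3, -0.1, 0.5])     # hypothetical target position
for _ in range(50):
    x = servo_step(x, target)           # error shrinks by (1 - gain) per cycle
```

Because each cycle re-measures the error visually, a miscalibrated gain or gripper offset only slows convergence rather than biasing the final position, which is the robustness property the paper exploits.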

    Error Analysis in a Stereo Vision-Based Pedestrian Detection Sensor for Collision Avoidance Applications

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out under manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data for both the pedestrian and the host vehicle locations. The performed field tests provided encouraging results and proved the validity of the proposed sensor for use in automotive applications such as autonomous pedestrian collision avoidance
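The stereo quantization error mentioned above comes from disparity being resolved in discrete steps: with z = fB/d, the depth gap between consecutive disparities d and d+1 is fB/(d(d+1)), which grows roughly as z²/(fB). A small sketch of how this guides sensor setup; the focal length and baseline below are hypothetical, not the paper's values:

```python
def quantization_gap(f_px, baseline_m, d_px):
    """Depth gap (m) between consecutive integer disparities d and d+1:
    the smallest depth change resolvable at that range."""
    fB = f_px * baseline_m
    return fB / d_px - fB / (d_px + 1)

# Hypothetical low-cost setup: f = 600 px, baseline = 0.12 m
gaps = {d: quantization_gap(600.0, 0.12, d) for d in (40, 10, 5)}
```

Reading off `gaps` shows why far-away pedestrians (small disparity) have coarse depth resolution, and why a larger baseline or focal length is the lever for meeting a given range-accuracy requirement.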

    Design and Implementation of the University of Maryland Keck Laboratory for the Analysis of Visual Movement

    The Keck Laboratory for the Analysis of Visual Movement is a state-of-the-art multi-perspective imaging laboratory recently established at the University of Maryland. In this paper, we describe the design and architecture of the lab, which is currently being used to support many computer vision studies. In particular, we discuss camera synchronization, image resolution analysis, image noise analysis, stereo error analysis, video capture, lighting, and calibration hardware. (Also UMIACS-TR-2002-11)

    Calibration pattern detection and feature points coordinates determination

    An algorithm for detecting the feature points of a calibration pattern is proposed for stereo vision system calibration tasks. The method analyzes both the shape of the feature points and their spatial arrangement. Test results show reliable detection under noise and under rotation relative to the camera over a wide range (up to 45 degrees) without loss of accuracy, which is achieved through detailed calculations and statistical (geometric and spatial) analysis of the feature points. With pixel-accurate coordinates of the calibration-pattern points in the projection plane, the proposed algorithm yields an average error of 1.2 pixels
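The spatial-arrangement analysis can be illustrated with a simple nearest-neighbour spacing filter: genuine calibration-pattern points share a common grid spacing, while spurious detections do not. This is a minimal sketch of the idea under that assumption, with a made-up tolerance; it is not the paper's algorithm:

```python
import numpy as np

def filter_by_spacing(points, tol=0.25):
    """Keep candidate feature points whose nearest-neighbour distance is
    within tol (relative) of the median spacing (spatial-consistency check)."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)  # pairwise distances
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)                                       # nearest-neighbour dist
    med = np.median(nn)
    return pts[np.abs(nn - med) <= tol * med]

# 3x3 grid of true pattern points plus one spurious detection
grid = np.array([[x, y] for x in range(3) for y in range(3)], float)
candidates = np.vstack([grid, [[10.0, 10.0]]])
kept = filter_by_spacing(candidates)
```

Because the check uses the median spacing rather than absolute coordinates, it is unaffected by in-plane rotation of the pattern, consistent with the rotation tolerance reported above.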