507 research outputs found

    Performance Evaluation of Vision-Based Algorithms for MAVs

    An important focus of current research in the field of Micro Aerial Vehicles (MAVs) is to increase the safety of their operation in general unstructured environments. Especially indoors, where GPS cannot be used for localization, reliable algorithms for localization and mapping of the environment are necessary in order to keep an MAV airborne safely. In this paper, we compare vision-based real-time-capable methods for localization and mapping and point out their strengths and weaknesses. Additionally, we describe algorithms for state estimation, control and navigation, which use the localization and mapping results of our vision-based algorithms as input.
    Comment: Presented at OAGM Workshop, 2015 (arXiv:1505.01065)

    Depth Super-Resolution Meets Uncalibrated Photometric Stereo

    A novel depth super-resolution approach for RGB-D sensors is presented. It disambiguates depth super-resolution through high-resolution photometric clues and, symmetrically, it disambiguates uncalibrated photometric stereo through low-resolution depth cues. To this end, an RGB-D sequence is acquired from the same viewing angle, while illuminating the scene from various uncalibrated directions. This sequence is handled by a variational framework which fits high-resolution shape and reflectance, as well as lighting, to both the low-resolution depth measurements and the high-resolution RGB ones. The key novelty consists in a new PDE-based photometric stereo regularizer which implicitly ensures surface regularity. This allows depth super-resolution to be carried out in a purely data-driven manner, without the need for any ad-hoc prior or material calibration. Real-world experiments are carried out using an out-of-the-box RGB-D sensor and a hand-held LED light source.
    Comment: International Conference on Computer Vision (ICCV) Workshop, 2017
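The joint fitting described in this abstract can be sketched as a generic energy of the following form. All symbols here are illustrative assumptions, not the paper's notation: z is the high-resolution depth map, rho the albedo, l_i the (unknown, uncalibrated) lighting of the i-th image I_i, n(z) the normal field induced by z, D a downsampling operator, and z_0 the low-resolution depth measurements. The paper's actual formulation additionally uses its PDE-based photometric-stereo regularizer in place of any explicit smoothness prior.

```latex
\min_{z,\,\rho,\,\{\ell_i\}} \;
\underbrace{\sum_i \bigl\| I_i - \rho \,\langle \ell_i, n(z) \rangle \bigr\|^2}_{\text{photometric fit to high-res images}}
\;+\; \lambda \,\underbrace{\bigl\| D z - z_0 \bigr\|^2}_{\text{fidelity to low-res depth}}
```

Each data term resolves the other's ambiguity: the low-resolution depth fixes the global scale/bas-relief ambiguity of uncalibrated photometric stereo, while the shading term supplies the high-frequency detail that plain depth upsampling cannot recover.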

    Reliable fusion of ToF and stereo depth driven by confidence measures

    In this paper we propose a framework for the fusion of depth data produced by a Time-of-Flight (ToF) camera and a stereo vision system. Initially, depth data acquired by the ToF camera are upsampled by an ad-hoc algorithm based on image segmentation and bilateral filtering. In parallel, a dense disparity map is obtained using the Semi-Global Matching stereo algorithm. Reliable confidence measures are extracted for both the ToF and stereo depth data. In particular, the ToF confidence also accounts for the mixed-pixel effect, and the stereo confidence accounts for the relationship between the pointwise matching costs and the cost obtained by the semi-global optimization. Finally, the two depth maps are synergistically fused by enforcing the local consistency of depth data, accounting for the confidence of the two data sources at each location. Experimental results clearly show that the proposed method produces accurate high-resolution depth maps and outperforms the compared fusion algorithms.
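The core idea of confidence-driven fusion can be illustrated with a minimal sketch. This is a simplified per-pixel confidence-weighted average, not the paper's locally-consistent fusion; the function name and interface are hypothetical:

```python
import numpy as np

def fuse_depth(d_tof, c_tof, d_stereo, c_stereo, eps=1e-9):
    """Fuse two depth maps pixel-wise, weighting each source by its
    confidence map. Where one sensor is unreliable (low confidence),
    the other dominates the fused estimate."""
    w = c_tof + c_stereo + eps  # eps guards against zero total confidence
    return (c_tof * d_tof + c_stereo * d_stereo) / w
```

The paper goes further by propagating these confidences through a local-consistency framework rather than fusing each pixel independently, but the weighting principle is the same.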

    Observations of Detailed Structure in the Solar Wind at 1 AU with STEREO/HI-2

    Heliospheric imagers offer the promise of remote sensing of large-scale structures present in the solar wind. The STEREO/HI-2 imagers, in particular, offer high-resolution, very low noise observations of the inner heliosphere but have not yet been exploited to their full potential. This is in part because the signal of interest, Thomson-scattered sunlight from free electrons, is ~1000 times fainter than the background visual field in the images, making background subtraction challenging. We have developed a procedure for separating the Thomson-scattered signal from the other background/foreground sources in the HI-2 data. Using only the Level 1 data from STEREO/HI-2, we are able to generate calibrated imaging data of the solar wind with a sensitivity of a few times 1e-17 Bsun, compared to the background signal of a few times 1e-13 Bsun. These images reveal detailed spatial structure in CMEs and the solar wind at projected solar distances in excess of 1 AU, at the instrumental motion-blur resolution limit of 1-3 degrees. CME features visible in the newly reprocessed data from December 2008 include leading-edge pileup, interior voids, filamentary structure, and rear cusps. "Quiet" solar wind features include V-shaped structure centered on the heliospheric current sheet, plasmoids, and "puffs" that correspond to the density fluctuations observed in situ. We compare many of these structures with in-situ features detected near 1 AU. The reprocessed data demonstrate that it is possible to perform detailed structural analyses of heliospheric features with visible light imagery, at distances from the Sun of at least 1 AU.
    Comment: Accepted by the Astrophysical Journal
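The basic challenge of extracting a faint signal from a ~1000x brighter quasi-static background can be illustrated with a minimal per-pixel temporal model. The actual HI-2 pipeline is far more elaborate (starfield and F-corona removal, pointing corrections); this sketch, with hypothetical names, only shows the idea of modeling the background as a low temporal percentile at each pixel:

```python
import numpy as np

def subtract_background(frames, q=5.0):
    """Estimate a quasi-static background as the q-th per-pixel
    percentile over time, then subtract it, leaving transient
    signal (e.g. passing density structures) in the residual.

    frames: array of shape (T, H, W), a co-aligned image sequence.
    """
    bg = np.percentile(frames, q, axis=0)  # per-pixel temporal floor
    return frames - bg
```

A low percentile (rather than the mean) keeps transient bright features from leaking into the background estimate.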

    Cross-calibration of Time-of-flight and Colour Cameras

    Time-of-flight cameras provide depth information, which is complementary to the photometric appearance of the scene in ordinary images. It is desirable to merge the depth and colour information in order to obtain a coherent scene representation. However, the individual cameras will have different viewpoints, resolutions and fields of view, which means that they must be mutually calibrated. This paper presents a geometric framework for this multi-view and multi-modal calibration problem. It is shown that three-dimensional projective transformations can be used to align depth and parallax-based representations of the scene, with or without Euclidean reconstruction. A new evaluation procedure is also developed; this allows the reprojection error to be decomposed into calibration and sensor-dependent components. The complete approach is demonstrated on a network of three time-of-flight and six colour cameras. The applications of such a system to a range of automatic scene-interpretation problems are discussed.
    Comment: 18 pages, 12 figures, 3 tables
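The alignment via three-dimensional projective transformations mentioned above can be sketched as mapping homogeneous 3-D points through a 4x4 matrix H. This is a generic illustration, not the paper's estimation procedure, and the names are hypothetical:

```python
import numpy as np

def apply_projective(H, X):
    """Map N x 3 Euclidean points through a 4x4 projective transform H.

    Points are lifted to homogeneous coordinates, transformed, and
    de-homogenized. Because H is only defined up to scale, aligning
    depth- and parallax-based reconstructions this way needs no
    Euclidean (metric) calibration.
    """
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])  # lift to homogeneous
    Yh = Xh @ H.T                                  # apply transform
    return Yh[:, :3] / Yh[:, 3:4]                  # de-homogenize
```

Scaling H by any nonzero constant leaves the mapped points unchanged, which is the projective freedom the abstract's "with or without Euclidean reconstruction" claim exploits.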