13 research outputs found

    Geometric Calibration of Micro-Lens-Based Light Field Cameras Using Line Features

    No full text

    Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    No full text
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
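The scan-accumulation step described in the abstract can be sketched as follows. This is a minimal illustration under assumed conventions (the function name, the `(R, t)` pose representation, and the choice of scan plane are mine, not the paper's):

```python
import numpy as np

def accumulate_scans(poses, scans):
    """Transform each vertical 2D laser scan into the world frame using the
    per-frame 6-DoF pose, and accumulate the points into a single 3D cloud.

    poses: list of (R, t), with R a 3x3 rotation and t a 3-vector mapping
           sensor coordinates to world coordinates (p_world = R p + t).
    scans: list of (N, 2) arrays; each row is a point (y, z) in the scan
           plane, assuming x = 0 in the sensor frame (vertical scan plane).
    """
    cloud = []
    for (R, t), scan in zip(poses, scans):
        # Lift 2D scan points into the 3D sensor frame: (0, y, z).
        pts = np.column_stack([np.zeros(len(scan)), scan])
        # Rigid transform into the world frame and accumulate.
        cloud.append(pts @ R.T + t)
    return np.vstack(cloud)
```

Once per-frame motion has been estimated (from image features plus laser data, as the paper describes), the reconstruction itself reduces to this kind of pose-driven accumulation.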

    Workshop on computational photography and low-level vision

    No full text
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7728 LNCS, Part 1, p. VIII

    Globally Optimal Relative Pose Estimation for Camera on a Selfie Stick

    No full text

    Structure-From-Motion in 3D Space Using 2D Lidars

    No full text
This paper presents a novel structure-from-motion methodology using 2D lidars (Light Detection And Ranging). In 3D space, 2D lidars do not provide sufficient information for pose estimation. For this reason, additional sensors have been used along with the lidar measurement. In this paper, we use a sensor system that consists of only 2D lidars, without any additional sensors. We propose a new method of estimating both the 6D pose of the system and the surrounding 3D structures. We compute the pose of the system using line segments of scan data and their corresponding planes. After discarding the outliers, both the pose and the 3D structures are refined via nonlinear optimization. Experiments with both synthetic and real data show the accuracy and robustness of the proposed method.
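The line-segment-to-plane correspondence at the heart of the pose estimate can be sketched as a residual function, of the kind a nonlinear optimizer would minimize. This is a hedged sketch of the idea, not the paper's code; the function name, array layouts, and plane parameterization are assumptions:

```python
import numpy as np

def line_plane_residuals(R, t, segments, planes):
    """Signed point-to-plane residuals for refining a 6-DoF pose (R, t) from
    2D-lidar scan line segments and their corresponding 3D planes.

    segments: (M, 2, 3) array of segment endpoints in the sensor frame.
    planes:   (M, 4) array of plane coefficients (a, b, c, d), with (a, b, c)
              a unit normal, so a world point p lies on the plane when
              n . p + d = 0.
    """
    res = []
    for (p0, p1), (a, b, c, d) in zip(segments, planes):
        n = np.array([a, b, c])
        for p in (p0, p1):
            q = R @ p + t          # endpoint transformed into the world frame
            res.append(n @ q + d)  # signed distance to the corresponding plane
    return np.array(res)
```

At the correct pose, every transformed segment endpoint lies on its corresponding plane and the residual vector is zero; feeding this function to a standard nonlinear least-squares solver would implement the refinement step the abstract mentions.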

    Autonomous homing based on laser-camera fusion system

    No full text
Building maps of unknown environments is a critical factor for autonomous navigation and homing, and this problem is especially challenging in large-scale environments. Recently, sensor fusion systems such as combinations of cameras and laser sensors have become popular in the effort to ensure a general level of performance in this task. In this paper, we present a new homing method in a large-scale environment using a laser-camera fusion system. Instead of fusing data to form a single map builder, we adaptively select sensor data to handle environments which contain ambiguity. For autonomous homing, we propose a new mapping strategy for building a hybrid map and a return strategy for selecting the next target waypoints efficiently. The experimental results demonstrate that the proposed algorithm enables the autonomous homing of a robot in large-scale indoor environments in real time. © 2012 IEEE.

    Depth from a Light Field Image with Learning-based Matching Costs

    No full text
One of the core applications of light field imaging is depth estimation. To acquire a depth map, existing approaches apply a single photo-consistency measure to an entire light field. However, this is not an optimal choice because of the non-uniform light field degradations produced by limitations in the hardware design. In this paper, we introduce a pipeline that automatically determines the best configuration for photo-consistency measure, which leads to the most reliable depth label from the light field. We analyzed the practical factors affecting degradation in lenslet light field cameras, and designed a learning based framework that can retrieve the best cost measure and optimal depth label. To enhance the reliability of our method, we augmented an existing light field benchmark to simulate realistic source dependent noise, aberrations, and vignetting artifacts. The augmented dataset was used for the training and validation of the proposed approach. Our method was competitive with several state-of-the-art methods for the benchmark and real-world light field datasets.
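The baseline that the abstract improves upon — applying one photo-consistency measure and picking the lowest-cost depth label — can be sketched in a few lines. This is a generic winner-take-all sketch under assumed shapes; the paper's learned cost selection and refinement are deliberately omitted:

```python
import numpy as np

def photo_consistency_cost(sheared_views):
    """Variance across angular samples as a simple photo-consistency measure.
    sheared_views: (A, H, W) intensities of A sub-aperture views sheared to
    one candidate depth label; low variance means the label is consistent."""
    return np.var(sheared_views, axis=0)

def best_depth_labels(cost_volume):
    """Winner-take-all depth: per pixel, pick the label with the lowest cost.
    cost_volume: (L, H, W) array, one cost map per candidate depth label."""
    return np.argmin(cost_volume, axis=0)
```

The paper's contribution sits on top of this pipeline: rather than committing to a single measure such as the variance above, it learns which cost measure to trust per region before selecting the label.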
