
    Direct Monocular Odometry Using Points and Lines

    Most visual odometry algorithms for a monocular camera focus on points, either by matching features or by directly aligning pixel intensities, while ignoring a common but important geometric entity: edges. In this paper, we propose an odometry algorithm that combines points and edges to benefit from the advantages of both direct and feature-based methods. It works better in texture-less environments and is also more robust to lighting changes and fast motion, since the edge term increases the convergence basin. We maintain a depth map for the keyframe; in the tracking part, the camera pose is recovered by minimizing both the photometric error and the geometric error to the matched edges in a probabilistic framework. In the mapping part, edges are used to speed up stereo matching and increase its accuracy. On various public datasets, our algorithm achieves performance better than or comparable to state-of-the-art monocular odometry methods. In some challenging texture-less environments, our algorithm reduces the state estimation error by over 50%. Comment: ICRA 201
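    As a rough illustration of the tracking objective this abstract describes, the Python sketch below combines squared photometric residuals with squared point-to-matched-edge distances in one cost; the projection helper, the noise scales, and the nearest-neighbour intensity lookup are assumptions made for the sketch, not details of the paper's implementation.

    import numpy as np

    def project(K, T, X):
        """Project 3D points X (N,3) into the image using pose T (4x4, world->camera)
        and intrinsics K (3x3); returns pixel coordinates (N,2)."""
        Xc = T[:3, :3] @ X.T + T[:3, 3:4]   # points in the camera frame, shape (3,N)
        uv = (K @ Xc) / Xc[2]               # perspective projection
        return uv[:2].T

    def combined_tracking_cost(T, K, pts_photo, ref_intensity, cur_image,
                               pts_edge, edge_pt, edge_dir,
                               sigma_photo=10.0, sigma_edge=1.0):
        """Photometric term: intensity differences at reprojected point locations.
        Geometric term: perpendicular distances of reprojected edge points to their
        matched 2D lines (point edge_pt, unit direction edge_dir, both (M,2))."""
        # direct (photometric) residuals
        uv = np.round(project(K, T, pts_photo)).astype(int)
        cur_intensity = cur_image[uv[:, 1], uv[:, 0]]        # nearest-neighbour lookup
        r_photo = (cur_intensity - ref_intensity) / sigma_photo

        # edge (geometric) residuals: signed point-to-line distance
        d = project(K, T, pts_edge) - edge_pt
        r_edge = (d[:, 0] * edge_dir[:, 1] - d[:, 1] * edge_dir[:, 0]) / sigma_edge

        return np.sum(r_photo ** 2) + np.sum(r_edge ** 2)

    In a tracker, a nonlinear least-squares solver would minimize such a cost over the pose parameters; the two sigma values weight the direct and edge terms, roughly playing the role of noise levels in a probabilistic weighting.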

    Keyframe-based monocular SLAM: design, survey, and future directions

    Extensive research in the field of monocular SLAM over the past fifteen years has yielded workable systems that have found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common for some time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM system according to specific environmental constraints. Second, it presents a survey covering the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementations and critically assessing the specific strategies adopted in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM, namely illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.
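    To make "keyframe-based" concrete, the snippet below sketches a hypothetical keyframe-selection heuristic of the kind such systems rely on; the criteria and thresholds are illustrative assumptions, not taken from any particular system covered by the survey.

    def need_new_keyframe(tracked_ratio, frames_since_keyframe, median_parallax_deg,
                          min_ratio=0.7, max_gap=20, min_parallax=1.0):
        """Decide whether to promote the current frame to a keyframe: tracking
        quality has dropped, too many frames have passed, or enough parallax has
        accumulated for reliable triangulation of new landmarks."""
        if tracked_ratio < min_ratio:            # losing features w.r.t. the reference keyframe
            return True
        if frames_since_keyframe > max_gap:      # keep the keyframe graph temporally dense
            return True
        return median_parallax_deg > min_parallax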

    The Application of Fourier Transform in the Interpretation of Subsurface Stratigraphy

    General seismic data interpretation involves direct fault and horizon mapping, sequence stratigraphy, and seismic modeling to produce structural, stratigraphic, and reservoir maps for the delineation, exploration, and production of hydrocarbons in oil fields. The first two methods operate on stacked and migrated data, while the third is done without adequate calibration, with inadequate display of the final stacks, with coarse processing, and in the time domain. Owing to inherent noise, actual hydrocarbon entrapments are rarely detailed well enough to permit reliable location of wells from these studies alone. This paper presents the results of applying a time-frequency transform to 3D seismic data over an oil field in the Niger Delta. The aim of the study was to develop a robust technique for mapping subtle stratigraphic units, which are usually masked after normal data interpretation, using a spectral algorithm. The discrete Fourier transform applied in the interpretation of the 3D seismic data filters the field data recorded in time and recovers lost sub-seismic geologic information content in the frequency domain. The algorithm is based on the fast Fourier transform technique and was developed within Matlab. The spectral decomposition yielded frequency maps (slices) at the data sampling interval (4 ms) over the reservoir window. The maps revealed sub-seismic faults, differences in lithology, and better reservoir delimitation. The results gave an enhanced structural disposition of the reservoir bed and a more detailed indication of the variation of reservoir character with depth. Keywords: Fourier transform, spectral decomposition
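    As a rough sketch of this kind of spectral decomposition (in Python rather than the authors' Matlab code), the snippet below takes a windowed Fourier transform of a single seismic trace sampled at 4 ms and extracts its magnitude at one target frequency, which is the per-trace ingredient of a frequency slice; the window length, Hann taper, and 30 Hz example frequency are assumptions.

    import numpy as np

    def frequency_slice(trace, dt=0.004, window_len=32, freq_hz=30.0):
        """Magnitude of the windowed DFT of `trace` at `freq_hz`, one value per sample."""
        n, half = len(trace), window_len // 2
        taper = np.hanning(window_len)
        k = np.argmin(np.abs(np.fft.rfftfreq(window_len, d=dt) - freq_hz))  # nearest DFT bin
        padded = np.pad(trace, half, mode="constant")
        out = np.empty(n)
        for t in range(n):
            seg = padded[t:t + window_len] * taper
            out[t] = np.abs(np.fft.rfft(seg)[k])
        return out

    # Example: a 30 Hz burst in a 1 s synthetic trace lights up on the 30 Hz slice.
    t = np.arange(0, 1.0, 0.004)
    trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
    slice_30hz = frequency_slice(trace, freq_hz=30.0)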

    Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution

    Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against state-of-the-art methods in terms of speed and accuracy. Comment: This work is accepted in CVPR 2017. The code and datasets are available at http://vllab.ucmerced.edu/wlai24/LapSRN
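    The PyTorch sketch below illustrates one 2x pyramid level in the spirit of LapSRN together with a Charbonnier loss; the channel count, depth, and single-channel (luminance) output are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    def charbonnier_loss(pred, target, eps=1e-3):
        """Robust L1-like loss: mean of sqrt((x - y)^2 + eps^2)."""
        return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

    class LapLevel(nn.Module):
        """One 2x level: refine features, upsample them with a transposed convolution,
        predict a high-frequency residual, and add it to the upsampled image."""
        def __init__(self, channels=64, depth=3):
            super().__init__()
            body = []
            for _ in range(depth):
                body += [nn.Conv2d(channels, channels, 3, padding=1),
                         nn.LeakyReLU(0.2, inplace=True)]
            self.feature_embed = nn.Sequential(*body)
            self.feature_up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
            self.residual = nn.Conv2d(channels, 1, 3, padding=1)   # predicted sub-band residual
            self.image_up = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)

        def forward(self, feats, img):
            feats = self.feature_up(self.feature_embed(feats))
            return feats, self.image_up(img) + self.residual(feats)

    Stacking two such levels after an initial feature-extraction convolution yields 2x and 4x outputs in a single forward pass, and supervising each output with charbonnier_loss against its ground-truth scale mirrors the deep supervision described in the abstract.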