
    Evaluation of CNN-based Single-Image Depth Estimation Methods

    While interest in deep models for single-image depth estimation is growing, established schemes for their evaluation are still limited. We propose a set of novel quality criteria that allow for a more detailed analysis by focusing on specific characteristics of depth maps. In particular, we address the preservation of edges and planar regions, depth consistency, and absolute distance accuracy. To employ these metrics in evaluating and comparing state-of-the-art single-image depth estimation approaches, we provide a new high-quality RGB-D dataset. We used a DSLR camera together with a laser scanner to acquire high-resolution images and highly accurate depth maps. Experimental results show the validity of our proposed evaluation protocol.
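The edge- and plane-specific criteria are defined in the paper itself; as a rough point of reference, two standard depth-accuracy measures of the kind such protocols build on (absolute relative error and threshold accuracy) can be sketched as follows. This is an illustrative sketch, not the paper's proposed criteria:

```python
import numpy as np

def abs_rel_error(pred, gt, eps=1e-8):
    """Mean absolute relative depth error over valid ground-truth pixels."""
    mask = gt > eps  # evaluate only where ground truth is defined
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))

def delta_accuracy(pred, gt, thresh=1.25, eps=1e-8):
    """Fraction of pixels whose ratio max(pred/gt, gt/pred) stays below thresh."""
    mask = gt > eps
    ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
    return float(np.mean(ratio < thresh))

# toy example: a prediction that is uniformly 10% too deep
gt = np.full((4, 4), 2.0)
pred = np.full((4, 4), 2.2)
print(abs_rel_error(pred, gt))   # ~0.1 relative error
print(delta_accuracy(pred, gt))  # all pixels within the 1.25x threshold
```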

    Keyframe-based monocular SLAM: design, survey, and future directions

    Extensive research in the field of monocular SLAM over the past fifteen years has yielded workable systems that have found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common at one time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM system according to specific environmental constraints. Second, it presents a survey covering the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific strategies adopted in each proposed solution. Third, the paper provides insight into the direction of future research in this field, addressing the major limitations still facing monocular SLAM; namely, illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.

    Planar laser-induced fluorescence imaging of OH in the exhaust of a bi-propellant thruster

    Planar laser-induced fluorescence imaging of the hydroxyl radical has been performed on the flow produced by the exhaust of a subscale H2/O2-fueled bi-propellant rocket engine. Measurements were made to test the feasibility of OH (0,0) and (3,0) excitation strategies using injection-seeded XeCl and KrF excimer lasers, respectively. The flow is produced with hydrogen and oxygen reacting at a combustor chamber pressure of 5 atm, which then exhausts to ambient conditions. The hydroxyl concentration in the exhaust flow is approximately 8 percent. Fluorescence images obtained by pumping the Q1(3) transition in the (0,0) band exhibited very high signals but also showed the effect of laser beam absorption. Obtaining images when pumping the P1(8) transition in the (3,0) band required exceptionally fast imaging optics and unacceptably high intensifier gains. The result was single-shot images with a signal-to-noise ratio of order unity or less when measured on a per-pixel basis.

    Video Registration in Egocentric Vision under Day and Night Illumination Changes

    With the spread of wearable devices and head-mounted cameras, a wide range of applications requiring precise user localization is now possible. In this paper we propose to treat the problem of obtaining the user's position with respect to a known environment as a video registration problem. Video registration, i.e. the task of aligning an input video sequence to a pre-built 3D model, relies on matching local keypoints extracted from the query sequence to a 3D point cloud. The overall registration performance is strictly tied to the quality of this 2D-3D matching, and can degrade under steep changes in lighting such as those between day and night. To effectively register an egocentric video sequence under these conditions, we propose to tackle the source of the problem: the matching process. To overcome the shortcomings of standard matching techniques, we introduce a novel embedding space that allows us to obtain robust matches by jointly taking into account local descriptors, their spatial arrangement, and their temporal robustness. The proposal is evaluated on unconstrained egocentric video sequences, both in terms of matching quality and resulting registration performance, using different 3D models of historical landmarks. The results show that the proposed method can outperform state-of-the-art registration algorithms, in particular when dealing with the challenges of night and day sequences.
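The joint embedding space is the paper's own contribution; for context, the standard 2D-3D matching baseline it aims to improve on (nearest-neighbour descriptor search with Lowe's ratio test) can be sketched as follows. All names here are illustrative, and the descriptors are random stand-ins:

```python
import numpy as np

def ratio_test_matches(query, model, ratio=0.8):
    """Match each query descriptor to its nearest model descriptor,
    keeping only matches that pass Lowe's ratio test."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(model - q, axis=1)  # distance to every model descriptor
        j, k = np.argsort(d)[:2]               # indices of the two nearest neighbours
        if d[j] < ratio * d[k]:                # accept only unambiguous matches
            matches.append((i, int(j)))
    return matches

# toy data: two query descriptors that are noisy copies of model entries 3 and 7
rng = np.random.default_rng(1)
model = rng.random((50, 128))
query = model[[3, 7]] + 0.01 * rng.random((2, 128))
print(ratio_test_matches(query, model))  # [(0, 3), (1, 7)]
```

Under day/night illumination changes the descriptor distances themselves become unreliable, which is why the paper augments them with spatial arrangement and temporal robustness.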

    Domain-Size Pooling in Local Descriptors: DSP-SIFT

    We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension as SIFT and requiring no training. (Extended version of the CVPR 2015 paper; Technical Report UCLA CSD 14002.)
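The core idea, pooling gradient orientations over several domain (patch) sizes instead of committing to a single scale, can be sketched in simplified form. This is a toy illustration with a single pooled histogram, not the authors' implementation, which pools within SIFT's full spatial grid:

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Gradient-orientation histogram of one image patch, magnitude-weighted."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist

def dsp_descriptor(image, x, y, sizes=(8, 12, 16), bins=8):
    """Pool orientation histograms over several domain sizes around (x, y),
    then normalize -- the domain-size pooling idea in miniature."""
    pooled = np.zeros(bins)
    for s in sizes:
        patch = image[y - s:y + s, x - s:x + s]
        pooled += orientation_histogram(patch, bins)
    n = np.linalg.norm(pooled)
    return pooled / n if n > 0 else pooled

rng = np.random.default_rng(0)
img = rng.random((64, 64))
d = dsp_descriptor(img, 32, 32)
print(d.shape)  # one unit-norm histogram pooled across three patch sizes
```

Note that pooling leaves the descriptor dimension unchanged, which is why DSP-SIFT keeps the same dimension as SIFT.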

    Accurate Feature Extraction and Control Point Correction for Camera Calibration with a Mono-Plane Target

    The paper addresses two problems related to 3D camera calibration using a single mono-plane calibration target with circular control marks. The first problem is how to accurately compute the locations of the features (ellipses) in images of the target. Since the structure of the control marks is known beforehand, we propose a shape-specific searching technique to find the optimal locations of the features. Our experiments have shown that this technique generates more accurate feature locations than state-of-the-art ellipse extraction methods. The second problem is how to refine the control mark locations in the presence of unknown manufacturing errors. We demonstrate in a case study, where the control marks are laser-printed on an A4 sheet of paper, that the manufacturing errors of the control marks can be compensated to a good extent, so that the remaining calibration errors are reduced significantly.
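The shape-specific search itself is the paper's contribution; a common baseline it improves on, locating a circular control mark by its intensity-weighted centroid, can be sketched as follows (illustrative only, with a synthetic dark-on-light mark):

```python
import numpy as np

def mark_centroid(img):
    """Sub-pixel centre of a dark circular mark on a light background,
    via the intensity-weighted centroid -- a simple baseline estimator."""
    w = img.max() - img.astype(float)  # weight dark pixels more heavily
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# synthetic dark disc of radius 5 centred at (10, 12) on a white background
ys, xs = np.mgrid[0:24, 0:24]
img = np.where((xs - 10.0) ** 2 + (ys - 12.0) ** 2 <= 25, 0.0, 255.0)
cx, cy = mark_centroid(img)
print(cx, cy)  # recovers the disc centre
```

Simple centroids are biased by perspective distortion (a projected circle is an ellipse whose centroid is not the projected circle centre), which is one motivation for shape-aware extraction.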

    Monitoring 3D vibrations in structures using high resolution blurred imagery

    Photogrammetry has been used in the past to monitor the laboratory testing of civil engineering structures using multiple image-based sensors. This has been successful, but detecting vibrations during dynamic structural tests has proved more challenging. Such measurements usually depend on high-speed cameras, but these sensors often deliver lower image resolutions and reduced accuracy. To overcome this limitation, the novel approach described in this paper takes measurements from blurred images in long-exposure photographs. The motion of the structure is captured in individual motion-blurred images, without dependence on imaging speed. A bespoke algorithm then determines each measurement point's motion. Using photogrammetric techniques, a model structure's motion with respect to different excitation frequencies is captured and its vibration envelope recreated in 3D. The approach is tested and used to identify changes in the model's vibration response.
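The underlying principle, reading motion amplitude off the spatial extent of a blur streak in a long-exposure image, can be illustrated in one dimension. This is a toy sketch under simplified assumptions (a single bright marker, uniform blur), not the paper's bespoke algorithm:

```python
import numpy as np

def streak_extent(profile, frac=0.5):
    """Peak-to-peak motion estimate from a 1-D intensity profile of a
    motion-blurred marker: width of the region above a relative threshold."""
    t = profile.min() + frac * (profile.max() - profile.min())
    idx = np.flatnonzero(profile > t)
    return 0 if idx.size == 0 else int(idx[-1] - idx[0] + 1)

# a bright marker vibrating around x = 20 smears into a streak over pixels 16..24
profile = np.zeros(40)
profile[16:25] = 1.0
print(streak_extent(profile))  # streak width in pixels
```

Because the streak is integrated over the whole exposure, the amplitude estimate does not depend on the camera's frame rate, which is the key advantage the paper exploits.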