
    Outlier correction in image sequences for the affine camera


    Learning-based Image Enhancement for Visual Odometry in Challenging HDR Environments

    One of the main open challenges in visual odometry (VO) is robustness to difficult illumination conditions and high dynamic range (HDR) environments. The difficulties in these situations stem both from the limitations of the sensors and from the failure to track interest points reliably, owing to strong assumptions made in VO, such as brightness constancy. We address this problem from a deep learning perspective: we first fine-tune a Deep Neural Network (DNN) to obtain enhanced representations of the sequences for VO. We then demonstrate how inserting Long Short-Term Memory (LSTM) layers yields temporally consistent sequences, as each estimate depends on previous states. However, very deep networks are too slow to insert into a real-time VO framework; therefore, we also propose a Convolutional Neural Network (CNN) of reduced size that runs faster. Finally, we validate the enhanced representations by evaluating the sequences produced by the two architectures in several state-of-the-art VO algorithms, such as ORB-SLAM and DSO.
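
    As a rough illustration of the kind of reduced-size enhancement network the abstract describes, the PyTorch sketch below maps a degraded frame to an enhanced one before it is handed to a VO front end. The architecture, layer widths, and the name SmallEnhancerCNN are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of a small image-enhancement CNN for a VO front end.
# Architecture and sizes are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class SmallEnhancerCNN(nn.Module):
    """Maps an HDR-degraded frame to an enhanced frame for point tracking."""
    def __init__(self, channels=1, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1), nn.Sigmoid(),  # keep output in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

frame = torch.rand(1, 1, 240, 320)    # dummy grayscale frame
enhanced = SmallEnhancerCNN()(frame)  # enhanced image fed to the VO tracker
```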

    How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change

    Direct visual localization has recently enjoyed a resurgence in popularity with the increasing availability of cheap mobile computing power. The competitive accuracy and robustness of these algorithms compared to state-of-the-art feature-based methods, as well as their natural ability to yield dense maps, makes them an appealing choice for a variety of mobile robotics applications. However, direct methods remain brittle in the face of appearance change due to their underlying assumption of photometric consistency, which is commonly violated in practice. In this paper, we propose to mitigate this problem by training deep convolutional encoder-decoder models to transform images of a scene such that they correspond to a previously-seen canonical appearance. We validate our method in multiple environments and illumination conditions using high-fidelity synthetic RGB-D datasets, and integrate the trained models into a direct visual localization pipeline, yielding improvements in visual odometry (VO) accuracy under time-varying illumination conditions, as well as improved metric relocalization performance under illumination change, where conventional methods normally fail. We further provide a preliminary investigation of transfer learning from synthetic to real environments in a localization context. An open-source implementation of our method using PyTorch is available at https://github.com/utiasSTARS/cat-net.
    Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, May 21-25, 2018
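
    The trained models are convolutional encoder-decoders; the sketch below shows that shape in miniature, assuming illustrative layer widths (the authors' actual implementation is in the cat-net repository linked above). TinyCAT and its sizes are placeholders, not the published architecture.

```python
# Tiny encoder-decoder in the spirit of a canonical appearance transformation.
# Widths and depths are assumptions; see the linked repo for the real model.
import torch
import torch.nn as nn

class TinyCAT(nn.Module):
    def __init__(self, ch=3, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(ch, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 2 * width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * width, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        # Map an image under arbitrary illumination toward the canonical appearance.
        return self.decoder(self.encoder(x))

img = torch.rand(1, 3, 256, 256)
canonical = TinyCAT()(img)  # use this in place of `img` in the direct photometric error
```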

    A study on local photometric models and their application to robust tracking

    Since modeling reflections in image processing is a difficult task, most computer vision algorithms assume that objects are Lambertian and that no lighting change occurs. Some photometric models can partly address this issue by assuming that the lighting changes are the same at each point of a small window of interest. Through a study based on specular reflection models, we make explicit the assumptions on which these models are implicitly based and the situations in which they can fail. This paper proposes two photometric models that compensate for specular highlights and lighting variations. They assume that photometric changes vary smoothly over the window of interest. Contrary to classical models, the characteristics of the object surface and the lighting changes can vary in the area being observed. First, we study the validity of these models with respect to the acquisition setup: the relative locations of the light source, the sensor, and the object, as well as the roughness of the surface. Then, these models are used to improve feature point tracking by simultaneously estimating the photometric and geometric changes. The proposed methods are compared to well-known tracking methods robust to affine photometric changes. Experimental results on specular objects demonstrate the robustness of our approaches to specular highlights and lighting changes.
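
    For reference, the classical affine photometric model keeps a single gain and offset per window, whereas the class of models the abstract describes lets the photometric parameters vary smoothly across the window. The schematic formulation below uses our own notation (an assumption, not the paper's):

```latex
% Classical affine model: one gain a and one offset b per window W.
% Proposed class of models: spatially varying a(x), b(x), smooth on W.
% w(x; theta) is the geometric warp estimated jointly with the photometry.
\begin{align}
  I_t(\mathbf{x}) &\approx a\, I_0\bigl(w(\mathbf{x};\boldsymbol{\theta})\bigr) + b,
      & \mathbf{x} \in W \quad \text{(affine)} \\
  I_t(\mathbf{x}) &\approx a(\mathbf{x})\, I_0\bigl(w(\mathbf{x};\boldsymbol{\theta})\bigr) + b(\mathbf{x}),
      & a(\cdot),\, b(\cdot) \text{ smooth on } W \quad \text{(proposed)}
\end{align}
```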

    Astrometric calibration and performance of the Dark Energy Camera

    We characterize the ability of the Dark Energy Camera (DECam) to perform relative astrometry across its 500 Mpix, 3 deg^2 science field of view, and across 4 years of operation. This is done using internal comparisons of ~4x10^7 measurements of high-S/N stellar images obtained in repeat visits to fields of moderate stellar density, with the telescope dithered to move the sources around the array. An empirical astrometric model includes terms for: optical distortions; stray electric fields in the CCD detectors; chromatic terms in the instrumental and atmospheric optics; shifts in CCD relative positions of up to ~10 μm when the DECam temperature cycles; and low-order distortions to each exposure from changes in atmospheric refraction and telescope alignment. Errors in this astrometric model are dominated by stochastic variations with typical amplitudes of 10-30 mas (in a 30 s exposure) and 5-10 arcmin coherence length, plausibly attributed to Kolmogorov-spectrum atmospheric turbulence. The size of these atmospheric distortions is not closely related to the seeing. Given an astrometric reference catalog at density ~0.7 arcmin^{-2}, e.g. from Gaia, the typical atmospheric distortions can be interpolated to 7 mas RMS accuracy (for 30 s exposures) with 1 arcmin coherence length for residual errors. Remaining detectable error contributors are 2-4 mas RMS from unmodelled stray electric fields in the devices, and another 2-4 mas RMS from focal plane shifts between camera thermal cycles. Thus the astrometric solution for a single DECam exposure is accurate to 3-6 mas (0.02 pixels, or 300 nm) on the focal plane, plus the stochastic atmospheric distortion.
    Comment: Submitted to PAS
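
    The quoted 3-6 mas single-exposure accuracy is consistent with adding the two remaining detectable contributors (2-4 mas RMS each) in quadrature. The short check below is our back-of-envelope reading of those numbers, not a computation from the paper:

```python
# Quadrature sum of the two residual error terms quoted in the abstract.
# This is an assumed sanity check, not the paper's error-budget derivation.
import math

def quadrature(*terms_mas):
    """RMS combination of independent error terms, in mas."""
    return math.sqrt(sum(t * t for t in terms_mas))

low = quadrature(2.0, 2.0)   # stray fields + thermal shifts, optimistic ends
high = quadrature(4.0, 4.0)  # pessimistic ends
print(f"single-exposure floor: {low:.1f}-{high:.1f} mas")  # ~2.8-5.7 mas, consistent with 3-6 mas
```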

    Robust Structure and Motion Recovery Based on Augmented Factorization

    This paper proposes a new strategy to promote the robustness of structure-from-motion algorithms on uncalibrated video sequences. First, an augmented affine factorization algorithm is formulated to circumvent the difficulty of image registration with data contaminated by noise and outliers. Then, an alternative weighted factorization scheme is designed to handle the missing data and measurement uncertainties in the tracking matrix. Finally, a robust strategy for structure and motion recovery is proposed to deal with outliers and large measurement noise. This paper makes the following main contributions: 1) an augmented factorization algorithm is proposed to circumvent the difficult image registration problem of previous affine factorization, and the approach is applicable to both rigid and nonrigid scenarios; 2) exploiting the fact that image reprojection residuals are largely proportional to the error magnitude in the tracking data, a simple outlier detection approach is proposed; and 3) a robust factorization strategy is developed based on the distribution of the reprojection residuals. Furthermore, the proposed approach can be easily extended to nonrigid scenarios. Experiments using synthetic and real image data demonstrate the robustness and efficiency of the proposed approach over previous algorithms.
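
    A bare-bones sketch of the two ingredients the abstract combines, rank-constrained affine factorization and reprojection-residual outlier flagging, is given below. It uses a plain SVD with an assumed rank of 4 and a hypothetical median-based threshold; the paper's augmented and weighted schemes are not reproduced here.

```python
# Sketch of affine factorization with residual-based outlier flagging.
# Rank choice and threshold are assumptions, not the paper's algorithm.
import numpy as np

def affine_factorize(W, rank=4):
    """Factor a 2F x P tracking matrix W ~ M @ S at the given rank
    (rank 4 can absorb per-image translations for the affine camera)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :rank] * np.sqrt(s[:rank])           # 2F x rank motion
    S = np.sqrt(s[:rank])[:, None] * Vt[:rank]    # rank x P structure
    return M, S

def flag_outliers(W, M, S, k=3.0):
    """Reprojection residuals grow with the error in the tracking data,
    so flag points whose residual exceeds k times the median residual."""
    resid = np.linalg.norm(W - M @ S, axis=0)     # per-point residual
    return resid > k * np.median(resid)

F, P = 10, 50
W = np.random.rand(2 * F, P)                      # dummy tracking matrix
M, S = affine_factorize(W)
outliers = flag_outliers(W, M, S)                 # boolean mask over the P points
```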