
    Real-time architecture for robust motion estimation under varying illumination conditions

    Motion estimation from image sequences is a computationally demanding problem, and most existing approaches are highly sensitive to changes in illumination. In this contribution we present a high-performance system that addresses this limitation. Robustness to varying illumination is achieved by a novel technique that combines a gradient-based optical flow method with a non-parametric image transformation based on the Rank transform. The paper describes this method and quantitatively evaluates its robustness to different patterns of illumination change. The technique has been implemented in a real-time system using reconfigurable hardware; we present the computing architecture, its resource consumption, and the achieved performance. The resulting device computes motion in real time even under significant illumination changes, which facilitates its use in many potential application fields. This work has been supported by the grants DEPROVI (DPI2004-07032), DRIVSCO (IST-016276-2) and TIC2007 "Plataforma Sw-Hw para sistemas de visión 3D en tiempo real".
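    The Rank transform mentioned here is non-parametric: each pixel is replaced by the number of neighbours in a small window with lower intensity, so the result depends only on the local intensity ordering and is unaffected by monotonic illumination changes. A minimal NumPy sketch of this idea follows; the window radius and border handling are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

def rank_transform(img, radius=2):
    """Rank transform: replace each pixel by the count of neighbours in a
    (2*radius+1)^2 window whose intensity is strictly lower than the centre.
    The output depends only on local intensity ordering, so it is invariant
    to monotonic (illumination-induced) intensity changes."""
    h, w = img.shape
    win = 2 * radius + 1
    out = np.zeros((h, w), dtype=np.uint8)
    padded = np.pad(img, radius, mode='edge')  # border handling: replicate edges
    for dy in range(win):
        for dx in range(win):
            if dy == radius and dx == radius:
                continue  # skip the centre pixel itself
            neighbour = padded[dy:dy + h, dx:dx + w]
            out += (neighbour < img).astype(np.uint8)
    return out

# Usage: apply the transform to both frames, then run any gradient-based
# optical flow routine on the transformed images instead of the raw intensities.
```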

    How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change

    Direct visual localization has recently enjoyed a resurgence in popularity with the increasing availability of cheap mobile computing power. The competitive accuracy and robustness of these algorithms compared to state-of-the-art feature-based methods, as well as their natural ability to yield dense maps, make them an appealing choice for a variety of mobile robotics applications. However, direct methods remain brittle in the face of appearance change due to their underlying assumption of photometric consistency, which is commonly violated in practice. In this paper, we propose to mitigate this problem by training deep convolutional encoder-decoder models to transform images of a scene so that they correspond to a previously seen canonical appearance. We validate our method in multiple environments and illumination conditions using high-fidelity synthetic RGB-D datasets, and integrate the trained models into a direct visual localization pipeline, yielding improvements in visual odometry (VO) accuracy under time-varying illumination as well as improved metric relocalization under illumination change, where conventional methods normally fail. We further provide a preliminary investigation of transfer learning from synthetic to real environments in a localization context. An open-source implementation of our method using PyTorch is available at https://github.com/utiasSTARS/cat-net. Comment: In IEEE Robotics and Automation Letters (RA-L); presented at the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, May 21-25, 2018.
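    To illustrate the idea of learning an appearance transformation, the sketch below shows a toy convolutional encoder-decoder in PyTorch that maps an input image to a same-sized "canonical appearance" image. The layer sizes and loss are assumptions for illustration only, not the authors' CAT-Net architecture; see the linked repository for their implementation.

```python
import torch
import torch.nn as nn

class AppearanceTransformNet(nn.Module):
    """Toy encoder-decoder mapping an RGB image to a same-sized image that is
    intended to look like the scene under a fixed, canonical illumination."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training pairs: (image under arbitrary illumination, same viewpoint under the
# canonical illumination). Minimising a pixel-wise reconstruction loss teaches
# the network to re-render inputs in the canonical appearance, which a direct
# localization pipeline can then match photometrically.
model = AppearanceTransformNet()
loss_fn = nn.L1Loss()
```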

    DistancePPG: Robust non-contact vital signs monitoring using a camera

    Vital signs such as pulse rate and breathing rate are currently measured using contact probes. However, non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in the NICU) and for ubiquitous in-situ health tracking (e.g. on mobile phones and computers with webcams). Recently, camera-based non-contact vital sign monitoring has been shown to be feasible. However, camera-based vital sign monitoring is challenging for people with darker skin tones, under low lighting conditions, and/or during movement of the individual in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm that addresses these challenges. DistancePPG combines skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in each region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of the camera-based PPG estimated using distancePPG translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released comprising synchronized video recordings of the face and pulse-oximeter ground truth recorded at the earlobe, for people with different skin tones, under different lighting conditions, and for various motion scenarios. Comment: 24 pages, 11 figures.
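    The core combination step is a weighted average of per-region colour-change (PPG) traces, with weights that favour regions whose pulsatile signal is strong. The Python sketch below illustrates this idea using a simple spectral-power weight in a plausible pulse band; this particular weight is an assumption for illustration and is not the paper's goodness metric.

```python
import numpy as np

def combine_region_signals(region_signals, weights):
    """Weighted average of per-region PPG traces.
    region_signals: array of shape (n_regions, n_frames).
    weights: length-n_regions, larger for regions with stronger signal.
    Returns a single combined trace of length n_frames."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ np.asarray(region_signals)

def pulse_band_weight(signal, fps, band=(0.7, 4.0)):
    """Illustrative per-region weight: fraction of spectral power inside a
    plausible pulse band (0.7-4 Hz, i.e. roughly 42-240 bpm). Regions with
    weak or noisy signals receive proportionally smaller weights."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / (spectrum.sum() + 1e-12)
```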